Simulated gearshifts for performance EVs are a good thing. Anyone who disagrees has either never driven a car with the tech or simply doesn’t enjoy driving. It’s the main reason why the Hyundai Ioniq 5 N is my favorite electric car right now. It might not be real, but it feels real. And from behind the wheel, that’s all that matters. Porsche, a longtime opponent of using fake gearshifts in its EVs, is finally coming around to the idea.
As a refresher, Porsche’s stance on fake gearshifts has, for the past two years, been credited to a 2024 interview given by Lars Kern, one of the company’s most senior development drivers, to Australian site drive.com.au. In that interview, Kern crushed my dreams of ever driving a 911-shaped EV with simulated PDK gearshifts.
Here’s what he said:
The electric engine is better than an ICE [internal combustion engine], so we figured there’s no reason to simulate what has been in the past.
We looked at it, but … I don’t see the point of using it to make it feel like a combustion engine because it’s not, so we don’t.
Pretty cut and dry, right? Porsche’s been mum on the idea since, so I always assumed we’d never see paddle shifters in a Stuttgart-developed electric car. But as it turns out, the company’s been hard at work developing its own “virtual gear shift” system. It’s even gone as far as building a working Cayenne EV prototype with the tech.
Imagine this, but with V-8 noises bellowing through the cabin. Source: Porsche
Sascha Niesen, the fleet director for Porsche’s prototypes, confirmed the existence of such a vehicle to The Drive, telling the publication it was developed by the same engineers responsible for the company’s dual-clutch and torque converter automatic transmissions.
I drove a concept vehicle in March. I wanted to hate it because it’s artificial and it’s fake and everything. I was afraid that the people that are doing it are just software geeks who have no idea how a transmission works and try to emulate it. [T]he [engineers] know what they’re doing. They were able to make it feel like a proper torque converter gearbox. I could not tell the difference.
Instead of using noises inspired by internal combustion engines, as with the Hyundai, Porsche’s system uses real, pre-recorded V8 engine sounds from the ICE-powered Cayenne. So in practice, driving a Cayenne EV with the tech should mirror the experience you’d have in, say, a Cayenne GTS or a Cayenne Turbo, right down to the silly exhaust farts with each downshift.
Pictured, one of the best-sounding cars in Porsche’s entire lineup, the Cayenne GTS. Source: Porsche
Whether the system will make it to production is another story. Niesen admitted to The Drive that it’s not as simple as delivering an over-the-air software update. The Cayenne EV was never designed to have paddle shifters, so there’s no hardware to pair to the tech. But the demand exists, at least from some customers.
That’s key. You’ve got to give the customer the option to be more engaged, but in an EV, it cannot be mandatory. From an engineering perspective, it doesn’t make any sense to introduce a gear shift. But then again, you have continuously variable transmissions that did introduce gear shifts because it felt more natural. You didn’t need it.
Conceptually, Porsche is finally on the right path. When these fake gearshifts arrive on the long-awaited 718 EV, I can finally rest.
[Writer’s note: As corny as fake gearshifts sound, they definitely have a place in electric performance cars, especially ones expected to see some track time. You’ll hear people call a bend a “second-gear corner” or a “third-gear corner” and having that reference point helps with orientation at a trackday. Also, everything’s kinda digital in most new cars anyway. Piped-in audio, strategically programmed drift modes, even clutch delay valves and anti-stall drive-by-wire mapping on new manual cars. As long as the sensation feels real, what’s the difference between an isolated and gadget-laden automatic ICE car and an EV that simulates its shifts? For now, I’m just glad to see that Porsche seems to have found the light. Remember the revolt when the 991.1 GT3 went PDK-only? –TH]
America once had a love affair with the coupé utility. These little trucks promised to be the best of both worlds. They were cars that you could take to town on the weekend and then use for work during the week. No ute is for sale in America today, and in their last gasp, they got weird. In the final years of the iconic Chevrolet El Camino and its brother, the GMC Caballero, General Motors sold a small car-based truck with a thrifty V8 diesel engine. There was only one problem: it came too late to make a difference.
For more than the past three decades, diesel engines have been known for their distinctive combination of power and fuel economy. Diesel trucks regularly return better fuel economy than their gasoline-powered siblings while also putting down more torque. Historically, these engines have also been known to survive hundreds of thousands of miles, often with only minimal repairs. Diesel engines may be losing their edge today thanks to high diesel prices, sometimes poorly implemented emissions equipment, and the rise of diesel-like gasoline engines, but most heavy-duty pickup trucks still ship with diesels.
However, the story was very different back in the 1970s. Back then, diesel engines were not applied to pickup trucks as a method of upping their firepower. Instead, America went through not just one, but two gas crunches. If that wasn’t bad enough, the nation’s economy was in a trough, too. Suddenly, everyone cared about saving fuel.
Mercedes Streeter
Automakers and inventors scrambled to find ways to ease the pain, from the rise of fascinating, but ultimately crappy electric cars, to developing vehicles powered by Wankel rotaries. The diesel engine was also seen as a bit of a magic cure. Several decades ago, diesel was usually cheaper than gasoline. Small diesel truck engines were not powerhouses in those days, but they sipped fuel compared to their thirsty gasoline counterparts.
General Motors often gets a bad rap for how it handled its diesel program of the late 1970s. Indeed, early examples of the 5.7-liter Oldsmobile diesel V8 were infamous for finding fascinating, yet infuriating ways to break. But what’s not often told is that General Motors fixed its diesel. GM’s diesel engines actually became pretty good for the era, just in time for everyone to stop caring. One of the victims was what would become one of the rarest models of GM’s famous coupé utility. Yep, the El Camino came in a diesel version, but its run was so short that it barely even existed at all.
The story of how General Motors screwed up the Oldsmobile diesel V8 has been told countless times since its inception. If you’ve missed those explainers, I’ll keep it short. Here’s what Lewin wrote about the Oldsmobile diesel V8’s development:
Oldsmobile engineers decided to start with what they knew, and based their work on the existing Oldsmobile 350 cubic-inch V8. It was this decision that played a role in the failures to come. That’s because a diesel engine typically runs at a far higher compression ratio than a typical gasoline engine. A gas engine might run at somewhere between 8:1 and 12:1, while diesels typically run from 14:1 to 22:1. This is mostly because gas engines are desperately trying to avoid compression ignition of the fuel, while diesel engines rely on that same effect.
The engine’s designers took this into account to some degree, designing a reinforced block for the diesel application. Other changes included hardened camshafts, larger main bearings, and tougher, thicker connecting rods and piston pins.
For all that the engineers did, they didn’t go far enough. The diesel engine’s heads used the same head bolts and 10-bolt pattern as the gas engine. This decision was made to allow the diesel engine and gasoline engine to share some of the same tooling. However, it meant that the head bolts were extremely overstressed in the diesel application. They were more than capable of handling the cylinder pressures of a gasoline engine, but they couldn’t take the additional strain of the high-compression Oldsmobile diesel design, which ran at a lofty 22.5:1. The design really needed more head bolts, and likely stronger ones too, but budget concerns won the day.
GM
Not often reported in these tales is the fact that General Motors controlled a 60 percent share of the diesel passenger car market.
The engine, which launched in 1978, was pushed hard into the GM product portfolio. Oldsmobile said that it sold 19 different models that were available with the engine. GM diesels were plopped into everything from coupés and sedans to wagons and extended wheelbase executive cars. These diesels sold exceptionally well, too, with General Motors selling hundreds of thousands of examples each year. Sales peaked in 1981 with more than 310,000 diesel cars sold. When all was said and done, GM had installed diesel power into over a million cars and pickup trucks by around 1985, with most examples actually being cars.
That’s phenomenal. To put that into comparison, in the modern day, Volkswagen was the king of diesel car sales in America with its TDI “Clean Diesel” cars. In 2014, right before the hit of Dieselgate scandal, Volkswagen sold 79,422 TDIs in America, a fraction of the diesel cars that GM used to sell in the 1980s. In 2014, these cars represented more than a fifth of all of Volkswagen of America’s sales.
GM
Despite its success in the marketplace, the Oldsmobile diesel V8 drove down a bumpy road, as I previously wrote:
The Oldsmobile diesel V8 had a knack for stretching or snapping its head bolts, leading to blown head gaskets at best or hydrolocking from coolant ingestion at worst. If your Oldsmobile diesel V8 didn’t blow its head, it could have also lost its injectors and internals to corrosion since Oldsmobile neglected to add a water separator to ensure your diesel fuel didn’t have water contamination. Yet, if you somehow lucked out on both counts, maybe the timing chain would stretch out.
The Oldsmobile diesel V8 was so infamously unreliable that it wasn’t certified for sale in California. Normally, something like this would happen because of emissions. In this case, it’s because all nine of the Olds diesel-equipped cars failed to complete the state’s emissions testing program. Every test vehicle had engine issues while seven of the vehicles had additional transmission issues on top of their bad engines.
GM
If you’re scratching your head about how engineers could make such a garbage engine, you should know that, reportedly, it wasn’t really their fault. As The New York Times reported, Oldsmobile diesel engineer Darrel R. Sand tried to blow the whistle, and his efforts were allegedly met by getting fired.
As the New York Times wrote, General Motors was hammered by lawsuits left and right. Individuals sued, consumer protection groups sued, groups of people sued in class actions, and even the New York Attorney General sued GM over the diesel debacle. When the dust settled, General Motors had to deal with the 10,000 people across 14 states who demanded a uniform redress program in addition to all of the other lawsuits.
Not Giving Up
Here is where many stories about GM’s diesel development of the 1980s end. Less often reported is that General Motors corrected the disaster.
GM
In 1981, Oldsmobile launched a new, fixed version of its 5.7-liter diesel. Engineers redesigned the diesel’s heads, used stronger head bolts, and upgraded the head gasket material. Further improvements, Curbside Classic notes, came to the Stanadyne injector pump. The previous iteration had a plastic collar that had a knack for failure, while the new version had a metal collar. GM also changed the engine’s glow plugs and transitioned from flat tappets, which wore quickly, to roller lifters and hardened cams. These upgraded engines, identified with “350 DX” on their blocks, didn’t suffer from nearly as many failures as the earlier engines.
As a result, the New York Times wrote in 1983, complaints about engine failures dropped precipitously in 1981 with the debut of the upgraded engine. Finally, General Motors built the Oldsmobile 5.7 diesel that it should have made in the first place, and as I noted, diesel sales hit their peak that year.
It wasn’t a fluke, either. The video above shows John Davis of MotorWeek praising an Oldsmobile 98 Regency for its quality improvements and good fuel economy.
The El Camino Goes Diesel
GM
The story of the El Camino is a great example of how being the first isn’t always the most important thing. In American coupé utility lore, the Ford Ranchero, which launched in the final days and hours of 1956, blew the public away. It wasn’t the first American coupé utility, but it was the first for what was then the modern era, and people were hooked. The Ranchero was a truck, a car, and a fashion statement all in one package.
As In The Garage Media writes, it’s possible that designer and executive Harley Earl might have pitched the creation of a GM coupé utility as early as 1952. General Motors took its time to wade into this market, first experimenting by building pickup trucks with car-like body details like the 1955 Chevrolet Cameo Carrier. Ultimately, it would take the Detroit giant until October 16, 1958, to deliver a proper coupé utility to market. The El Camino was born.
GM
The El Camino might have been nearly two years late, but General Motors did its homework. The El Camino bore GM’s freshest styling for 1959 and, importantly, rode on the new GM B platform that underpinned the Brookwood station wagon. This made the 210.9-inch El Camino about eight inches longer than a Ranchero, and the El Camino rocked rather splendid full-size style.
Underneath, GM said, the El Camino was pretty close to being a proper truck with a steel double-wall bed and protective steel skids in said bed for the loading of heavy items. Payload was 1,150 pounds, or 40 pounds short of a well-equipped Ranchero. GM’s wanting buyers to think of the El Camino as a type of truck was reflected in the marketing. The Ranchero wasn’t a member of Ford’s F-Series, but Chevrolet was happy to call the El Camino a Task-Force truck.
GM
It was also just pretty neat as a vehicle; buyers had a choice of one of 23 color combinations and engines ranging from a thrifty 235 cubic inch straight-six that made 135 HP gross to a rumbling 348 cubic inch V8 good for 315 HP gross. In its first year of sales, the El Camino sold 22,246 units, more than the 14,169 Rancheros sold in that same year. The El Camino would continue to eat the Ranchero’s lunch throughout its production run. Ford was first, but to many consumers, Chevrolet did it better.
Ford would give up on the Ranchero in 1979, but GM’s coupé utilities from Chevrolet and GMC managed to keep momentum into the 1980s. That’s where the diesel comes in.
GM
The El Camino entered its fifth generation in 1978. For this new El Camino, Chevrolet kept with the times and downsized its coupé utility. However, as Old Cars Weekly writes, engineers were concerned that making the El Camino too small would compromise its capabilities. Their solution was to make the El Camino’s body some seven inches shorter to 201 inches, but extend the wheelbase by one inch to 117 inches. This engineering trickery had the effect of lowering weight by up to 300 pounds, while still resulting in a larger cab than the fourth-generation model.
As Old Cars Weekly notes, the fifth-generation El Camino was a body-on-frame design, and the base engine had become a 200 cubic inch V6. In the earlier El Camino, the base engine was a 250 cubic inch straight-six. Quality improvements included 14 noise-insulating body mounts to quiet down the cab and double-walled metal for the doors, hood, bed, and tailgate.
A total of eight engines were offered during this generation. The weakest engine, the aforementioned V6, made 95 ponies, while the hottest gasoline engine, a 350 cubic inch small block V8, made 170 HP. The weird engine choice debuted in the 1983 model year, and it was the updated version of the Oldsmobile 5.7-liter diesel V8, which made 105 HP.
As noted earlier, the addition of a diesel here wasn’t for power like it would be in a modern truck. Instead, it was all about saving money. The diesel made 200 lb-ft of torque, which was bested by the 240 pounds of twist offered by the 305 V8 offered in the El Camino in the same year. This was also reflected in towing capacity, as diesels could pull 2,000 pounds, but El Caminos with the 305 could tow 5,000 pounds.
Future Classics LLCFuture Classics LLC
Something interesting is that, according to the brochure, if you bought the El Camino diesel, you could not get a sport suspension or the gauge package that included a trip odometer and a clock. But you were able to get the package with a trip odometer, clock, and tachometer.
This diesel engine was also sold in the El Camino’s twin, the GMC Caballero. GM never quoted fuel economy numbers in the brochure, but as Diesel World notes, the diesels were good for the mid-20 mpg range at 55 mph in other applications. That was great back then, especially considering that gas V8s got in the teens in the same conditions.
Unfortunately, the diesel El Camino never resonated with buyers. Diesel El Camino and Caballero sales were halted after 1984, after only a few examples were sold.
Diesel No More
Future Classics LLC
No official explanation exists for the short life of GM’s diesel coupé utilities, but MotorWeek has explained why American diesel cars failed in the mid-1980s to begin with. In one video, John Davis explains that, by the mid-1980s, the price of diesel had risen past the price of premium gasoline. This was a problem because diesel cars were already significantly more expensive than their gasoline counterparts, and few buyers were interested in paying more for the car just to also pay more for fuel. Davis also noted that by 1984, diesel cars accounted for less than four percent of all cars sold in America.
It was only a year later when GM canceled its diesel program, which by that point had also included a 4.3-liter diesel V6 and other variants. For nearly three decades after, General Motors would stay out of the diesel passenger car market, instead leaving that field to marques like Volkswagen. GM wasn’t alone, either. Even Japanese brands that had experimented with diesel in the 1980s in America, like Toyota, also gave up.
Future Classics LLC
Tradecraft Specialties, which claims to get its data from the GM Heritage Center, says that only 571 El Camino diesels were built in 1983, followed by 98 in 1984. GMC Caballero diesel sales are unknown, but they’re believed to be even rarer than their Chevy sibling. I couldn’t find original MSRP data for the diesels, either. Either way, the diesel El Camino is so rare that even Google’s wholly incompetent, useless AI thinks that it didn’t even exist.
Sure, Google.
Keep up the good work, Google. Thanks for the job security!
Sadly, the diesel version of the El Camino was so short-lived that I could not find any period review. That said, I’m not sure it matters. As we established, getting a diesel meant less power and less towing capacity. The only reason to buy one was to get V8-ish power with V6-ish fuel economy, but that advantage had been largely erased by the mid-1980s.
Still, I find myself in love. Imagine rolling up to a car show, people thinking that you diesel-swapped your El Camino, and then those people find out that it was a factory job. But that was just how General Motors was back then. It believed in diesel so much that it put diesels in darn near anything that moved. I wonder what might have happened had GM gotten these engines correct from the start?
As AI assistants become capable of controlling web browsers, a new security challenge has emerged: users must now trust that every website they visit won't try to hijack their AI agent with hidden malicious instructions. Experts voiced concerns about this emerging threat this week after testing from a leading AI chatbot vendor revealed that AI browser agents can be successfully tricked into harmful actions nearly a quarter of the time.
On Tuesday, Anthropic announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users. Due to security concerns, the extension is only rolling out as a research preview to 1,000 subscribers on Anthropic's Max plan, which costs between $100 and $200 per month, with a waitlist available for other users.
The Claude for Chrome extension allows users to chat with the Claude AI model in a sidebar window that maintains the context of everything happening in their browser. Users can grant Claude permission to perform tasks such as managing calendars, scheduling meetings, drafting email responses, handling expense reports, and testing website features.
The browser extension builds on Anthropic's Computer Use capability, which the company released in October 2024. Computer Use is an experimental feature that allows Claude to take screenshots and control a user's mouse cursor to perform tasks, but the new Chrome extension provides more direct browser integration.
Claude for Chrome demo video by Anthropic.
Zooming out, it appears Anthropic's browser extension reflects a new phase of AI lab competition. In July, Perplexity launched its own browser, Comet, which features an AI agent that attempts to offload tasks for users. OpenAI recently released ChatGPT Agent, a bot that uses its own sandboxed browser to take actions on the web. Google has also launched Gemini integrations with Chrome in recent months.
But this rush to integrate AI into browsers has exposed a fundamental security flaw that could put users at serious risk.
Security challenges and safety measures
In preparation for the Chrome extension launch, Anthropic says it has conducted extensive testing that revealed browser-using AI models can face prompt injection attacks, where malicious actors embed hidden instructions into websites to trick AI systems into performing harmful actions without user knowledge.
The company tested 123 cases representing 29 different attack scenarios and found a 23.6 percent attack success rate when browser use operated without safety mitigations.
One example involved a malicious email that instructed Claude to delete a user's emails for "mailbox hygiene" purposes. Without safeguards, Claude followed these instructions and deleted the user's emails without confirmation.
Anthropic says it has implemented several defenses to address these vulnerabilities. Users can grant or revoke Claude's access to specific websites through site-level permissions. The system requires user confirmation before Claude takes high-risk actions like publishing, purchasing, or sharing personal data. The company has also blocked Claude from accessing websites offering financial services, adult content, and pirated content by default.
These safety measures reduced the attack success rate from 23.6 percent to 11.2 percent in autonomous mode. On a specialized test of four browser-specific attack types, the new mitigations reportedly reduced the success rate from 35.7 percent to 0 percent.
Independent AI researcher Simon Willison, who has extensively written about AI security risks and coined the term "prompt injection" in 2022, called the remaining 11.2 percent attack rate "catastrophic," writing on his blog that "in the absence of 100% reliable protection I have trouble imagining a world in which it's a good idea to unleash this pattern."
By "pattern," Willison is referring to the recent trend of integrating AI agents into web browsers. "I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely," he wrote in an earlier post on similar prompt injection security issues recently found in Perplexity Comet.
The security risks are no longer theoretical. Last week, Brave's security team discovered that Perplexity's Comet browser could be tricked into accessing users' Gmail accounts and triggering password recovery flows through malicious instructions hidden in Reddit posts. When users asked Comet to summarize a Reddit thread, attackers could embed invisible commands that instructed the AI to open Gmail in another tab, extract the user's email address, and perform unauthorized actions. Although Perplexity attempted to fix the vulnerability, Brave later confirmed that its mitigations were defeated and the security hole remained.
For now, Anthropic plans to use its new research preview to identify and address attack patterns that emerge in real-world usage before making the Chrome extension more widely available. In the absence of good protections from AI vendors, the burden of security falls on the user, who is taking a large risk by using these tools on the open web. As Willison noted in his post about Claude for Chrome, "I don't think it's reasonable to expect end users to make good decisions about the security risks."
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.
Scientists have made a major breakthrough in the mystery of how life first emerged on Earth by demonstrating how two essential biological ingredients could have spontaneously joined together on our planet some four billion years ago.
All life on Earth contains ribonucleic acid (RNA), a special molecule that helps build proteins from simpler amino acids. To kickstart this fundamental biological process, RNA and amino acids had to become attached at some point. But this key step, known as RNA aminoacylation, has never been experimentally observed in early Earth-like conditions despite the best efforts of many researchers over the decades.
Now, a team has achieved this milestone in the quest to unravel life’s origins. As they report in a study published on Wednesday in Nature, the researchers were able to link amino acids to RNA in water at a neutral pH with the aid of energetic chemical compounds called thioesters. The work revealed that two contrasting origin stories for life on Earth, known as “RNA world” and “thioester world,” may both be right.
“It unites two theories for the origin of life, which are totally separate,” said Matthew Powner, a professor of organic chemistry at University College London and an author of the study, in a call with 404 Media. “These were opposed theories—either you have thioesters or you have RNA.”
“What we found, which is kind of cool, is that if you put them both together, they're more than the sum of their parts,” he continued. “Both aspects—RNA world and thioester world—might be right and they’re not mutually exclusive. They can both work together to provide different aspects of things that are essential to building a cell.”
In the RNA world theory, which dates back to the 1960s, self-replicating RNA molecules served as the initial catalysts for life. The thioester world theory, which gained traction in the 1990s, posits that life first emerged from metabolic processes spurred on by energetic thioesters. Now, Powner said, the team has found a “missing link” between the two.
Powner and his colleagues didn’t initially set out to merge the two ideas. The breakthrough came almost as a surprise after the team synthesized pantetheine, a component of thioesters, in simulated conditions resembling early Earth. The team discovered that if amino acids are linked to pantetheine, they naturally attach themselves to RNA at molecular sites that are consistent with what is seen in living things. This act of RNA aminoacylation could eventually enable the complex protein synthesis all organisms now depend on to live.
Pantetheine “is totally universal,” Powner explained. “Every organism on Earth, every genome sequence, needs this molecule for some reason or other. You can't take it out of life and fully understand life.”
“That whole program of looking at pantetheine, and then finding this remarkable chemistry that pantetheine does, was all originally designed to just be a side study,” he added. “It was serendipity in the sense that we didn't expect it, but in a scientific way that we knew it would probably be interesting and we'd probably find uses for it. It’s just the uses we found were not necessarily the ones we expected.”
The researchers suggest that early instances of RNA aminoacylation on Earth would most likely have occurred in lakes and other small bodies of water, where nutrients could accumulate in concentrations that could up the odds of amino acids attaching to RNA.
“It's very difficult to envisage any origins of life chemistry in something as large as an ocean body because it's just too dilute for chemistry,” Powner said. For that reason, they suggest future studies of so-called “soda lakes” in polar environments that are rich in nutrients, like phosphate, and could serve as models for the first nurseries of life on Earth.
The finding could even have implications for extraterrestrial life. If life on Earth first emerged due, in part, to this newly identified process, it’s possible that similar prebiotic reactions can be set in motion elsewhere in the universe. Complex molecules like pantetheine and RNA have never been found off-Earth (yet), but amino acids are present in many extraterrestrial environments. This suggests that the ingredients of life are abundant in the universe, even if the conditions required to spark it are far more rare.
While the study sheds new light on the origin of life, there are plenty of other steps that must be reconstructed to understand how inorganic matter somehow found a way to self-replicate and start evolving, moving around, and in our case as humans, conducting experiments to figure out how it all got started.
“We get so focused on the details of what we're trying to do that we don't often step back and think, ‘Oh, wow, this is really important and existential for us,’” Powner concluded.
🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.
Marine biologist and conservationist David Shiffman was an early power user and evangelist for science engagement on the social media platform formerly known as Twitter. Over the years, he trained more than 2,000 early career scientists on how to best use the platform for professional goals: networking with colleagues, sharing new scientific papers, and communicating with interested members of the public.
But when Elon Musk bought Twitter in 2022, renaming it X, changes to both the platform's algorithm and moderation policy soured Shiffman on the social media site. He started looking for a viable alternative among the fledgling platforms that had begun to pop up: most notably Threads, Post, Mastodon, and Bluesky. He was among the first wave of scientists to join Bluesky and found that, even in its infancy, it had many of the features he had valued in "golden age" Twitter.
Shiffman also noticed that he wasn't the only one in the scientific community having issues with Twitter. This impression was further bolstered by news stories in outlets like Nature, Science, and the Chronicle of Higher Education noting growing complaints about Twitter and increased migration over to Bluesky by science professionals. (Full disclosure: I joined Bluesky around the same time as Shiffman, for similar reasons: Twitter had ceased to be professionally useful, and many of the science types I'd been following were moving to Bluesky. I nuked my Twitter account in November 2024.)
A curious Shiffman decided to conduct a scientific survey, announcing the results in a new paper published in the journal Integrative and Comparative Biology. The findings confirm that, while Twitter was once the platform of choice for a majority of science communicators, those same people have since abandoned it in droves. And of the alternatives available, Bluesky seems to be their new platform of choice.
Shiffman, the author of Why Sharks Matter, described early Twitter recently on the blog Southern Fried Science as "the world's most interesting cocktail party."
"Then it stopped being useful," Shiffman told Ars. "I was worried for a while that this incredibly powerful way of changing the world using expertise was gone. It's not gone. It just moved. It's a little different now, and it's not as powerful as it was, but it's not gone. It was for me personally, immensely reassuring that so many other people were having the same experience that I was. But it was also important to document that scientifically."
Eager to gather solid data on the migration phenomenon to bolster his anecdotal observations, Shiffman turned to social scientist Julia Wester, one of the scientists who had joined Twitter at Shiffman's encouragement years before, before also becoming fed up and migrating to Bluesky. Despite being "much less online" than the indefatigable Shiffman, Wester was intrigued by the proposition. "I was interested not just in the anecdotal evidence, the conversations we were having, but also in identifying the real patterns," she told Ars. "As a social scientist, when we hear anecdotal evidence about people's experiences, I want to know what that looks like across the population."
Shiffman and Wester targeted scientists, science communicators, and science educators who used (or had used) both Twitter and Bluesky. Questions explored user attitudes toward, and experiences with, each platform in a professional capacity: when they joined, respective follower and post counts, which professional tasks they used each platform for, the usefulness of each platform for those purposes relative to 2021, how they first heard about Bluesky, and so forth.
The authors acknowledge that they are looking at a very specific demographic among social media users in general and that there is an inevitable self-selection effect. However, "You want to use the sample and the method that's appropriate to the phenomenon that you're looking at," said Wester. "For us, it wasn't just the experience of people using these platforms, but the phenomenon of migration. Why are people deciding to stay or move? How they're deciding to use both of these platforms? For that, I think we did get a pretty decent sample for looking at the dynamic tensions, the push and pull between staying on one platform or opting for another."
They ended up with a final sample size of 813 people. Over 90 percent of respondents said they had used Twitter for learning about new developments in their field; 85.5 percent for professional networking; and 77.3 percent for public outreach. Roughly three-quarters of respondents said that the platform had become significantly less useful for each of those professional uses since Musk took over. Nearly half still have Twitter accounts but use it much less frequently or not at all, while about 40 percent have deleted their accounts entirely in favor of Bluesky.
Making the switch
User complaints about Twitter included a noticeable increase in spam, porn, bots, and promoted posts from users who paid for a verification badge, many spreading extremist content. "I very quickly saw material that I did not want my posts to be posted next to or associated with," one respondent commented. There were also complaints about the rise in misinformation and a significant decline in both the quantity and quality of engagement, with respondents describing their experiences as "unpleasant," "negative," or "hostile."
The survey responses also revealed a clear push/pull dynamic when it came to the choice to abandon Twitter for Bluesky. That is, people felt they were being pushed away from Twitter and were actively looking for alternatives. As one respondent put it, "Twitter started to suck and all the cool people were moving to Bluesky."
Bluesky was user-friendly with no algorithm, a familiar format, and helpful tools like starter packs of who to follow in specific fields, which made the switch a bit easier for many newcomers daunted by the prospect of rebuilding their online audience. Bluesky users also appreciated the moderation on the platform and having the ability to block or mute people as a means of disengaging from more aggressive, unpleasant conversations. That said, "If Twitter was still great, then I don't think there's any combination of features that would've made this many people so excited about switching," said Shiffman.
Per Shiffman and Wester, an "overwhelming majority" of respondents said that Bluesky has a "vibrant and healthy online science community," while Twitter no longer does. And many Bluesky users reported getting more bang for their buck, so to speak, on Bluesky. They might have a lower follower count, but those followers are far more engaged: Someone with 50,000 Twitter/X followers, for example, might get five likes on a given post; but on Bluesky, they may only have 5,000 followers, but their posts will get 100 likes.
According to Shiffman, Twitter always used to be in the top three in terms of referral traffic for posts on Southern Fried Science. Then came the "Muskification," and suddenly Twitter referrals weren't even cracking the top 10. By contrast, in 2025 thus far, Bluesky has driven "a hundred times as many page views" to Southern Fried Science as Twitter. Ironically, "the blog post that's gotten the most page views from Twitter is the one about this paper," said Shiffman.
Ars social media manager Connor McInerney confirmed that Ars Technica has also seen a steady dip in Twitter referral traffic thus far in 2025. Furthermore, "I can say anecdotally that over the summer we’ve seen our Bluesky traffic start to surpass our Twitter traffic for the first time," McInerney said, attributing the growth to a combination of factors. "We’ve been posting to the platform more often and our audience there has grown significantly. By my estimate our audience has grown by 63 percent since January. The platform in general has grown a lot too—they had 10 million users in September of last year, and this month the latest numbers indicate they’re at 38 million users. Conversely, our Twitter audience has remained fairly static across the same period of time."
Bubble, schmubble
As for scientists looking to share scholarly papers online, Shiffman pulled the Altmetrics stats for his and Wester's new paper. "It's already one of the 10 most shared papers in the history of that journal on social media," he said, with 14 shares on Twitter/X vs over a thousand shares on Bluesky (as of 4 pm ET on August 20). "If the goal is showing there's a more active academic scholarly conversation on Bluesky—I mean, damn," he said.
And while there has been a steady drumbeat of op-eds of late in certain legacy media outlets accusing Bluesky of being trapped in its own liberal bubble, Shiffman, for one, has few concerns about that. "I don’t care about this, because I don’t use social media to argue with strangers about politics," he wrote in his accompanying blog post. "I use social media to talk about fish. When I talk about fish on Bluesky, people ask me questions about fish. When I talk about fish on Twitter, people threaten to murder my family because we’re Jewish." He compared the current incarnation of Twitter as no better than 4Chan or TruthSocial in terms of the percentage of "conspiracy-prone extremists" in the audience. "Even if you want to stay, the algorithm is working against you," he wrote.
"There have been a lot of opinion pieces about why Bluesky is not useful because the people there tend to be relatively left-leaning," Shiffman told Ars. "I haven't seen any of those same people say that Twitter is bad because it's relatively right-leaning. Twitter is not a representative sample of the public either." And given his focus on ocean conservation and science-based, data-driven environmental advocacy, he is likely to find a more engaged and persuadable audience at Bluesky.
The survey results show that at this point, Bluesky seems to have hit a critical mass for the online scientific community. That said, Shiffman, for one, laments that the powerful Black Science Twitter contingent, for example, has thus far not switched to Bluesky in significant numbers. He would like to conduct a follow-up study to look into how many still use Twitter vs those who may have left social media altogether, as well as Bluesky's demographic diversity—paving the way for possible solutions should that data reveal an unwelcoming environment for non-white scientists.
There are certainly limitations to the present survey. "Because this is such a dynamic system and it's changing every day, I think if we did this study now versus when we did it six months ago, we'd get slightly different answers and dynamics," said Wester. "It's still relevant because you can look at the factors that make people decide to stay or not on Bluesky, to switch to something else, to leave social media altogether. That can tell us something about what makes a healthy, vibrant conversation online. We're capturing one of the responses: 'I'll see you on Bluesky.' But that's not the only response. Public science communication is as important now as it's ever been, so looking at how scientists have pivoted is really important."
We recently reported on research indicating that social media as a system might well be doomed, since its very structure gives rise to the toxic dynamics that plague so much of social media: filter bubbles, algorithms that amplify the most extreme views to boost engagement, and a small number of influencers hogging the lion's share of attention. That paper concluded that any intervention strategies were likely to fail. Both Shiffman and Wester, while acknowledging the reality of those dynamics, are less pessimistic about social media's future.
"I think the problem is not with how social media works, it's with how any group of people work," said Shiffman. "Humans evolved in tiny social groupings where we helped each other and looked out for each other's interests. Now I have to have a fight with someone 10,000 miles away who has no common interest with me about whether or not vaccines are bad. We were not built for that. Social media definitely makes it a lot easier for people who are anti-social by nature and want to stir conflict to find those conflicts. Something that took me way too long to learn is that you don't have to participate in every fight you're invited to. There are people who are looking for a fight and you can simply say, 'No, thank you. Not today, Satan.'"
"The contrast that people are seeing between Bluesky and present-day Twitter highlights that these are social spaces, which means that you're going to get all of the good and bad of humanity entering into that space," said Wester. "But we have had new social spaces evolve over our whole history. Sometimes when there's something really new, we have to figure out the rules for that space. We're still figuring out the rules for these social media spaces. The contrast in moderation policies and the use (or not) of algorithms between those two platforms that are otherwise very similar in structure really highlights that you can shape those social spaces by creating rules and tools for how people interact with each other."
One of the fun things about Monterey Car Week – and yes, I’m still working through content from that – is that so many of the interesting cars aren’t even in the show itself, but are just prowling around town, and you encounter them on the sides of streets and in parking lots. That was the case with the car I want to show you today, which I spotted around town a few times, and then in a parking lot. This one is an exciting find because it’s a version of one of the most famous sleepers in The Driver’s Republic of Automotivistan. From the outside, it looks like a Volkswagen T3, which we knew as the Vanagon here in America, but underneath it’s all Porsche.
I bet most of you, or at least many of you, know about this car, at least in a general sense. The car I saw is based on one of the rarest Porsches ever made, and Porsche’s only minivan that technically anyone could buy. Well, anyone with a crapload of money and a certain healthy perversion when it comes to cars, at least.
The car I’m talking about is the Porsche B32, which was, essentially, a Volkswagen T3 Transporter with the 3.2-liter air-cooled flat-six from a Porsche 911 Carrera, along with a Porsche AWD system, Porsche suspension, brakes, the whole works. It even had a Porsche VIN so this thing was technically sold as a Porsche, even though it seems only around seven were actually made and sold.
These were first built in the 1980s as a way to have VW bus support vehicles for racing that weren’t so damn slow. The usual story is that these were built to support Porsche 959s in the Paris-Dakar rally, though I’ve also heard that there would have been other, more specialized trucks for that. Still, this does seem to have been the genesis for these wonderfully odd machines.
Because so few were actually built back in the day, and because humans are wonderfully irrational beings, there are now some companies making spiritual re-creations of these bonkers vans, and that’s what I saw in that parking lot. This one seems to be a Claer CT3:
The Claer CT3 essentially uses the same formula as those old B32s, but now a Porsche 996 3.6-liter flat-six making over 500 horsepower, mated to a Porsche six-speed manual, and with Porsche brakes, suspension, everything. It’s not technically a Porsche B32, but it’s essentially the same thing, just updated and I suppose less “official.” But who cares?
These seem to be incredibly well-made and thought out, looking like something that came from the factory as opposed to a tuner shop.
The only hints that you’re looking at something other than a very well-sorted Vanagon are these little vents at the rear, and, I suppose, this badge on the front upper grille:
I wonder what this thing is like to drive at speed? The sounds and feels and probably smells of a 993, but you’re sitting at the height of what would roughly be the 993’s roof, the front wheel is under your butt, and there’s no hood in front of you. I bet it’s pretty surreal. And fantastic.
Plus, it allows you to share that 911 experience with, what, six or so friends? These are amazing. Rare and expensive, sure, but there’s something cooler about a Porsche minivan than another 911, I think.
I don’t think there are many Claer CT3s running around out there, so this was a pretty exciting thing to see in the wild.