
I Just Can’t With This Tesla FSD User Panicking About Actually, You Know, Driving


Artificial Intelligence (AI) technology is a powerful tool, but like many powerful tools, it has the potential to allow humans to let our natural abilities atrophy. It’s the same way that the invention of the jackhammer pretty much caused humans to lose the ability to pound through feet of concrete and asphalt with our bare fists. We’re already seeing the effects of this with the widespread use of ChatGPT seemingly causing cognitive decline and atrophying writing skills, and now I’m starting to think advanced driver aids, especially more comprehensive ones like Level 2 supervised semi-automated driving systems, are doing the same thing: making people worse drivers.

I haven’t done studies to prove this in any comprehensive way, so at this point I’m still just speculating, like a speculum. I’m not entirely certain a full study is even needed at this point, though, because there are already some people just flat-out admitting to it online, for everyone to see, free of shame and, perhaps, any degree of self-reflection.

Specifically, I’m referring to this tweet that has garnered over two million views so far:

Oh my. If, for some reason, you’re not able to read the tweet, here’s the full text of it:

“The other night I was driving in pouring rain, fully dark, and the car randomly lost GPS. No location. No navigation. Which also meant no FSD. I tried two software resets while driving just to get GPS back. Nothing worked. So there I was, manually driving in terrible conditions, unsure of positioning, no assistance, no guidance. And it genuinely felt unsafe. For me and for the people in the car. Then it hit me. This feeling – the stress, the uncertainty, the margin for error – this is how most drivers feel every single day. No FSD. No constant awareness. No backup. We’ve normalised danger so much that we only notice it when the safety net disappears.”

Wow. Drunk Batman himself couldn’t have beaten an admission like this out of me. There’s so much here, I’m not even really sure where to start. First, it’s night, and it’s “fully dark?” That’s kind of how night works, champ. And, sure, pouring rain is hardly ideal, but it’s very much part of life here on Earth. It’s perfectly normal to feel some stress when driving in the dark, in bad weather, but it’s not “how most drivers feel every single day.” Most drivers are used to driving, and they deal with poor conditions with awareness and caution, but, ideally, not the sort of panic suggested in this tweet.

Also, my quote didn’t replicate the weird spacing and short, staccato paragraphs that made this whole thing read like one of those weird LinkedIn posts where some fake thing someone’s kid said becomes a revelation of B2B best practices, or some shit.

It seems that the reason this guy felt the way he did when the driver aids were removed is that he’s, frankly, not used to actually driving. In fact, if you look at his profile on eX-Twitter, he notes that he’s a Tesla supervisor, which is pretty significantly different than calling yourself a Tesla driver:

[Screenshot: the poster’s eX-Twitter profile]

This is an objectively terrible and deeply misguided way to view your relationship with your car for many reasons, not the least of which is the fact that even if you do consider yourself a “supervisor” – a deeply flawed premise to begin with – the very definition of Level 2 semi-autonomy is that the person “supervising” has to be ready to take over with zero warning, which means you need to be able to drive your damn car, no matter the situation it happens to be in.

If anything, you would think the takeaway here would have been, “Shit, I need to be a more competent driver and less of a candy-ass,” as opposed to coming away thinking, as stated in the tweet,

“We’ve normalised danger so much that we only notice it when the safety net disappears.”

This is so deeply and eye-rollingly misguided I almost don’t know where to start, except I absolutely do know where to start: the idea that the “safety net” is Tesla’s FSD software. Because that is exactly the opposite of how Level 2 systems are designed to work! You, the human, are the safety net! If you’ve already made the arguably lazy and questionable decision to farm out the majority of the driving task to a system that lacks redundant sensor backups and is still barely out of Beta status, then you better damn well be ready to take over when the system fails, because that’s how it’s designed to work.

To be fair, our Tesla Supervisor here did take over when his FSD went down due to loss of a GPS signal, but, based on what he said, he felt “unsafe” for himself and the passengers in the car. The lack of FSD isn’t the problem here; the problem is that the human driver didn’t feel safe operating their own motor vehicle.

Not only was he uncomfortable driving in the inclement weather and lack of light (again, that’s just nighttime, a recurring phenomenon), but the reason he had to debase himself so was because of a technical failure of FSD, which, it should be noted, can happen at any time, without warning. Hence the need to be able to drive a damn car, comfortably.

What does he mean when he says, referring to human driving, “no constant awareness?” Almost every driver I know is constantly aware that they are driving. That’s part of driving. Do people get distracted, look at phones, get lost in reveries, or whatever? Sure they do. That’s not ideal, but it doesn’t mean people aren’t aware.

Unsurprisingly, the poster of this admission has been getting a good bit of blowback in comments from people a little less likely to soil themselves when they have to drive in the rain. So, he provided a follow-up tweet:

I’m not really sure what this follow-up actually clarified, but he did describe the experience in a bit more detail:

“I knew the rough direction but not exactly. I never use my phone while driving, so I rely solely on the car nav. Unfortunately, it wasn’t working, and I had to pull over to double-check where I was going.”

That’s just…driving. This is how all driving was up until about 15 years ago or so. I have an abysmal sense of direction, so I feel like I spent most of my pre-GPS driving life lost at least a quarter of the time I was driving anywhere. But you figure it out. You take some wrong turns, you end up in places you didn’t originally plan to be in, you look at maps or signs or ask someone, and you eventually get there. It wasn’t perfect, but it was what you had, and when we could finally, say, print out MapQuest directions and clip them to the dash, oh man, that was a game changer.

I took plenty of long road trips in marginal cars with no phone and just signs and vague notions to guide me where I was going. If I had to do it today, sure, there would be some significant adapting to exhume my pre-GPS navigational skills – well, skills is too generous a word, so maybe we can just say ability – but I think it could be done. And every driver really should be able to do the same thing.

FSD (Supervised) is a tool, a crutch, and if you find yourself in a position where its absence is causing you fear instead of just a bit of annoyance, you’re no longer really qualified to drive a car. Teslas (and other mass-market cars with similar L2 driver assist systems) don’t have redundant sensors, most don’t have the means to clean camera lenses (or radar/lidar windows and domes), and none of them are rated for actually unsupervised driving. Which means that you, the person in the driver’s seat, need to actually live up to the name of that seat: you have to know how to drive a damn car.

This tweet should be taken as a warning, because while it’s fun to feel all smug because you can drive in the rain and ridicule this hapless fellow, I guarantee you he’s not alone. There are other people whose driving skills are atrophying because of reliance on systems like Tesla’s FSD, and this is a very bad path to go down. Our Tesla Supervisor here may actually have been unsafe when he had to take full control of the car and didn’t feel comfortable. And that’s not a technical problem, it’s a perception problem, and it’s not even the original poster’s fault entirely – there is a lot of encouragement from Tesla and the surrounding community to consider FSD to be far more capable than it actually is.


Driving is dangerous, and it’s good to feel that, sometimes! You should always be aware that when you’re driving, you’re in a metal-and-plastic, ton-and-a-half box hurtling down haphazardly maintained roads at a mile per minute. If that’s not a little scary to you, then you’re either a liar, a corpse, or one of those kids who started karting at four years old.

We all need to accept the reality of what driving is, and the inherent, wonderful madness behind it. I personally know myself well enough to realize how easily I can be lured into false senses of security by modern cars and start driving like a moron; to combat this, my preferred daily drivers are ridiculous, primitive machines incapable of hiding the fact that they’re just metal boxes with lots of sharp, poke-y bits that are whizzing along far too quickly. Which, in the case of my cars, can mean speeds of, oh, 45 mph.

The point is, everyone on the road should be able to capably drive, in pretty much any conditions, without the aid of some AI. Even when we have more advanced automated driving systems, this should still be the case, at least for vehicles capable of being driven by a human. But for right now, systems like FSD are not the safety net: the safety net is always us. We’re always responsible when we’re in the driver’s seat, and if we forget that, we could end up in far worse situations than just embarrassing ourselves online.

But that can happen too, of course.

The post I Just Can’t With This Tesla FSD User Panicking About Actually, You Know, Driving appeared first on The Autopian.


Banks Are Now Letting Rich People Use Their Insane Car Collections As Collateral To Get Even Richer


One of the many ways wealthy people retain and grow their wealth is through the art of secured loans. Also known as collateral- or securities-based financing, it allows someone to put up some sort of asset—real estate, stocks, bonds, etc.—as collateral to secure a favorable loan from a bank.

There are three main benefits of taking a loan out on an asset rather than selling that asset to get the cash. The first is speed. Selling a piece of real estate could take months or years, but a bank can transfer cash into your account in no time at all. Rich people routinely do this to make big purchases or quick investments.

The second is to avoid taxes. If you sell a stock you’ve made money on, you’ll be subject to capital gains tax. But by using the stock as collateral, you get to keep it in your portfolio and avoid being taxed. The third is lower interest rates. Securities-based interest rates are usually lower than those on traditional loans, since you’re putting up a specific piece of collateral that can be seized by the bank if you default on the loan.
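To put rough numbers on that sell-versus-borrow calculation, here is a minimal back-of-the-envelope sketch in Python; the asset value, cost basis, tax rate, loan-to-value, and interest rate are all illustrative assumptions I picked for the example, not figures from the article:

# Toy comparison: selling an appreciated asset vs. borrowing against it.
# Every number below is an illustrative assumption, not tax or lending advice.
asset_value = 1_000_000      # current market value of the collateral
cost_basis = 400_000         # what the owner originally paid
cap_gains_rate = 0.20        # assumed long-term capital gains rate
loan_to_value = 0.50         # assume the bank lends 50% of appraised value
loan_rate = 0.06             # assumed securities-backed lending rate
years = 3                    # how long the loan is carried

# Option A: sell the asset to raise cash.
gain = asset_value - cost_basis
tax = gain * cap_gains_rate
cash_from_sale = asset_value - tax

# Option B: keep the asset and borrow against it.
loan_amount = asset_value * loan_to_value
interest_paid = loan_amount * loan_rate * years   # simple interest, for clarity

print(f"Sell:   ${cash_from_sale:,.0f} cash now, ${tax:,.0f} tax, asset gone")
print(f"Borrow: ${loan_amount:,.0f} cash now, ${interest_paid:,.0f} interest, asset kept")

With these made-up numbers, the borrower raises less cash up front but pays $90,000 in interest instead of a $120,000 tax bill, and still owns the (possibly appreciating) asset at the end. That, in a nutshell, is the appeal.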

In addition to more traditional securities, banks have also begun accepting more unusual assets as securities for loans. Art pieces, watches, jewelry, and wine collections have become popular collateral in recent years. Now, you can add car collections to that list.

To These People, Expensive Cars Are Basically Investments You Can Drive

If you think these are cars that are meant to be driven and not appreciating assets that should be kept safe and secure, you’re not looking at cars like a wealthy person would. Source: Mercedes Streeter

JPMorgan Chase & Co., the biggest bank on the planet, today announced plans to expand car-collection-based lending services, which allow people to borrow against their rare, vintage, or custom vehicles, to Europe (it was already available in the U.S., unsurprisingly). According to Bloomberg, cars are a pretty important asset for younger, wealthy individuals:

The lending push comes as wealthy individuals use car collections — and other physical assets — as a way to diversify fortunes, building on the auto sector’s traditional status for passion projects. Classic cars from European brands such as Ferrari NV, Porsche AG and Mercedes-Benz Group AG have outperformed stock markets in recent years, and the overall market still grew in 2024, even amid a broader downturn for luxury assets.

High-end automobiles rank as the most popular luxury asset younger members of the world’s ultra-rich aspire to own personally besides real estate, according to Knight Frank’s 2025 Wealth Report. That puts cars ahead of demand for private jets, wine and art collections and superyachts.

This type of loan is, obviously, a lot different than the type of car-backed loans most people are familiar with, known as title loans. While these mega car collection-backed loans provide favorable terms and rates to their borrowers, title loans are often predatory in nature, trapping borrowers in a cycle of debt with massively high interest rates and fees. Title loans are usually tied to just one vehicle, and typically have far shorter terms (15 to 30 days, according to Experian). Title loans are so predatory, they’re actually banned in nearly half of U.S. states.

“Hi, yes, I’d like to take a loan out against my collection, which includes a 1959 Goggomobile Dart Roadster.” – Whoever currently owns this car, probably. Source: Mecum Auctions

As an enthusiast who likes to see cars being driven, this is particularly sad to me. Sure, there are a handful of ultra-rare, 1-of-1 museum-piece vehicles that should probably stay off public roads. But the vast majority of “collector” cars deserve time on the street, not stashed away in a climate-controlled building at the back of some dude’s Hamptons estate. Taking out loans on these collections will, presumably, further discourage their owners from driving the cars in their collections, lest they risk their leveraged investment plunging in value thanks to a few extra miles on the odometer.

It’s in these moments that I wonder what I’d do if I were one of these rich people. Would I use my collection of old BMWs and weird French cars as collateral for my next big real estate acquisition? Or would I stick to my guns? Alas, I am not wealthy and likely never will be. It’s a whole different world out there. And now loans backed by your dream cars are helping fuel it.

Top graphic image: Mercedes Streeter

The post Banks Are Now Letting Rich People Use Their Insane Car Collections As Collateral To Get Even Richer appeared first on The Autopian.


Instead of fixing WoW’s new floating house exploit, Blizzard makes it official


Long-time World of Warcraft players have been waiting 21 years for the new in-game housing features that Blizzard officially announced last year and which launched in early access last week. Shortly after that launch, though, players quickly discovered a way to make their houses float high above the ground by exploiting an unintended, invisible UI glitch.

Now, Blizzard says that the overwhelming response to that accidental house hovering has been so strong that it’s pivoting to integrate it as an official part of the game.

“We were going to fix flying houses to bring them back to terra firma, but you all made such awesome stuff, so we made it possible with the base UI instead,” WoW principal designer Jesse Kurlancheek posted on social media Tuesday. Lead producer Kyle Hartline followed up on that announcement with some behind-the-scenes gossip: “Like no joke we had an ops channel about how to roll out the float fix but folks shared like 5 of the dopest houses and we all kinda immediately agreed this was way too cool to change,” he wrote.

In a forum post formally announcing the official UI change, Community Manager Randy “Kaivax” Jordan noted that the team “quickly” got to work on enabling the floating house UI after seeing the community “almost immediately” embrace the glitch. But Kaivax also notes that the undersides of houses were never intended to be visible, and thus “aren’t modeled or textured.” Players who make floating houses “may decide to hide that part behind other things,” Kaivax suggests.

Players with houses that float too high may also have problems positioning the camera so they can click the door to enter the house. For this problem, Kaivax suggests that “you might want to consider building a ramp or a jumping puzzle or a mount landing spot, etc.”

WoW’s floating houses join a long legacy of beloved game features that weren’t originally intended parts of a game’s design, from Street Fighter II’s combo system to Doom’s “rocket jump.” Now if only we could convince Blizzard to make Diablo III gold duplication an official feature.


Why I’m Still Completely Obsessed With My BMW i3


You’d think the honeymoon phase of car ownership would wear away after a while. That after a year or so, what had been framed in my mind as a larger-than-life “Carbon Fiber Wonder from Leipzig” would fall to earth and become simply my little commuter car. But no, this hasn’t happened at all. Every time I sit in my BMW i3 and go for a drive — even just a mundane commute — I think to myself, “Wow this is a great car.” Here’s why.

I’ve been wrenching hard on my WWII Jeep this past week, and before that, I was road-tripping my 1992 Jeep Comanche from LA to Portland and back. You’d think I’d be in Jeep-mode right now, and while I’m pretty much always in Jeep mode, for about an hour each day, I’m not. I’m in BMW i3 mode.

My commute should be miserable. I’m stuck in LA traffic far too often, the drivers here have road rage like you wouldn’t believe, parking is horrible, the heat is unbearable, fuel is expensive, the roads aren’t particularly scenic or well-maintained — pretty much everything about driving here should be awful. But it isn’t, because the tool I have for the job performs its task truly flawlessly.

[Photo: BMW i3, rear quarter view]

I say this a couple of days after I had to drop my friend Brandon off at the airport (LAX). “Oh crap, I forgot to charge my car,” I told him. “Not a worry,” I continued. “I have a gas generator in the back.” Brandon then told me about the time he forgot to charge a Jeep Wagoneer S; that was a pain in the ass.

After almost three years piloting BMW i3s (a 2014, then a 2021), I’ve come to realize that EREV technology is basically the perfect solution. So many folks call it a compromise — a pointless, heavy, expensive compromise that adds complexity to a vehicle that would otherwise be rather simple. But actually, a gasoline range extender is a compromise reducer. I drive the i3 however I want; I don’t have to change my behavior at all. I plug in most days, and drive 150 miles all-electric. This suits 99 percent of my driving needs. In the edge-case scenario where I need more range, instead of carrying around a $7,000 battery that weighs half a ton, I carry around a little 400-pound gas generator whose oil I change annually.
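Just to put ballpark numbers on that trade-off, here’s a quick sketch; the consumption, pack energy density, and pack cost figures below are my own rough assumptions for illustration, not BMW specs or figures from the article:

# Back-of-the-envelope: extra battery for a 320-mile pack vs. a 150-mile EREV pack.
# Every figure here is an illustrative assumption, not a real specification.
wh_per_mile = 250            # assumed consumption, watt-hours per mile
pack_wh_per_kg = 160         # assumed pack-level energy density
pack_cost_per_kwh = 130      # assumed pack-level cost, $/kWh

def pack_for(miles):
    kwh = miles * wh_per_mile / 1000
    mass_kg = kwh * 1000 / pack_wh_per_kg
    cost = kwh * pack_cost_per_kwh
    return kwh, mass_kg, cost

small_kwh, small_kg, small_cost = pack_for(150)   # EREV-sized pack
big_kwh, big_kg, big_cost = pack_for(320)         # "road trip" pack

print(f"Extra battery for 320 mi vs 150 mi: ~{big_kg - small_kg:.0f} kg, ~${big_cost - small_cost:,.0f}")
print("...versus a small gas generator carried for the rare trips that need it")

With those assumptions, the bigger pack costs thousands of dollars and adds a few hundred kilograms that get hauled around on every single commute, which is the math behind the "compromise reducer" argument.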

The refrain I hear all too often that the average person only needs 100 miles of electric range is silly. People have been purchasing vehicles for edge-case scenarios since the beginning of the automotive timeline. Whether that edge case is your kid’s friends needing a ride home from soccer practice (so you buy a three-row), or that annual camping trip you take (so you buy a 4×4), or that canyon road you like to hit every couple of years (so you buy a sports car), or that occasional refrigerator you have to carry (so you buy a truck), this is just how people buy cars. Ignoring the edge case is ignoring human nature, and automakers do so at their own peril. The truth is that people buy cars for what they’re capable of, even if those people rarely do whatever that is; people want 300+ miles of range, and to give that to them with a technology that costs less, weighs less, and helps eliminate infrastructure worries — it’s just awesome. And I say that as someone who doesn’t have a horse in the race: EREVs are incredible.

[Photo: BMW i3, gas fill-up]

But it’s not just the range extender technology that I love about the i3; it’s everything about the car. The interior remains a simply fantastic place to spend time, with a gorgeous eucalyptus dashboard, olive leaf-dyed leather seats with wool inserts, “kenaf” fibers making up the door cards and dash, and huge windows that let in lots of light:

[Photo: BMW i3 interior]

The carbon fiber chassis is just cool, and so are the plastic body panels. I park this thing wherever I want, and I don’t ever worry about someone opening their car door and dinging mine. I have XPEL PPF protecting the paint, and the plastic panels won’t ding.

What’s more, the car’s size and hilariously small turning circle make it an amazing city car. I can park it anywhere, I can do highly illegal U-turns (known here in LA as “flipping a bitch”) without anyone noticing, and with the torquey electric motor, I can easily merge onto freeways as scary as the 110 and change lanes on the 405 even when there’s just a tiny gap in traffic.

The coach doors can be a challenge in parking lots, as you have to open the front door to open the rear door, and in tight spots, this traps you in a tiny space between the doors and the i3, but that’s a small compromise for such a small and practical footprint. There’s tons of room in the i3; even a child seat fits in it easily:

[Photo: BMW i3 interior, wide view]

I bought my first BMW i3 back in early 2023, and after about a year and a half, I made a terrible financial decision and dropped $30,000 on what I consider the Holy Grail of i3s — a final-model-year Galvanic Gold BMW i3S REx with Giga World interior and Harman Kardon sound system. I thought I would regret this purchase, but in fact, I do not. Not even one bit.

Not only is the car phenomenal at fulfilling its intended purpose, but I regularly receive inquiries from people asking if I could help them find a BMW i3 just like mine, because the inventory has simply dried up. BMW i3s are rare, and the ones you really want — 2019 and up models with the range extender and one of the two leather interiors — are basically impossible to find. I managed to snag mine just as the very last models were coming off lease, and my goodness, am I happy I did; the well is now dry.

Anyway, it has been far too long since I extolled the virtues of what I consider the greatest city car of all time, and with BMW’s new boss being one of the brains behind the i3, I figured I’d use that news peg to write an update. I still love my BMW i3. In fact, I think I love it more than ever.

All Photos: David Tracy

The post Why I’m Still Completely Obsessed With My BMW i3 appeared first on The Autopian.


Barnum's Law of CEOs


It should be fairly obvious to anyone who's been paying attention to the tech news that many companies are pushing the adoption of "AI" (large language models) among their own employees--from software developers to management--and the push is coming from the top down, as C-suite executives order their staff to use AI, Or Else. But we know that LLMs reduce programmer productivity -- one major study showed that "developers believed that using AI tools helped them perform 20% faster -- but they actually worked 19% slower." (Source.)

Another recent study found that 87% of executives are using AI on the job, compared with just 27% of employees: "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees. It also finds that executives are 45% more likely to use the technology on the job than Gen Zers, the youngest members of today's workforce and the first generation to have grown up with the internet.

"The findings are based on a survey of roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand. It was commissioned by HR software company Dayforce and conducted online from July 22 to August 6."

Why are executives pushing the use of new and highly questionable tools on their subordinates, even when they reduce productivity?

I speculate that to understand this disconnect, you need to look at what executives do.

Gordon Moore, co-founder and long-time CEO of Intel, explained how he saw the CEO's job in his book on management: a CEO is a tie-breaker. Effective enterprises delegate decision-making to the lowest level possible, because obviously decisions should be made by the people most closely involved in the work. But if a dispute arises, for example between two business units disagreeing on which of two projects to assign scarce resources to, the two units need to consult a higher-level management team about where their projects fit into the enterprise's priorities. Then the argument can be settled ... or not, in which case it propagates up through the layers of the management tree until it lands in the CEO's in-tray. At which point, the buck can no longer be passed on and someone (the CEO) has to make a ruling.

So a lot of a CEO's job, aside from leading on strategic policy, is to arbitrate between conflicting sides in an argument. They're a referee, or maybe a judge.
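As a toy illustration of that escalation model (this sketch is entirely mine, not anything from Moore's book), you can picture the org chart as a tree in which a disputed decision bubbles upward until it reaches the lowest manager who sits above both parties:

# Toy model of decision escalation up a management tree.
# The org chart and names are invented for illustration.
managers = {                      # person -> their direct manager
    "unit_a_lead": "vp_products",
    "unit_b_lead": "vp_products",
    "vp_products": "ceo",
    "vp_sales": "ceo",
    "ceo": None,
}

def chain(person):
    """Management chain from a person all the way up to the CEO."""
    out = []
    while person is not None:
        out.append(person)
        person = managers.get(person)
    return out

def arbiter(side_a, side_b):
    """Lowest common manager who has to break the tie between two parties."""
    above_b = set(chain(side_b))
    for boss in chain(side_a):
        if boss in above_b and boss not in (side_a, side_b):
            return boss
    return "ceo"   # the buck stops here

print(arbiter("unit_a_lead", "unit_b_lead"))   # vp_products can settle this one
print(arbiter("unit_a_lead", "vp_sales"))      # only the ceo can

The point of the toy model is just that the CEO is, structurally, the terminal arbiter: every unresolved argument eventually lands on that one desk.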

Now, today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question, it does not answer the question; it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer.
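To make "probabilistic model of language" a bit more concrete, here's a toy next-token sampler; the vocabulary and probabilities are invented for the example and have nothing to do with any real model, but the sampling step illustrates the same basic idea:

import random

# Toy next-token distribution for the prompt "Will this project succeed?"
# A real LLM has a vocabulary of ~100,000 tokens and probabilities produced
# by a neural network, but the principle is the same: pick what *sounds*
# likely next, with no notion of whether it is *true*.
next_token_probs = {
    "Yes,": 0.40,
    "Absolutely,": 0.25,
    "Our": 0.20,
    "No,": 0.10,
    "Unclear;": 0.05,
}

def sample(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample(next_token_probs))   # fluent-sounding, confidence-weighted, truth-agnostic

Repeat that step a few hundred times and you get paragraphs with the shape of an answer, which is exactly what makes them so persuasive in a boardroom.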

LLMs are effectively optimized for bamboozling CEOs into mistaking their output for intelligent activity rather than autocomplete on steroids. And so the corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

(And this false optimization serves the purposes of the AI companies very well indeed because CEOs make the big ticket buying decisions, and internally all corporations ultimately turn out to be Stalinist command economies.)

Anyway, this is my hypothesis: we're seeing an insane push for LLM adoption in all lines of work, however inappropriate, because they directly exploit a cognitive bias to which senior management is vulnerable.


US cyber defense chief accidentally uploaded secret government info to ChatGPT


Alarming critics, the acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, accidentally uploaded sensitive information to a public version of ChatGPT last summer, Politico reported.

According to "four Department of Homeland Security officials with knowledge of the incident," Gottumukkala's uploads of sensitive CISA contracting documents triggered multiple internal cybersecurity warnings designed to "stop the theft or unintentional disclosure of government material from federal networks."

Gottumukkala's uploads happened soon after he joined the agency and sought special permission to use OpenAI's popular chatbot, which most DHS staffers are blocked from accessing, DHS confirmed to Ars. Instead, DHS staffers use approved AI-powered tools, like the agency's DHSChat, which "are configured to prevent queries or documents input into them from leaving federal networks," Politico reported.

It remains unclear why Gottumukkala needed to use ChatGPT. One official told Politico that, to staffers, it seemed like Gottumukkala "forced CISA’s hand into making them give him ChatGPT, and then he abused it."

The information Gottumukkala reportedly leaked was not confidential but marked "for official use only." That designation, a DHS document explained, is "used within DHS to identify unclassified information of a sensitive nature" that, if shared without authorization, "could adversely impact a person's privacy or welfare" or impede how federal and other programs "essential to the national interest" operate.

There's now a concern that the sensitive information could be used to answer prompts from any of ChatGPT's 700 million active users.

OpenAI did not respond to Ars' request to comment, but Cyber News reported that experts have warned "that using public AI tools poses real risks because uploaded data can be retained, breached, or used to inform responses to other users."

Sources told Politico that DHS investigated the incident for potentially harming government security—which could result in administrative or disciplinary actions, DHS officials told Politico. Possible consequences could range from a formal warning or mandatory retraining to "suspension or revocation of a security clearance," officials said.

However, CISA's director of public affairs, Marci McCarthy, declined Ars' request to confirm if that probe, launched in August, has concluded or remains ongoing. Instead, she seemed to emphasize that Gottumukkala's access to ChatGPT was only temporary, while suggesting that the ChatGPT use aligned with Donald Trump's order to deploy AI across government.

"Acting Director Dr. Madhu Gottumukkala was granted permission to use ChatGPT with DHS controls in place," McCarthy said. "This use was short-term and limited. CISA is unwavering in its commitment to harnessing AI and other cutting-edge technologies to drive government modernization and deliver" on Trump's order.

Scrutiny of cyber defense chief remains

Gottumukkala has not had a smooth run as acting director of the top US cyber defense agency after Trump's pick to helm the agency, Sean Plankey, was blocked by Sen. Rick Scott (R-Fla.) "over a Coast Guard shipbuilding contract," Politico noted.

DHS Secretary Kristi Noem chose Gottumukkala to fill in after he previously served as her chief information officer, overseeing statewide cybersecurity initiatives in South Dakota. CISA celebrated his appointment with a press release boasting that he had more than 24 years of experience in information technology and a "deep understanding of both the complexities and practical realities of infrastructure security."

However, critics "on both sides of the aisle" have questioned whether Gottumukkala knows what he's doing at CISA, Cyberscoop reported. That includes staffers who stayed on and staffers who prematurely left the agency due to uncertainty over its future, Politico reported.

At least 65 staffers have been curiously reassigned to other parts of DHS, Cyberscoop reported, inciting Democrats' fears that CISA staffers are possibly being pushed over to Immigration and Customs Enforcement (ICE).

The same fate almost befell Robert Costello, CISA's chief information officer, who was reportedly involved with meetings last August probing Gottumukkala's improper ChatGPT use and "the proper handling of for official use only material," Politico reported.

Earlier this month, staffers alleged that Gottumukkala took steps to remove Costello from his CIO position, which he has held for the past four years. But that plan was blocked after "other political appointees at the department objected," Politico reported. Until others intervened to permanently thwart the reassignment, Costello was supposedly given "roughly one week" to decide if he would take another position within DHS or resign, sources told Politico.

Gottumukkala has denied that he sought to reassign Costello over a personal spat that Politico's sources said sprang from "friction because Costello frequently pushed back against Gottumukkala on policy matters." He insisted that "senior personnel decisions are made at the highest levels at the Department of Homeland Security’s Headquarters and are not made in a vacuum, independently by one individual, or on a whim."

The reported move looked particularly shady, though, because Costello "is seen as one of the agency’s top remaining technical talents," Politico reported.

Congress questioned ongoing cybersecurity threats

This month, Congress grilled Gottumukkala about mass layoffs last year that shrank CISA from about 3,400 staffers to 2,400. The steep cuts seemed to threaten national security and election integrity, lawmakers warned, and potentially have left the agency unprepared for any potential conflicts with China.

At a hearing held by the House Homeland Security Committee, Gottumukkala said that CISA was "getting back on mission" and plans to reverse much of the damage done last year to the agency.

However, some of his responses did not inspire confidence, including a failure to forecast "how many cyber intrusions CISA expects from foreign adversaries as part of the 2026 midterm elections," the Federal News Network reported. In particular, Rep. Tony Gonzales (R-Texas) criticized Gottumukkala for not having "a specific number in mind."

"Well, we should have that number," Gonzales said. "It should first start by how many intrusions that we had last midterm and the midterm before that. I don’t want to wait. I don’t want us waiting until after the fact to be able to go, ‘Yeah, we got it wrong, and it turns out our adversaries influenced our election to that point.’"

Perhaps notably, Gottumukkala also dodged questions about reports that he failed a polygraph when attempting to seek access to other "highly sensitive cyber intelligence," Politico reported.

The acting director apparently blamed six career CISA staffers for requesting that he agree to the polygraph test, which the staffers said was typical protocol but Gottumukkala later claimed was misleading.

Failing the test isn't necessarily damning, since anxiety or technical errors could trigger a negative result. However, Gottumukkala appears touchy about the test that he now regrets sitting for, calling the test "unsanctioned" and refusing to discuss the results.

It seems that Gottumukkala felt misled after learning that he could have requested a waiver to skip the polygraph. In a letter suspending those staffers' security clearances, CISA accused staff of showing "deliberate or negligent failure to follow policies that protect government information." However, staffers may not have known that he had that option, which is considered a "highly unusual loophole that may not have been readily apparent to career staff," Politico noted.

Staffers told Politico that Gottumukkala's tenure has been a "nightmare"—potentially ruining the careers of longtime CISA staffers. It troubles some that it seems that Gottumukkala will remain in his post "for the foreseeable future," while seeming to politicize the agency and bungle protocols for accessing sensitive information.

According to Nextgov, Gottumukkala plans to right the ship with "a hiring spree in 2026 because its recent reductions have hampered some of the Trump administration’s national security goals."

In November, the trade publication Cybersecurity Dive reported that Gottumukkala sent a memo confirming the hiring spree was coming that month, while warning that CISA remains "hampered by an approximately 40 percent vacancy rate across key mission areas." All those cuts were "spurred by the administration’s animus toward CISA over its election security work," Cybersecurity Dive noted.

"CISA must immediately accelerate recruitment, workforce development, and retention initiatives to ensure mission readiness and operational continuity," Gottumukkala told staffers at that time, then later went on to reassure Congress this month that the agency has “the required staff” to protect election integrity and national security, Cyberscoop reported.

Comment from LeMadChef:

"accidentally" - these damn chatboxes are omnipresent and in nearly every application. I've pasted something from my copy buffer in the wrong window more times than I can remember.

This is not "accidental" - it's by design, so these AI companies can hoover up as much data as possible.