
Colorado pumps brakes on e-bikes as teens go full throttle with little federal oversight


LOUISVILLE — E-bike of Colorado sales manager Perry Fletcher said his sales and repair shop saw an increase in back-to-school sales to young riders and families this fall as the popularity of the battery-powered bicycles revs up.

But the kids’ excitement for their new rides is tempered by a recurring question from worried parents: Are they safe?

That can be a difficult question to answer. The federal government’s e-bike regulations are sparse, and efforts to expand them have stalled, leaving states and even counties to fill the void with patchwork rules of their own. Meanwhile, the seemingly endless variety of e-bikes for sale varies in design, speed, and quality.

In that environment, retailers like Fletcher aim to educate consumers so they can make informed decisions.

“We’re super careful about what comes in the shop because there are hazards,” he said.

Federal rules requiring safety standards for batteries in e-bikes and other devices such as e-scooters are in limbo after the Consumer Product Safety Commission, the independent federal regulatory agency meant to protect people against death and injury from bicycles and other consumer products, withdrew proposed regulations in August.

The commission then sent the rules for review by the Office of Information and Regulatory Affairs inside the Office of Management and Budget, responding to President Donald Trump’s February executive order demanding that independent agencies like the CPSC be more aligned with White House priorities. In May, Trump fired three members of the commission who had been appointed by his predecessor, former President Joe Biden.

Meanwhile, separate proposed rules by the commission to address injuries from mechanical failings have languished. Shira Rawlinson, the CPSC’s communications director, said it plans to update the status of both proposed rules.

That leaves e-bikes subject to existing standards written for traditional bicycles, which the commission has said, based on a preliminary assessment, aren’t adequate to reduce the risk of e-bike injuries. Colorado, Minnesota, and Utah recently passed laws regulating e-bikes to fill the gap.

The laws address issues such as battery fire risks and rider safety and seek to distinguish lower-speed e-bikes from faster e-motos, or electric motorcycles, which can reach top speeds of 35 mph or faster. No federal law dictates the age at which someone may operate an e-bike, but more than half of states have age restrictions for who can operate Class 3 bikes, which reach a top speed of 28 mph, while two California counties recently set a minimum age to operate Class 2 bikes, with their 20 mph top speed.

“The biggest issue is e-bikes that switch from a power-assisted bike to essentially a motorized scooter,” said Colorado state Rep. Lesley Smith, a Democrat who co-sponsored Colorado’s bill.

Colorado’s e-bike law requires safety certification of lithium-ion batteries, which can explode when manufactured or used improperly. Such batteries caused 39 deaths and 181 injuries among people using micromobility devices such as e-bikes from 2019 to 2023, according to the CPSC.

A trail marker, along with a trail courtesy reminder, alerts riders that e-bikes are not allowed on the trails outside of Salida on Oct. 4, 2024. With the rise of electric mountain bikes, e-bikes are not allowed on singletrack in some areas. The Salida Mountain Trails group maintains trails on Tenderfoot and Methodist mountains in the area. (David Krause, The Colorado Sun)

Most dealers, importers, and distributors have agreed to use batteries that meet safety standards, but there will always be manufacturers who cut corners on safety to save money, said Ed Benjamin, chairman of the Light Electric Vehicle Association, whose hundreds of members supply light electric vehicles such as e-bikes, or their parts.

“There are some out there who don’t care what is the right thing to do. They just want to make the cheapest bike possible,” Benjamin said.

Amy Thompson, the Safe Routes to School program coordinator for the Boulder Valley School District, said education officials are scrambling to install more bike racks at several schools to meet the increase in e-bike usage.

Students use them to get to school or activities quickly and to carry their sports equipment or instruments with ease, Thompson said. She said she’s seen some alarming behavior, such as students riding three to a bike, riding without helmets, or attempting power wheelies popularized on social media.

Thompson said kids are disabling the speed limiter on e-bikes to operate at higher speeds. “It’s super easy for kids to go on YouTube and find a video that will coach you how to override or disable the governor on a bicycle,” she said.

In September, Thompson alerted parents to monitor their children’s e-bikes, and last fall she described the blurred lines between e-bikes and e-motos.

Those blurred lines bedevil an e-bike classification system adopted, in part or full, by nearly all states, in which e-bike motors generally must operate at 750 watts or lower. Class 1 e-bikes use pedal assist and must not exceed 20 mph; Class 2 e-bikes include a throttle and also must not exceed 20 mph; and Class 3 e-bikes use pedal assist that must not exceed 28 mph.
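As a rough illustration, the three-class scheme described above can be expressed as a small decision rule. This is a hypothetical sketch: the class names and thresholds track the article’s summary, but the `EBike` fields and the fallback labels are illustrative assumptions, not any state’s statutory text.

```python
from dataclasses import dataclass

@dataclass
class EBike:
    motor_watts: int        # e-bike motors generally must be 750 W or lower
    has_throttle: bool      # throttle vs. pedal assist only
    assist_cutoff_mph: int  # speed at which motor assistance stops

def classify(bike: EBike) -> str:
    """Classify a bike under the three-class system summarized in the article."""
    if bike.motor_watts > 750:
        return "not an e-bike (e-moto / motor vehicle)"
    if bike.has_throttle:
        # Class 2: throttle, assistance capped at 20 mph
        return "Class 2" if bike.assist_cutoff_mph <= 20 else "unclassified"
    if bike.assist_cutoff_mph <= 20:
        return "Class 1"  # pedal assist, capped at 20 mph
    if bike.assist_cutoff_mph <= 28:
        return "Class 3"  # pedal assist, capped at 28 mph
    return "unclassified"
```

A bike that can "switch" classes, as described below, is exactly one whose `has_throttle` or `assist_cutoff_mph` values change at the push of a button, which is what makes the scheme hard to enforce.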

Some e-bikes easily switch between Class 2 and 3, sometimes unbeknownst to parents, said Smith, the Colorado lawmaker. A California parent sued an e-bike manufacturer last year, saying it falsely advertised as Class 2 an e-bike that could switch to Class 3.

The dangers of Class 2 e-bikes prompted California’s Marin County to ban children under 16 from operating them and require that anyone riding one wear a helmet. Youths ages 10 to 15 who crash their e-bikes require an ambulance at five times the rate of other age groups involved in e-bike crashes, according to county health officials.

A growing number of serious injuries on e-bikes, particularly among adolescents, is an emerging public safety problem, the American College of Surgeons said in June.

Talia Smith, Marin County’s legislative director, championed the California law that permits Marin County to impose age restrictions. After hearing from a dozen other counties experiencing similar problems, though, she said state legislators should replace piecemeal, county-by-county ordinances with a statewide law. San Diego County bans riders under 12 from operating Class 1 or 2 bikes.

Vehicles claiming to be both e-bikes and e-motos fall into the cracks between two regulatory agencies, the CPSC and the National Highway Traffic Safety Administration, said Matt Moore, general and policy counsel for PeopleForBikes, a trade association for bicycles, including e-bikes.

PeopleForBikes wants the traffic safety administration to stop shipments of or take other legal action against e-motos that are labeled as e-bikes and do not comply with federal standards, Moore said.

If the federal government won’t act, states should clarify their laws to define e-motos as off-road dirt bikes or motor vehicles that require licenses, he said. In October, California legally defined e-motos and now requires them to display an identification plate, issued by the Department of Motor Vehicles, for off-highway use.

In Boulder, Thompson said, the school district considers communication and education cornerstones of safety. Children and teens should learn and practice traffic rules, whether they’re powering two wheels with their own legs or a throttle, she said.

“E-bikes are fun, environmentally friendly, and relatively cheap transportation,” Thompson said. “So how can we make them safer and more viable for families?”


KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.

Read the whole story
LeMadChef
16 hours ago
reply
Denver, CO
Share this story
Delete

OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide


Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI’s filing argued.

Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”

All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of “sensitive evidence” made available to the public, due to its intention to handle mental health-related cases with “care, transparency, and respect.”

The Raine family’s lead lawyer, however, did not describe the filing as respectful. In a statement to Ars, Jay Edelson called OpenAI’s response “disturbing.”

“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

“Amazingly,” Edelson said, OpenAI instead argued that Raine “himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”

Edelson suggested that it’s telling that OpenAI did not file a motion to dismiss—seemingly accepting “the reality that the legal arguments that they have—compelling arbitration, Section 230 immunity, and First Amendment—are paper-thin, if not non-existent.” The company’s filing—although it requested dismissal with prejudice to never face the lawsuit again—puts the Raine family’s case “on track for a jury trial in 2026.”

“We know that OpenAI and Sam Altman will stop at nothing—including bullying the Raines and others who dare come forward—to avoid accountability,” Edelson said. “But, at the end of the day, they will have to explain to a jury why countless people have died by suicide or at the hands of ChatGPT users urged on by the artificial intelligence OpenAI and Sam Altman designed.”

Use ChatGPT “at your sole risk,” OpenAI says

To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones.

“ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,'” the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.”

Although the family was shocked to see that ChatGPT never terminated Raine’s chats, OpenAI argued that it’s not the company’s responsibility to protect users who appear intent on pursuing violative uses of ChatGPT.

The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”

Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes,” OpenAI noted. The company argued that it’s not responsible for users who ignore warnings.

Additionally, OpenAI argued that Raine told ChatGPT that he found information he was seeking on other websites, including allegedly consulting at least one other AI platform, as well as “at least one online forum dedicated to suicide-related information.” Raine apparently told ChatGPT that “he would spend most of the day” on a suicide forum website.

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

More ChatGPT-linked hospitalizations, deaths uncovered

OpenAI has sought to downplay risks to users, releasing data in October “estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent,” Ars reported.

While that may seem small, it amounts to about 1 million vulnerable users, and The New York Times this week cited studies that have suggested OpenAI may be “understating the risk.” Those studies found that “the people most vulnerable to the chatbot’s unceasing validation” were “those prone to delusional thinking,” which “could include 5 to 15 percent of the population,” NYT reported.
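The arithmetic behind that rough count is easy to reproduce. In this sketch, the weekly-active-user figure is an assumption (OpenAI has publicly cited numbers in the 700–800 million range); the article itself does not state it.

```python
# Back-of-the-envelope check of the "about 1 million" figure.
# Assumption: roughly 700 million weekly active users; this article
# does not give the user count, so treat the result as approximate.
weekly_active_users = 700_000_000
at_risk_share = 0.15 / 100  # 0.15 percent, per OpenAI's October estimate

at_risk_users = weekly_active_users * at_risk_share
print(f"roughly {at_risk_users:,.0f} users per week")
```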

OpenAI’s filing came one day after a New York Times investigation revealed how the AI firm came to be involved in so many lawsuits. After speaking with more than 40 current and former OpenAI employees, including executives, safety engineers, and researchers, NYT found that an OpenAI model tweak that made ChatGPT more sycophantic seemed to make the chatbot more likely to help users craft problematic prompts, including those trying to “plan a suicide.”

Eventually, OpenAI rolled back that update, making the chatbot safer, but the change also caused a dip in engagement. As recently as October, the ChatGPT maker seemed to still be prioritizing user engagement over safety, NYT reported. In a memo to OpenAI staff, ChatGPT head Nick Turley “declared a ‘Code Orange,’” four employees told NYT, warning that “OpenAI was facing ‘the greatest competitive pressure we’ve ever seen.’” In response, Turley set a goal to increase the number of daily active users by 5 percent by the end of 2025.

Amid user complaints, OpenAI has continually updated its models, but that pattern of tightening safeguards, then seeking ways to boost engagement, could continue to get OpenAI in trouble as current lawsuits advance and new ones are possibly filed. NYT “uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT,” including nine hospitalizations and three deaths.

Gretchen Krueger, a former OpenAI employee who worked on policy research, told NYT that early on, she was alarmed by evidence that came before ChatGPT’s release showing that vulnerable users frequently turn to chatbots for help. Later, other researchers found that such troubled users often become “power users.” She noted that “OpenAI’s large language model was not trained to provide therapy” and “sometimes responded with disturbing, detailed guidance,” confirming that she joined other safety experts who left OpenAI due to burnout in 2024.

“Training chatbots to engage with people and keep them coming back presented risks,” Krueger said, suggesting that OpenAI knew that some harm to users “was not only foreseeable, it was foreseen.”

For OpenAI, the scrutiny will likely continue until such reports cease. Although OpenAI officially unveiled an Expert Council on Wellness and AI in October to improve ChatGPT safety testing, there did not appear to be a suicide expert included on the team. That likely concerned suicide prevention experts who warned in a letter updated in September that “proven interventions should directly inform AI safety design,” since “the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours”—and chatbots could possibly provide more meaningful interventions in that brief window.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.


HP plans to save millions by laying off thousands, ramping up AI use


HP Inc. said that it will lay off 4,000 to 6,000 employees in favor of AI deployments, claiming the move will help it save $1 billion in annualized gross run-rate costs by the end of its fiscal 2028.

HP expects to complete the layoffs by the end of that fiscal year. The reductions will largely hit product development, internal operations, and customer support, HP CEO Enrique Lores said during an earnings call on Tuesday.

Using AI, HP will “accelerate product innovation, improve customer satisfaction, and boost productivity,” Lores said.

In its fiscal 2025 earnings report released yesterday, HP said:

Structural cost savings represent gross reductions in costs driven by operational efficiency, digital transformation, and portfolio optimization. These initiatives include but are not limited to workforce reductions, platform simplification, programs consolidation and productivity measures undertaken by HP, which HP expects to be sustainable in the longer-term.

AI blamed for tech layoffs

HP’s announcement comes as workers everywhere try to decipher how AI will impact their future job statuses and job opportunities. Some industries, such as customer support, are expected to be more disrupted than others. But we’ve already seen many tech layoffs tied to AI.

Salesforce, for example, announced in October that it had let go of 4,000 customer support employees, with CEO Marc Benioff saying that AI meant “I need less heads.” In September, US senators accused Amazon of blaming its dismissal of “tens of thousands” of employees on the “adoption of generative AI tools” and then replacing the workers with over 10,000 foreign H-1B employees. Last month, Amazon announced it would lay off about 14,000 people to focus on its most promising projects, including generative AI. Last year, Intuit said it would lay off 1,800 people and replace them with AI-focused workers. Klarna and Duolingo have also replaced significant numbers of workers with AI. And in January, Meta announced plans to lay off 5 percent of its workforce as it looks to streamline operations and build its AI business.

That’s just a handful of the tech-company layoffs that have been outright or presumably connected to AI investments.

According to analysis from outplacement services and executive coaching firm Challenger, Gray & Christmas, as of October, technology firms had announced 141,159 job cuts since the year’s start, a 17 percent increase from the same period last year (120,470).
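Those totals are consistent with the stated growth rate; a quick check:

```python
# Verifying the "17 percent increase" implied by the Challenger, Gray &
# Christmas figures quoted above.
cuts_2025 = 141_159  # tech job cuts announced through October this year
cuts_2024 = 120_470  # same period last year

increase = (cuts_2025 - cuts_2024) / cuts_2024
print(f"{increase:.1%}")  # → 17.2%
```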

But some experts question whether AI is really driving corporate layoffs or whether companies are using the buzzy technology as a scapegoat.

Peter Cappelli, a management professor and director of the Center for Human Resources at The Wharton School of the University of Pennsylvania, told CNBC this month that “there’s very little evidence that [AI] cuts jobs anywhere near like the level that we’re talking about.” He noted that effectively using AI to replace human workers is “enormously complicated and time-consuming.”

In September, Gartner analysts predicted that all IT work will involve AI by 2030, compared to 81 percent today. However, humans will remain essential, per VP analysts Alicia Mullery and Daryl Plummer, who said that 75 percent of IT workloads will still involve people.

More broadly, there’s hope that AI will actually lead to more jobs, not fewer. In January, the World Economic Forum released its Future of Jobs Report 2025, which predicted that AI would create 78 million more jobs than it eliminates by 2030. The report was based on data from 1,000 companies with 14 million employees worldwide.

It will be years before we comprehend AI’s impact on the workforce. In the meantime, we can expect AI to be at the center of more layoff announcements, whether people believe the job cuts are solely the result of AI or not.


DOGE “doesn’t exist” anymore. But expert says it’s still not dead.


After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters, Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—a government agency notoriously created by Elon Musk to rapidly and dramatically slash government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting up until two days before the Reuters report dropped.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year.

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up. By June, Congress was divided, largely along party lines, over whether to codify the “DOGE process”—rapidly firing employees, then quickly hiring back whoever was needed—or to declare DOGE a failure that perhaps cost taxpayers more in the long term through lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of its cuts. However, in the absence of a government tracker, Elaine Kamarck, director of the Center for Effective Public Management at the Brookings Institution, put together what might be the best status report showing how badly DOGE rocked government agencies.

Back in June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat’” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend where many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported, while more than 70,000 retired. Both numbers were clear increases (tens of thousands) over exits from government in prior years, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected that DOGE would continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations as court cases drag on, with outcomes so inconsistent that even Kamarck has given up tracking them.

“It’s still like one day the court says, ‘no, you can’t do that,’” Kamarck explained. “Then the next day another court says, ‘yes you can.’” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election, although if Democrats win a majority in the midterm elections, DOGE investigations could start as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that Americans appear to want, as White House spokeswoman Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce, perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to repeal an electric vehicle mandate that the Tesla CEO relied on, Musk posted on X, “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the “most dramatic” earliest examples. In that case, Americans were alarmed to see engineers cut who were responsible for keeping the nation’s nuclear arsenal “safe and ready.” Retention for those posts was already a challenge, due to “high demand in the private sector,” and the number of engineers was considered “too low” ahead of DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies, like USAID, being dismantled. Even staffers DOGE viewed as having seemingly less critical duties—like travel bookers and customer service reps—proved key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways: hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she reported, listing a series of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.


“Hey Google, did you upgrade your AI in my Android Auto?”


Google’s platform for casting audio and navigation apps from a smartphone to a car’s infotainment system beat Apple’s to market by a good while, but that head start has not always kept Android Auto ahead of CarPlay. An upgrade rolling out today may help: provided you already have Gemini on your phone, it can now interact with you while you drive.

What has sometimes felt like a hands-off approach by Google toward Android Auto didn’t reflect an indifference to making inroads into the automotive world. Apple might have its flashy CarPlay Ultra that lets the company take over the look and feel of a car’s digital UI, but outside of an Aston Martin, where will any of us encounter that?

Meanwhile, the confusingly similarly named Android Automotive OS—a version of Android developed to run with the kind of stability required in a vehicle, as opposed to a handheld—has made solid inroads with automakers. You’ll find AAOS running in dozens of makes from OEMs like General Motors, Volkswagen Group, Stellantis, Geely, and more, although not always with the Google Automotive Services—Google Maps, Google Play, and Google Assistant—that impressed us in 2021 when we drove the original Polestar 2.

An Android Auto screenshot showing a request to make a three-hour audio playlist. Can you imagine the playlist you’d get if you asked Grok to do the same thing in a Tesla? Credit: Google

In fact, Android Auto hasn’t been that neglected by Google. It got a big redesign in 2019, then support for a much wider array of infotainment screen sizes and shapes in 2023. But it took a couple of years after its appearance on smartphones for the hands-free “OK Google” assistant to be able to cast itself to your dashboard—arguably one of its most useful applications, since it allows the driver to keep their eyes on the road and their hands on the wheel while changing the temperature or setting the navigation or playing media.

Now it’s “Hey Google,” not “OK Google,” that triggers the assistant, which had started to feel a little left behind in natural language processing and conversational AI compared with other OEM systems—sometimes even AAOS-based ones—that used solutions like those from Cerence running on their own private clouds.

Gemini

Going forward, “Hey Google” will fire up Gemini, as long as it’s running on the Android device being cast to the car’s infotainment system. In fact, we learned of its impending, unspecified arrival a couple of weeks ago, but today is the day, according to Google.

Now, instead of needing to know precise trigger phrases to get Google Assistant to do what you’d like it to do, Gemini should be able to answer the kinds of normal speech questions that so often frustrate me when I try them with Siri or most built-in in-car AI helpers.

For example, you could ask if there are any well-rated restaurants along a particular route, with the ability to have Gemini drill down into search results like menu options. (Whether these are as trustworthy as the AI suggestions that confront us when we use Google as a search engine will need to be determined.) Sending messages should supposedly be easier, with translation into 40 different languages should the need arise, and it sounds like making playlists and even finding info on one’s destination have both become more powerful.

There’s even the dreaded intrusion of productivity, as Gemini can access your Gmail, calendars, tasks, and so on.

A Polestar interior. Google Gemini is coming to all Polestar models. Credit: Polestar

Gemini is also making its way into built-in Google automotive environments. Just yesterday, Polestar announced that Gemini will replace Google Assistant in all its models, from the entry-level Polestar 2 through to soon-to-arrive machines like the Polestar 5 four-door grand tourer.

“Our collaboration with Google is a great example of how we continue to evolve the digital experience in our cars. Gemini brings the next generation of AI voice interaction into the car, and we’re excited to give a first look at how it will enhance the driving experience,” said Polestar’s head of UI/UX, Sid Odedra.


Ukraine Is Jamming Russia’s ‘Superweapon’ With a Song


The Ukrainian Army is knocking a once-hyped Russian superweapon out of the sky by jamming it with a song and tricking it into thinking it’s in Lima, Peru. The Kremlin once called its Kh-47M2 Kinzhal ballistic missiles “invincible.” Joe Biden said the missile was “almost impossible to stop.” Now Ukrainian electronic warfare experts say they can counter the Kinzhal with some music and a redirection order.

As winter begins in Ukraine, Russia has ramped up attacks on power and water infrastructure using the hypersonic Kinzhal missile. Russia has come to rely on massive long-range barrages that include drones and missiles. An overnight attack in early October included 496 drones and 53 missiles, including the Kinzhal. Another attack at the end of October involved more than 700 mixed missiles and drones, according to the Ukrainian Air Force.

“Only one type of system in Ukraine was able to intercept those kinds of missiles. It was the Patriot system, which the United States provided to Ukraine. But, because of the limits of those systems and the shortage of ammunition, Ukraine defense are unable to intercept most of those Kinzhals,” a member of Night Watch—a Ukrainian electronic warfare team—told 404 Media. The representative from Night Watch spoke to me on the condition of anonymity to discuss war tactics.

Kinzhals and other guided munitions navigate by communicating with Russian satellites that are part of the GLONASS system, a GPS-style navigation network. Night Watch uses a jamming system called Lima EW to generate a disruption field that prevents anything in the area from communicating with a satellite. Many traditional jamming systems work by blasting receivers on munitions and aircraft with radio noise. Lima does that, but also sends along a digital signal and spoofs navigation signals. It “hacks” the receiver it's communicating with to throw it off course.

Night Watch shared pictures of the downed Kinzhals with 404 Media that showed a missile with a controlled reception pattern antenna (CRPA), an active antenna that’s meant to resist jamming and spoofing. “We discovered that this missile had pretty old type of technology,” Night Watch said. “They had the same type of receivers as old Soviet missiles used to have. So there is nothing special, there is nothing new in those types of missiles.”

Night Watch told 404 Media that it used Lima to take down 19 Kinzhals in the past two weeks. First, it replaces the missile’s satellite navigation signals with the Ukrainian song “Our Father Is Bandera.”

A downed Kinzhal. Night Watch photo.

Any digital noise or random signal would work to jam the navigation system, but Night Watch wanted to use the song because they think it’s funny. “We just send a song…we just make it into binary code, you know, like 010101, and just send it to the Russian navigation system,” Night Watch said. “It’s just kind of a joke. [Bandera] is a Ukrainian nationalist and Russia tries to use this person in their propaganda to say all Ukrainians are Nazis. They always try to scare the Russian people that Ukrainians are, culturally, all the same as Bandera.”
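Night Watch’s description boils down to serializing the song’s audio data into a bitstream and broadcasting it as noise on the navigation band. A minimal sketch of that serialization step, purely illustrative and not Night Watch’s actual transmitter code (the payload here is a placeholder string standing in for real audio bytes):

```python
# Illustrative only: render a payload's bytes as the kind of
# "010101" bitstream Night Watch describes broadcasting. Any
# payload works, since the receiver just sees it as noise.

def bytes_to_bitstream(payload: bytes) -> str:
    """Render each byte as 8 bits, most significant bit first."""
    return "".join(f"{byte:08b}" for byte in payload)

# Placeholder stand-in for the song's audio data.
song_bytes = b"Our Father Is Bandera"
bits = bytes_to_bitstream(song_bytes)

print(bits[:16])  # first two bytes as bits
```

The point of the joke is that the jammer does not care about the content: once modulated onto the GLONASS frequencies, any bit pattern drowns out the legitimate signal.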

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Once the song hits, Night Watch uses Lima to spoof a navigation signal to the missiles and make them think they’re in Lima, Peru. Once the missile’s confused about its location, it attempts to change direction. These missiles are fast—launched from a MiG-31 they can hit speeds of up to Mach 5.7 or more than 4,000 miles per hour—and an object moving that fast doesn’t fare well with sudden changes of direction.
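The article’s “Mach 5.7 or more than 4,000 miles per hour” figure is worth a back-of-envelope check: Mach number is relative to the local speed of sound, which drops with temperature, so the mph value depends on altitude. The speeds of sound below are standard-atmosphere approximations, not figures from the article:

```python
# Back-of-envelope check of "Mach 5.7 = 4,000+ mph".
MPH_PER_MS = 2.23694  # miles per hour per metre per second

def mach_to_mph(mach: float, speed_of_sound_ms: float) -> float:
    return mach * speed_of_sound_ms * MPH_PER_MS

sea_level = mach_to_mph(5.7, 340.3)      # ~15 C at sea level
stratosphere = mach_to_mph(5.7, 295.0)   # ~-57 C at cruise altitude

print(f"Mach 5.7 at sea level: {sea_level:,.0f} mph")
print(f"Mach 5.7 at altitude:  {stratosphere:,.0f} mph")
```

At sea-level conditions Mach 5.7 works out to roughly 4,300 mph, consistent with the article’s figure; at the cold, high altitudes where the missile actually cruises it is closer to 3,800 mph.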

“The airframe cannot withstand the excessive stress and the missile naturally fails,” Night Watch said. “When the Kinzhal missile tried to quickly change navigation, the fuselage of this missile was unable to handle the speed…and, yeah, it was just cut into two parts…the biggest advantage of those missiles, speed, was used against them. So that’s why we have intercepted 19 missiles for the last two weeks.”

Electronics in a downed Kinzhal. Night Watch photo.

Night Watch told 404 Media that Russia is attempting to defeat the Lima system by loading the missiles with more of the old tech. The goal seems to be to use the different receivers to hop frequencies and avoid Lima’s signal. 

“What is Russia trying to do? Increase the amount of receivers on those missiles. They used to have eight receivers and right now they increase it up to 12, but it will not help,” Night Watch said. “The last one we intercepted, they already used 16 receivers. It’s pretty useless, that type of modification.” 

According to Night Watch, countering Lima by increasing the number of receivers on the missile is a profound misunderstanding of its tech. “They think we make the attack on each receiver and as soon as one receiver attacks, they try to swap in another receiver and get a signal from another satellite. But when the missile enters the range of our system, we cover all types of receivers,” they said. “It’s physically impossible to connect with another satellite, but they think that it’s possible. That’s why they started with four receivers and right now it’s 16. I guess in the future we’ll see 24, but it’s pretty useless.”
