Code Monger, cyclist, sim racer and driving enthusiast.
8017 stories · 5 followers

Microsoft’s new “Recall” feature will record everything you do on your PC

1 Share
A screenshot of Microsoft's new "Recall" feature in action. (credit: Microsoft)

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

"Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots."

By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for. It also allows users to search through teleconference meetings they've participated in and videos watched using an AI-powered feature that transcribes and translates speech.

At first glance, the Recall feature seems like it may set the stage for potential gross violations of user privacy. Despite reassurances from Microsoft, that impression persists for second and third glances as well. For example, someone with access to your Windows account could potentially use Recall to see everything you've been doing recently on your PC, which might extend beyond the embarrassing implications of pornography viewing and actually threaten the lives of journalists or perceived enemies of the state.

Despite the privacy concerns, Microsoft says that the Recall index remains local and private on-device, encrypted in a way that is linked to a particular user's account. "Recall screenshots are only linked to a specific user profile and Recall does not share them with other users, make them available for Microsoft to view, or use them for targeting advertisements. Screenshots are only available to the person whose profile was used to sign in to the device," Microsoft says.

Users can pause, stop, or delete captured content and can exclude specific apps or websites. Recall won't take snapshots of InPrivate web browsing sessions in Microsoft Edge or DRM-protected content. However, Recall won't actively hide sensitive information like passwords and financial account numbers that appear on-screen.

Microsoft previously explored somewhat similar functionality with the Timeline feature in Windows 10, which the company discontinued in 2021, though Timeline didn't take continuous snapshots. Recall also shares some obvious similarities with Rewind, a third-party app for Mac we covered in 2022 that logs user activities for later playback.

As you might imagine, all this snapshot recording comes at a hardware penalty. To use Recall, users will need to purchase one of the new "Copilot Plus PCs" powered by Qualcomm's Snapdragon X Elite chips, which include the necessary neural processing unit (NPU). There are also storage requirements for running Recall: at least 256GB of total drive space, with 50GB of it available. The default allocation for Recall on a 256GB device is 25GB, which can store approximately three months of snapshots. Users can adjust the allocation in their PC settings, with old snapshots being deleted once the allocated storage is full.
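As a rough sanity check on those numbers, here is a back-of-envelope sketch. The snapshot interval and daily active-use figures are assumptions for illustration, not figures from Microsoft:

```python
# Back-of-envelope: what does "25 GB holds ~3 months of snapshots" imply?
# Assumed (not from Microsoft): ~8 hours of active screen time per day,
# one snapshot roughly every 5 seconds.
GB = 1024**3
allocation_bytes = 25 * GB
days = 90
snaps_per_day = 8 * 3600 // 5            # 5,760 snapshots per active day
total_snaps = snaps_per_day * days       # 518,400 snapshots over 3 months
avg_kb = allocation_bytes / total_snaps / 1024
print(f"{total_snaps:,} snapshots, ~{avg_kb:.0f} KB each on average")
# → 518,400 snapshots, ~51 KB each on average
```

Roughly 50 KB per stored screenshot is plausible for heavily compressed images of text-heavy screens, so Microsoft's three-month figure is at least in a believable range under these assumptions.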

As far as availability goes, Microsoft says that Recall is still undergoing testing. "Recall is currently in preview status," Microsoft says on its website. "During this phase, we will collect customer feedback, develop more controls for enterprise customers to manage and govern Recall data, and improve the overall experience for users."

LeMadChef
1 hour ago
Denver, CO

Neuralink to implant 2nd human with brain chip as 85% of threads retract in 1st

1 Comment
A person's hand holding a Neuralink implant device about the size of a coin. (credit: Neuralink)

Only 15 percent of the electrode-bearing threads implanted in the brain of Neuralink's first human brain-chip patient continue to work properly, according to a report from The Wall Street Journal. The remaining 85 percent of the threads became displaced, and many of those left receiving little to no signal have been shut off.

In a May 8 blog post, Neuralink had disclosed that "a number" of the chip's 64 thinner-than-hair threads had retracted. Each thread carries multiple electrodes, totaling 1,024 across the threads, which are surgically implanted near neurons of interest to record signals that can be decoded into intended actions.
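The counts above pin down the layout arithmetic; a short sketch (the ~15% figure is from the Journal's report, and the per-thread split follows directly from the totals):

```python
# 1,024 electrodes spread across 64 threads, per Neuralink's blog post.
threads = 64
electrodes = 1024
per_thread = electrodes // threads
print(f"{per_thread} electrodes per thread")      # → 16 electrodes per thread

# Per the Journal, only ~15% of threads still work properly.
working_threads = round(0.15 * threads)           # ~10 threads
usable_electrodes = working_threads * per_thread  # ~160 electrodes
print(f"~{working_threads} threads (~{usable_electrodes} electrodes) still usable")
```

That remaining ~160-electrode budget is what the decoding algorithm had to be retuned around, which makes the recovery in BPS performance described below more striking.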

Neuralink was quick to note that it was able to adjust the algorithm used for decoding those neuronal signals to compensate for the lost electrode data. The adjustments were effective enough to regain and then exceed performance on at least one metric—the bits-per-second (BPS) rate used to measure how quickly and accurately a patient with an implant can control a computer cursor.

In an interview with the Journal, Neuralink's first patient, 29-year-old Noland Arbaugh, opened up about the roller-coaster experience. "I was on such a high and then to be brought down that low. It was very, very hard," Arbaugh said. "I cried." He initially asked if Neuralink would perform another surgery to fix or replace the implant, but the company declined, telling him it wanted to wait for more information. Arbaugh went on to say that he has since recovered from the initial disappointment and continues to have hope for the technology.

"I thought that I had just gotten to, you know, scratch the surface of this amazing technology, and then it was all going to be taken away," he added. "But it only took me a few days to really recover from that and realize that everything I’ve done up to that point was going to benefit everyone who came after me.” He also said that "it seems like we’ve learned a lot and it seems like things are going in the right direction."

The Journal's report adds more detail about the thread retraction as Neuralink gears up to surgically implant its chip into a second trial participant. According to the report, the company hopes to perform the second surgery sometime in June and has gained a green light to do so from the Food and Drug Administration, which oversees clinical trials.

Neuralink, owned by controversial billionaire Elon Musk, believes it can prevent thread movement in the next patient by simply implanting the fine wires deeper into brain tissue. The company is planning on—and the FDA has reportedly signed off on—implanting the threads 8 millimeters into the brain of the second trial participant rather than the 3 mm to 5 mm depth used in Arbaugh's implantation.

Brain-computer interface chips have been around for many years. In 2006, researchers reported the first case of a brain chip allowing a tetraplegic patient to control a "neural cursor" that could be used to open email, operate devices, and control a prosthetic hand and a robotic arm. The chip used was a Utah Array containing 96 electrodes, which can penetrate up to 1.5 mm into brain tissue.

LeMadChef
3 hours ago
"Move fast and break things" is the worst thing to come out of the software discipline.
Denver, CO

Electric Trucks Are Literally Saving Texas’s Bacon As Storms Cut Power Again

1 Comment

The people of Houston have been going through it lately, as countless storms plague the city. Hurricane-force winds struck the city last week, claiming multiple lives and cutting power to almost 1 million homes. Electric trucks have been serving Texans well amidst the chaos by providing power while the lines are down.

This weekend spawned multiple stories that run quite contrary to the usual jokes levied against EV owners. “What will you do when the power’s out?” goes the usual refrain. “My gas truck’ll still be running while your EV’s out of juice!”

This time, that was anything but the case. Not for the first time, electric trucks proved they were able to come through in a crisis.

Cybertruck to the Rescue

As reported by Teslarati, one Tesla Cybertruck owner was able to flip the script. The electric truck was apparently able to get a gas station at least nominally back online using the power sockets in its bed.


The video from TikTok user misssbaaah suggests that the whole store was running off the Cybertruck: pumps, cash machine, and all. We’re treated to a view of the bed, which has two extension cords plugged into the two 120-volt sockets. We also see what appear to be loose wires threaded into the 240-volt socket underneath.

It’s difficult to verify that the Cybertruck was indeed able to get the gas station fully back online, including the machinery to actually pump gas. But it’s not totally out of the question.

That’s not how you should use a NEMA 14-50 socket, but it would conceivably work in a pinch.

The twin 120-volt outlets in the bed can deliver a combined 20 amps (around 2,400 watts). Two more outlets in the cabin can independently offer a further 20 amps combined. For reference, 2,400 watts is enough to run multiple cash registers and maybe even a fridge or two. However, starting up a whole gas station’s worth of fridges would probably trip the breaker due to the high inrush current, to say nothing of the gas pumps.

The Cybertruck has two 120-volt ports in the bed, and a 240-volt port beneath them.

The Cybertruck’s 240-volt port is much more capable, offering up to 40 amps, or 9,600 watts. However, that doesn’t stack on top of the 120-volt ports: the truck can only deliver a maximum of 9,600 watts in total through its bed ports. The Cybertruck can offer up to 11.5 kW of AC output, but only through a Vehicle-to-Home (V2H) charger setup, not the bed ports.

Given the high output of the 240-volt port, it’s plausible that the Cybertruck may have been able to get much of the equipment online, including the gas pumps. From the wiring seen in the video, it appears that both bed ports are in use, with the 120-volt and 240-volt outlets hooked up together. It’s conceivable that the 120-volt ports were used to get the registers back online while the 240-volt port ran the pumps. Even at a continuous maximum draw of 9.6 kilowatts, a fully charged Cybertruck could theoretically run the store for over 10 hours, thanks to its 123-kilowatt-hour pack.
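The runtime arithmetic behind that last claim, sketched out (pack size and port limit are the figures above; a real store would draw well under the maximum, stretching runtime considerably):

```python
# How long could a full 123 kWh pack sustain the 240 V port's maximum output?
pack_kwh = 123.0
max_draw_kw = 9.6        # 40 A at 240 V
hours = pack_kwh / max_draw_kw
print(f"~{hours:.1f} hours at a continuous {max_draw_kw} kW")
# → ~12.8 hours at a continuous 9.6 kW
```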

In any case, the video shows a ton of cars visiting the store for gas. The lights are also off inside, which would make sense if you’re trying to minimize unnecessary power draw. Assuming it’s legit, it’s almost ironic that gas cars are relying on an EV in a crisis, and not the other way around.

We see a rat’s nest of wires and extension cords in the video.
The gas station appeared to be doing a roaring trade.

Lightning Strikes Again

Ford made the news in a big way in 2021 when an F-150 Hybrid with a Pro Power generator was able to save a wedding from a major power outage. The storms in Houston spawned a very similar story, with F-150 Lightning owners relying on their EVs to get them through.

Michael Kaler was able to power his home using his F-150 Lightning, catching the notice of Ford CEO Jim Farley in the process. Kaler noted the Lightning was even able to power his microwave during the storm. He also posted a screenshot showing that the Pro Power Onboard was delivering 3,000 watts to his home.

Similarly to the Cybertruck, the F-150 Lightning can deliver up to 9.6 kW via its Pro Power Onboard sockets. Kaler noted that running his household used only about 10% of the Lightning’s battery capacity overnight. He was able to drive to a fast charger to top up in the morning, then returned to run his home and a neighbor’s house via extension cables.


On a smaller scale, F-150 Lightning owner Evgeny Kashtanov relied on his truck for just a few hours. He was able to run an extension cord to power his fridge while the grid was dark.

It’s true that houses are wired for much higher power draws than the 9.6 kW output of an F-150 Lightning or Cybertruck. However, if you’re careful with what you use, that’s more than enough power to keep a few fridges, computers, and lights on. If you’re drawing more than 10 kW continuously at home, you’re either running your oven or HVAC system at a pretty good clip, or you’ve forgotten to turn your pool heater off.

Ultimately, EV trucks can prove very useful in trials like these. They’re able to provide electricity efficiently, whether you want to keep your fridge running or just recharge a few phones. Trying to do the same with a gas-powered truck is far more wasteful.

If you live in a disaster-prone area and you’re buying an EV, consider getting the best vehicle-to-home or AC-output option you can afford. It could give you plenty of comfort the next time you’re suffering through an extended power outage. You might even be the hero of your neighborhood if you can keep a TV running for the big game. Stay safe out there!

Image credits: Tesla, misssbaaah via TikTok screenshot

The post Electric Trucks Are Literally Saving Texas’s Bacon As Storms Cut Power Again appeared first on The Autopian.

LeMadChef
7 hours ago
We live in the worst fucking timeline.
Denver, CO

Evil

1 Comment

Every day it becomes ever clearer. All the talk of super-intelligent AI destroying humanity was and is nonsense. It’s the CEOs running the Big Tech firms pushing AI into every aspect of our lives in increasingly irresponsible and dangerous ways who are the villains in this saga, not the machines themselves. The machines are just tools. Like hammers, they can be used to build a home or smash in a skull.

For evil, you have to look to humans and their corporate greed.

–Lauren–

LeMadChef
21 hours ago
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam
Denver, CO

AI hype is over. AI exhaustion is setting in.

1 Share

At the end of Google’s snorefest of an I/O presentation on Tuesday, CEO Sundar Pichai took the stage to inform the crowd his team had used the term “AI” 120 times over the preceding two hours. He then said it one more time for good measure, so the counter would tick over to 121 and the audience could clap like a bunch of trained seals. The tired display wasn’t just an illustration of how fully Silicon Valley has embraced the latest hype cycle, but also how exhausting the whole thing has become a year and a half after the release of ChatGPT.

Throughout the keynote, Google executives showed off a ton of pretty standard features that were supposed to seem far more impressive because they had some link to Gemini, the name for its current suite of large language models (LLMs). The lemmings in the audience celebrated such groundbreaking features as getting Google Photos to find your license plate number, having a chatbot process the return for some shoes you ordered online, and getting Gemini to throw together some spreadsheets for you. The revolutionary nature of the AI future just continues to astound.

The company also brought DeepMind’s Demis Hassabis — excuse me, Sir Demis Hassabis — up to do some AI boosting of his own. That included repeating a debunked claim that Google’s AI tools had discovered a ton of “new materials” last year. Researchers who reviewed a subset of Google’s data concluded “we have yet to find any strikingly novel compounds,” with one telling 404 Media, “the Google paper falls way short in terms of it being a useful, practical contribution to the experimental materials scientists.” A little later, they got Donald Glover to lend some credibility to Google’s AI video generation efforts, making the dubious claim that it will let anyone become a director — just like we were told with home video cameras, smartphones, and other technologies that did little to break down Hollywood’s gates.


Gone are the days when tech executives could reasonably make us believe artificial general intelligence (AGI) was on the horizon because they’d thrown so much capital and compute behind getting their models to do slightly more advanced work than they’d done before. They certainly can’t scare us any longer with the idea that sentient AIs are on the cusp of enslaving us, if not just killing us all. Beyond the massively oversold new features that might not even need an LLM in the first place, the leaders of this AI push are getting lost in their own fantasies.

Take OpenAI CEO Sam Altman. He’s recently been making the podcast rounds sounding as though the past year of AI proselytizing has made him lose his remaining grasp on reality. Earlier this week, he appeared on the Logan Bartlett Show where he claimed AI will create a huge market for “human, in-person, fantastic experiences” — something that definitely doesn’t exist in the metaverses Altman must think we currently live in. A few days earlier, he joined the increasingly radicalized bros at All-In to explain that universal basic income is dead; instead we should strive for a universal basic compute where everyone would get a share in an LLM to use or sell as they see fit. It had big “let them eat compute” energy.

OpenAI had a showcase of its own this week that was similarly underwhelming, despite how much the company’s executives and parts of the media tried to pretend otherwise. A few members of the OpenAI team showed off a ChatGPT voice bot doing math equations, responding to basic requests, and performing a short voice translation demo with a flirty, female tone that got commentators obsessively comparing it to the movie Her. All the while, the tool kept making mistakes and unrelated comments that the presenters had to awkwardly laugh off. Over at Intelligencer, John Herrman astutely observed that AGI is no closer to being achieved, so OpenAI is getting its tools to “perform the part of an intelligent machine.”

Going back to the 1960s, we know humans are suckers for computers that pretend to be thinking machines, even when they’re doing nothing of the sort. Introducing the update, OpenAI CTO Mira Murati said the chatbot just “feels so magical.” But as we know, magic is an illusion that always has an explanation. OpenAI just doesn’t want us to know what’s happening under the hood of its tools, and hopes we’ll keep buying the fantasy as long as possible. Murati is the same woman who became a meme back in March for the expression she pulled when asked in a Wall Street Journal interview if YouTube videos had been used to train OpenAI’s Sora video generator.

No matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down their clients’ expectations for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as escalating resource demands throw their climate commitments out the window.


This whole AI cycle was fueled by fantasies, and when people stop falling for them the bubble starts to deflate. In The Guardian, John Naughton recently laid out the five stages of financial bubbles, noting AI is between stages three and four: euphoria and profit-taking. Tech companies like Microsoft and Google are still spending big to maintain the illusion, but it’s hard to deny that savvy investors see the writing on the wall and are planning their exits, if they haven’t already begun them, to avoid being wiped out. The fifth stage — panic — is where we’re headed next.

That doesn’t mean generative AI will disappear. Think back to the last AI cycle in the mid-2010s when robots and AI were supposed to take all our jobs and make us destitute. The fantasy of self-driving cars is still limping along and some of those tools became entrenched, particularly the algorithmic management techniques used to carve workers out of labor protections and make it harder for them to put up a fight against bosses like Amazon and Uber.

Even though the excitement around generative AI is giving way to exhaustion, that doesn’t mean the companies behind these tools aren’t still trying to expand their power over how we use digital technology. It’s quite clear that Google is trying to further sideline the open web by ingesting it into its model and then expecting people to spend even more time on its platforms than anywhere else. Things aren’t so different at OpenAI, which hopes to revive the failed voice assistant push that followed the last moment of AI hype and get people used to depending on ChatGPT for virtually everything they do.


Between those visions, Google’s feels far more threatening because of the structural transformation it hopes to carry out, which would further platformize our online experience at a moment when people are increasingly frustrated with the state of the internet as the services we’ve come to depend on erode under pressure to maximize profits. But that doesn’t mean OpenAI’s efforts should be ignored. With the backing of Microsoft, it wants to sell people an illusion of intelligence to get them to let their guard down for a power play of its own.

Each of these companies presents a threat in its own way, but there might be some solace in recognizing that the return to AI winter is inevitable — and the crash that’s coming could be unlike any we’ve seen in quite some time. The question is how long companies will keep spending increasingly vast sums to pump a little more hot air into the bubble and delay its total deflation. Seeing a CEO count the number of times his underlings said “AI” during a keynote suggests they’re already running on fumes.

LeMadChef
21 hours ago
Denver, CO

Using vague language about scientific facts misleads readers

1 Share

Anyone can do a simple experiment. Navigate to a search engine that offers suggested completions for what you type, and start typing "scientists believe." When I did it, I got suggestions about the origin of whales, the evolution of animals, the root cause of narcolepsy, and more. The search results contained a long list of topics, like "How scientists believe the loss of Arctic sea ice will impact US weather patterns" or "Scientists believe Moon is 40 million years older than first thought."

What do these all have in common? They're misleading, at least in terms of how most people understand the word "believe." In all these examples, scientists have become convinced via compelling evidence; these are more than just hunches or emotional compulsions. Given that difference, using "believe" isn't really an accurate description. Yet all these examples come from searching Google News, and so are likely to come from journalistic outlets that care about accuracy.

Does the difference matter? A recent study suggests that it does. People who were shown headlines that used subjective verbs like "believe" tended to view the issue being described as a matter of opinion—even if that issue was solidly grounded in fact.

Fact vs. opinion

The new work was done by three researchers at Stanford University: Aaron Chuey, Yiwei Luo, and Ellen Markman. "Media consumption is central to how we form, maintain, and spread beliefs in the modern world," they write. "Moreover, how content is presented may be as important as the content itself." The presentation they're interested in involves what they term "epistemic verbs," or those that convey information about our certainty regarding information. To put that in concrete terms, "'Know' presents [a statement] as a fact by presupposing that it is true, 'believe' does not," they argue.

So, while it's accurate to say, "Scientists know the Earth is warming, and that warming is driven by human activity," replacing "know" with "believe" presents an inaccurate picture of the state of our knowledge. Yet, as noted above, "scientists believe" is heavily used in the popular press. Chuey, Luo, and Markman decided to see whether this makes a difference.

They were interested in two related questions. One is whether the use of verbs like believe and think influences how readers view whether the concepts they're associated with are subjective issues rather than objective, factual ones. The second is whether using that phrasing undercuts the readers' willingness to accept something as a fact.

To answer those questions, the researchers used a subject-recruiting service called Prolific to recruit over 2,700 participants who took part in a number of individual experiments focused on these issues. In each experiment, participants were given a series of headlines and asked about what inferences they drew about the information presented in them.

Beliefs vs. facts

All the experiments were variations on a basic procedure. Participants were given headlines about topics like climate change that differed in terms of their wording. Some of them used wording that implied factual content, like "know" or "understand." Others used terms that implied subjective opinion, like "believe" or "think." In some cases, the concepts were presented without attribution, using verbs like "are" (i.e., instead of "scientists think drought conditions are worsening," these sentences simply stated "drought conditions are worsening").

In the first experiment, the researchers asked participants to rate the factual truth of the statement in each headline and also to assess whether the issue in question was a matter of opinion or a statement of fact. Both were rated on a 0–100 scale.

The results showed two effects. One, using terms that didn't imply facts, like "believe," led people to rate the information as less likely to be true. Statements without attribution were rated as the most likely to be factual.

In addition, the participants rated issues in statements that implied facts, like "know" and "understand," as more likely to be objective conclusions rather than matters of opinion.

However, the design of the experiment made a difference to one of those outcomes. When participants were asked only one of these questions, the phrasing of the statements no longer had an impact on whether people rated the statements as true. Yet it still mattered in terms of whether they felt the issue was one of fact or opinion. So, it appeared that asking people to think about whether something is being stated as a fact influenced their rating of the statement's truthfulness.

In the remaining experiments, which used real headlines and examined the effect of preexisting ideas on the subject at issue, the impact of phrasing on people's ratings of truthfulness varied considerably. So, there's no indication that using terminology like "scientists believe" causes problems in understanding whether something is true. But it consistently caused people to rate the issue to be more likely to be a matter of opinion.

Opinionated

Overall, the researchers conclude that the use of fact-implying terminology had a limited effect on whether people actually did consider something a fact—the effect was "weak and varied between studies." So, using something like "scientists believe" doesn't consistently influence whether people think that those beliefs are true. But it does influence whether people view a subject as a matter where different opinions are reasonable, or one where facts limit what can be considered reasonable.

While this seems to be a minor issue here, it could be a problem in the long term. The more people feel that they can reject evidence as a matter of opinion, the more it opens the door to what the authors describe as "the rise of 'post-truth' politics and the dissemination of 'alternative facts.'" And that has the potential to undercut the acceptance of science in a wide variety of contexts.

Perhaps the worst part is that the press as a whole is an active participant, as reading science reporting regularly will expose you to countless instances of evidence-based conclusions being presented as beliefs.

PNAS, 2024.  DOI: 10.1073/pnas.2314091121

LeMadChef
21 hours ago
Denver, CO