
Evil


Every day it becomes ever clearer. All the talk of super-intelligent AI destroying humanity was and is nonsense. It’s the CEOs running the Big Tech firms pushing AI into every aspect of our lives in increasingly irresponsible and dangerous ways who are the villains in this saga, not the machines themselves. The machines are just tools. Like hammers, they can be used to build a home or smash in a skull.

For evil, you have to look to humans and their corporate greed.

–Lauren–

Comment from LeMadChef:
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam

AI hype is over. AI exhaustion is setting in.


At the end of Google’s snorefest of an I/O presentation on Tuesday, CEO Sundar Pichai took the stage to inform the crowd his team had used the term “AI” 120 times over the preceding two hours. He then said it one more time for good measure, so the counter would tick over to 121 and the audience could clap like a bunch of trained seals. The tired display wasn’t just an illustration of how fully Silicon Valley has embraced the latest hype cycle, but also of how exhausting the whole thing has become a year and a half after the release of ChatGPT.

Throughout the keynote, Google executives showed off a ton of pretty standard features that were supposed to seem far more impressive because they had some link to Gemini, the name for its current suite of large language models (LLMs). The lemmings in the audience celebrated such groundbreaking features as getting Google Photos to find your license plate number, having a chatbot process the return for some shoes you ordered online, and getting Gemini to throw together some spreadsheets for you. The revolutionary nature of the AI future just continues to astound.

The company also brought DeepMind’s Demis Hassabis — excuse me, Sir Demis Hassabis — up to do some AI boosting of his own. That included repeating a debunked claim that Google’s AI tools had discovered a ton of “new materials” last year. Researchers who reviewed a subset of Google’s data concluded “we have yet to find any strikingly novel compounds,” with one telling 404 Media, “the Google paper falls way short in terms of it being a useful, practical contribution to the experimental materials scientists.” A little later, they got Donald Glover to lend some credibility to Google’s AI video generation efforts, making the dubious claim that it will let anyone become a director — just like we were told with home video cameras, smartphones, and other technologies that did little to break down Hollywood’s gates.

Related: How Hollywood used the digital transition against workers. The challenge facing striking workers goes much deeper than streaming and AI.

Gone are the days when tech executives could reasonably make us believe artificial general intelligence (AGI) was on the horizon because they’d thrown so much capital and compute behind getting their models to do slightly more advanced work than they’d done before. They certainly can’t scare us any longer with the idea that sentient AIs are on the cusp of enslaving us, if not just killing us all. Beyond the massively oversold new features that might not even need an LLM in the first place, the leaders of this AI push are getting lost in their own fantasies.

Take OpenAI CEO Sam Altman. He’s recently been making the podcast rounds sounding as though the past year of AI proselytizing has made him lose his remaining grasp on reality. Earlier this week, he appeared on the Logan Bartlett Show where he claimed AI will create a huge market for “human, in-person, fantastic experiences” — something that definitely doesn’t exist in the metaverses Altman must think we currently live in. A few days earlier, he joined the increasingly radicalized bros at All-In to explain that universal basic income is dead; instead we should strive for a universal basic compute where everyone would get a share in an LLM to use or sell as they see fit. It had big “let them eat compute” energy.

OpenAI had a showcase of its own this week that was similarly underwhelming, despite how much the company’s executives and parts of the media tried to pretend otherwise. A few members of the OpenAI team showed off a ChatGPT voice bot doing math equations, responding to basic requests, and performing a very short voice translation demo in a flirty, female tone that got commentators obsessively comparing it to the movie Her. All the while, the tool kept making mistakes and interjecting unrelated comments that the presenters had to awkwardly laugh off. Over at Intelligencer, John Herrman astutely observed that AGI is no closer to being achieved, so OpenAI is getting its tools to “perform the part of an intelligent machine.”

Going back to the 1960s, we know humans are suckers for computers that pretend to be thinking machines, even when they’re doing nothing of the sort. Introducing the update, OpenAI CTO Mira Murati said its updated chatbot just “feels so magical.” But as we know, magic is an illusion that always has an explanation. OpenAI just doesn’t want us to know what’s happening under the hood of its tools, and it wants us to keep buying the fantasy for as long as possible. Murati is the same woman who became a meme back in March for the expression she pulled when asked in a Wall Street Journal interview if YouTube videos had been used to train OpenAI’s Sora video generator.

The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs, and major cloud providers are tamping down their clients’ expectations of what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as resource demands escalate so much that their climate commitments are getting thrown out the window.

Related: AI is fueling a data center boom. It must be stopped. Silicon Valley believes more computation is essential for progress, but it ignores the resource burden and doesn’t care if the benefits materialize.

This whole AI cycle was fueled by fantasies, and when people stop falling for them the bubble starts to deflate. In The Guardian, John Naughton recently laid out the five stages of financial bubbles, noting AI is between stages three and four: euphoria and profit-taking. Tech companies like Microsoft and Google are still spending big to maintain the illusion, but it’s hard to deny that savvy investors see the writing on the wall and are planning their exits, if they haven’t already begun them to avoid being wiped out. The fifth stage — panic — is where we’re headed next.

That doesn’t mean generative AI will disappear. Think back to the last AI cycle in the mid-2010s when robots and AI were supposed to take all our jobs and make us destitute. The fantasy of self-driving cars is still limping along and some of those tools became entrenched, particularly the algorithmic management techniques used to carve workers out of labor protections and make it harder for them to put up a fight against bosses like Amazon and Uber.

Even though the excitement around generative AI is giving way to exhaustion, that doesn’t mean the companies behind these tools aren’t still trying to expand their power over how we use digital technology. It’s quite clear that Google is trying to further sideline the open web by ingesting it into its models, then expecting people to spend even more time on its platforms than anywhere else. Things aren’t so different at OpenAI, which is hoping to revive the failed voice assistant push that followed the last moment of AI hype and get people used to depending on ChatGPT for virtually everything they do.

Related: Google wants to take over the web. Its new plan for search shows how AI hype hides the real threat of increased corporate power.

Between those visions, Google’s feels far more threatening because of the structural transformation it hopes to carry out: one that would further platformize our online experience at a moment when people are increasingly frustrated with the state of the internet, as the services we’ve come to depend on erode under pressure to maximize profits. But that doesn’t mean OpenAI’s efforts should be ignored. With the backing of Microsoft, it wants to sell people an illusion of intelligence to get them to let their guard down for a power play of its own.

Each of these companies presents a threat in its own way, but there might be some solace in recognizing that a return to AI winter is inevitable, and the coming crash could be unlike any we’ve seen in quite some time. The question is how long companies will keep spending increasingly vast sums to pump a little more hot air into the bubble and delay its total deflation. Seeing a CEO count the number of times his underlings said “AI” during a keynote suggests they’re already running on fumes.


Using vague language about scientific facts misleads readers



Anyone can do a simple experiment. Navigate to a search engine that offers suggested completions for what you type, and start typing "scientists believe." When I did it, I got suggestions about the origin of whales, the evolution of animals, the root cause of narcolepsy, and more. The search results contained a long list of topics, like "How scientists believe the loss of Arctic sea ice will impact US weather patterns" or "Scientists believe Moon is 40 million years older than first thought."

What do these all have in common? They're misleading, at least in terms of how most people understand the word "believe." In all these examples, scientists have become convinced via compelling evidence; these are more than just hunches or emotional compulsions. Given that difference, using "believe" isn't really an accurate description. Yet all these examples come from searching Google News, and so are likely to come from journalistic outlets that care about accuracy.

Does the difference matter? A recent study suggests that it does. People who were shown headlines that used subjective verbs like "believe" tended to view the issue being described as a matter of opinion—even if that issue was solidly grounded in fact.

Fact vs. opinion

The new work was done by three researchers at Stanford University: Aaron Chuey, Yiwei Luo, and Ellen Markman. "Media consumption is central to how we form, maintain, and spread beliefs in the modern world," they write. "Moreover, how content is presented may be as important as the content itself." The presentation they're interested in involves what they term "epistemic verbs," or those that convey information about our certainty regarding information. To put that in concrete terms, "'Know' presents [a statement] as a fact by presupposing that it is true, 'believe' does not," they argue.

So, while it's accurate to say, "Scientists know the Earth is warming, and that warming is driven by human activity," replacing "know" with "believe" presents an inaccurate picture of the state of our knowledge. Yet, as noted above, "scientists believe" is heavily used in the popular press. Chuey, Luo, and Markman decided to see whether this makes a difference.

They were interested in two related questions. One is whether the use of verbs like believe and think influences how readers view whether the concepts they're associated with are subjective issues rather than objective, factual ones. The second is whether using that phrasing undercuts the readers' willingness to accept something as a fact.

To answer those questions, the researchers used a subject-recruiting service called Prolific to recruit over 2,700 participants who took part in a number of individual experiments focused on these issues. In each experiment, participants were given a series of headlines and asked about what inferences they drew about the information presented in them.

Beliefs vs. facts

All the experiments were variations on a basic procedure. Participants were given headlines about topics like climate change that differed in terms of their wording. Some of them used wording that implied factual content, like "know" or "understand." Others used terms that implied subjective opinion, like "believe" or "think." In some cases, the concepts were presented without attribution, using verbs like "are" (i.e., instead of "scientists think drought conditions are worsening," these sentences simply stated "drought conditions are worsening").

In the first experiment, the researchers asked participants to rate the factual truth of the statement in the headline and also assess whether the issue in question was a matter of opinion or a statement of fact. Both were rated on a 0–100 scale.

This showed two effects. First, using terms that didn't imply facts, like "believe," led people to rate the information as less likely to be true. Statements without attribution were rated as the most likely to be factual.

In addition, the participants rated issues in statements that implied facts, like "know" and "understand," as more likely to be objective conclusions rather than matters of opinion.

However, the design of the experiment made a difference to one of those outcomes. When participants were asked only one of these questions, the phrasing of the statements no longer had an impact on whether people rated the statements as true. Yet it still mattered in terms of whether they felt the issue was one of fact or opinion. So, it appeared that asking people to think about whether something is being stated as a fact influenced their rating of the statement's truthfulness.

In the remaining experiments, which used real headlines and examined the effect of preexisting ideas about the subject at issue, the impact of phrasing on people's ratings of truthfulness varied considerably. So there's no consistent indication that using terminology like "scientists believe" causes problems in understanding whether something is true. But it consistently caused people to rate the issue as more likely to be a matter of opinion.

Opinionated

Overall, the researchers conclude that the use of fact-implying terminology had a limited effect on whether people actually did consider something a fact—the effect was "weak and varied between studies." So, using something like "scientists believe" doesn't consistently influence whether people think that those beliefs are true. But it does influence whether people view a subject as a matter where different opinions are reasonable, or one where facts limit what can be considered reasonable.

While this seems to be a minor issue here, it could be a problem in the long term. The more people feel that they can reject evidence as a matter of opinion, the more it opens the door to what the authors describe as "the rise of 'post-truth' politics and the dissemination of 'alternative facts.'" And that has the potential to undercut the acceptance of science in a wide variety of contexts.

Perhaps the worst part is that the press as a whole is an active participant, as reading science reporting regularly will expose you to countless instances of evidence-based conclusions being presented as beliefs.

PNAS, 2024. DOI: 10.1073/pnas.2314091121



Slack users horrified to discover messages used for AI training


Photo credit: Tim Robberts | DigitalVision

After launching Slack AI in February, Slack appears to be digging its heels in, defending its vague policy that by default sucks up customers' data—including messages, content, and files—to train Slack's global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack's policy may need updating "to explain more carefully how these privacy principles play with Slack AI," Maurer wrote on Threads, partly because the policy "was originally written about the search/recommendation work we've been doing for years prior to Slack AI."

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.

"An ML engineer at Slack says they don’t use messages to train LLM models," Orosz wrote. "My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this."

The tension for users becomes clearer if you compare Slack's privacy principles with how the company touts Slack AI.

Slack's privacy principles specifically say that "Machine Learning (ML) and Artificial Intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information) as defined in our privacy policy and in your customer agreement."

Meanwhile, Slack AI's page says, "Work without worry. Your data is your data. We don't use it to train Slack AI."

Because of this incongruity, users called on Slack to update the privacy principles to make it clear how data is used for Slack AI or any future AI updates. According to a Salesforce spokesperson, the company has agreed an update is needed.

"Yesterday, some Slack community members asked for more clarity regarding our privacy principles," Salesforce's spokesperson told Ars. "We’ll be updating those principles today to better explain the relationship between customer data and generative AI in Slack."

The spokesperson told Ars that the policy updates will clarify that Slack does not "develop LLMs or other generative models using customer data," "use customer data to train third-party LLMs" or "build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data." The update will also clarify that "Slack AI uses off-the-shelf LLMs where the models don't retain customer data," ensuring that "customer data never leaves Slack's trust boundary, and the providers of the LLM never have any access to the customer data."

These changes, however, do not seem to address a key concern for users who never explicitly consented to sharing chats and other Slack content for use in AI training.

Users opting out of sharing chats with Slack

This controversial policy is not new. Wired warned about it in April, and TechCrunch reported that the policy has been in place since at least September 2023.

But widespread backlash began swelling last night on Hacker News, where Slack users called out the chat service for seemingly failing to notify users about the policy change, instead quietly opting them in by default. To critics, it felt like there was no benefit to opting in for anyone but Slack.

From there, the backlash spread to social media, where SlackHQ hastened to clarify Slack's terms with explanations that did not seem to address all the criticism.

"I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc?" Corey Quinn, the chief cloud economist for a cost management company called Duckbill Group, posted on X. "I'm positive I'm not reading this correctly."

SlackHQ responded to Quinn after the economist declared, "I hate this so much," and confirmed that he had opted out of data sharing in his paid workspace.

"To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results," SlackHQ posted. "And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer."

Later in the thread, SlackHQ noted, "Slack AI—which is our generative AI experience natively built in Slack—[and] is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data."

Opting out is not necessarily straightforward, and individuals currently cannot opt out unless their entire organization opts out.

"You can always quit your job, right?" a Hacker News commenter joked.

And rather than adding a button to immediately turn off the firehose, Slack instructs customers to use a very specific subject line and contact Slack directly to stop sharing data:

Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.

"Where is the opt-out button?" one Threads user asked Maurer.

Many commenters on Hacker News, Threads, and X confirmed that they were opting out after reading Slack's policy, as well as urging their organizations to consider using other chat services. Ars also chose to opt out today.

However, it remains unclear what exactly happens when users opt out. Commenters on Hacker News slammed Slack for failing to explain whether opting out deletes data from the models or "what exactly does the customer support rep do on their end to opt you out."

"You can't exactly go into the model and 'erase' parts of the corpus post-hoc," one commenter suggested.

All Slack’s privacy principles say is that "if you opt out, Customer Data on your workspace will only be used to improve the experience on your own workspace and you will still enjoy all of the benefits of our globally trained AI/ML models without contributing to the underlying models."

Slack’s consent model seems to conflict with GDPR

Slack's privacy policy, terms, and security documentation supposedly spell out how it uses customer data. However, The Stack reported that none of those legal documents mention AI or machine learning, despite Slack debuting machine-learning features in 2016.

There's no telling yet if Slack will make any additional changes as more customers opt out. What is clear from Slack's documents is that Slack knows that its customers "have high expectations around data ownership" and that it has "an existential interest in protecting" that data.

It's possible that lawmakers will force Slack to be more transparent about changes in its data collection as the chat service continues experimenting with AI.

It's also possible that Slack already doesn't default some customers to opt into data collection for ML training. The European Union's General Data Protection Regulation (GDPR) requires informed and specific consent before companies can collect data.

"Consent cannot be implied and must always be given through an opt-in," the strict privacy law says. And companies must be prepared to demonstrate that they've received consent through opt-ins, the law says.

In the United Kingdom, the Information Commissioner's Office (ICO) requires explicit consent, specifically directing companies to note that "consent requires a positive opt-in."

"Don’t use pre-ticked boxes or any other method of default consent," ICO said. "Keep your consent requests separate from other terms and conditions."

Salesforce's spokesperson declined to comment on how Slack's policy complies with the GDPR. But Slack has said that it's committed to complying with the GDPR, promising to "update our product features and contractual commitments accordingly." That did not seem to happen when Slack AI was launched in February.

Orosz warned that any chief technology officer (CTO) or chief information officer (CIO) letting Slack slide for defaulting customers into AI training data sharing should recognize that Slack setting that precedent could quickly become a slippery slope that other companies take advantage of.

"If you are a CTO or a CIO at your company and paying for Slack: why are you still opted in?" Orosz asked on Threads. "This is the type of thing where Slack should collect this data from free customers. Paying would be the perk that your messages don’t end up in AI training data. What company will try to pull this next with customers trusting them with confidential information/data?"

This post was updated on May 17 to correct quotes from SlackHQ's posts on X.



A Car Company Just Revealed A Big 8-Cylinder Boxer Motorcycle And It’s All Kinds Of Silly


Electric motorcycles have been stealing headlines lately, but manufacturers aren’t yet done playing around with ICE technology. A car company has a new motorcycle engine and it’s a strange one. Great Wall Motor has unveiled a chunky 2.0-liter 8-cylinder boxer engine. This weirdo is slated to go into the equally massive Great Wall Souo S 2000 ST motorcycle that’s supposed to be China’s answer to the Honda Gold Wing. Not only is that engine large, but it’s also the only one of its kind. What in the world is going on over there?

I’ve been following the Chinese motorcycle industry for years, far longer than you’ve seen my byline on any website. It wasn’t even a full decade ago when so many Chinese motorcycles were still cheap rip-offs of popular bikes from Japan. Now, many Chinese motorcycle brands are trying to carve out their own paths. This has been exciting to see. Some brands seem to be digging into the past, scooping up ideas abandoned decades ago by brands like Honda. You can now find tiny V4s putting around China, and the old motorcycles of the 1980s have been reborn as modern steeds. For some reason, one brand seems rather enamored with girder forks, or at least the appearance of them.

Lately, some Chinese brands have been seemingly obsessed with motorcycles of a lot of girth. Felo seems to be trying to build the world’s largest production electric motorcycle, and now Great Wall Motor wants to build a sizable unit, too. The brand has launched a motorcycle sub-brand, Great Wall Souo, a reference to “soul.” The brand’s launch bike is called the S 2000 ST and it’s like a bizarro-universe Gold Wing, but somehow even bigger.


We’ve written a bit about Great Wall Motor, but it’s worth noting where this is coming from. Great Wall Motor began vehicle production in 1984 with the CC130, a very basic utility truck with a tray on the back. Then came the CC513, an eight-passenger SUV based on what was originally a military design.

Weirdly, GWM’s recounting of its own history begins in 1990. That year, the nephew of founder Wei Deliang, Wei Jianjun “Jack Wei,” became General Manager of the company. GWM says it finally started turning a profit a few years later. Under Jianjun’s early control, GWM made its first car in 1993. The CC1020 rolled out, and to many observers it looked like a clone of a Nissan Cedric. Its subsequent early cars would resemble a Toyota Crown and even the Rolls-Royce Silver Spur.


Fast-forward a lot of years and GWM has grown into a large corporation and Jianjun is a billionaire. GWM has splintered itself off as a variety of brands. Haval builds crossovers, Ora makes cute electric cat-themed cars, Wey is the electric luxury brand, Tank is the off-road brand, and GWM is the truck brand.

Some of Great Wall’s modern cars don’t help stereotypes. I mean, the Great Wall Coolbear above seems more than inspired by the first-generation Scion xB.

GWM General Manager Jack Wei with a Xingfu 250 motorcycle

According to Great Wall, Jianjun has long had an interest in motorcycles, but his company has never built any. That appears to be changing this year, as the company says it’s taking car technology and distilling it down to motorcycle size.

The Great Wall Souo company name and patents started appearing last month; now we get to see what’s been brewing.

The S 2000 ST


This news comes to us fresh from contributor Tycho de Feijter; I then found the company’s press release on the bike. Great Wall Motor unveiled this motorcycle in China on May 17, so it’s hot off the press.

Now, the English version of GWM’s website is a mess and describes the motorcycle as a “Great Wall Soul Station Wagon” and that it represents “Search Own, Unlimited Outlook” and “self-pursuit, unlimited vision.” It’s pretty garbled, but I can parse it out for us.


Great Wall starts off by saying this absolute unit of a motorcycle is equipped with the world’s only horizontally opposed 8-cylinder engine in current production. This part is true. Subaru ended its H6 production in recent years and there’s a boxer six currently in the Honda Gold Wing, but nobody is going as far as making an 8-cylinder boxer. This engine comes in at 2,000cc and it’s a boxer in the true sense. The engine is built with separate crank pins for each piston. The pistons move in the opposite direction of their neighbor on the other side of the engine.

This engine is hooked up to an 8-speed DCT with a reverse gear. It takes on a layout similar to the Gold Wing where the transmission is mounted down low to save on space.


All of this is held up with what Great Wall says is the world’s first three-layer stepped front suspension, which also has a multi-stage adjustable electronic shock absorber. Again, that’s broken English, but it just sounds like the motorcycle has an electrically adjustable suspension at both ends, which is about what you’d expect on a flagship motorcycle like this.

Great Wall also says the motorcycle is built with a welded aluminum frame devoid of any screws. Brembo 4-piston calipers stop the show in the front and the rear.


Saving space was necessary, too. This motorcycle already measures 104 inches long with a wheelbase of 71 inches and a seat height of 29.1 inches. To put that into perspective, a Honda Gold Wing is 97 inches long with a wheelbase of 66.7 inches and a seat height of 29.3 inches. Great Wall doesn’t say how heavy the S 2000 ST is, but I wouldn’t expect it to be lighter than a loaded, 847-pound Honda Gold Wing.

As for the design, the S 2000 ST looks a little bit like a Gold Wing, but with far more curves. I’ll just let Great Wall take the mic:


Great Wall Soul Station Wagon draws design inspiration from the “Chinese Lion Dance”, and every detail exudes the charm of Eastern aesthetics, reflecting the aesthetic characteristics of “meaning in things” in Eastern culture. The headlights are designed with “Smart Light Language” as the source of their design. They are smart and spirited, symbolizing wisdom and courage. The posture of the vehicle is based on the design direction of “ready to go” before a lion jumps, and every turn of the side lines of the vehicle is The curves and bends are just right, outlining the visual perception of a low center of gravity, like a crawling lion about to dance, showing a profound understanding of “agility” and “majesty” in Eastern aesthetics, which is powerful and shocking. The through-type taillights echoing the headlights reveal a kind of elegance and sophistication between light and shadow, like a work of art, fully demonstrating the unique taste and pursuit of the car owner.

Like many Chinese products, the Great Wall Souo S 2000 ST is loaded down with tech. There’s a 12.3-inch touch screen controlling the motorcycle that’s powered by a Qualcomm SA8155P Snapdragon SoC. It features over-the-air updates, but Great Wall otherwise doesn’t talk about everything the screen can do.


What I can tell you is that the motorcycle has an automatic parking brake, electrically adjustable windshield, cruise control, heated grips, an eight-speaker sound system, a heated seat, and what appears to be a navigation system. Of course, like any good touring bike, Great Wall says you get storage cases. The side cases hold 118L each. Some of these functions are carried out through a dial just behind the handlebars. Safety systems include a collision warning system and a blind spot monitoring system.

Great Wall Souo is thus far quiet on exact pricing, sales markets, and the engine’s output. That said, the company expects to get the first S 2000 STs on the road in China soon.


Honestly, I haven’t stopped laughing since I started writing this. Some might say that Great Wall is copying Honda, but I’d say they were more inspired by the likes of Ferdinand Piëch. It seems the theme of this motorcycle is “more is more,” which is exactly how Volkswagen used to be. This motorcycle has a bigger engine than a Gold Wing with a bigger screen and a substantially bigger body. Great Wall’s strategy to beat Honda seems to be “go bigger,” which is just silly when you’re talking about motorcycles.

Maybe I’m a masochist, but I’d love to see maybe just one of these come to America just so I could see what riding a two-wheeled tank would be like. At the very least, that engine sounds pretty awesome. I’m already thinking about the other vehicles I’d love to see it put into. An H8 Golf GTI, anyone?

Images: Great Wall Motor


The post A Car Company Just Revealed A Big 8-Cylinder Boxer Motorcycle And It’s All Kinds Of Silly appeared first on The Autopian.


Quick-Lube Oil Change Math: How A $20 Job Becomes A $100 Job


One of the most fundamental parts of car ownership is taking care of basic maintenance. The most frequent item that should be addressed is changing your vehicle’s oil, and that’s a task that comes down to a simple choice: Do you do it yourself or take it somewhere?

As we’ve established, I was a grease monkey in a prior life. But even with those skills and know-how, I elect to have someone else handle the job. Why? Well, time is money, and the money you save by doing it yourself, you might have to reinvest many times over in labor, risk, and cleanup. I was trained to work on cars standing up; crawling under is a hard adjustment. But I’m cheap and picky, so I’m selective about where I get an oil change done.

Let’s run the numbers.

As Matt has recently shown us, oil, through the right retailer, is cheap. And based on lab testing, as David discusses in “Why Expensive Oil Is A Waste Of Money,” there is little difference between the performance of the cheap stuff versus pricey oils. [Ed Note: I hated the headline “Why Expensive Oil Is A Waste Of Money.” I didn’t write it. That headline, without the word ‘sometimes’ after ‘is,’ is misleading. -DT]

However, if you’re going to a shop for an instant oil change or even the dealership, you might feel like you’ve just lost an arm and a leg. It just depends on what the shop is prioritizing and what you’re willing to accept. 

A Valvoline Instant Oil Change outlet on Baseline Road in Hillsboro, Oregon (2015). Photo: Steve Morgan, CC BY-SA 3.0, via Wikimedia Commons

For this demonstration, I’ll be using my Ford Maverick hybrid. It takes 5.7 quarts of 0W-20. When I called the nearest Valvoline, they quoted me:

  • $107.68 for full synthetic oil: $99.99 for 5 quarts and a filter, and an additional $7.69 for the remaining 0.7 quart
  • $75.99 for synthetic blend oil: $70.40 for 5 quarts and a filter, and an additional $5.59 for the remaining 0.7 quart
  • $55.88 for conventional oil: $50.99 for 5 quarts and a filter, and an additional $4.89 for the remaining 0.7 quart

Now let’s look at the nearby Ford dealership where I took my Maverick to address four recalls.

  • $98.47 for 5.7 quarts of Full Synthetic Oil and a filter

Welp, that’s the last time I ask someone to do an oil change without asking the costs ahead of time. And now the Chevrolet dealership in Mishawaka that I took the truck to last May.

  • $55 for 5.7 quarts of full synthetic oil and a filter

Between the highest and lowest prices, for the same type of oil, that’s a 95% difference in price! So what are the differences between these places?

Let’s Talk Labor Costs

Me hiding my face from the camera, presumably in shame at the prices.

At Valvoline in 2016, I was making $10.50/hour as a Senior Tech. I was one or two steps above entry level and was a keyholder, a shade below Assistant Store Manager. Then in my second stint with Valvoline in 2021, I picked it up as a second job in an attempt to pay off bills, and my primary source of income was just enough to break even; I was making $15/hour. Based on this trajectory of increase, I’ll use $18/hour as the going rate for labor in 2024.

For a two-bay store, there’d typically be between four to five staffers at a time; one or two customer service advisors, two toppers, and one pitter. Each car would take at most 15 minutes, barring any complications or additional services. Based on this, on a busy day, the store could get through at least eight cars in an hour.  

Labor cost per car with four staffers:
$18/hour x 4 staffers ÷ 8 cars = $9 in labor/car

Labor cost per car with five staffers:
$18/hour x 5 staffers ÷ 8 cars = $11.25 in labor/car

Seems straightforward enough! Yet, while eight cars in an hour would be ideal for profit, like any business, there are rushes and dead times throughout the day. A possibly more exact calculation would be to look at the average number of cars a quick lube shop sees in a day and the total hours paid out for that day.

Using my primary store in Midland, we would see roughly 32 cars a day and hopefully have at least five people scheduled for eight-hour shifts.

$18/hour x 5 staffers x 8 hours ÷ 32 cars = $22.50 in labor/car
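
If you want to fiddle with the assumptions yourself, the labor math above condenses to a few lines of Python. This is just a sketch of my estimates; the $18/hour wage, staffing levels, and car counts are ballpark figures from this article, not anyone’s actual books.

```python
# Sketch of the labor estimates above. All numbers are the article's
# ballpark figures (assumed $18/hour wage, staffing, car counts),
# not actual shop data.

HOURLY_WAGE = 18.00

def labor_per_car(staffers: int, cars_per_hour: float) -> float:
    """Spread one hour of total wages across the cars serviced that hour."""
    return HOURLY_WAGE * staffers / cars_per_hour

print(f"Busy hour, 4 staffers:   ${labor_per_car(4, 8):.2f}/car")  # $9.00
print(f"Busy hour, 5 staffers:   ${labor_per_car(5, 8):.2f}/car")  # $11.25

# Whole-day average: 5 staffers on 8-hour shifts seeing 32 cars/day
# works out to 4 cars/hour, so the same function applies.
print(f"Day average, 5 staffers: ${labor_per_car(5, 32 / 8):.2f}/car")  # $22.50
```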

Cost Of Oil And Filter

Motor oil for cars and motorcycles from various producers in a German hardware store. Photo: Pittigrilli, CC0, via Wikimedia Commons

Oil is cheap! While that might run counter to current prices at a quick lube, you can get 5 quarts of full synthetic for $18.98 at Walmart, $21.99 at Meijer, or if you’re getting more outdoors supplies, $24.99 at Tractor Supply. I’ll stick with using full synthetic oil for the rest of this exercise because if you need to use 0W-20 oil, it’s probably the best option. And the Maverick is my queen and deserves the best. 


Looking for cheap but acceptable filters, I found Bosch filters for my Maverick starting at $2.60 apiece at Rock Auto.


Obviously, places like dealerships and Valvoline get fluids at wholesale prices. A quick search for bulk quantities taught me you could order a 55-gallon drum of 0W-20 full synthetic oil from the Miami Oil Company for $654.75. That works out to $2.98/quart, or $14.90 per 5 quarts. I’d assume there are even better rates for Valvoline and the like, but I don’t have access to their books.


Ticket Price, Or ‘How Did I Come In For Just One Thing And Leave With Five?’

The ticket price is the hidden number customers don’t see. It’s a target corporate sets as the desired revenue per car. At Valvoline, if your store did well on the number of customers you saw per month, the average “ticket,” and customer reviews, you’d get a bonus. This obviously encouraged CSAs to upsell when possible.

That could be from suggesting customers try a full synthetic oil instead of a blend, or attempting to sell a coolant or transmission service. Heck, air filters were/are a big part of this too. As of now, Valvoline charges $24.99 for an engine filter and $54.99 (!!!) for a cabin filter. While researching prices for my significant other’s 2012 Honda Fit, I saw Rock Auto sells Bosch engine and cabin filters for $4 and $9, respectively. This is one of the more uncomfortable aspects of working in quick lube.

I was texting one of my closest friends whom I met while working at Valvoline. I had expressed my shock after discovering Valvoline’s new prices. He gave me permission to share his response. 

[Screenshot of his text message reply]

Adding It All Up

So working with limited data, my best estimate is that it costs at least $22.50 in labor/car and at most $17.68 in materials costs for my Maverick, for a grand total of $40.18 for a quick lube oil change. Subtracting that from the original quote of $107.68 leaves $67.50 to go towards overhead costs and net profits.

The dealership was three blocks from my apartment, so I could pedal back and enjoy the pool while they were working on the truck.

Now looking at the Mishawaka dealership, labor costs are slightly different. The best going rate I can find when calling around is a $25 labor charge when I supply my own oil and filter. If an oil change takes 15 minutes, that’s $100/hour in labor costs, not an unheard-of number. $25 in labor + $17.68 in materials = $42.68. Subtracting that from the $55 overall price leaves $12.32 for overhead and profit, a much more reasonable number.
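
Putting the two quotes side by side makes the gap obvious. Here’s a minimal sketch of that comparison, again using my rough estimates for labor and materials rather than any shop’s real numbers.

```python
# Sketch of the quote comparison, using the estimates from above
# ($22.50/car quick-lube labor, $17.68 in materials). Real shops'
# books will differ.

MATERIALS = 17.68  # estimated oil + filter at near-wholesale prices

def leftover(quote: float, labor: float) -> float:
    """What remains of the quoted price after labor and materials."""
    return quote - labor - MATERIALS

print(f"Valvoline quote:  ${leftover(107.68, 22.50):.2f} to overhead/profit")  # $67.50
print(f"Dealership quote: ${leftover(55.00, 25.00):.2f} to overhead/profit")   # $12.32
```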

I once asked one of the dealership’s service managers why the cost was so low, relatively speaking, compared to their competition. He said it’s an intentional loss leader. They didn’t care about squeezing the money out of an oil change because that’s not their main business – they want to sell cars. By offering cheaper oil changes, they establish a positive reputation in the community and increase the number of people who visit their dealership. And boy, did it work. It wasn’t uncommon to see the dealership’s driveway filled with cars in line for oil changes.

For me, my go-to is grabbing oil and a filter for $29 while I’m grocery shopping at Meijer. I then take it to whichever dealership has the best quote for labor. I’m okay with that because I don’t have jacks or stands (yet) and my apartment complex frowns upon wrenching in their parking lot. But one day, I’ll have a house with a pole barn and a lift. Then just watch me. I’m sure I’ll find new, unexpected ways to make mistakes and get messy. [Ed Note: And write about them!]

Topshot: oil container via Valvoline; background image by methaphum/stock.adobe.com

The post Quick-Lube Oil Change Math: How A $20 Job Becomes A $100 Job appeared first on The Autopian.
