Teachers Are Not OK

Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses. 

One thing is clear: teachers are not OK. 

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the English meanings of the words they’re supposed to be learning, and fielding students who use AI in the middle of a conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all. 

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you. 

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do. 

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in class and treated like midterm exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

  • I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.
  • It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class. 

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one. 

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police. 

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified?

Kate Conroy 

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded. 

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot. 

I teach 18-year-olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that.

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom. 

Jeffrey Fischer

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. There's also a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are. 

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void. 

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and ask it to create an audio podcast. And the results were predictably awful: full of random, meaningless vocalizations at bizarre times, with a “female” character who was incredibly dumb and vapid (she sounded like the “manic pixie dream girl” trope from those awful movies), and “analysis” that exacerbated the problems already in the paper, so the podcast was even more wrong than the paper itself.

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students. 

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for. 

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated. 

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the "content creators," casting everyone else into the creatively bereft role of the content "consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection.

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences). 

Given the widespread use of LLMs by college students, I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology, both from having read thousands of samples of student writing over time and from cross-referencing my experience with one or more AI-use detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, they can help with confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship.

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say. “I’m just using the technology to save time, organize them more quickly, bounce them back and forth,” etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, people who are still learning to think or problem-solve in more sophisticated and creative ways will be poor evaluators of information and less likely to produce relevant and credible versions of it.

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but not to use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect: to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some happens with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment.

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration. 

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!” 

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments, and I agree with those who say the technology—along with gaps in their education due to the long-term effects of COVID—has gotten us to a point where a lot of young Gen Z and Gen Alpha kids are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!).

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which, at least for me, always involves huge amounts of labor.

It keeps me up at night and gives me existential dread about my profession, but it’s so critical to address!!!


PSA

A drawing of a sparkly raven sitting on a branch. The caption reads, "Hey facing unknowns absolutely causes anxiety. It's not weird to be struggling right now."

from https://mastodon.art/@thelate...

RFK Jr. cancels millions in funding for pandemic bird flu vaccine

The Department of Health and Human Services—under the control of anti-vaccine advocate Robert F. Kennedy Jr.—has canceled millions of dollars in federal funding awarded to Moderna to produce an mRNA vaccine against influenza viruses with pandemic potential, including the H5N1 bird flu currently sweeping US poultry and dairy cows.

Last July, the Biden administration's HHS awarded Moderna $176 million to "accelerate the development of mRNA-based pandemic influenza vaccines." In the administration's final days in January, HHS awarded the vaccine maker an additional $590 million to support "late-stage development and licensure of pre-pandemic mRNA-based vaccines." The funding would also go to the development of five additional subtypes of pandemic influenza.

On Wednesday, as news broke that the Trump administration was reneging on the contract, Moderna reported positive results from an early trial of a vaccine targeting H5 influenza viruses. In a preliminary trial of 300 healthy adults, the vaccine candidate appeared safe and boosted antibody levels against the virus 44.5-fold.

An HHS spokesperson said the decision to cancel the funding, which would have supported thorough safety and efficacy testing of the vaccines, was made because the vaccines needed more testing.

"This is not simply about efficacy—it’s about safety, integrity, and trust." HHS spokesperson Andrew Nixon told The Washington Post. "The reality is that mRNA technology remains under-tested."

Nixon went on to claim that the Trump administration wouldn't repeat the "mistakes of the last administration, which concealed legitimate safety concerns." The accusation refers to mRNA-based COVID-19 vaccines, which were developed and initially released under the first Trump administration. They have since been proven safe and effective against the deadly virus.

Kennedy, a staunch anti-vaccine advocate, has unflaggingly made false claims about the safety and efficacy of mRNA COVID-19 vaccines. In 2021, Kennedy petitioned the Food and Drug Administration to revoke authorization for COVID-19 vaccines and refrain from issuing future approvals. In recent days, Kennedy has also restricted access to COVID-19 vaccines and unilaterally revoked recommendations for healthy children and pregnant people to get the vaccines.

The federal funding for pandemic influenza vaccines was awarded as health officials around the country, including federal officials, were closely monitoring the swift and unprecedented spread of H5N1 bird flu through US dairy cows; the virus has also spread to 70 people and killed one. Under the Trump administration, regular updates on the outbreak have ceased, and experts fear that cases are going undocumented.

In a statement Wednesday, Moderna CEO Stéphane Bancel struck an optimistic note, saying the company would pursue other funding sources to continue moving forward.

"While the termination of funding from HHS adds uncertainty, we are pleased by the robust immune response and safety profile observed in this interim analysis of the Phase 1/2 study of our H5 avian flu vaccine and we will explore alternative paths forward for the program," he said. "These clinical data in pandemic influenza underscore the critical role mRNA technology has played as a countermeasure to emerging health threats."


A Texas Cop Searched License Plate Cameras Nationwide for a Woman Who Got an Abortion

Earlier this month, authorities in Texas performed a nationwide search of more than 83,000 automatic license plate reader (ALPR) cameras while looking for a woman who they said had a self-administered abortion, including cameras in states where abortion is legal, such as Washington and Illinois, according to multiple datasets obtained by 404 Media.

The news shows in stark terms how police in one state are able to take the ALPR technology, made by a company called Flock and usually marketed to individual communities to stop carjackings or find missing people, and turn it into a tool for finding people who have had abortions. In this case, the sheriff told 404 Media the family was worried for the woman’s safety and so authorities used Flock in an attempt to locate her. But health surveillance experts said they still had issues with the nationwide search. 

“You have this extraterritorial reach into other states, and Flock has decided to create a technology that breaks through the barriers, where police in one state can investigate what is a human right in another state because it is a crime in another,” Kate Bertash of the Digital Defense Fund, who researches both ALPR systems and abortion surveillance, told 404 Media. 


A Startup Is Launching A Surprisingly Good Electric Motorcycle That Costs Just $1,000

One of the biggest problems limiting the adoption of electric motorcycles is just the fact that they cost so much for what you get. An electric motorcycle with range even somewhat similar to your gasoline bike can run you $15,000 or more in America. Even harder is getting poorer nations into EVs, and one company is producing some of the cheapest, most practical motorcycles I’ve seen yet. This is the Zeno Emara, a new electric motorcycle promising up to 62 miles of real-world range. Depending on where you live, you can score one of these for the equivalent of just $1,000.

Electric motorcycles have simultaneously been one of the best and most disappointing EV developments in recent years. Today’s ‘lectro bikes make mountains of torque and produce infinite wheelies with just the twist of the throttle. They’re also whisper quiet, which might not even be something you thought you wanted in a motorcycle until you experience it for yourself. Electric motorcycles even handle beautifully.

But then you start digging into the details, and these bikes start looking a whole lot less attractive. Electric motorcycles priced in the mid-teens still go fewer than 100 miles on a charge. If you spend $20,000 or more, you can go over 100 miles, but only if you stick to slow roads. Some of these very expensive machines can’t even benefit from fast-charging technology, so you’re stuck topping up for hours.

Truth be told, there’s nothing wrong with the range. The problem is how much you’re paying to go so few miles. Yet, if you look outside of the United States or Europe, there’s a whole world of possibilities out there. The Zeno Emara doesn’t go very fast or very far, but it doesn’t cost much money, either, so it all makes sense! Then there’s the weird part: This bike’s origin story involves Tesla.

From Model Ys To Motorcycles

Zeno was founded in 2022 by Tesla alum Michael Spencer. He built a team including former Apple TV engineer Rob Newberry, former Lucid powertrain engineer Swaroop Bhushan, and other engineers from the likes of Gogoro and Tesla. Spencer’s vision for Zeno is to do for emerging markets what Tesla did for EVs in America.

As TechCrunch notes, boda bodas, or motorcycle taxis, are a huge deal in East Africa. Riding on a motorcycle taxi helps commuters cut through dense urban traffic and get to their destinations for less than riding in a car-based taxi, and it's also cheaper than owning a car. Sadly, the motorcycle taxi riders get the short end of the stick, as they end up spending around 50 percent of their income on fuel alone.

The team at Zeno sought to solve this issue and more. If motorcycle taxi owners could spend less on fuel, they could keep more of their pay.

Now, Zeno could have done what a lot of startups in America and Europe do and just slapped a battery and an electric motor into a motorcycle frame. However, there's a huge limitation there. Once you drain that battery, such as when you're working as a delivery driver or taxi driver, you have to park and wait for the battery to recharge, which means losing income.

[Image: a rider swapping a battery at a Gogoro station. Credit: Gogoro]

The solution for this has largely come from the likes of Taiwan’s Gogoro (above), which has pioneered a battery swapping system. If you’re riding a Gogoro-branded scooter and run out of juice, you just ride to a station, pull out your depleted battery, and drop in a charged battery. Boom, you get back on the road in an even shorter time than it would take to fill a gas tank.

Zeno operated quietly for the first couple of years of its existence. As TechCrunch notes, Zeno tested a few dozen Chinese electric bikes in Kenya and found that they just weren't tough enough for the environment. Locals were also disappointed that the batteries in these battery-powered motorcycles couldn't be taken out and used as a power station. So, Zeno decided to take a different path.

Startups in Africa have embraced battery-swapping technology. Back in 2022, I wrote about the Roam Air, an electric motorcycle with 56 miles of range and easily swappable batteries for just $1,500. Well, Roam isn’t alone. Joining in on this African EV moto revolution are Ampersand Solar, Arc Ride, Spiro, Zembo, and now, Zeno.

The Emara

Zeno is targeting markets in Africa and India with an ambitious goal. Not only are Zeno’s motorcycles supposed to get up to 60 miles of range, but Zeno says that they’ll cost less than a comparable gasoline motorcycle. There’s an asterisk to that, which we’ll get to in a bit.

Zeno says that the Emara is supposed to be the equivalent of a 150cc gasoline motorcycle, but designed to do almost everything better. It's designed to carry 551 pounds, climb a 30-degree slope, hit a top speed of 56 mph, and use a motor that punches out 10.7 HP. Now, some of those numbers are impressive.

A common 150cc motorbike might have a weight limit of anywhere from 350 to over 400 pounds, depending on the model, so the Emara sounds pretty rugged in that regard. That said, the Emara comes a few ponies and a few mph shy of the capabilities of a good 150cc gasoline motorcycle, but it's not far off.

The real magic is what’s underneath. The Zeno Emara is powered by at least one 2 kWh lithium-iron-phosphate (LFP) battery, and there’s a slot on the bike to carry a second. This is supposed to be good for up to 60 miles of real-world range. These batteries are removable, which allows them to be brought inside and charged at home. Or, you can take the batteries out and use them for emergency power or as a campsite power station.

Zeno also took a look at all of the other electric motorcycle projects going on in Africa and took note that a lot of them look like high school shop class projects. Because of this, Zeno decided to do extra work to create a more finished product. Basically, the idea is that part of the reason you'd buy an Emara is that you think it looks cool, not just because it can save you money.

Even the marketing tries to be hip and cool. I mean, check this out:

[Image: Zeno marketing photo. Credit: Zeno]

The bike even has four riding modes, a color LCD display, LED lighting, and disc brakes on both ends. Sure, there are no fancy radars or anything like that, but there's some pretty modern tech at play here. This sounds like something I would be glad to zip around town on.

The last two mechanisms in the Zeno plan are the battery swapping stations and the purchasing scheme. A buyer has three options for their Emara. The ideal customer is someone who buys the motorcycle and then pays a subscription for the battery. Bought this way, the bike's initial purchase price is said to be lower than that of an equivalent gasoline motorcycle. According to Zeno, the subscription is $17 per month for up to 48 kWh of energy or $29 per month for up to 120 kWh of energy.

[Image: a Zeno battery swap station. Credit: Zeno]

Subscription customers will gain access to Zeno's network of battery swap stations, where you just ride up to the station, pull out your depleted battery, and then drop in a charged battery. Alternatively, Zeno offers an option that works like a prepaid phone: you buy a bike without batteries, then pay 61 cents per kWh consumed to use loaned batteries. These customers can also use the swapping stations.

Finally, if you’re the kind of person who doesn’t want to bother with subscriptions or anything like that, you can just buy the motorcycle and its two batteries outright. That version costs $1,500, depending on the market and currency.
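For a sense of how those three options stack up on running costs, here's a minimal sketch in Python using only the figures quoted above (the $17 and $29 monthly caps and the 61-cent rate); the break-even comparison is my own illustration, not math published by Zeno:

```python
# Rough comparison of Zeno's Emara energy plans, using only the figures
# quoted in this article. Illustrative math, not Zeno's pricing model.

SUBSCRIPTIONS = {
    # plan name: (monthly fee in USD, monthly energy cap in kWh)
    "basic": (17.0, 48.0),
    "plus": (29.0, 120.0),
}
PAY_PER_KWH = 0.61  # USD per kWh on the pay-as-you-go option

for name, (fee, cap) in SUBSCRIPTIONS.items():
    # Effective $/kWh if a rider uses the full monthly allowance.
    best_case = fee / cap
    # Usage level at which the flat fee becomes cheaper than paying per kWh.
    break_even = fee / PAY_PER_KWH
    print(f"{name}: ${best_case:.2f}/kWh at the {cap:.0f} kWh cap, "
          f"cheaper than pay-as-you-go above {break_even:.0f} kWh/month")
```

At the caps, that works out to roughly $0.35/kWh on the $17 plan and $0.24/kWh on the $29 plan, and either flat fee beats the pay-as-you-go rate once a rider uses more than about 28 or 48 kWh per month, which suggests the flat plans are aimed squarely at high-mileage riders like the boda boda drivers mentioned earlier.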

As of now, Zeno is targeting Africa and India with pricing roughly equivalent to $1,000 for the model with the battery subscription. Zeno says it’s selling the first 5,000 battery subscription-based units in India for about $750 before the price goes up to $1,000.

This price is definitely competitive. As of right now, a brand-new, roughly equivalent 150cc motorcycle costs around $1,500 in East Africa. So, technically, if a buyer gets the subscription version of the Emara, they can save a little wad of cash on the initial purchase compared to buying a new gas bike. But what's also cool is that the full price of an Emara is similar to that of a 150cc motorcycle.

Zeno says it already has a waitlist that’s thousands of people long, and the list has been getting longer since the order books opened up last week. The company hopes to deliver the Emara later this year, and with luck, it’ll make a dent in the highly competitive low-cost motorcycle markets in Africa and India.

Honestly, the part of this story that gets me is how great these bikes would be for Americans. The Zeno Emara’s spec sheet sounds similar to what you’d get with a Honda Grom or a CFMoto Papio SS, but in all-electric flavor. Even at $3,000, I bet the Emara could find a small market here. There are some American riders out there who are okay with a lower range so long as it comes with a lower price, and this hits that spot. Sadly, like all of the other super cheap electric bikes I’ve written about, this one will remain forbidden fruit.

The post A Startup Is Launching A Surprisingly Good Electric Motorcycle That Costs Just $1,000 appeared first on The Autopian.
