CNET.com: This AI Tool Doesn’t Help With Homework. It Does It for You
A new AI tool called Einstein is pushing the boundaries of what automation in education looks like. Created by the startup Companion, Einstein does more than generate answers to homework questions. It logs directly into a student’s Canvas account and completes coursework on the student’s behalf.
According to its creators, Einstein operates through its own virtual computer. It can open a browser, navigate class pages, watch lecture videos, read PDFs and essays, write papers, complete quizzes and post replies in discussion boards. Once connected to a student’s account, the system can monitor deadlines and automatically submit assignments.
Unlike chatbots that respond when prompted, Einstein functions more like a digital stand-in for a human student. After setup, it can run in the background with little ongoing input.
The CNET article on Companion’s Einstein AI treats what is essentially an industrialized academic fraud engine with unsettling lightness — framing it largely as an edgy innovation story rather than reckoning with its sweeping implications. Here is what the piece fails to adequately address:
Academic Fraud at an Industrial Scale
Einstein doesn’t just help students cheat; it assumes their identity to do so. By logging directly into a student’s Canvas account, submitting assignments, posting on discussion boards, and taking quizzes as that student, it engages in impersonation, a violation of academic honor codes at most institutions and potentially of the Computer Fraud and Abuse Act, which prohibits unauthorized or fraudulent use of computer systems. The CEO’s dismissive line, “Students are already using AI, we’re just giving them a better version,” is a false equivalence: it conflates a spell-checker with a hired ghostwriter sitting your final exam.
Credential Fraud and Real-World Harm
A degree is a social contract. Employers, hospitals, law firms, and engineering companies hire graduates on the assumption that their credentials reflect actual competency. When Einstein completes coursework for a nursing student, a future engineer, or an accountant, the downstream consequences fall on third parties — patients, clients, and the public — who have no idea they’re relying on someone whose skills were never actually developed. The article never asks: what happens when the graduate who never learned the material is now practicing?
Total Credential Devaluation
Einstein is commercially available and scalable, meaning an entire generation of graduates from a given institution could theoretically have had their coursework automated. This systematically devalues the degrees of every honest student who did the work — a silent tax on integrity that the article doesn’t even acknowledge.
Account Security and Data Privacy
To function, Einstein requires access to a student’s Canvas login credentials and likely monitors their academic record, assignment history, course materials, and communications. The JED Foundation has documented that AI companion platforms routinely store sensitive user data, use it to train future AI models, and share it without clear disclosure. Students handing over institutional login credentials to a third-party startup create massive vectors for data breaches, credential theft, and institutional liability — none of which are discussed.
The “Autonomous Agent” Risk Beyond Academics
Einstein operates via its own virtual machine with persistent file access and full internet connectivity. The CEO boasted it “makes ChatGPT look simple.” An autonomous agent with persistent system access, browser control, and the ability to act on a user’s behalf without ongoing input is categorically different from a chatbot — it’s closer to a digital employee with no accountability, ethical training, or legal liability. The article treats this as a feature rather than a fundamental concern.
The Vulnerability of the Students Themselves
The article frames students as rational actors making informed choices. But the students most likely to use Einstein, those who are academically overwhelmed, anxious, or financially stressed, are precisely the ones most vulnerable to the promise of an “AI that just handles everything.” Research on AI companion products from Stanford and the APA shows that AI tools marketed to vulnerable populations routinely reinforce maladaptive behaviors, creating dependency rather than capability. Einstein could plausibly leave struggling students more helpless, having completed their degree with zero foundational skills.
No Regulatory Framework Exists
Brown University researchers found that AI tools operating in sensitive domains — education, mental health, professional development — “systematically violate ethical standards” with zero regulatory accountability because no governing framework currently applies to them. Unlike a human tutor caught doing someone’s homework, Companion faces no professional licensing board, no accreditation body, and no clear legal exposure. The article’s brief nod to academic integrity debates completely sidesteps the question of whether this product should even be legal to sell.
The Educator’s False Consolation
The article closes by quoting a PhD teaching assistant who suggests Einstein will force educators to “redesign courses” away from virtual assignments. This is a dangerously naive deflection. It places the entire burden of reform on already-overworked faculty, ignores the millions of students currently enrolled in courses that can’t be instantly redesigned, and treats academic fraud as a useful forcing function rather than a harm to be stopped. It’s the equivalent of suggesting that car theft is good because it encourages people to use better locks.
The educator response to Einstein has been swift, visceral, and practically unanimous — and it’s generating reactions well beyond the usual AI-in-education hand-wringing.
The Immediate Emotional Reaction
The r/Professors subreddit erupted almost immediately after Einstein went public. The top-voted response was simply “Get me off this rock” — a sentence that captures the exhaustion of faculty who have been fighting AI cheating since ChatGPT launched and now face a tool that doesn’t even require a student to copy-paste. One professor shared this moment from his own classroom: “I literally just told a class, ‘If AI can do it, why would anyone hire YOU?’ Half the class seemed to have a realization, while the other half was using it to do the assignment.”
The Uphill Battle Acknowledgment
Educators are widely acknowledging they are losing ground — not just on Einstein specifically, but structurally. Futurism noted that “teachers and professors are hopeless to keep up with all the latest ways AI can be used for cheating,” especially as their own institutions simultaneously push AI tools on students through partnerships with big tech companies. The contradiction — universities condemning cheating while licensing AI platforms — is not lost on faculty.
Technical Countermeasures Being Raised
Some professors are pointing to detection approaches, but with muted confidence:
- Engagement-based grading: Restructuring courses around oral exams, live presentations, and in-class work where Einstein cannot intervene
- Citation and writing style analysis: Flagging work that lacks the imperfections of authentic student voice; one professor noted it’s “a breath of fresh air to see grammatical mistakes and a shaky grasp of the subject matter”
- Canvas-side monitoring: Some instructors are calling on institutions to flag third-party logins or unusual submission patterns within the LMS itself
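The Canvas-side monitoring idea can be illustrated with a minimal sketch. The log records, field names, and thresholds below are hypothetical (real LMS audit logs, such as Canvas’s authentication log, have a different shape); the point is only that automation-style user agents and odd-hour sessions are cheap first-pass signals an institution could screen for:

```python
from datetime import datetime

# Hypothetical, simplified login records. Real LMS audit logs
# (e.g. Canvas's authentication log) differ in shape and field names.
logins = [
    {"user": "jdoe", "user_agent": "Mozilla/5.0 (Macintosh) Safari/605.1", "ts": "2025-11-01T14:02:00"},
    {"user": "jdoe", "user_agent": "Mozilla/5.0 (Macintosh) Safari/605.1", "ts": "2025-11-03T09:15:00"},
    {"user": "jdoe", "user_agent": "HeadlessChrome/120.0", "ts": "2025-11-04T03:41:00"},
]

# User-agent substrings typical of browser-automation tools (illustrative, not exhaustive).
AUTOMATION_MARKERS = ("HeadlessChrome", "Playwright", "Selenium", "python-requests")

def flag_suspicious_logins(records, night_hours=range(1, 6)):
    """Return sessions with an automation-style user agent or odd-hour activity."""
    flags = []
    for r in records:
        reasons = []
        if any(m in r["user_agent"] for m in AUTOMATION_MARKERS):
            reasons.append("automation-style user agent")
        if datetime.fromisoformat(r["ts"]).hour in night_hours:
            reasons.append("odd-hour session")
        if reasons:
            flags.append({"ts": r["ts"], "reasons": reasons})
    return flags
```

A real deployment would baseline each student’s typical devices and hours rather than use fixed rules, but even this crude pass would surface a headless browser logging in at 3 a.m.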
A Hidden Danger Professors Are Naming
Beyond cheating, tech-savvy faculty have identified a deeply troubling security risk that goes beyond academic integrity. Because Einstein reads discussion boards to function, a malicious student could embed a hidden prompt injection attack — concealed text in a forum post that hijacks Einstein and instructs it to delete files or execute harmful actions on the cheating student’s own machine. Additionally, Einstein scrapes the names, ideas, and intellectual property of every other student in the course and transmits that data to third-party servers — a likely FERPA violation that no one at Companion appears to have addressed.
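The injection vector professors describe typically relies on text that is present in a post’s HTML but invisible in the rendered page. Here is a minimal defensive sketch, using only Python’s standard library, that scans forum-post HTML for text inside invisibly styled elements; the style markers checked are illustrative, not a complete list of hiding techniques:

```python
from html.parser import HTMLParser

# Inline-style fragments that commonly hide text from human readers (illustrative).
HIDDEN_STYLE_MARKERS = ("display:none", "visibility:hidden", "font-size:0", "opacity:0")

class HiddenTextFinder(HTMLParser):
    """Collect text that appears inside elements styled to be invisible."""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0   # how many nested tags deep we are inside a hidden element
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "").lower()
        if any(m in style for m in HIDDEN_STYLE_MARKERS):
            self.hidden_depth += 1
        elif self.hidden_depth:
            self.hidden_depth += 1  # nested tag inside an already-hidden element

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if self.hidden_depth and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_text(post_html):
    """Return all text fragments hidden by inline styles in a post's HTML."""
    finder = HiddenTextFinder()
    finder.feed(post_html)
    return finder.hidden_text
```

An agent that ingests raw post HTML would read that hidden text as part of its input; a human grader never sees it. Any LMS-integrated agent would need exactly this kind of sanitization layer, and nothing in the reporting suggests Einstein has one.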
The “Redesign the Course” Deflection
A vocal minority of educators — represented in the CNET piece by a University of Chicago TA — argue that Einstein will beneficially force professors to stop relying on virtual assignments. Most faculty on Reddit and LinkedIn are rejecting this framing as naive, noting it punishes honest students, places the entire burden of reform on faculty, and ignores the millions of students already enrolled in courses that can’t be instantly overhauled.
The Bigger Fear: A Broken Credential System
The deepest concern professors are voicing is not about any single assignment — it’s about whether degrees will mean anything at all. As one Futurism headline put it bluntly: “It’s Starting to Look Like AI Has Killed the Entire Model of College.” Einstein isn’t just a cheating shortcut; it’s a direct challenge to whether the four-year credential system can survive as a meaningful signal of human competency.
The legal exposure for Einstein users spans several overlapping categories — from institutional discipline all the way up to potential federal criminal liability — and the law is catching up faster than most students realize.
Institutional Consequences (The Most Immediate Risk)
Courts have already sided with universities on AI cheating discipline, even when school policies didn’t explicitly name AI at the time of the violation. A federal judge in Massachusetts upheld a school’s punishment of a student for AI cheating, ruling that basic academic integrity standards are broad enough to encompass AI misuse — students should reasonably know that submitting AI-generated work as their own violates honesty expectations regardless of specific policy language. Consequences universities can and do impose include:
- Failing grades on assignments or entire courses
- Academic probation or transcript notation
- Loss of scholarships and financial aid eligibility
- Suspension or expulsion
- Disqualification from honor societies, graduate programs, and professional certifications
The FERPA Complication
Because Einstein accesses Canvas — which contains names, grades, discussion posts, and academic records of all students in a course — every user potentially exposes their classmates’ protected data to a third-party server without consent. This is a likely violation of the Family Educational Rights and Privacy Act (FERPA), and the student doing the cheating, not just the company, could be implicated in that breach.
The Computer Fraud and Abuse Act (CFAA)
This is where consequences get genuinely serious. Einstein logs into institutional systems using a student’s credentials to impersonate that student and submit work — a form of unauthorized computer access under the CFAA. While the CFAA has historically been applied to hackers and corporate espionage, legal scholars note that using automated tools to access systems in ways that violate the system’s terms of service falls squarely within its scope. Institutional network access agreements explicitly prohibit third-party automated agents acting on a user’s behalf — meaning Einstein users may be in federal criminal territory without knowing it.
Emerging Federal Legislation
Congress is actively moving to increase penalties specifically for AI-assisted fraud and impersonation. The AI Fraud Deterrence Act, introduced in late 2025, would classify AI-facilitated fraud under both mail and wire fraud statutes, carrying fines up to $1–2 million and prison sentences up to 20–30 years for the most serious violations. While the bill targets commercial scammers primarily, its language is broad enough to encompass AI-assisted academic impersonation, particularly for students in federally funded programs.
The Long-Tail Career Risk
Beyond formal legal consequences, a single academic misconduct finding can permanently damage professional prospects. Graduate school applications, bar exams, medical licensing boards, CPA certifications, and federal employment background checks all ask about academic dishonesty findings. Some professional licensing bodies — nursing, law, engineering — treat academic fraud findings as character and fitness disqualifiers. The degree may be earned, but the license to use it may never come.
A Growing “Ghost Student” Parallel
Organized crime rings are already using AI bots to impersonate students, enroll in online courses, and fraudulently collect federal financial aid — $11.1 million in unrecoverable aid from fake enrollments in California alone in 2024. Einstein operates on the same technical architecture as these fraud schemes, and there is no legal firewall distinguishing “cheating for grades” from “AI-assisted identity fraud” in federal statute. A student using Einstein is, functionally, operating the same kind of automated impersonation system — just for a grade rather than a check.
Einstein is not an educational tool — it is an academic identity theft machine dressed in the language of innovation, and every student, educator, administrator, and lawmaker should treat it as such. For students, the allure of a completed assignment is a Faustian bargain: you risk expulsion, federal criminal liability under the Computer Fraud and Abuse Act, permanent transcript notations that close the doors to graduate school and professional licensing, and — most devastatingly — the emergence into a career field you are wholly unprepared for, carrying a credential you never earned.
For educators and campus administrators, silence is complicity; every institution that fails to explicitly prohibit Einstein by name, flag third-party Canvas logins at the system level, and pursue disciplinary action aggressively is quietly ratifying the fraud. For state and local education officials, the moment has arrived to stop treating AI cheating as a pedagogical inconvenience and start treating it as the institutional emergency it is — one that demands binding policy, mandatory reporting frameworks, and direct engagement with federal prosecutors and the Department of Education. Einstein’s CEO can dress this up as disruption, but there is nothing disruptive about fraud; history has another word for a person who presents someone else’s work, knowledge, and identity as their own and collects payment for it.
The technology is new. The crime is ancient. And the damage it will do — to the credibility of degrees, to the safety of the public served by unqualified professionals, and to the students whose intellectual development was quietly outsourced to a machine — will be neither brief nor easily undone.