throwatdem12311 12 hours ago [-]
When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.
We already had AI proof education.
phatskat 4 hours ago [-]
My networking final in high school was probably my favorite test-taking experience. There was a small written portion on e.g. subnets, but the bulk of our grade was setting up a physical network, testing it, and leaving the room. Our teacher then sabotaged three parts of our network - could be hardware, a router misconfiguration, etc. When we came in, we had (IIRC) 20-ish minutes to diagnose and fix it.
The best was when she barely unscrewed one of those big DIN connectors, so at a quick glance it looked fine but wasn't fully connected.
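For the curious, the written subnet portion was the kind of thing you can sketch with Python's `ipaddress` module - for instance, checking whether a misconfigured netmask quietly splits two hosts onto different networks (the addresses below are hypothetical):

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int) -> bool:
    """Return True if both hosts fall inside the same /prefix network."""
    net_a = ipaddress.ip_network(f"{host_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{host_b}/{prefix}", strict=False)
    return net_a == net_b

# With a /24 mask both hosts share 192.168.1.0/24 ...
print(same_subnet("192.168.1.10", "192.168.1.200", 24))  # True
# ... but a /25 mask splits them across the .0 and .128 halves.
print(same_subnet("192.168.1.10", "192.168.1.200", 25))  # False
```

That second case is exactly the sort of "looks fine at a glance" sabotage that only shows up when you actually do the math.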
onionisafruit 3 hours ago [-]
That does sound like fun, but it would probably be even more fun to be the instructor giving this test.
thunfischtoast 3 hours ago [-]
> The best was when she barely unscrewed one of those big DIN connectors, so at a quick glance it looked fine but wasn't fully connected.
That's evil, haha. It's the case where you unplug and replug everything, seemingly changing nothing, but then it works.
zoom6628 7 hours ago [-]
When I did tertiary studies in programming there wasn't AI, but we did our programming exams with pencil and paper. The "beneficial" prep I'd had since high school was using punch cards, with 24h turnaround time for compiles. That really makes you think, and you learn how to desk-check even thousand-line programs. Intense focus, structuring for readability (to catch typos) and simplicity (to catch logic errors) helped enormously. It was not unusual to change a hundred lines of code and submit knowing it wouldn't compile, but would throw up the other errors I couldn't find. Our exams would give us 4-6 attempts for a clean compile AND correct output. The only space where I experience the same challenge now (40+ yrs later) is embedded code. Desktop and web stuff have LSPs, dynamic reloads, and interpreted code (not a thing for me when learning) with instant feedback.
Lots of skills from those old days that have been lost/ignored in the pretence of productivity.
malux85 7 hours ago [-]
Yeah, I really valued learning to code when I didn't have the internet available. It taught me patience and deep thinking, problem decomposition, and organic (brain) execution.
phatfish 50 minutes ago [-]
That is great for core principles. But languages and development environments have since assumed everyone has access to the internet, meaning more "stuff" is the solution to problems (massive standard libraries or community-created ones) rather than elegant language solutions.
The internet enabled all the complexity we have today. LLMs will have a similar effect, but instead of engineers actually having to understand the system (even in its complexity), they will just be querying the oracle to build things or solve problems.
When the oracle can't help (or maybe refuses to) is when it gets interesting.
synack 6 hours ago [-]
I learned HTML/CSS from a book on a computer with no internet access. Seemed reasonable at the time but in hindsight was absolutely ridiculous.
nsyne 11 hours ago [-]
I personally dislike placing a heavy emphasis on exams. Assignments/projects have been consistently the most enjoyable and rewarding parts of the courses I've taken so far in university.
It's a shame that they are also way more susceptible to cheating with AI.
kcexn 8 hours ago [-]
The problem with exams is that everyone has a bad experience with a poorly written one. Well-written exams will have questions that test students at different levels of understanding across the whole curriculum.
So a student who only understands the basics should be able to answer most of the easy questions and students who have a deeper understanding can answer the harder ones.
Well-written exams should feel pretty fair and leave students feeling like the result they got is proportional to the effort they put into studying the material (or at least how well they personally felt they understood the material).
CSSer 6 hours ago [-]
Exams almost filtered me out of this industry before I even got started. I later went on to be a lead developer! I went to a rural high school with a poor math curriculum. I understood the concepts, but I was slow. When I got to undergrad, my first calc professor gave us a 60 question test with 50 minutes of allotted time. He told us if we couldn't do the problems fast enough we weren't cut out for the work and it would be better if we quit now. I've never felt more inadequate in my life. It's one of my only Ws.
sudahtigabulan 1 hours ago [-]
> a 60 question test with 50 minutes of allotted time
Is this kind of test - many short questions - a standard thing for math in your country?
My university exams were pretty much all "2-question", in 90 minutes.
The first half was an essay where you have to reproduce a lesson from the curriculum, in your own words.
The second half was "the formulas" - you have to develop one or two formulas from first principles.
I once got an A- even though I got "the formulas" half very wrong. As the teacher explained later, I had simply placed the origin of the coordinate system somewhere other than where the textbook did. And this was supposed to be a bad teacher - he actually gave Ds to almost all of us (180 people). This was a makeup exam.
hawaiianbrah 6 hours ago [-]
What does W mean? Withdraw?
CSSer 5 hours ago [-]
Yes. You can do it a set number of times, opting to either retake the course for a better grade or simply not take it again. Either way it shows on your transcript.
bawolff 5 hours ago [-]
Exams are kind of like interviews, at some level they are always at least a bit artificial and sometimes very artificial. I don't really think they do a good job of proving understanding but we also don't really have anything better.
ransom1538 5 hours ago [-]
Every class. Teachers think: "We want a curve, not everyone is getting an A". This pushes the teacher to ask questions not covered in the material. If they stick to what is covered, it is hard to get a decent curve. So, in the end, teachers just hand out grades based on IQ, and there isn't a point to grades.
bawolff 5 hours ago [-]
Courses aren't supposed to be pure memorization. If you can only answer questions directly covered in the material but can't apply it to new situations, you should not get an A.
thaumasiotes 51 minutes ago [-]
> Teachers think: "We want a curve, not everyone is getting an A". This pushes the teacher to ask questions not covered in the material. If they stick to what is covered it is hard to get a decent curve.
You've never been a teacher.
Aurornis 7 hours ago [-]
> It's a shame that they are also way more susceptible to cheating with AI.
They were more prone to cheating before AI, too.
Cheating has always existed at some level, but from talking to my couple of friends who teach undergrad level courses the attitudes of students toward cheating have been changing even before AI was everywhere. They would complain about cohorts coming through where cheating was obvious and rampant, combined with administrations who started going soft on cheating because they didn’t want to lose (paying) students.
AI has taken it further, with students justifying it not as cheating but as using tools at their disposal.
I was talking to my friend about this last week and he was frustrated that several of his students had submitted papers that had all the signs of ChatGPT output, so he asked them simple questions about their papers. Most of them “couldn’t remember” what they wrote about.
It’s strange to me because when I went to college getting caught cheating was a big problem that resulted in students getting put on probationary watch and being legitimately scared of the consequences. Now at many schools cheating is routine and students push the boundaries of what they can get their classes to accept because they have no fear of any punishment. YMMV depending on the institution
nradov 5 hours ago [-]
Those students will go on to "cheat" in job interviews. And the sad thing is that many of them will never face any consequences because the majority of jobs at large companies don't require any actual competence. The college degree allows them to check a box on job applications but the jobs don't actually depend on anything learned in college.
jimbokun 5 hours ago [-]
I wonder if the companies hiring students from those universities have come to understand there’s no guarantee they learned any of the material.
nradov 4 hours ago [-]
The dirty secret in HR is that performance in most regular jobs doesn't really depend on having learned anything in college. Companies only require a college degree to filter the candidate pool down to a manageable level. Line managers know they'll have to train entry level hires on almost everything.
IBM used to hire software developers based on aptitude test scores regardless of formal education, then put them through an extensive internal training program. It worked fine.
LtWorf 1 hours ago [-]
Most companies expect people to be productive sooner rather than later, and not every job is completely trivial.
Cthulhu_ 36 minutes ago [-]
I also think assignments and projects work best, but only for the percentage of students that are self-motivated, curious, interested, etc.
Unfortunately a lot aren't, they feel like they have to be there or these courses are the only path for them to get a good job. And unfortunately they end up in the workforce, too. You'll often see teams with one good developer and a lot of hangers-on.
syntaxing 11 hours ago [-]
I went to college as a MechE, so I'm unsure if compsci was different. But overall, all the "fun" projects were labs. We had three semesters of hell, each with 2-3 labs, and we wrote 20 pages or so for EACH lab every week (usually in a team of 2-3).
gpm 11 hours ago [-]
Also way more susceptible to cheating in traditional non-AI ways. And your mark ends up depending a lot on how much time you have to invest independent of how good you are at the course material.
Assignments and projects are great for learning, but suck for evaluation.
lokar 11 hours ago [-]
I really appreciated classes where there were rapidly diminishing returns to time spent :)
Another example, lit classes where the grade is based on time limited, open book exams, hand written in "blue books"
Read the book, pay attention in class, spend 90 min writing an essay, and you are done.
jason_zig 10 hours ago [-]
Is evaluation that important? Ultimately, if you can't do the work, you're only cheating yourself in the long run...
musicale 10 hours ago [-]
That is the traditional view, the view of those who want to improve their own knowledge and abilities, and presumably the view of those who would like to consider the degree to be a meaningful credential.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
antonymoose 8 hours ago [-]
You say that, but I was Class of 2013, aka during the massive hiring boom of the teens. I tutored a friend of mine with a "Ds get degrees" mentality who eventually graduated and now works an ass-in-seat job for Booz Allen or one of those types. I used to joke about it with another friend that his diploma ought to include an asterisk and a half dozen other names, for how much of his take-home work we ultimately did for him. I'm pretty sure he makes about the same as me by now, purely on tenure.
Personally, I dropped out despite a full ride+ because why would I put in work for a no-name state school when I already had an FTE job as a developer out of high school anyway.
Turns out fraudulent action can still get the bag.
WoodenChair 8 hours ago [-]
I agree with your premise about why accurate evaluation matters, but your post comes across as pretty bitter. Unless you’re at the job with him, you really don’t know that it’s a “I just need to show up” job he has at Booz Allen. Perhaps he has other great traits like a high social or emotional intelligence that make him good at his job beyond whatever was being evaluated on those projects you helped him with.
nradov 4 hours ago [-]
I think you're missing the point. The majority of jobs at companies like Booz Allen are sort of like Kabuki theater and don't require any technical competence. The main responsibilities are to show up on time and present a certain image to customers.
II2II 9 hours ago [-]
Part of the purpose for evaluation is to provide feedback. I'm not going to claim that the form of feedback is great, but it does offer motivation to improve.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
jimbokun 5 hours ago [-]
It’s important for signaling to employers that you have obtained skills useful to them.
And for most students that’s all they really care about.
If the companies stop valuing the diplomas, students will stop paying tuition to attend, and the universities eventually collapse.
jmye 10 hours ago [-]
Yes. I care that the work I've done and what I've learned is actually good and correct. Vibes-based learning/anything is valueless.
fma 11 hours ago [-]
Then I suppose we can go back to having computer labs that can only access whitelisted domains and other study materials. Students code there to ensure no cheating.
nradov 4 hours ago [-]
Will the students have to go through security screening for personal devices?
zdragnar 10 hours ago [-]
The labs I was in weren't connected to the Internet at all, only a local intranet. Though they were all running pre-Oracle Solaris if memory serves, so I'm probably dating myself a bit.
Cthulhu_ 40 minutes ago [-]
Yeah none of the problems with AI in education are new; some schools (or news articles) are just panicking because they gave their students laptops (and/or made them a mandatory part) and now the genie is out of the bottle.
But there were already heaps of problems with tech in education before AI.
My CS projects were often pretty free-form, so in theory I could've just used AI - today, anyway. But a big part of the grade was a face-to-face interview where you actually had to talk about the code you wrote. Anyone riding along on other people's work without actually doing any themselves would fall through then.
stingraycharles 8 hours ago [-]
Yeah, exactly. I remember having to write Java and C++ by hand in college in the early 2000s. It was also a good test of how well you knew the syntax.
nradov 8 hours ago [-]
Syntax seems like a stupid thing to test in university level courses. That's trade school stuff. And I don't mean that as a criticism of trade schools, they just have a different focus.
WoodenChair 7 hours ago [-]
Syntax is not the focus of your testing, but it’s often a pre-requisite to be clearly and accurately speaking the same language. Think not of taking off points for missing a semicolon but instead understanding the difference between the syntax for a method call and a property access. The different syntax conveys different meaning and so we should expect some basic level of accuracy to the language in question. At least that’s how I see it.
hsbauauvhabzb 6 hours ago [-]
Maintaining that consistency when marking at scale is hard, though. It's not whether the exam writer sees it that way; it's whether the markers understand intent and objective over pedantic nuance.
tehjoker 7 hours ago [-]
Knowledge is built on foundations. Knowing syntax in one language is necessary to be able to do anything practical, which interacts with theory. You build valuable schema of the world by iterative theory and practice.
nradov 6 hours ago [-]
Nah. Syntax is trivial and irrelevant for teaching CS theory.
robryan 3 hours ago [-]
Would you say the same thing for teaching math or music?
icelancer 6 hours ago [-]
Lambda calculus?!
eudamoniac 5 hours ago [-]
Do you think spelling is trivial and irrelevant in a foreign language class?
bawolff 5 hours ago [-]
That's a poor comparison given they said CS theory.
I could easily imagine a CS theory course that doesn't involve any programming language at all.
nradov 4 hours ago [-]
Yes, there are some university CS courses on topics like discrete mathematics, theory of computation, and ethics that don't involve any programming.
raincole 5 hours ago [-]
A college exam tests C++ syntax and people frame it as a good thing. Interesting. It sounds almost indefensible to me. If it were Scheme perhaps it would be okay, but C++ and Java...
BobbyTables2 10 hours ago [-]
Today, just having teachers walk around during an exam instead of browsing on their phones would do wonders…
Izkata 8 hours ago [-]
Half related: reminds me of my physics teacher's test of how observant we were. The extra credit question on the test was "what is your teacher's favorite color?", which she had so far given no indication of. But while watching us she was walking all over the room in every possible direction, because the answer was on a piece of paper taped to her back.
yurishimo 43 minutes ago [-]
Sounds like something Restivo would do.
ghighi7878 11 hours ago [-]
Writing programs by hand is something I had to do too. Complete waste of time.
cyberax 7 hours ago [-]
Bonus point: even if you use AI to prepare the submission, copying it down by hand will at least force you to _read_ it.
Cthulhu_ 34 minutes ago [-]
Some people spend ages writing a cheat sheet with the intent of cheating on the test, but realise that because they wrote it down and/or tried to summarise it, they actually learned the material.
curun1r 3 hours ago [-]
I'm old enough to remember a similar controversy over whether to allow calculators in math classes. While most schools were banning them to force kids to learn how to do math without them, my school went the other way. They mandated that every student had one and then changed the assignments and tests to account for it. Gone were questions that had whole-number answers that could be computed in our heads. Instead, answers were complex and the only way to know whether you'd done the question correctly was to be sure of your method.
They even allowed us to write programs in TI-BASIC that we could use on tests; the only limitation was that we were not allowed to share programs with other students. I discovered that rather than trying to cram for exams, I could just write a program that would solve each class of problem we were likely to see on the exam, and by essentially teaching my calculator to pass the test, I also taught myself. It was a vastly better way for me to study.
It also led to my decision to major in comp sci and my career in software. I'm forever grateful to those teachers for choosing to see the latest technology as a multiplier of student potential rather than a way students could cheat to avoid learning.
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools. Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what us older folks had to do in school. We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations. The world they will inhabit has changed radically from the one we entered into following school.
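As a rough sketch of what "teaching your calculator" meant - in Python rather than TI-BASIC, and with the quadratic formula standing in for whatever class of problem was actually on the exam:

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> list[float]:
    """Return the real roots of a*x^2 + b*x + c = 0, sorted ascending."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    # A set collapses the double root when disc == 0.
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

print(solve_quadratic(1, -5, 6))  # [2.0, 3.0]
print(solve_quadratic(1, 0, 1))   # []
```

The point wasn't the program itself: to write it you had to handle the discriminant cases correctly, which is exactly the understanding the exam was testing.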
amtamt 2 hours ago [-]
> Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI?
That is way too high a recurring cost, one that many won't be able to afford. One could get a second-hand calculator or even computer, and then the only additional resource needed was one's willingness.
By mandating AI usage, we'd only widen the gap between the haves and have-nots. I personally do not like the idea.
madrox 2 hours ago [-]
This is probably the fairest counter argument I’ve heard. One can hope that today’s AI will eventually be as cheap as a calculator, though.
neal_jones 22 minutes ago [-]
That is my hope. At the same time, feels like a peak “don’t know what we don’t know” situation
encrux 1 hours ago [-]
AI in its current phase, definitely. However, we've been seeing the transformer architecture plateau in the last couple of years. There are still improvements, but open-source models are catching up.
I feel like at this point it’s an inevitability that given enough time, capable models will be cheap enough for everyone.
PunchyHamster 1 hours ago [-]
> So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI?
Because using AI is the complete opposite of "I learned programming just to make tests easier".
By learning how to program a solver, you not only learned how to program but also learned the method well enough to write it.
By pawning it off to AI to solve, you have learned nothing - not even how to prompt correctly, as test questions are usually formulated well enough that AI doesn't need prompt massaging to get them right.
You can use AI to get some knowledge about the problem (assuming you don't hit a hallucination), but that's not what will happen when you use it for a test.
And if you DO want to teach students how to use AI effectively, you can just have an AI class...
logicchains 5 minutes ago [-]
>By pawning it off to AI to solve, you have learned nothing, not even how to prompt correctly as test questions are usually formulated well enough that AI doesn't need prompt massaging to get it.
If you got AI to produce a working solution, you solved the problem. In the real world nobody who's paying you cares about the method as long as you deliver results. Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
muzzleflash 2 hours ago [-]
I think the controversial part about AI goes far beyond "it is a new technology, so use it".
When the scientific calculator was invented, people could easily know what went into its production - as in, what circuitry appears in it. You knew that if you bought it, it was yours. Want to program it? Grab a book and do so. The whole package was a fixed price. You were in control.
With AI? You are not at all in control. You rely on a big tech giant (one of maybe 4 useful ones) that is riding what people controversially still call an economic disaster. You are relying on a technology that is very likely designed to bait-and-switch you. As soon as you get too comfortable with AI, the big tech companies can just bump up the prices and you will not be able to say no.
You rely on a technology that you do not control.
The comparison of AI to a calculator or any other technological advancement for students is apples and oranges for that reason.
Imagine giving a student a personal AI datacenter to carry with them. This may be more of a fair comparison.
PS Training students on using AI, especially for free, is setting them up for reliance on the big tech companies and the subscription model.
mquander 2 hours ago [-]
Your comment is framed like "giving a student a personal AI datacenter to carry with them" is unrealistic, but in fact it is easy for anyone with access to $1000-$2000 worth of compute to download and operate exactly that for free, with performance perhaps a year behind the state of the art.
latexr 2 hours ago [-]
> but in fact it is easy for anyone with access to $1000-$2000 worth of compute
Even if we assume that to be true, you severely underestimate how many people that condition excludes.
zozbot234 1 hours ago [-]
There are simpler LLMs that run on much cheaper devices and are still helpful for baseline tasks. Of course they are prone to hallucinating once they reach the limits of their world knowledge, but this also changes their effectiveness in an educational context: they can help you polish a paper (much of their reliable knowledge is about language, syntax and style/pragmatics of the input texts), but you still have to plan the writing on your own.
mquander 2 hours ago [-]
It doesn't exclude people who attend high schools and colleges that have a computer lab.
mschuster91 1 hours ago [-]
That however requires significant investments - either each computer gets a powerful GPU for local inference (which cost a fortune) or the school gets a rack worth of compute. Most schools however even struggle to get their children fed.
Another issue is that it forces kids to stay in school for longer to do their homework, which can be a serious problem in rural areas where public transport is limited, so parents are forced to fetch their kids from school which may not be compatible with working hours.
oerdier 2 hours ago [-]
A critical difference between a calculator and an LLM is that a calculator doesn't make decisions. A calculator performs the operations you type in, nothing more. An LLM does make decisions. The human operator of the LLM needs to be able to evaluate the decisions made by the LLM. That requires education and experience beforehand.
An LLM is a force multiplier only, not a replacement. It's a personal assistant to an expert. To use an LLM in an acceptable way, you still first have to learn how to do what it does yourself. I think your suggestion that people be taught how to use LLMs is justified, but they should do so only after first being taught a no-LLM curriculum - entirely after what counted as an education in pre-LLM times. Don't incorporate LLMs into our current education; instead, teach the use of LLMs after our current education.
prox 2 hours ago [-]
The post yesterday about the teacher who gave students an Apple II and taught assembly was very enlightening, and an example of how to go forward.
vincnetas 1 hours ago [-]
Continuing the personal assistant analogy: I bet even now we have ultra-rich people who are not smart enough to do things themselves but are smart enough to buy (hire) smart people to work for them, and even to allow them to make decisions without understanding them - with only one guardrail: does this produce wealth for me? If yes, do what you need; I don't care :)
So I think this is applicable to AI also: pay for smarter-than-you AIs, pit them against each other, let them supervise each other, and measure the outcomes you need. Who cares how they achieve that (sounds clinical and scary).
spaqin 60 minutes ago [-]
Replace "using AI" with "asking your parents". From a student's perspective, their parents are probably an expert in anything, but sometimes might make things up and they won't be any wiser to notice, because they don't yet have the basic knowledge to know what to double check for. Just like LLMs.
Why doesn't the essay class allow us asking your parents to write it for them? The art class, why not ask your parents to paint something for you? Geography, why not let ask your parents during a test?
ZiiS 2 hours ago [-]
The difference is you learned useful maths teaching your calculator. At the moment, a teacher can't tell if you even read the LLM output. Even if all future literature is written with an LLM, it is highly likely there are skills you need to learn to become a best-selling author.
yorwba 1 hours ago [-]
So, scientific calculators
- made tasks easy that were a necessary prerequisite for advanced math (basic arithmetic), but not what the lesson was supposed to be about
- could in theory also let students skip over what they were supposed to be learning (applying the correct operations in the correct order to solve a problem) but doing so would require programming or getting a program from someone else, which the teachers probably figured was a high-enough hurdle to accept the risk
Hence, scientific calculators helped teachers by removing unnecessary friction.
Meanwhile, current LLMs
- will happily attempt to do the student's entire homework for them
- cannot reliably be restricted in functionality to leave the part the students are supposed to do themselves to the student
Hence, LLMs undermine teachers by removing necessary effort.
Sure, in theory LLMs could enable even more focused lessons by removing even bigger unnecessary frictions (e.g. in history class, have a LLM scour a large collection of primary sources to exhaustively list passages mentioning a certain topic), but students cannot be trusted to use them this way.
Hence, teachers are trying to use all kinds of tricks to ensure that what they wanted to teach actually passed through the student's brain at some point.
zozbot234 1 hours ago [-]
> - cannot reliably be restricted in functionality to leave the part the students are supposed to do themselves to the student
Small local LLMs are essentially that. If an LLM can tell you to eat rocks as a tasty snack or use glue to make the cheese stick to your pizza, imagine what it says when you ask it to analyze/explain complex academic subjects, or solve fiddly problems. But it will still reliably help you polish your language, like a subject-specific dictionary/thesaurus.
PunchyHamster 1 hours ago [-]
Homework is a waste of time for everyone involved, though.
Cthulhu_ 33 minutes ago [-]
Just like computers in schools nowadays, I will concede that part of the education should be about learning how to use AI - but they should also learn how to do research and accomplish things without the help of AI.
rwmj 1 hours ago [-]
We weren't allowed to use calculators until aged 16 and I'm glad because I learned to do mental arithmetic. Many people I know, even in STEM, aren't able to do that, or lack a feel for numbers.
qwedaH 1 hours ago [-]
Because AI is different from calculators. People stop thinking and just excel at verbose and incomplete rationalizations of every evil, just like you do in your reply.
galkk 2 hours ago [-]
It was good for you but you don’t address reality of life.
Here’s one possible scenario: After graduation, you (or somebody else) shares the program with friend, with a promise to not to share further. Soon enough, it’s on everybody’s calculator. What did real educational thing for you, is just cheat where one needs to press the right buttons and get the right answer. This completely destroys the educational purpose, but significant amount of people just don’t care and want to get a pass.
Yes, teachers always have a counter-weapon: for example, pointing to a random line and asking the student to explain it. But that is not (always) scalable.
I’ve seen this in reality in college: there was a CS/database course final project implementation, written in Delphi (very popular at the time in the xUSSR), that was passed down from year to year. The professors and TAs were so fed up that I got an almost automatic pass because I wrote mine in C++…
——
To summarize - the ever-increasing amount of pure slop is visible everywhere. Regular multi-thousand-line PRs where the author didn’t even bother to look at the code, written by AI. Just prompt -> commit, push. Nobody wants to deal with that.
The same is happening here - it’s not to punish people who use the tool in a proper context, it’s to filter out people who just don’t give a fuck.
1 hours ago [-]
sdevonoes 2 hours ago [-]
There’s one significant difference: typewriters and calculators are one-time-payment devices. LLMs are subscription-based, backed by billionaire companies. That alone leaves LLMs in a bad enough place.
fph 2 hours ago [-]
You have a point, but it is very ironic that this sounds a lot like the old argument "you must still learn to compute by hand because you won't have a calculator with you all the time".
latexr 2 hours ago [-]
At this stage, I can no longer take comparing LLMs to calculators as a good faith argument. That’s a talking point, a framing device, one whose flaws have been explained ad nauseam (as exemplified by the sibling replies), and I’m left questioning either the reasoning abilities or the honesty of those still making it.
They. Are not. The same.
Have you ever known people to commit suicide, kill, or give themselves rare diseases because of their calculators? How about people dating their calculator and going batshit for a software update?
Not to mention learning to do on your own is a useful skill to teach you to think, and an essential skill to (as you suggest) verify answers. People not understanding how things work is exactly why they take bullshit output from an LLM as gospel.
I also note that such arguments tend to be profoundly selfish and self-centred. Your anecdote happened to have an outcome you enjoyed and benefitted from, but I bet that wasn’t the reality for all your colleagues. Just like you are glad for the calculators in your class, some other student may be glad for the lack of them in theirs and it may be the reason they got into their field of study.
exceptione 46 minutes ago [-]
Bad idea. This is pretty much endgame territory you are talking about.
You would give the brains of the younger generation to American tech oligarchy, a class of people openly hostile to the principles of the democratic rule of law. If you want to see the damage actors like Fox News et alii alone can do, just take a look around in the US. Now imagine them taking over the parenting and teaching role; you wouldn't need gerrymandering if you can control people's beliefs.
lpcvoid 2 hours ago [-]
>why aren't they creating schoolwork that requires AI?
Because this makes a subscription a requirement for education, and thus advances the grift that is subscriptions, rent-seeking and dependence on a service. This isn't something we should ingrain into our children from an early age.
Calculators were buy once, use forever. Subscriptions to slop generators are a long term dependency and I want my children to not be exposed to that until they can decide for themselves.
ksenzee 2 hours ago [-]
There's a big difference between learning to program a calculator, which is deterministic, and "learning" to prompt an LLM-based AI service that is tuned deliberately to be non-deterministic and that changes every few months. I would prefer my kids learn things like thinking critically and communicating and logic, not spend their time "chatting" with an unpredictable oracle.
2 hours ago [-]
intended 45 minutes ago [-]
The calculator analogy comes up very often, and it’s a good one because it also illustrates where AI diverges.
The other analogy is taking a forklift to the gym. Sure you lift weights, but you don’t really do any exercise to develop your own muscles.
AI automates a significant chunk of the exercises. So you are left with people who didn’t build any mental muscles.
This would be bad enough, but it’s worse because AI disproportionately benefits experts who have built mental reflexes/taste and can judge/verify output with minimal information.
shmichael 2 hours ago [-]
Imagine military instructors saying that it’s important to focus on cavalry and bayonet use just because they can’t figure out how to adapt their curriculum to the new reality.
World war I was very much that.
recursivedoubts 13 hours ago [-]
I used to make my classes 60-80% project work, 40-80% quizzes all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
zamadatix 13 hours ago [-]
I always preferred the "you get some grades along the way to gauge your progress but the lion's share of the weight went to the proctored exams" method unless the lion's share of the normal work was also proctored anyways (at which point it doesn't really matter how it's done).
The reason was less for myself and more because anything group-related suddenly shot up in quality when the individual work that classmates were graded on couldn't be fudged.
bee_rider 12 hours ago [-]
The things I don’t like about putting too much weight in the exams are:
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
acbart 12 hours ago [-]
Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge do. Job interviews, presentations, even mundane work tasks - all of these require you to know the basics quickly. "The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'."
Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds - how fast you need to be able to recall, how deep, which details are fundamental. From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.
kelnos 3 hours ago [-]
I think many of us would agree that job interviews (in tech at least) are horribly broken, because they don't do a good job of testing candidates' ability to do the actual work they'll be doing day-to-day. So saying exams are like job interviews is not a positive for exams. And, for most people, the ideal is to find a job and stick with it for years, so it's not like job interviews are common, everyday occurrences.
For presentations, usually you spend a lot of time preparing for them (similar to exams), building a slide deck or pages of notes that you refer to while giving the talk (not similar to exams). Sure, you do have to be able to think on your feet, but I don't think the comparison to a sit-down exam is all that apt.
For mundane work tasks, you have the internet and whatever reference materials you want (including LLMs, these days); this sort of thing is so different from a sit-down exam that it's almost comical that you'd try to equate the two.
I'm not saying I know of a better way to evaluate learning than proctored, in-person exams, but suggesting that sort of situation is particularly relevant to real life... no, no way.
mettamage 3 hours ago [-]
Having been both a data analyst and a software engineer, I agree. The data analyst interview? Here's 50K rows of Excel with all kinds of weirdness in it - you're a data analyst, right? You have 4 hours to analyze this data. Go!
The software engineer one: here is a takehome assignment. One week later: finished!
To be fair, they both represented pretty well what work I'm going to do. The data analyst didn't show that well how much I'd also be data engineering, but whatever, I was a SWE before having a DA stint. Back to SWE again though.
deepsun 12 hours ago [-]
I think it's all about speed. In "real life" everything can be looked up, but the exam optimizes for not even having to look it up. Then any research becomes much faster.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
eichin 11 hours ago [-]
One of the reasons I've always encouraged software people to learn to touch type has nothing to do with typing speed - it's about reducing/eliminating the cognitive load of typing, you want to be thinking in expressions (sentences) not letters. (The increase in effectiveness comes from not getting distracted by the mechanics of typing...)
lelanthran 42 minutes ago [-]
> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.
I dunno how you work, but I'd be getting raised eyebrows from people watching me hit Google for any question required of my role.
I mean, we're not talking about using calculators here, and we're not talking about vocational training (How do I do $FOO, in docker? In K8s? How do I write a GH runner? Basically any question that involves some million-dollar company's product).
We're talking about college stuff; you absolutely should not be allowed to look up linked lists for the first time during an exam, copy the implementation from wikipedia, port it to your language and move on.
In the real world, we want people who mostly know what to do. The real world is time-constrained (you could spend 2 hours learning to do what they thought you could do based on your diploma, but they'd be pissed to find out that you need to look up everything because that's how you coasted through college).
Exam situations are more like the real world than take-home assignments: High-stakes, high-pressure, timeboxed.
If your real world does not have high-stakes, high-pressure, timeboxed tasks, then you really haven't had much contact outside of your bubble.
II2II 8 hours ago [-]
> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.
Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.
> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only an option if you chose it to be. A student may be forced to face some realities about their dedication/priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.
simpaticoder 12 hours ago [-]
In real life you need to know the options and their trade-offs to solve a given problem. You don't need to know all the techniques perfectly, but you do need to be able to characterize them and compare them, from rote memory.
acbart 12 hours ago [-]
I agree, I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language but you have to look up every single noun. I'm not saying people can't do it, but it's tedious, slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.
kelnos 3 hours ago [-]
It still doesn't feel to me that those things are similar. A sit-down exam is a time-limited, high-pressure situation where you're expected to demonstrate proficiency in the things you've learned over the past several months. Sure, much of that learning builds on stuff you've learned previously, but the focus is on the prior semester (or half-semester, for mid-terms).
When I sit down to debug a complex application, I'm drawing on my prior 25+ years of experience. While I certainly would rather fix the problem faster rather than slower, I don't have a time limit, and usually taking my time (or even leaving the problem alone for hours or days) can be more effective than trying to work quickly and get everything done immediately.
The last time I sat for an exam was in 2003, and I honestly have not experienced anything in life since then that feels like that. Even job interviews have not felt similar enough to me to evoke that same feeling. (Frankly, I've enjoyed most job interviews; I don't think I've ever enjoyed an exam.) That's just my experience, of course, but I don't feel like an outlier.
zamadatix 12 hours ago [-]
This is where the alternative comes in: a course where the other work is still monitored for graded activities. The downside is that it tends to force in-person, synchronous sessions rather than custom scheduling of regular tests.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
dublinstats 11 hours ago [-]
High stakes artificial exams can help prepare you for artificial stakes at job interviews where you need to crank out a working solution in 30 mins with jet lag and someone looking over your shoulder
ssl-3 11 hours ago [-]
That's true. They do better-prepare an applicant for a job that filters on a person's ability to accomplish arbitrary things in a vacuum that is completely disconnected from the real world.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
dublinstats 10 hours ago [-]
Obviously they're both supposed to be proxy measures, not realistic scenarios. I was mostly joking before, but I do think exams provide a pretty good proxy for ability in the subject if the teacher is decent. Interviews not so much, unless the applicant is similarly prepared: foreknowledge of what they will be tested on, time to prepare, and recent practice.
bartvk 3 hours ago [-]
This. All the drama articles in the media are describing lazy institutions.
My tests are almost 100% in person. Project work included: you can hand something in, but I'm going over it line by line and asking what you did there.
I can do this, because while my school hasn't updated the tests yet, my classes are small and I can do all of them in-person.
TychoCelchuuu 3 hours ago [-]
[dead]
acbart 12 hours ago [-]
So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least a 20% on your quizzes, and a C (always passing) if they get at least a 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it was, say, multiple choice questions with four possible answers, then you can expect them to earn at least 25% just by chance.
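The arithmetic behind those thresholds can be sketched directly. This assumes the common US cutoffs (60% = D, 70% = C), which the comment implies but doesn't state:

```python
# Final grade under a 50% homework / 50% exam split, where AI yields
# a perfect homework score. The 60/70 cutoffs for D/C are an assumed
# grading scale, not something stated in the thread.
def final_grade(quiz_pct, hw_pct=100.0, hw_weight=0.5):
    return hw_weight * hw_pct + (1 - hw_weight) * quiz_pct

print(final_grade(20))  # 60.0 -> a D with only 20% on the quizzes
print(final_grade(40))  # 70.0 -> a C with 40% on the quizzes
print(final_grade(25))  # 62.5 -> 4-option multiple choice by pure chance
```

So under this (hypothetical) scale, random guessing on multiple-choice quizzes plus AI homework already clears the D threshold.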
recursivedoubts 11 hours ago [-]
My quizzes are written responses, pseudocode, and annotating code.
blharr 10 hours ago [-]
While that answers their direct question, they do bring up a good point - how often are you handing out scores below 25% on exams? I'd imagine any professor doing that would get some severe criticism that would make even a cheater pretty livid.
api 12 hours ago [-]
The last point is very interesting and might keep universities relevant.
ninjahawk1 9 hours ago [-]
In one of my classes the approach was the opposite, I’m expected to do Ph.D level work as an undergrad and am expected to use AI.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D-level work is the best method as of now; I’ve learned more by using AI to do things above my head than from hardcore studying for a semester.
tkgally 7 hours ago [-]
If it’s any consolation, this problem of discrepancies in rules is very common at universities now.
I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.
Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.
AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.
And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.
terrabitz 8 hours ago [-]
It's not inherently contradictory, just like using a calculator could be considered cheating depending on the context. If you're just learning basic arithmetic, a calculator is cheating since it shortcuts the path to learning. OTOH in calculus, a calculator is necessary. You still have to have a deep understanding of the concepts and functions to succeed.
It's still a new tech so I'm not surprised a lot of teachers have different takes on it. But when it comes to education, I feel like different policies are reasonable. In some cases it's more likely to shortcut learning, and in other cases it's more likely to encourage learning. It's not entirely one or the other.
Izkata 8 hours ago [-]
A better example might be physics and math classes. I learned derivatives and integrals at the same time in those two classes, but the math class required that we learn how it all works (using limits to understand why the derivative rules hold, without calculators, for example), while in physics we just memorized the rules and were expected to use the calculator.
osigurdson 8 hours ago [-]
I always thought they should teach calculus first.
cyberax 7 hours ago [-]
Why do you need a calculator for calculus?!?
ninjahawk1 6 hours ago [-]
Gotta add up all the curves
ninjahawk1 7 hours ago [-]
Exactly, AI is the next calculator. Right now the consensus is that it just does the work for you; in my opinion that says more about us not having the right questions than about actual laziness. In a world where the only questions are basic arithmetic, calculators do all the work for you. My opinion is that in the future, what used to be done by academics will be done by high schoolers, and new academics will be producing work at a rate no one could’ve ever predicted.
For example, the professor who’s leading me on this project had a fellowship at a certain university in England and said he coded exclusively with Claude Code for a month straight. Their goal was to develop a vaccine for a specific disease, and by using AI tools such as Claude Code they’re several months ahead of schedule.
4 hours ago [-]
Moonye666 8 hours ago [-]
[flagged]
pesus 7 hours ago [-]
I'm really not seeing how you can do PhD level work as an undergrad. You wouldn't have the foundational knowledge necessary to do PhD level work, and you have no idea how much of what you're learning is accurate.
ninjahawk1 6 hours ago [-]
Without going into too much detail, when I said “Ph.D level” I’m meaning active research that adds a meaningful contribution to a field. I’ll probably be posting on here in a couple months about it but I’ve been doing thousands of tests with beefy GPUs on a certain theory we have about small 9b LLMs under certain external constraints.
Am I saying I’m as knowledgeable or capable as a Ph.D right now? Absolutely not. There’s just not really terminology that correctly describes accelerated learning and iteration through AI, since the technology is so new. I can’t speak for others, but as someone who’s a senior in my physics degree, I’ve actually been learning faster by using AI. It’s either a mental crutch or a mental accelerator; the difference is whether you want it to completely do the work for you or whether you try to learn and follow along.
It’s a very underexplored and new area right now - how higher learning is affected by using AI as a tool instead of as a cheating device - but historically, new tools like the calculator or the computer have done a lot to accelerate learning once new rules are in place.
osamagirl69 6 hours ago [-]
For what it’s worth, no graduate student would say they are doing 'Ph.D-level' research. It’s called 'graduate-level research' or just, you know, 'research'.
Sounds like a fun project, I wish you the best. I ran a similar program (independent study that encouraged freshman/sophomore undergraduates to explore using microprocessors, at the time the EE curriculum was completely focused on analog circuit theory and ended at boolean logic) and it went well enough that it eventually became part of the official undergraduate curriculum.
margalabargala 4 hours ago [-]
It's not terribly uncommon for an undergrad to claim they're doing "PhD level work".
Undergrad research is pretty common and it's not all that hard to get your name on a paper as an undergrad. A lot of undergrads think that doing work that gets your name on a paper, equates to PhD level work.
raincole 9 hours ago [-]
Now at least you're an adult already. Imagine what mixed messages schoolchildren are receiving from their teachers...
an0malous 8 hours ago [-]
> I’m expected to do Ph.D level work as an undergrad and am expected to use AI.
Nice idea. What class and what work are you doing then?
ninjahawk1 8 hours ago [-]
For that specific one, it’s more of an independent project analyzing complex systems for 6 credits, I’m gonna be expected to submit a paper to arXiv on the subject with the professor as a co-author (fingers crossed). He said I can use claude code or any AI. I’m required to do X amount of hours per week and then submit a thorough report after about 2 months.
leptons 9 hours ago [-]
>I’ve learned more by using AI to do things about my head than hard core studying for a semester.
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, then I doubt you'll really know if the AI is lying to you or not. I have to wonder if your teacher will too, sounds like they have kind of checked-out from actually teaching.
whartung 13 hours ago [-]
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in a IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Aurornis 7 hours ago [-]
This would take about 1 day for some student to realize you can instruct one of the LLMs to operate the computer screen for you and have it type and fake edit a document for you. The tip would spread among the cheaters and the metric would become harder to judge by itself.
nlawalker 13 hours ago [-]
Typing as a service is a whole cottage industry on Etsy.
ssl-3 11 hours ago [-]
That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)
Another way to automate this particular task: some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student who is skilled in the art of using the bot to have one of these typewriters be the output target.
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
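The metadata angle is easy to check yourself: a .docx file is just a zip archive, and `docProps/core.xml` carries author and last-modified-by fields. A minimal sketch with only the standard library (this reads the standard OPC core properties, not tracked changes):

```python
# Pull creator / lastModifiedBy out of a .docx's core properties.
# A mismatch between the two is the classic tell for a recycled paper.
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def docx_authors(path):
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    creator = root.findtext("dc:creator", default="", namespaces=NS)
    modifier = root.findtext("cp:lastModifiedBy", default="", namespaces=NS)
    return creator, modifier
```

Tracked changes themselves live in `word/document.xml` as `w:ins`/`w:del` elements, so a submitted file with Track Changes left on carries the full edit trail in plain XML.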
kelnos 3 hours ago [-]
> pasted it into Word
I'd be surprised if copy/paste carries the revision history, though. Wouldn't they have had to start with the original document (from the other student) and make their edits directly, and then submit that file?
Dylan16807 7 hours ago [-]
Are you sure about that? I could easily see this happen with a web document link, but for a docx file the change tracking is off by default and pretty obtrusive. Basic metadata would be fine, formatting might be quirky but that's not exactly a smoking gun...
vunderba 7 hours ago [-]
It’s been a while since I heard about it, but IIRC the professor was a stickler for a very specific paper format, so they would distribute a .docx template file with Track Changes already enabled and require students to write their papers using that template.
I also think that when Track Changes was first introduced in earlier versions of MS Word, there wasn’t as much concern about privacy/telemetry as there is now, so it wasn’t made as obvious.
eichin 12 hours ago [-]
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
djmips 11 hours ago [-]
In general I love the idea of turning printers into typewriters. I've been thinking about how to do it with an inkjet printer.
tejtm 12 hours ago [-]
arms race....
oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
vunderba 12 hours ago [-]
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
Wow it feels like the Loebner prize went away right at the dawn of the LLM. Is it correlated?
vunderba 11 hours ago [-]
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Participants spent more time polishing up the natural language parsing aspects in conjunction with pre-programming elaborate backstories for their chatbot's bios, among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But reading through some of the previous competition chatbot transcripts still makes for fascinating reading.
artikae 10 hours ago [-]
Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!
djmips 10 hours ago [-]
I think it would be great to be revived with a different premise.
leptons 8 hours ago [-]
>because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Isn't that really what all these AI companies are doing too? It sure seems like it is.
Moonye666 8 hours ago [-]
[flagged]
RhysabOweyn 12 hours ago [-]
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
dublinstats 11 hours ago [-]
Take-home exams were very common when I was in school, which was before you could get answers on the internet. After internet answer sites and cheating services came along, a professor had to either not care and let cheating run rampant, or struggle to constantly invent unique new kinds of take-home questions. AI has basically killed that option too.
ryukoposting 8 hours ago [-]
Things have changed drastically since COVID-19, at least in the US. Tons of schools and universities shifted to online systems, and never abandoned the systems they built up when it was time to go back to school.
I graduated in 2020, so I've only gotten to see the changes secondhand through friends and family who are teachers, and through my sibling who graduated a few years after me. But the difference is staggering.
bmitc 11 hours ago [-]
I loved take home exams because they allowed me to study before hand but not have the insane pressure and condensed studying required for exams in the classroom. Even though they were normally much harder and longer, I liked them. I felt I learned much more through them because I could take the time to understand concepts I had missed without feeling the time pressure of in-person exams.
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
ghighi7878 11 hours ago [-]
Exams in the classroom, with all the time pressure, are also an important part of education. Maybe they should be a low percentage of the grade to prevent too much stress, but it's an important learning experience.
beej71 11 hours ago [-]
I'd like to see some data on this. My general-ed recall is minimal, and in programming before school, I certainly learned a ton more by coding than by testing. That's my perception of my time in school, as well.
bmitc 11 hours ago [-]
I disagree. Take home exams represent how work and progress occurs in the "real" world. There's nothing in the post education world that resembles in-person exams.
Maybe the medical profession is a counter example.
close04 9 hours ago [-]
> There's nothing in the post education world that resembles in-person exams.
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
phoronixrly 11 hours ago [-]
Did you by any chance graduate before the COVID-19 pandemic?
pftburger 2 hours ago [-]
Would this be a good time to post about my AI controlled typewriter project?
zozbot234 2 hours ago [-]
A computer controlled typewriter is a thing already. It's called a teletype (tty for short).
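A minimal sketch of the idea, assuming a serial teletype on something like /dev/ttyUSB0 (the device path is an assumption; any file-like stream works for testing):

```python
import time

def type_out(stream, text, cps=10):
    """Send text one character at a time, paced like a ~10
    character-per-second teletype. In real use `stream` might be
    open('/dev/ttyUSB0', 'w'); here it can be any file-like object."""
    delay = 1.0 / cps
    for ch in text:
        stream.write(ch)
        stream.flush()      # push each character out immediately
        time.sleep(delay)
```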
randoments 9 hours ago [-]
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
lionkor 1 hours ago [-]
I agree fully. Not sure what they are on about, with "no labs??" in the replies.
You still do all the same things, and they are graded, but this doesn't affect your final grade. Instead, you need to pass a threshold to enter the exam, which is then graded.
The US isn't so amazing at this, it simply can be done better. Recognizing where you can improve and from whom you can learn is a great first step to ACTUAL improvement.
meroes 9 hours ago [-]
No projects, no labs, no teamwork, no papers?
What a narrow set of skills to send into your economy.
bugufu8f83 4 hours ago [-]
Just because those things don't contribute to your final grade doesn't mean you don't do them.
At Oxbridge, for CS we still had lab work. We still had problem sets assigned for CS and for math which were graded. We had one large CS group project in, I want to say, our second year. Humanities students were still assigned essays. It's just that none of this stuff contributed to your final degree classification which was based entirely on your exams (although if you didn't do your CS practicals you wouldn't be allowed to pass).
Obviously Oxbridge isn't exactly representative but certainly my experience showed me that the American style is not the only way of making education work.
cafebabbe 25 minutes ago [-]
Oh, that's why the US has the only working economy in the world! :)
Joking, of course everyone does 'projects, labs, teamwork and papers'. It's just not the main focus of the grading process.
ivankelly 9 hours ago [-]
Given the way things are going, not knowing how to use AI will be like graduating without knowing about revision control.
lionkor 1 hours ago [-]
Weaponized FOMO to drive everyone to use AI isn't really a great idea
maplethorpe 9 hours ago [-]
Isn't the selling point of AI that it does it for you? What's to learn?
lacy_tinpot 9 hours ago [-]
If the AI does it for you, you need to still learn what to do.
What is the "it" that AI does for you?
This is assuming you know how to get good work out of AI in the first place. But even that is turning out to be a skill in and of itself.
Levitz 9 hours ago [-]
"It does X for you" is the point of many technologies. You still require knowledge to work around it.
Context helps immensely, for example. Think of what you can do that someone outside tech can't.
maplethorpe 1 hours ago [-]
> "It does X for you" is the point of many technologies. You still require knowledge to work around it.
When running water replaced the need to pump water out of the ground yourself, were people urged to "learn faucets"? You kind of just need to twist a knob and water comes out, right?
Maybe there was an intermediary stage where running water was slightly more complicated and there were more steps to learn, but devoting time to learning those steps would have been a waste of time, since the end goal of the system was for it to function without much input.
strogonoff 8 hours ago [-]
The “it does X for you” aspect of technology is not completely without its downsides, for various values of X.
For example, take “X” to be “walking”. Do we have the technology that allows us to pretty much never have to walk? Sure. As far as I am aware, though, we do not generally favour a lifestyle of being bound to a mobility aid by choice, and in fact we have found that not walking when able in the long run creates substantial well-being issues for a human. (Now, we have found ways to alleviate some of those issues for those who aren’t able, but clearly it is not sufficient because we still walk.)
The problem is exacerbated immensely as the value of X approaches something as fundamental to one’s humanity as “thinking”.
Moonye666 8 hours ago [-]
[flagged]
mekael 9 hours ago [-]
I think you’re missing the /s.
doug_durham 9 hours ago [-]
So you didn't have to do any course work? No collaboration? No labs? I'm not aware of any University that doesn't have coursework outside of online diploma mills.
theFco 9 hours ago [-]
In my undergrad, coursework did not count towards the grade for the module. But you earned the right to sit for the final exam by passing the coursework.
raincole 5 hours ago [-]
A million foreign students are studying in US universities. Millions applied. What a joke education system indeed!
tyrust 9 hours ago [-]
Did you never have to write a research paper?
tim-projects 2 hours ago [-]
To me this nostalgia is pointless. AI is here, it's good enough, and it's only going to get better. The classroom should be about using AI better, not ignoring it.
But that would require the teacher to be good at AI too. I think that's the problem here.
medbar 12 minutes ago [-]
> The classroom should be about using AI better not ignoring it.
No, it shouldn’t. I’m not bearish on AI but it shouldn’t replace any part of a classroom where the objective is to learn and communicate in a new language (German). The typewriter argument is memorable and interesting - the article points out the lack of editing forces kids to slow down and think about their writing, as well as iterate through multiple drafts. It’s not a nostalgia thing, they’re not old enough to have ever used one before.
I could see an argument for adding on a new class for GenAI, agents, context engineering or what have you, but considering how behind current US curriculums already are and how quickly the AI field moves, I can only see this ending in wasted time and money: even an up to date class will be stale by the time it’s over. Kids will end up learning this anyway outside of the classroom, no use lecturing them on something they’ll already know.
cafebabbe 30 minutes ago [-]
Seems like you're equating 'education' with 'employability training'.
SirHumphrey 2 hours ago [-]
Just because an AI can craft a decent Japanese text doesn’t mean I can. Just because AI can write x86 assembly also doesn’t mean I can.
You don't give first graders a calculator just because they will always have one in their pocket; they'd end up punching numbers into a magic box without learning to do the math themselves, which would destroy their future mathematical education. It's about the same with AI.
tim-projects 2 hours ago [-]
Here is a good video I watched just yesterday on how to integrate AI into a writing and thinking process.
AI is not a gun that you can't put into the hands of a child. It's a paint brush.
When I was in college, your grade fully depended on the oral exam/debate with the professor. Everything else was but the entry ticket.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
mjlee 13 hours ago [-]
This sounds extremely susceptible to unconscious bias, or even just straightforward discrimination.
Swizec 11 hours ago [-]
It does! That’s why you can ask to be evaluated by a commission of professors.
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
Terr_ 9 hours ago [-]
> It does! That’s why you can ask to be evaluated by a commission of professors.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
ryukoposting 8 hours ago [-]
> That’s why you can ask to be evaluated by a commission of professors.
Ah yes, the classic "if you think the system is abusing you, you shall out yourself to the system that's abusing you if you want any chance of recourse." Because a tribunal run by the people you're lodging a complaint against can't possibly be biased.
jubilanti 11 hours ago [-]
Moreso than a job interview?
gpm 10 hours ago [-]
More systematic than a job interview.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
fl4regun 5 hours ago [-]
I think the oral exam is probably a great way to ascertain a student's ability, but let's be real, undergraduate class sizes number in the hundreds for almost every first-year class. I don't think it's possible to administer that. I think I would have loved to see it in my later years at university, though; we still just did things by written exam plus coursework.
Swizec 5 hours ago [-]
> let's be real, undergraduate class sizes numbers in the hundreds, for almost every first year class. I don't think it's possible to administer that
Our first year class was about 250 people. It was fine.
By the 4th year, class sizes were a much more manageable 30 to 50.
You get maybe 10 to 15 minutes with the professor (usually more in later years), they ask 3 questions with some followup. That’s 1 work week for the professor. And less than half the students even make it that far for every exam season (3 per school year) so you’re looking at something like 3 days of work. It’s fine.
resident423 8 hours ago [-]
Is there really much point though? I think AI will keep improving, and there will be more and more incentive to use an AI that costs $20/month instead of a human writer who costs $30/hour. If someone wants an article written, and if people like the AI article as much as the human one, what stops everyone from using AI?
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
lombasihir 8 hours ago [-]
I don't think so. AI will become better, but human writing just feels different, like hand-made furniture vs factory-made furniture. They're in a different class.
resident423 7 hours ago [-]
I think AI writing becoming better means it will appear more human, rather than like better AI writing. The difference in feeling is similar to early attempts to generate faces with AI, which also seemed weirdly wrong in ways that were hard to describe, but now it's very hard to tell them apart.
emptybits 3 hours ago [-]
Aside from AI-proofing, IMO there is value in slow typing or writing. We have to think just a little bit more before putting ink to paper. There is also a higher cost to making mistakes.
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
For programming, the way I would build a curriculum is to force students to actually learn how to program first. This is simple: require them to write code by hand in the classroom for all exams.
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
paulorlando 9 hours ago [-]
I like this. Related: this semester I've been using handwritten quizzes in class. A simple change that's been one of the best things I've done, because it changed students' expectations of class prep. Before, you could kind of do the readings, sort of prep, and coast in class. But if you need to write out quiz answers, you're forced to know the material better, as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
binarycrusader 9 hours ago [-]
I’ve been typing since the 80s. However, even in the 90s I found any extended period of handwriting painful and laborious. I don’t think I could handle an instructor who insisted on handwritten long form, but I’d happily accept a compromise in the form of a typewriter.
paulorlando 9 hours ago [-]
Sorry to say you can't take my class.
lizknope 9 hours ago [-]
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
fizlebit 11 hours ago [-]
I think if your university doesn't do in person exams with pen and paper then the degrees it hands out are not much evidence of anything.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor even for the interesting ones.
zx8080 5 hours ago [-]
After 30 seconds the site shows a fullscreen popup:
> The Sentinel not only cares deeply about bringing our readers accurate and critical news, we insist all of the crucial stories we provide are available for everyone — for free.
Thank you very much for interrupting and ruining my reading experience of your article.
breve 5 hours ago [-]
What? And after all that money you paid them? The nerve!
Our_Benefactors 5 hours ago [-]
Your worldview sucks. Essentially claiming that in order to receive information, you must also receive garbage useless information to brainwash you.
breve 5 hours ago [-]
No, complaining about something offered for free because of a minor pop up is what sucks. It's a bizarre sense of entitlement.
If you don't like the website, simply don't use it. Especially when you're making no contribution to it.
margalabargala 4 hours ago [-]
Just because you expect someone to owe something to you, doesn't mean they think they owe you that thing.
If someone gives away something free, they can and sometimes do wash their hands of it. That doesn't prevent you from expressing your opinion on what you think they should change about the work, but they're not under any obligation to do anything about it.
Someone made a thing available. You can take it as it is, you can make noise about what you don't like, you can make it better, or you can ignore it and move on.
If someone is providing a mix of useful and garbage information, well, take your pick from the above.
isolli 3 hours ago [-]
Let me be blunt: the only reason I see not to implement solutions like this appears to be laziness from instructors.
erelong 8 hours ago [-]
Things like this are well-intentioned, but I don't know why more teachers aren't creating optional "side quests" like these for the students who want them, instead of forcing them on everyone.
Optional "side quests" would let teachers keep a standard, accepted "main quest" curriculum and then create a bunch of (possibly even "fun") "side quests" students can work on in their spare time for extra skill development.
cultofmetatron 2 hours ago [-]
this just reinforced my recent belief that the best way to handle the AI boom is to pivot into robotics
zoom6628 7 hours ago [-]
FWIW my Dad taught me how to type at 4yo on a huge Imperial typewriter. My spelling took an enormous leap in capability in a few weeks. Primary school teachers were amazed at the words I could spell correctly. (Didn't help my handwriting though which was still like intoxicated chicken scratch on a good day).
eranation 8 hours ago [-]
Soon on Show HN: I built an open source tool that controls your electric typewriter.
azhenley 8 hours ago [-]
I spoke with a bunch of profs about how they were assessing students in the age of AI:
A hand-written essay in class would seem to be a workable mechanism for a student to demonstrate an ability to reason on their own about a subject.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
opengrass 12 hours ago [-]
Better dust off that old AlphaSmart!
erickhill 7 hours ago [-]
Remembering my college typewriter-use-by-quarters (coins) on a timer like being at the laundromat, I kind of love this.
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I literally wrote it by hand before trying to transcribe it.
Good luck, we’re all counting on you.
singpolyma3 12 hours ago [-]
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
eszed 12 hours ago [-]
Depends on your measuring stick. Cheating themselves out of an education? Yep. Cheating themselves into a credential -> job - the status / remuneration of which is almost entirely divorced from the quality of the education, being aligned rather with the name of the organization on the diploma.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
bmitc 11 hours ago [-]
The fact that it's an industry is alone enough to cry.
paleotrope 12 hours ago [-]
Well from a certain perspective they are also hurting the schools reputation, the programs reputation, and ultimately their fellow students.
janalsncm 11 hours ago [-]
> If students cheat they hurt only themselves
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
michaelt 11 hours ago [-]
The thing is, when colleges don't test students' ability properly before issuing a credential, employers start testing job applicants' ability after they've received it.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
jubilanti 11 hours ago [-]
They hurt other students who worked hard for the degree. They hurt the reputation of the school and the utility of the degree as a credential.
mcmcmc 12 hours ago [-]
This is untrue. Students who graduate without actually absorbing knowledge as laid out in the curriculum devalue the degree when they show up in the workforce lacking that knowledge. This is part of why new grads are undesirable job candidates, there’s a chance you are paying a higher wage for someone who may not have learned anything.
delusional 12 hours ago [-]
When I attended university (almost a decade ago I guess, time flies) we didn't have a single exam on the computer. All exams were on paper or oral, and most were without notes too. Computer science does not require computers.
ButlerianJihad 12 hours ago [-]
This is usually true, but it is also true that some classes are graded "on a curve" and so grade inflation could hurt people who are honestly doing work. Also, cheaters tend to suck all the air out of a room. For example, my I.T. instructor designed a really nice oral quiz slide-show for the entire classroom. I found it a few hours before the class, I watched it in its entirety, and then when he tried to run it live, I spoilered all the answers before any other student could answer. I wasn't strictly cheating, but I wasn't being fair to my classmates' learning process, either.
gentleman11 12 hours ago [-]
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small too, with an integrated monitor and keyboard, so it didn't take over the whole desk where I still used pencil and paper often
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
eichin 11 hours ago [-]
There's an entire industry of "distraction free writing devices" based mostly on that nostalgia/yearning (not to say that it isn't effective, but the effectiveness is not actually being measured :-)
dlivingston 9 hours ago [-]
I have an old MacBook Air I flashed with writerdeckOS [0]. Feels like a digital typewriter.
Next on Show HN: I built a voice activated typewriter.
DeathArrow 3 hours ago [-]
Why didn't she ask to write by hand?
amingilani 9 hours ago [-]
A typewriter tty would be a fun weekend project.
onesociety2022 13 hours ago [-]
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
Peritract 12 hours ago [-]
The goal of the educational process isn't the test paper, it's the learning.
Gyms aren't redundant because tractors exist.
llbbdd 12 hours ago [-]
Gyms are a great example actually because tractors exist to do the economically useful work. You now optionally go to the gym to benefit from fake labor that used to be the side effect of useful work. The fake labor is now what colleges are trying to sell, and it's going to kill them.
Peritract 10 hours ago [-]
Gyms predate tractors.
llbbdd 9 hours ago [-]
3,000 years ago, physical labor was a component of most jobs. Today gyms are for people who can afford to attend them and don't have a day job that naturally exercises them through labor. People exercising purely for health benefits, and not because the strength benefits them in their job and in other facets of their life, is new.
cumshitpiss 11 hours ago [-]
[dead]
onesociety2022 10 hours ago [-]
Huh? The gym analogy doesn’t even make sense. People didn’t go to gyms when they were farming with oxen. Gyms are popular now precisely because tractors exist and you don’t need manual labor to farm anymore but people still need the physical exercise for their health. Society has adapted to the arrival of new life-changing technology. Our education system needs to adapt to new technology like AI too. You can probably uplevel a lot of courses and cover a lot more interesting topics than before and teach real application of things you learned aided by AI. Just like when I was doing a CS major 20 years ago, they didn’t spend too much time teaching me assembly programming beyond 1 or 2 lectures (they let me use a compiler for programming assignments!).
Peritract 9 hours ago [-]
Gyms predate tractors by a couple of thousand years. You should think harder about the analogy.
ceejayoz 13 hours ago [-]
There are plenty of things AI can do that students still benefit from learning.
echelon 12 hours ago [-]
Maybe instead of trying to teach around the abacus, we need to teach the higher level things you can reach with MATLAB.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
IshKebab 12 hours ago [-]
This is like saying you shouldn't learn to add because we have calculators.
syngrog66 13 hours ago [-]
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public code portfolio much less meaningful as a sign of legitimacy.
sonzohan 8 hours ago [-]
Uni professor here.
My colleagues that teach hard skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every moment possible, or despise it in the same way graphing calculators were by high school math teachers when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it: plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on how to perform the action in an effective way. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't actually actionable enough for them to take action.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back'; a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills, and believe that tech is truly a meritocracy where the only thing that matters is 'lines of code', there are no politics, and they won't have to work on call, crunch, or give code reviews, is also astounding.
gorgoiler 12 hours ago [-]
I’m confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort are particularly dumb? Is she incentivized to make it easy to pass her classes to get that A you paid so much for? Or hard or make that A worth something?
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
PebblesRox 11 hours ago [-]
My impression was she just brings the typewriters into class as a one-day novelty thing per course, not that it becomes the norm for the whole semester. The goal is to give the students a taste of what the old-fashioned way is like, to get them thinking about it.
somewhereoutth 10 hours ago [-]
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
14 6 hours ago [-]
When I was a kid we were told no calculators because when you grow up and are in the real world it's not like you will have a calculator with you at all times. Fast forward to today and we all have a calculator on our phones.
I think AI should be treated the same. Who cares if it assists in a lot of the work; that is a good thing. BUT, as we all know, AI has been incorrect about many things, so a much better learning practice would be to forget whether AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true and AI writes it up in a perfect paragraph, you would still need to confirm the facts and find a reputable source that shows them to be true.
Probably around the time they were invented. They were mandatory on my ground exam (private pilot).
vunderba 12 hours ago [-]
OOC was this a while ago? Even when I took the ground exam around 10 years ago, everyone had electronic flight computer calculators (CX-2s).
bombcar 12 hours ago [-]
It was a while ago (init var me == old;) - back in the era of "iPads can't be used for critical flight information, they're too unreliable".
vunderba 12 hours ago [-]
That makes sense. The CX-2 calculators are a bit less like the iPad era and more like the equivalent of calc I/II classes which only let you use specific TI models versus an app on your smartphone.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
arcfour 12 hours ago [-]
Pfft, just grab a teletype and run lpr -P ttyUSB0 ai_generated_report.txt ;-)
SilentM68 10 hours ago [-]
This will only work until somebody figures out how to connect an AI to the typewriter. It will have some sort of mic, and the person will start dictating into it with AI-assisted revisions. Once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an Agentic AI for work, learning, research, then when test time comes, the student should be required to stand in the front of the class and teach the class what they have learned, i.e. "Teach Back" all they learned to the entire class student body and teacher. The entire class, instructor included, will also be required to participate in a Q&A session to make sure that student's learning is not just made up of memorization, e.g. restate the information learned but using different words, different scenarios, etc.
dyauspitr 11 hours ago [-]
Just have them write it out. “Ain’t nobody got a goddamn typewriter”.
pbgcp2026 11 hours ago [-]
... meanwhile, all these students graduate, can't find jobs and become plumbers or bricklayers.
banana_sandwich 8 hours ago [-]
i mean, you can just have AI still do the work; you're just doing data entry with a typewriter.
rvz 12 hours ago [-]
The college instructor might as well ban calculators and use abacuses then.
sarchertech 10 hours ago [-]
We couldn’t use graphing calculators on calculus exams. There were professors who banned calculators entirely.
fl4regun 5 hours ago [-]
At my university, in math exams we were only allowed to use one specific model of calculator, and most of the exams were answered symbolically, so the calculator usually was not helpful anyway.
llbbdd 12 hours ago [-]
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.
hackable_sand 11 hours ago [-]
> College is about learning how to work in an industry.
Oh
llbbdd 11 hours ago [-]
Education is a nice side effect sometimes, but yeah, I don't know how you could reach any other conclusion. If you're motivated to learn for learning's sake, college is an annoying slog that you know you don't need post-millennium. I literally left college early and started making money instead of spending it, because I got tired of demonstrating to my professors that I already knew everything they were teaching and that it'd be a waste of time for me to come to class.
sarchertech 10 hours ago [-]
Or maybe you chose to waste your time because you treated college as a way to get a piece of paper instead of as the only time in your life when you are surrounded by experts who will spend an hour a week answering any questions you can think of.
tim-projects 2 hours ago [-]
That doesn't work in tech based professions. In college I took music technology. It was 2 years of my own learning and explaining how everything worked to my tutor.
llbbdd 9 hours ago [-]
No time wasted at all; that option is also trivially available outside of college: it's called "email". There's a whole industry in tricking new adults into believing that college is not about getting a piece of paper. It's gross, and it's avoidable. I paid off a year of unnecessary college debt in a quarter of a year of doing real work I learned how to do in my free time. It's a trap, and articles like this, where colleges are working as hard as they can to make education less useful, prove it.
sarchertech 9 hours ago [-]
> No time wasted at all
You just said that it was a waste of time. So was it or not?
> that option is also trivially available outside of college, it's called “email”.
How many experts have you cold emailed over the years and how much of their time have you taken?
llbbdd 8 hours ago [-]
It would have been more wasted time had I continued after a single year. I went to my first year of college on the advice of my well-meaning parents who are old and like most old people thought it was still important, and yet they agreed with my decision to leave after the first year on an offer for a real six-figure job because there was nothing to learn that I hadn't or couldn't have learned on my own. At least one of my own professors also openly wondered why I was there at all.
To your second question - less than a hundred, but tens. Most people who are worth listening to publish their work and their thoughts. Email is free. Experts love to answer questions about their work, professors hate doing extra work for no extra pay. The incentives here are not confusing. How much time have I taken? Confusing question. These are real people with real passion, and they answer questions with that in mind. Professors are obligated to puke up an answer. I've gotten responses in most cases, in some I haven't. When I don't get answers it's because the targets are smart and busy. If I wanted more engagement with my random questions I'd offer money, and if I had offered money every time I'd still be below par on the money I wasted on college. If I wanted to justify it - I'd say I learned enough to validate that paying real money for another 3-6 years would have been less valuable than burning it for heat.
sarchertech 7 hours ago [-]
> At least one of my own professors also openly wondered why I was there at all.
I think you completely misunderstood this interaction.
There are 2 possible explanations.
1. You are so smart/knowledgeable that the professor thinks you are beyond college.
2. You were acting like such an arrogant know-it-all that the professor was being sarcastic.
I’ve seen #1, but I’ve seen #2 many times.
You sound like you have a huge chip on your shoulder about not having a degree. I had the same issue at one point before I went back and finished (after working as a professional developer for a while), so I recognize it.
When I did go back, I asked questions in class, I went to office hours to ask questions, and I did research projects with professors. Some back-of-the-envelope math says it would have cost me about twice what I ended up owing if I'd paid for an equal amount of time with whatever experts I could find.
My strong suspicion based on the few posts I’ve read is that your attitude is the reason you had such poor interactions with instructors.
llbbdd 7 hours ago [-]
I had excellent interactions with my instructors. I interacted with them like human beings and they understood that their limited time would be better spent with students who didn't have the same energy I did. Several professors, when asked, put me through an impromptu whiteboard quiz and said yeah, do your own thing. It's great that you participated in the process in your own way. In my case I asked if I could show up for the final tests and nothing else, because the intermediate work would have been useless, received permission, and passed.
Chip on my shoulder - no, and it's a silly label to begin with. Understanding that it's for other people who value the paper more than intrinsic understanding, yeah.
EDIT: I will concede in some way that I'm proud of not having a degree, and it does influence my thoughts on this topic. I've met some real idiots that do, and I don't consider it a serious differentiator.
Also looking up the thread - at my early jobs, I was surrounded by many people who were interested in educating me on any topic I could think of, because similarly we were all being paid for our time. The difference between that and school was the assumption that we were both motivated and capable.
Lots of skills from those old days have been lost or ignored in the name of productivity.
The internet enabled all the complexity we have today. LLMs will have a similar effect, but instead of engineers actually having to understand the system (even in its complexity) they will just be querying the oracle to build things or solve problems.
When the oracle can't help (or maybe refuses to) is when it gets interesting.
It's a shame that they are also way more susceptible to cheating with AI.
So a student who only understands the basics should be able to answer most of the easy questions and students who have a deeper understanding can answer the harder ones.
Well-written exams should feel pretty fair and leave students feeling like the result they got is proportional to the effort they put into studying the material (or at least how well they personally felt they understood the material).
Is this kind of test - many short questions - a standard thing for math in your country?
My university exams were pretty much all "2-question", in 90 minutes.
The first half was an essay where you have to reproduce a lesson from the curriculum, in your own words.
The second half was "the formulas" - you have to develop one or two formulas from first principles.
I once got an A- even though I got "the formulas" half very wrong. As the teacher explained later, I simply chose the coordinate system beginning at not the same place the textbook did. And this was supposed to be a bad teacher - he actually gave Ds to almost all of us (180 people). This was a makeup exam.
You've never been a teacher.
They were more prone to cheating before AI, too.
Cheating has always existed at some level, but from talking to my couple of friends who teach undergrad level courses the attitudes of students toward cheating have been changing even before AI was everywhere. They would complain about cohorts coming through where cheating was obvious and rampant, combined with administrations who started going soft on cheating because they didn’t want to lose (paying) students.
AI has taken it further, with students justifying it not as cheating but as using tools at their disposal.
I was talking to my friend about this last week and he was frustrated that several of his students had submitted papers that had all the signs of ChatGPT output, so he asked them simple questions about their papers. Most of them “couldn’t remember” what they wrote about.
It’s strange to me because when I went to college getting caught cheating was a big problem that resulted in students getting put on probationary watch and being legitimately scared of the consequences. Now at many schools cheating is routine and students push the boundaries of what they can get their classes to accept because they have no fear of any punishment. YMMV depending on the institution
IBM used to hire software developers based on aptitude test scores regardless of formal education, then put them through an extensive internal training program. It worked fine.
Unfortunately a lot aren't, they feel like they have to be there or these courses are the only path for them to get a good job. And unfortunately they end up in the workforce, too. You'll often see teams with one good developer and a lot of hangers-on.
Assignments and projects are great for learning, but suck for evaluation.
Another example, lit classes where the grade is based on time limited, open book exams, hand written in "blue books"
Read the book, pay attention in class, spend 90 min writing an essay, and you are done.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
Personally, I dropped out despite a full ride+ because why would I put in work for a no-name state school when I already had an FTE job as a developer out of high school anyway.
Turns out fraudulent action can still get the bag.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
And for most students that’s all they really care about.
If the companies stop valuing the diplomas, students will stop paying tuition to attend, and the universities eventually collapse.
But there were already heaps of problems with tech in education before AI.
My CS projects were often pretty free-form so in theory I could've just used AI - today, anyway. But a big part of the grade was a face to face interview where you actually had to talk about the code you wrote. Anyone lifting along with other people who didn't actually do any work would fall through then.
I could easily imagine a CS theory course that doesn't involve any programming language at all.
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools. Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what us older folks had to do in school. We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations. The world they will inhabit has changed radically from the one we entered into following school.
That is way too high a recurring cost, which many won't be able to afford. One could get a second-hand calculator or even computer, and then the only additional resource needed was one's willingness. With mandated AI usage, we'd only widen the gap between the haves and have-nots. I personally do not like the idea.
I feel like at this point it’s an inevitability that given enough time, capable models will be cheap enough for everyone.
Because using AI is the complete opposite of "I learned programming just to make tests easier".
By learning how to program solver, you not only learned how to program but also learned the method well enough to write it.
By pawning it off to AI to solve, you have learned nothing, not even how to prompt correctly as test questions are usually formulated well enough that AI doesn't need prompt massaging to get it.
You can use AI to get some knowledge about the problem (assuming you won't hit hallucination) but that's not what will happen when you use it for test.
And if you DO want to teach students how to use AI effectively, you can just have an AI class...
If you got AI to produce a working solution, you solved the problem. In the real world nobody who's paying you cares about the method as long as you deliver results. Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
When the scientific calculator was invented, people could easily know what went into its production. As in what circuitry appears in them. You knew that if you bought it, it is yours. Want to program it? Grab a book and do this. The whole package would be a fixed price. You are in control. With AI? You are not at all in control. You rely on a big tech giant (or just like 4 useful ones) who is riding what people controversially still call an economic disaster. You are relying on a technology that is designed to very likely bait-and-switch you. As soon as you get too comfortable with AI, the big tech companies can just bump the prices up and you will not be able to say no. You rely on a technology that you do not control.
The comparison of AI to a calculator or any other technological advancement for students is apples and oranges for that reason.
Imagine giving a student a personal AI datacenter to carry with them. This may be more of a fair comparison.
PS Training students on using AI, especially for free, is setting them up for reliance on the big tech companies and the subscription model.
Even if we assume that to be true, you severely underestimate how many people that condition excludes.
Another issue is that it forces kids to stay in school for longer to do their homework, which can be a serious problem in rural areas where public transport is limited, so parents are forced to fetch their kids from school which may not be compatible with working hours.
An LLM is a force multiplier only, not a replacement. It's a personal assistant to an expert. To use an LLM in an acceptable way, you still first have to learn how to do what it does yourself. I think your suggestion that people be taught how to use LLMs is justified, but they should do so only after first being taught a no-LLM curriculum. I think this should come entirely after what constituted an education in pre-LLM times. Don't incorporate LLMs into our current education; instead, teach the use of LLMs after our current education.
So this, I think, is applicable to AI also: pay for smarter-than-you AIs, pit them against each other, let them supervise each other, and measure the outcomes you need. Who cares how they achieve that (sounds clinical and scary).
Why doesn't the essay class allow you to ask your parents to write it for you? The art class, why not ask your parents to paint something for you? Geography, why not ask your parents during a test?
- made tasks easy that were a necessary prerequisite for advanced math (basic arithmetic), but not what the lesson was supposed to be about
- could in theory also let students skip over what they were supposed to be learning (applying the correct operations in the correct order to solve a problem) but doing so would require programming or getting a program from someone else, which the teachers probably figured was a high-enough hurdle to accept the risk
Hence, scientific calculators helped teachers by removing unnecessary friction.
Meanwhile, current LLMs
- will happily attempt to do the student's entire homework for them
- cannot reliably be restricted in functionality to leave the part the students are supposed to do themselves to the student
Hence, LLMs undermine teachers by removing necessary effort.
Sure, in theory LLMs could enable even more focused lessons by removing even bigger unnecessary frictions (e.g. in history class, have an LLM scour a large collection of primary sources to exhaustively list passages mentioning a certain topic), but students cannot be trusted to use them this way.
Hence, teachers are trying to use all kinds of tricks to ensure that what they wanted to teach actually passed through the student's brain at some point.
Small local LLMs are essentially that. If an LLM can tell you to eat rocks as a tasty snack or use glue to make the cheese stick to your pizza, imagine what it says when you ask it to analyze/explain complex academic subjects, or solve fiddly problems. But it will still reliably help you polish your language, like a subject-specific dictionary/thesaurus.
Here's one possible scenario: after graduation, you (or somebody else) share the program with a friend, with a promise not to share it further. Soon enough, it's on everybody's calculator. What was a real educational exercise for you is now just a cheat where one presses the right buttons and gets the right answer. This completely destroys the educational purpose, but a significant number of people just don't care and only want to get a pass.
Yes, teachers always have a counter-weapon: for example, pointing to a random line and asking the student to explain it. But this is not (always) scalable.
I've seen this in reality in college, when there was a CS/database course final project implementation, written in Delphi (very popular at the time in the xUSSR), that was passed from year to year. The professors and TAs were so fed up that I got an almost-automatic pass because I wrote mine in C++...
——
To summarize - the ever-increasing amount of pure slop is seen everywhere. Regular multi-thousand-line PRs where the author didn't even bother to look at the code, written by AI. Just prompt -> commit, push. Nobody wants to deal with that.
Same is happening here - it's not to punish people who use the tool in the proper context, it's to filter out people who just don't give a fuck.
They. Are not. The same.
Have you ever known people to commit suicide, kill, or give themselves rare diseases because of their calculators? How about people dating their calculator and going batshit for a software update?
Not to mention that learning to do things on your own is a useful skill that teaches you to think, and an essential skill to (as you suggest) verify answers. People not understanding how things work is exactly why they take bullshit output from an LLM as gospel.
I also note that such arguments tend to be profoundly selfish and self-centred. Your anecdote happened to have an outcome you enjoyed and benefitted from, but I bet that wasn’t the reality for all your colleagues. Just like you are glad for the calculators in your class, some other student may be glad for the lack of them in theirs and it may be the reason they got into their field of study.
You would give the brains of the younger generation to American tech oligarchy, a class of people openly hostile to the principles of the democratic rule of law. If you want to see the damage actors like Fox News et alii alone can do, just take a look around in the US. Now imagine them taking over the parenting and teaching role; you wouldn't need gerrymandering if you can control people's beliefs.
Because this makes a subscription a requirement for education, and thus advances the grift that is subscriptions, rent-seeking and dependence on a service. This isn't something we should ingrain into our children from an early age.
Calculators were buy once, use forever. Subscriptions to slop generators are a long term dependency and I want my children to not be exposed to that until they can decide for themselves.
The other analogy is taking a forklift to the gym. Sure you lift weights, but you don’t really do any exercise to develop your own muscles.
AI automates a significant chunk of the exercises. So you are left with people who didn’t build any mental muscles.
This would be bad enough, but it's worse because AI severely benefits experts who have built mental reflexes/taste and can judge / verify output with minimum information.
World war I was very much that.
I now do 50% project work, 50% in person quizzes, pencil on paper on page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
The reason was less for myself and more because anything group related suddenly shot up in quality when the other individual work classmates were graded on couldn't be fudged.
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
For presentations, usually you spend a lot of time preparing for them (similar to exams), building a slide deck or pages of notes that you refer to while giving the talk (not similar to exams). Sure, you do have to be able to think on your feet, but I don't think the comparison to a sit-down exam is all that apt.
For mundane work tasks, you have the internet and whatever reference materials you want (including LLMs, these days); this sort of thing is so different from a sit-down exam that it's almost comical that you'd try to equate the two.
I'm not saying I know of a better way to evaluate learning than proctored, in-person exams, but suggesting that sort of situation is particularly relevant to real life... no, no way.
The software engineer one: here is a takehome assignment. One week later: finished!
To be fair, they both represented pretty well what work I'm going to do. The data analyst didn't show that well how much I'd also be data engineering, but whatever, I was a SWE before having a DA stint. Back to SWE again though.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
I dunno how you work, but I'd be getting raised eyebrows from people watching me hit Google for any question required of my role.
I mean, we're not talking about using calculators here, and we're not talking about vocational training (How do I do $FOO, in docker? In K8s? How do I write a GH runner? Basically any question that involves some million-dollar company's product).
We're talking about college stuff; you absolutely should not be allowed to look up linked lists for the first time during an exam, copy the implementation from wikipedia, port it to your language and move on.
In the real world, we want people who mostly know what to do. The real world is time-constrained (you could spend 2 hours learning to do what they thought you could do based on your diploma, but they'd be pissed to find out that you need to look up everything because that's how you coasted through college).
Exam situations are more like the real world than take-home assignments: High-stakes, high-pressure, timeboxed.
If your real world does not have high-stakes, high-pressure, timeboxed tasks, then you really haven't had much contact outside of your bubble.
Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.
> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only an option if you chose it to be. A student may be forced to face some realities about their dedication/priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.
When I sit down to debug a complex application, I'm drawing my prior 25+ years of experience. While I certainly would rather fix the problem faster rather than slower, I don't have a time limit, and usually taking my time (or even leaving the problem alone for hours or days) can be more effective than trying to work quickly and get everything done immediate.
The last time I sat for an exam was in 2003, and I honestly have not experienced anything in life since then that feels like that. Even job interviews have not felt similar enough to me to evoke that same feeling. (Frankly, I've enjoyed most job interviews; I don't think I've ever enjoyed an exam.) That's just my experience, of course, but I don't feel like an outlier.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
My tests are almost 100% in person. Project work included, you can hand something in, but I'm going over line by line and ask what you did there.
I can do this, because while my school hasn't updated the tests yet, my classes are small and I can do all of them in-person.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to handle it. I personally feel that expecting Ph.D.-level work is the best method for now; I've learned more by using AI to do things above my head than by hardcore studying for a semester.
I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.
Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.
AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.
And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.
It's still a new tech so I'm not surprised a lot of teachers have different takes on it. But when it comes to education, I feel like different policies are reasonable. In some cases it's more likely to shortcut learning, and in other cases it's more likely to encourage learning. It's not entirely one or the other.
For example, the professor who's leading me in this project had a fellowship at a certain university in England and said he coded exclusively with Claude Code for a month straight. Their purpose was to develop a vaccine for a specific disease, and by using AI tools such as Claude Code they're several months ahead of schedule.
Am I saying I’m as knowledgeable or capable as a Ph.D. right now? Absolutely not. There’s just no terminology yet that correctly describes accelerated learning and iteration through AI, since the technology is so new. I can’t speak for others, but as a senior in my physics degree, I’ve actually been learning faster by using AI. It’s either a mental crutch or a mental accelerator. The difference is whether you want it to do the work for you completely, or you try to learn and follow along.
It’s a very new and underexplored area right now, how higher learning is affected by using AI as a tool instead of as a cheating device, but historically, new tools like the calculator or the computer have done a lot to accelerate learning once new rules are in place.
Sounds like a fun project, I wish you the best. I ran a similar program (independent study that encouraged freshman/sophomore undergraduates to explore using microprocessors, at the time the EE curriculum was completely focused on analog circuit theory and ended at boolean logic) and it went well enough that it eventually became part of the official undergraduate curriculum.
Undergrad research is pretty common, and it's not all that hard to get your name on a paper as an undergrad. A lot of undergrads think that doing work that gets your name on a paper equates to PhD-level work.
Nice idea. What class and what work are you doing then?
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, I doubt you'll really know whether the AI is lying to you. I have to wonder if your teacher will either; it sounds like they have kind of checked out from actually teaching.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
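That's roughly the idea: a revision history can be modeled as a log of insert/delete events replayed in order to reconstruct (and watch) the document being built. A minimal sketch in Python; the event format here is hypothetical for illustration, not Google's actual protocol:

```python
def replay(events):
    """Replay a hypothetical edit log and return the final text.

    Each event is a tuple: ("insert", position, text) or
    ("delete", position, length).
    """
    doc = ""
    for op, pos, arg in events:
        if op == "insert":
            doc = doc[:pos] + arg + doc[pos:]
        elif op == "delete":
            doc = doc[:pos] + doc[pos + arg:]
    return doc

# A human-looking log: typed, mistyped, corrected, extended.
log = [
    ("insert", 0, "Helo"),
    ("delete", 3, 1),        # erase the typo
    ("insert", 3, "lo"),     # retype the ending
    ("insert", 5, ", world"),
]
print(replay(log))  # -> Hello, world
```

Stepping through such a log at human speed is exactly the "playback" that makes pasted-in AI text stand out: it appears as one giant insert event instead of hundreds of small ones.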
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in a IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Another way to automate this particular task: some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student skilled in the art of using the bot to make one of these typewriters the output target.
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
I'd be surprised if copy/paste carries the revision history, though. Wouldn't they have had to start with the original document (from the other student) and make their edits directly, and then submit that file?
I also think that when track changes was first introduced in earlier versions of MS Word, there wasn’t as much concern about privacy/telemetry as there is now, so it wasn’t made as prominently obvious.
Oh look, there's an LLM trained on keyloggers to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
https://en.wikipedia.org/wiki/Loebner_Prize
Participants spent more time polishing up the natural language parsing aspects in conjunction with pre‑programming elaborate backstories for their chatbot's bios among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But reading through some of the previous competition chatbot transcripts still makes for fascinating reading.
Isn't that really what all these AI companies are doing too? It sure seems like it is.
I graduated in 2020, so I've only gotten to see the changes secondhand through friends and family who are teachers, and through my sibling who graduated a few years after me. But the difference is staggering.
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
Maybe the medical profession is a counter example.
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
You still do all the same things, and they are graded, but this doesn't affect your final grade. Instead, you need to pass a threshold to enter the exam, which is then graded.
The US isn't so amazing at this, it simply can be done better. Recognizing where you can improve and from whom you can learn is a great first step to ACTUAL improvement.
What a narrow set of skills to send into your economy.
At Oxbridge, for CS we still had lab work. We still had problem sets assigned for CS and for math which were graded. We had one large CS group project in, I want to say, our second year. Humanities students were still assigned essays. It's just that none of this stuff contributed to your final degree classification which was based entirely on your exams (although if you didn't do your CS practicals you wouldn't be allowed to pass).
Obviously Oxbridge isn't exactly representative but certainly my experience showed me that the American style is not the only way of making education work.
joking, of course everyone does 'projects, labs, teamwork and papers'. It's just not the main focus of the grading process.
What is the "it" that AI does for you?
This is assuming you know how to get good work out of AI in the first place. But even that is turning out to be a skill in and of itself.
Context helps immensely, for example. Think of what you can do that someone outside tech can't.
When running water replaced the need to pump water out of the ground yourself, were people urged to "learn faucets"? You kind of just need to twist a knob and water comes out, right?
Maybe there was an intermediary stage where running water was slightly more complicated and there were more steps to learn, but devoting time to learning those steps would have been a waste of time, since the end goal of the system was for it to function without much input.
For example, take “X” to be “walking”. Do we have the technology that allows us to pretty much never have to walk? Sure. As far as I am aware, though, we do not generally favour a lifestyle of being bound to a mobility aid by choice, and in fact we have found that not walking when able in the long run creates substantial well-being issues for a human. (Now, we have found ways to alleviate some of those issues for those who aren’t able, but clearly it is not sufficient because we still walk.)
The problem is exacerbated immensely as the value of X approaches something as fundamental to one’s humanity as “thinking”.
But that would require the teacher to be good at AI too. I think that's the problem here.
No, it shouldn’t. I’m not bearish on AI but it shouldn’t replace any part of a classroom where the objective is to learn and communicate in a new language (German). The typewriter argument is memorable and interesting - the article points out the lack of editing forces kids to slow down and think about their writing, as well as iterate through multiple drafts. It’s not a nostalgia thing, they’re not old enough to have ever used one before.
I could see an argument for adding on a new class for GenAI, agents, context engineering or what have you, but considering how behind current US curriculums already are and how quickly the AI field moves, I can only see this ending in wasted time and money: even an up to date class will be stale by the time it’s over. Kids will end up learning this anyway outside of the classroom, no use lecturing them on something they’ll already know.
You don’t give first graders a calculator just because they will always have one in their pocket; they’d end up inputting numbers into a magic box and never learning to do it manually, which would destroy their future mathematical education. It’s about the same with AI.
AI is not a gun that you can't put into the hands of a child. It's a paint brush.
https://www.youtube.com/watch?v=jbHB-rzKBAs
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
Ah yes, the classic "if you think the system is abusing you, you shall out yourself to the system that's abusing you if you want any chance of recourse." Because a tribunal run by the people you're lodging a complaint against can't possibly be biased.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
Our first year class was about 250 people. It was fine.
By the 4th year, class sizes were a much more manageable 30 to 50.
You get maybe 10 to 15 minutes with the professor (usually more in later years), they ask 3 questions with some followup. That’s 1 work week for the professor. And less than half the students even make it that far for every exam season (3 per school year) so you’re looking at something like 3 days of work. It’s fine.
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
[1] https://archive.org/details/ahl-1978-basic-computer-games/
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor, even for the interesting courses.
> The Sentinel not only cares deeply about bringing our readers accurate and critical news, we insist all of the crucial stories we provide are available for everyone — for free.
Thank you very much for interrupting and ruining my reading experience of your article.
If you don't like the website, simply don't use it. Especially when you're making no contribution to it.
If someone gives away something free, they can and sometimes do wash their hands of it. That doesn't prevent you from expressing your opinion on what you think they should change about the work, but they're not under any obligation to do anything about it.
Someone made a thing available. You can take it as it is, you can make noise about what you don't like, you can make it better, or you can ignore it and move on.
If someone is providing a mix of useful and garbage information, well, take your pick from the above.
optional "side quests" would allow teachers to create some standard accepted "main quest" curriculum and then just create a bunch of (even possibly "fun") "side quests" students can work on in their spare time for extra skill development
https://austinhenley.com/blog/aihomework.html
One of my best college professors would review such essays in-person, one-on-one twice each semester.
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience: trying to type on those crappy typewriters on a timer, with no errors. And I had literally written it out by hand before trying to transcribe it.
Good luck, we’re all counting on you.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
Imagine being able to do some writing without notifications going off every few seconds, and without always being one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
[0]: https://writerdeckos.com/
Gyms aren't redundant because tractors exist.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
LLMs are also making having a public repo code portfolio be much more worthless as a sign of legitimacy
My colleagues that teach hard skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every moment possible, or despise it in the same way graphing calculators were by high school math teachers when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it: plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on how to perform the action effectively. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't actionable enough for them to act on.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back'; a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills, and who believe that tech is truly a meritocracy where the only thing that matters is 'lines of code', there are no politics, and they'll never work on-call or crunch or give code reviews, is also astounding.
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
I think AI should be treated the same. Who cares if it assists with a lot of the work; that's a good thing. BUT, as we all know, AI has been incorrect about many things, so a much better learning practice would be to set aside whether AI wrote the paper and focus heavily on students backing up their claims with sources. If your paper says ABC is true and AI writes it up in a perfect paragraph, you would still need to confirm the facts and find a reputable source showing them to be true.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Testing and instruction should be modified to account for AI. If a student uses an Agentic AI for work, learning, research, then when test time comes, the student should be required to stand in the front of the class and teach the class what they have learned, i.e. "Teach Back" all they learned to the entire class student body and teacher. The entire class, instructor included, will also be required to participate in a Q&A session to make sure that student's learning is not just made up of memorization, e.g. restate the information learned but using different words, different scenarios, etc.
Oh
You just said that it was a waste of time. So was it or not?
> that option is also trivially available outside of college, it's called “email”.
How many experts have you cold emailed over the years and how much of their time have you taken?
To your second question - less than a hundred, but tens. Most people who are worth listening to publish their work and their thoughts. Email is free. Experts love to answer questions about their work, professors hate doing extra work for no extra pay. The incentives here are not confusing. How much time have I taken? Confusing question. These are real people with real passion, and they answer questions with that in mind. Professors are obligated to puke up an answer. I've gotten responses in most cases, in some I haven't. When I don't get answers it's because the targets are smart and busy. If I wanted more engagement with my random questions I'd offer money, and if I had offered money every time I'd still be below par on the money I wasted on college. If I wanted to justify it - I'd say I learned enough to validate that paying real money for another 3-6 years would have been less valuable than burning it for heat.
I think you completely misunderstood this interaction.
There are 2 possible explanations.
1. You are so smart/knowledgeable that the professor thinks you are beyond college.
2. You were acting like such an arrogant know-it-all that the professor was being sarcastic.
I’ve seen #1, but I’ve seen #2 many times.
You sound like you have a huge chip on your shoulder about not having a degree. I had the same issue at one point before I went back and finished (after working as a professional developer for a while), so I recognize it.
When I did go back, I asked questions in class, I went to office hours to ask questions, and I did research projects with professors. Some back-of-the-envelope math says it would have cost me about twice what I came out owing if I’d paid for an equal amount of time with whatever experts I could find.
My strong suspicion based on the few posts I’ve read is that your attitude is the reason you had such poor interactions with instructors.
Chip on my shoulder - no, and it's a silly label to begin with. Understanding that it's for other people who value the paper more than intrinsic understanding, yeah.
EDIT: I will concede in some way that I'm proud of not having a degree, and it does influence my thoughts on this topic. I've met some real idiots that do, and I don't consider it a serious differentiator.
Also looking up the thread - at my early jobs, I was surrounded by many people who were interested in educating me on any topic I could think of, because similarly we were all being paid for our time. The difference between that and school was the assumption that we were both motivated and capable.