
hibot3000.com


hiblog3k

18/10/2025

nothing happened :)
anyway

An AI graduate on attitudes towards AI

if you know me, you'll know that i'm a diploma ceremony away from a master's degree. i'm obviously best versed in traditional natural language processing (NLP) methods, but i also have some formal training in the development and deployment of llms. hell, my thesis is about training neural networks on procedurally generated languages. believe me when i say i (generally) know what i'm talking about when it comes to the technical stuff surrounding ai

besides that, i had an internship where i analyzed the dutch mainstream media's sentiments on ai. i didn't arrive at a proper conclusion by the end, but skimming articles by hand, i found that "normal" people were generally positive about ai developments, with the largest risks being privacy and entrusting american corporations with all the training data. if you read about more recent developments, these tangible issues are well on their way towards being addressed. all in all, things are looking up

but then there's the bubble. everyone and their mother is dreading a collapse of the ai industry, because the return on investment is dwindling. gpt-5 wasn't the giant leap thy lord altman promised it to be (even though the tech was already pretty damn good). oh no, everything's falling apart! oh, the humanity!

relax. the .com bubble happened too, and look at my tld. the internet didn't die with the collapse of the .com bubble, and i doubt ai will die with the collapse of the ai bubble. the fact is, ai can be really useful when you're working with probabilities and/or you're looking for information that advertisers on google would love for you not to find. these tangible benefits to ai are here to stay, and they can't be undone by a recession

so who puts that idea of the annihilation of ai into people's heads? who keeps bringing up the bubble as if it'll finally end ai? well, it's the usual suspect: my peers. my sweet, chronically online peers

i have hardly any idea why, but a lot of people who use x or bluesky (particularly artists) have nothing but a burning hatred for ai. it doesn't matter if you use it for a single illustration or excerpt, they will cut your nutsack off, staple it to your face, slice your stomach wide open, use your left eye as a paddleball and throw your convulsing body off the white cliffs of dover. i'm making it more graphic than it is, but that's about the level of vitriol we're dealing with for some of these people

to a degree, i get it. you could forgive a lone programmer like me for scraping a social media platform and assuming the right to train on the data was secured, but that kind of attitude towards data collection is inexcusable for megacorps with lawyers. and yes, the only proven approach for good performance is wasteful: valuable electricity and real estate are being eaten up by data centers and it is never enough. and yes, these tools allow slop to be created in industrial quantities, even enabling a whole ai ecosystem without any human intervention. these are real problems that we need solutions to, and with the right principles, i believe they are solvable

and yet, many of my peers are not interested in solutions. they recoil at the idea of ai having a place in our society, for these reasons or others. when someone wants to write an essay about ai, the only thing on their minds is how the essay will paint ai as the devil. one friend of mine even theorizes about an apocalyptic war wiping out all ai and the demand for it too. the message is clear: ai must go, the bubble must burst

this anti-ai discourse is typical of the internet, but it's also completely contained within the internet. i'm obviously a bit biased as a python/web dev with friends and acquaintances over the age of 45, but nobody in my daily life thinks this way. the issues at hand are discussed soberly in lectures and that's the way it should be. the grassroots internet needs a wake-up call from the real world, and that's myself included (though on issues other than ai)

so, i've highlighted what i hear around me. i've provided a brief overview of reasons to be concerned about ai. after all that, what is my take on ai?

honestly it's pretty neat

i know what i want, i know what i'm doing and i'm cautious with outputs. that's the best user you could ask for when it comes to ai, and it's a kind of user you typically only get when they're knee-deep in the discipline. machine assistance is consistently the best-performing paradigm for a myriad of tasks, including translation, and i think we should embrace that with newer tech

that said, the current offerings are far from ideal. my dream ai would be one which is trained in accordance with european copyright and privacy laws, and which runs locally on a potato. we wouldn't have to worry about the climate impact or e-waste, and we'd have developer teams who respect the rights of writers and artists (or at least respect them better than others do). ai ecosystems are another thing, but that's a much more political issue which i won't get too deep into here. i said what i said

in conclusion, there's an absolutely gigantic divide between the real world and the internet. while the former is admittedly caught in a bit of an irresponsible hype, the latter is making its echo chamber of ai hatred stronger and stronger. i usually don't ask you to listen to me - i'm just some shithead on the internet - but believe me when i say that there can be brighter days ahead for ai. and if you don't believe me, fuckin pay me. i'm serious, put me in a team and employ me

also can we please nuke twitter and bluesky thanks

UPDATE 20/10/2025

first blog post update wooo!!!! just wanna edit a few things in the main post and provide some addenda based on feedback i got (and expect)

i'll briefly address the nomenclature. i'm well aware that "ai" is a marketing buzzword for anything computery that does... well, anything at this point. some of my old professors and teaching assistants insist that i should use large language model or llm to refer to these technologies, and in an academic context, they're right. however, such specifics really only muddy the waters, because ai is much more than language at this point and people know "ai" more than they do "llm". no hard feelings btw, this is just my disclaimer that i know the "correct" name for the topic of discussion. that said, i did update the first paragraph accordingly

next, some actual feedback i got from a friend of mine. while he's personally indifferent to ai, he's worried that ai will encourage creative laziness. he even attests to a knock-on effect where an already skilled artist can abandon their trade to focus solely on ai art. to that end, he recounts people who intentionally switch to ai art as counter-vitriol to the anti-ai mob. because of this polarization and more, he'd rather be safe than sorry and just not allow ai art in the spaces he moderates

i recognize that counter-vitriol. i knew exactly such a guy who was central to a friend group i was in, and i severed contact with that group because of him. if you know who i'm talking about, you'll know the (borderline) illegal shit he makes and the aura of complete insufferability he brings to every community he delves into. such people absolutely exist and they're part of the reason why people are so radicalized against ai

however, it's wrong to blame ai for the actions of these people. it's the same reason why i emphasize detachment from american corporations: don't blame the tech, blame the people that abuse it. cookies are the devil when google or facebook deploy them, but i'd also be interested in putting cookies on my website for quality-of-life functionality. right now, the only tracking on this site is a cloudflare script that i can't disable (and if you know how to, please email me), and i will not do any tracking via cookies ever. that's correct use, and it's a bit misguided to avoid all cookies ever as protest against big tech spying on you (even if it is a 100% valid concern). it's as true for ai as it is for cookies
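to make that "correct use" point concrete, here's a minimal sketch (plain javascript, with hypothetical helper names i made up for illustration) of the kind of first-party quality-of-life cookie i mean: storing a theme preference, no third parties involved

```javascript
// Hedged sketch: a first-party "quality-of-life" cookie, e.g. remembering
// a theme preference. No third-party tracking anywhere in sight.
// serializeCookie/parseCookies are hypothetical names, not a real library.

function serializeCookie(name, value, maxAgeSeconds) {
  // SameSite=Lax keeps the cookie first-party; it won't ride along on
  // cross-site requests the way tracking cookies do.
  return `${encodeURIComponent(name)}=${encodeURIComponent(value)}` +
         `; Max-Age=${maxAgeSeconds}; Path=/; SameSite=Lax`;
}

function parseCookies(cookieHeader) {
  // Turn a Cookie header like "theme=dark; lang=en" into an object.
  const out = {};
  for (const part of cookieHeader.split(";")) {
    const idx = part.indexOf("=");
    if (idx === -1) continue;
    const key = decodeURIComponent(part.slice(0, idx).trim());
    out[key] = decodeURIComponent(part.slice(idx + 1).trim());
  }
  return out;
}

// In a browser you'd assign the serialized string to document.cookie:
//   document.cookie = serializeCookie("theme", "dark", 60 * 60 * 24 * 365);
```

same data mechanism that big tech abuses for tracking, used here to remember that you like dark mode. the tech is neutral; the deployment isn't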

then what about the loss of skill? wouldn't it be lazy for me to focus solely on prompt engineering, instead of improving my skills and acquiring dedicated tools? maybe, but img2img is as valid a use of ai as any, and troubleshooting ai-generated code can still build problem-solving skills. "offloading the hard parts" is a major reason why technology exists: established professionals have an easier time doing their job, and the improved accessibility gives less gifted people a shot to make it big and innovate. again, abuse is the key word here, and the responsibility for that lies solely with the person, not the tech

ok that's all i really wanted to say here, lest i make the addendum larger than the actual blog post