I've been banned for "AI slop" on a few communities here on Lemmy as well as on Reddit.
I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative as possible about the details. My projects are very complicated and I try to document them well.
My project is pretty cryptography-heavy... the act of sharing my work is an attempt to show transparency, but that transparency gets used against the project by calling it AI slop (which undermines the spirit of Kerckhoffs's principle: a system should be open to scrutiny).
It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.
My project aims to be a secure messaging app. I have all the bells and whistles there, along with documentation... but if the conversation can't move past "it's AI-generated", then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.
AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to bash in a nail... I "use" the hammer. AI is no different: you are responsible for how it's used.
I've busted my ass on my project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community: cryptography is a serious subject, and my ideas and implementation SHOULD and MUST be scrutinised. But it's simply ignorant for mods to ban me over the quality of my work, considering the level of transparency and my engagement in discussions about it.
It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI... and the project is clearer for it. (You can find more detail on my profile.)
I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI, but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that counts as AI slop and can get me banned, I simply don't have faith in that community to be objective about where AI can contribute.
While it's understandable that people don't want to review AI slop, I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?
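To give an idea of what I mean by unit tests: here's a generic sketch of the kind of property test a crypto module should have (this uses Python's standard-library `hmac` for illustration, not my actual code):

```python
import hmac
import hashlib
import secrets

def tag(key: bytes, msg: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for msg."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, t: bytes) -> bool:
    """Check the tag using a constant-time comparison
    to avoid timing side channels."""
    return hmac.compare_digest(tag(key, msg), t)

# round-trip property: a genuine tag verifies...
key = secrets.token_bytes(32)
msg = b"hello"
t = tag(key, msg)
ok = verify(key, msg, t)              # True
# ...and any tampering with the message is rejected
tampered = verify(key, b"hellp", t)   # False
```

Tests like this don't prove the design is sound, but they do catch the obvious breakage that "slop" implies.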
"I vibecoded a lot of things, my project is not inherently dangerous"

Except it is dangerous. The fact that you declare yourself a vibe coder implies to me that you don't know what's going on in the system you're developing. Correct me if I'm wrong, but everything you know about your project at this point is strictly a function of what generative AI is outputting. Do you see how dangerous that is, when all your knowledge comes from a single source and you believe it?
The reason I'm pointing this out is that these AI models make mistakes due to what amounts to lossy compression of their training text. Much like JPEG, when you compress a file, data is lost. The so-called "hallucinations" of AI models are caused by this data loss.
Now, whether engineers are trying to optimize that or not doesn't matter; it's a fundamentally flawed system. If you use it as a framework for developing codebases, you won't be able to tell whether the code it generates actually implements the logic or is fit for commercial use. You involuntarily agree with the code it generates and don't question anything about it.
This doesn't even take into account the errors I've observed that are abstract for humans to catch: things like time complexity, or code targeting physical hardware or microcontrollers that AI generates but that functionally does nothing. If you can't reason about something abstract like time complexity, you won't know until it runs.
What I'm trying to point out is: you have to keep a level of skepticism about the code it generates. Learn the subject (in whatever you do) so that you can question what it generates, and use it to verify code that you write, rather than having it write code for you.
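For instance, one way to stay skeptical about an "abstract" property like time complexity is to measure growth empirically rather than trusting what the model claims. A rough sketch (the `quadratic` function is just a stand-in for whatever the AI generated):

```python
import time

def growth_ratios(fn, sizes):
    """Time fn at increasing input sizes and return ratios of
    successive runtimes. When sizes double, a ratio near 2
    suggests O(n); near 4 suggests O(n^2); and so on."""
    times = []
    for n in sizes:
        start = time.perf_counter()
        fn(n)
        times.append(time.perf_counter() - start)
    return [times[i + 1] / times[i] for i in range(len(times) - 1)]

# stand-in for generated code: a quadratic nested loop
def quadratic(n):
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

# doubling n should roughly quadruple the runtime here
ratios = growth_ratios(quadratic, [500, 1000, 2000])
```

It's crude (wall-clock timing is noisy), but it turns "I wouldn't know until it runs" into a check you can actually run.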
This seems to come down to my due diligence, and whether this is low-effort AI.
It's exactly that skepticism that has me put so much attention into the docs and various details.
For example: I tried to get a security audit. I can't get one for free, so I created one with AI. To be clear, I understand how my app works and articulated it to the best of my ability to the AI to generate the security audit. The experience of creating the audit was exhausting, but it provided me with good information and advice. I stand by its feedback that the project isn't ready for production.
In all my posts on all platforms I'm sure to mention that it isn't production-ready (the same goes for the repos on GitHub)... but the general aim is to create something secure.