I've been banned for "AI slop" on a few subs here on Lemmy as well as on Reddit.
I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative as I can about the details. My projects are very complicated and I try to document them well.
My project is pretty cryptography-heavy... sharing my efforts is an attempt to show transparency... but it gets used against my project when people call it AI slop (which undermines Kerckhoffs's principle).
It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.
My project aims to be a secure messaging app. I have all the bells and whistles there along with documentation... but if the conversation can't move past "it's AI-generated", then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.
AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to drive in a nail... I "use" the hammer. AI is no different: you are responsible for how it's used.
I've busted my ass on my project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community. Cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised... but it's simply ignorant for mods to ban me over the quality of my work, considering the level of transparency and my engagement in discussions about it.
It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI... and it's clearer for it. (You can find more detail on my profile.)
I am of course sour about being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI... but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that counts as AI slop and can get me banned, I simply don't have faith in that community to be objective about where AI can contribute.
While it's understandable that people don't want to review AI slop... I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?
Cryptography is notoriously easy to get wrong. If you don't know enough about it, you should not offload it to the hallucination machine, because you will not be able to verify its output properly, and those who could will not bother to.
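To make that concrete with a generic example (not taken from the project under discussion): one of the classic subtle mistakes is comparing a MAC tag with ordinary `==`, which short-circuits on the first differing byte and leaks timing information. Both versions below return identical booleans and pass identical unit tests, which is exactly why this class of bug slips past unreviewed (human- or AI-written) code:

```python
import hmac
import hashlib

KEY = b"demo-key"  # hypothetical key for illustration only

def make_tag(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify_naive(msg: bytes, tag: bytes) -> bool:
    # BROKEN in practice: `==` on bytes can exit early at the first
    # mismatching byte, so response timing can reveal the tag byte-by-byte.
    return make_tag(msg) == tag

def verify_safe(msg: bytes, tag: bytes) -> bool:
    # Correct: hmac.compare_digest runs in constant time.
    return hmac.compare_digest(make_tag(msg), tag)

tag = make_tag(b"hello")
# Functionally indistinguishable; only the timing behaviour differs.
assert verify_naive(b"hello", tag) and verify_safe(b"hello", tag)
assert not verify_naive(b"hello", b"\x00" * 32)
assert not verify_safe(b"hello", b"\x00" * 32)
```

The point: a test suite that only checks return values cannot distinguish the broken version from the safe one, so "it has unit tests" is not, by itself, evidence of security.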
This is not what a real audit looks like and it should not be presented as such. This "audit" is, in fact, slop.
Do you not see the problem in this line?
Or this?
Perfect. You get it. You understand that generating an AI audit is wild!
https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open
The AI audit came after a long back-and-forth with the various communities that asked for an audit... of course they asked for a professional one... but those who ask must know that professional audits are prohibitively expensive, especially for a solo vibecoding dev like myself.
I also understand that people would prefer a project with a team of experts... sorry to break it to you, but a team of experts is not going to hire itself onto an unfunded project like this.
While the security audit, unit tests, formal proofs and verification are not considered good enough when done with AI, my hope was that they could serve as a starting point for anyone like ROS to perform an actual review. I can't offer more transparency than open source, documentation and discussion.
Then... vibe-code something else? Why do you think you should be making something you are not an expert in, something that can potentially put your users in danger and make you liable for it? If it's a learning project, great, go wild. But if it's intended to be used, then sorry, this is just an irresponsible approach that should not be entertained by anyone. I get that you have "positive intentions", but pick some other avenue that you can get right, or contribute to an existing project (being mindful of contribution guidelines).
I vibecode a lot of things. My project is not inherently dangerous; people can use any software irresponsibly. In my project and all my communications about it, I make it clear that users should use it cautiously and that it's presented for testing and demo purposes. That's mentioned in all of my posts, and I also have terms and conditions within my projects that explain as much.
Nobody is being tricked into sharing sensitive information... in fact, I made a proactive attempt to create something that doesn't need any personal information.
Don't tell me what I should and shouldn't be coding. I put time and effort into testing and verifying. This is the issue with mentioning AI: it undermines all other efforts. It's the low-hanging fruit of criticism.
"I vibecoded a lot of things, my projects is not inherently dangerous"
Except it is dangerous. The fact that you declare yourself a vibe coder implies to me that you don't know what's going on in the system you're developing. Correct me if I'm wrong, but everything you know about your project from this point on is strictly a function of what generative AI is outputting. Do you see how pernicious that is, when all your knowledge comes from a single source and you believe it?
The reason I'm pointing this out is that these AI models make mistakes due to text compression. Much like JPEG, when you compress a file, data is lost. The so-called "hallucinations" of AI models are caused by this data loss from compressing text.
Whether engineers are trying to optimize that or not does not matter; it is a fundamentally flawed system. If you use it as the framework for developing a codebase, you won't be able to tell whether the code it generates actually implements the logic or is fit for commercial use; you involuntarily agree with the code it generates and don't question anything about it.
This doesn't even take into account the errors I've observed that are too abstract for humans to catch easily: things like time complexity, or code relating to physical hardware or microcontrollers, where what the AI generates functionally does nothing. Someone who cannot reason about something as abstract as time complexity won't know until it runs.
What I'm trying to point out is that you have to have a level of skepticism about the code it generates. Learn the subject (in whatever you do) so that you can question what it generates, and use it to verify the code that you write rather than having it write code for you.
This generally seems to allude to my due diligence, and whether it's low-effort AI.
It's skepticism that makes me pay attention to docs and various details.
For example: I tried to get a security audit. I can't get one for free, so I created one with AI. To be clear, I understand how my app works and was able to articulate it to the best of my ability to the AI to generate the security audit. I was exhausted by the experience of creating the audit, but it provided me with good information and advice. I stand by the feedback there that it isn't ready for production.
In all my posts on all platforms I'm sure to mention that it isn't production-ready (the same goes for the repos on GitHub)... but the general aim is to create something secure.
It is not "your" project - it was generated by a glorified chatbot. Since you lack the experience to judge its output, I cannot trust you to verify the security of the project.
Then what is the point of it existing if it can't be used seriously? Why should people spend their time on it when there isn't a solid base to build on? If you want to do something useful, contribute to an existing project. If you just wanna hack away at something, sure, do that; just don't be surprised if other people happen to hate it when you try to present it as a serious project. Nobody would bat an eye if you presented it as "I wanted to try and implement the Signal protocol; this is what I've learned and how far I've gotten".
way ahead of you: https://www.reddit.com/r/signal/comments/1orsjw2/signal_protocol_in_javascript
I'm banned for AI slop, but what's there is a small fraction of the complexity of my project.
I always try to post in a way that makes it clear the project is far from finished.