No it's not vibe coded at all. I know exactly what this program is doing line for line.
Good point yes just found it.

Removed and banned for slop and snake oil hmmmm. What a joke.
Annoying because it had 100 votes in an hour and I was talking with the community about potential improvements and design ideas.
Yeah, it was literally just me posting stuff that I found interesting and sharing applications I made. There was only one other active user lol.
Mods removed this post and deleted my sub 'secure coms' on lemmy.ml instance. Thanks to whoever is federating this to other instances.

I do want to reply because I think my claims are reasonable.
The only actual cryptographic function in the schema is secrets.randbelow(). Scrutinize that function if you don't think it can achieve what I'm claiming it can.
The randomize function takes each ID and assigns it a new integer, drawing entropy at the OS level. There are no seed values used here, so it's never going to repeat in a billion years. Because there are 2 million+ entries, the number of possible shuffle maps is essentially limitless: you could stack 1-petabyte drives across the entire universe and still not be able to capture every possible state.
This function is well documented and (as far as I know) is one of the best CSPRNGs you can actually utilize on a device.
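To make the claim above concrete, here is a minimal sketch of how a shuffle map could be generated with secrets.randbelow(). This is my illustration, not the program's actual code; the function name and the Fisher-Yates approach are assumptions on my part, chosen because it's the standard unbiased way to shuffle with only randbelow() as the source of randomness.

```python
import secrets

def build_shuffle_map(ids):
    """Hypothetical sketch: assign each ID a new integer by
    permuting the ID list with a Fisher-Yates shuffle driven by
    secrets.randbelow(), which draws entropy from the OS."""
    new_ids = list(ids)
    # Fisher-Yates: unbiased as long as randbelow() is uniform
    for i in range(len(new_ids) - 1, 0, -1):
        j = secrets.randbelow(i + 1)  # uniform in [0, i]
        new_ids[i], new_ids[j] = new_ids[j], new_ids[i]
    return dict(zip(ids, new_ids))

# tiny demo: every raw ID gets exactly one new ID
shuffle_map = build_shuffle_map(range(10))
```

With 2 million+ entries there are (2,000,000)! possible maps, which is where the "limitless states" claim comes from.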
Here is an example of the raw shuffle map that is generated.

Before the shuffle map is loaded, if you query your word, you're going to get the raw, unshuffled associated message ID.
Once a shuffle map is generated and loaded into the program, the query simply looks up the new CSPRNG-assigned integer.
The shuffle map can now be considered the key. Because this is a pure lookup table, there is no algorithm to attack aside from guessing how my exact device generated the shuffle map in its exact moment of existence... that's where the strength of this schema lies.
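The two-stage lookup described above can be sketched like this. The table contents, the function name, and the use of plain dicts are all hypothetical stand-ins (the real program queries an SQLite database); the point is just that the shuffle map acts as the key sitting between the word and its final symbol.

```python
# Hypothetical miniature codebook: word -> raw message ID
codebook = {"hello": 0, "world": 1, "attack": 2, "dawn": 3}

# A loaded shuffle map is the key: raw ID -> CSPRNG-assigned ID
shuffle_map = {0: 2, 1: 3, 2: 0, 3: 1}  # example permutation

def query(word, key=None):
    """Return the raw ID when no map is loaded, otherwise the
    remapped (shuffled) ID from the key."""
    raw_id = codebook[word]
    return raw_id if key is None else key[raw_id]
```

Without the key, query("hello") returns the raw ID; with it, the same word resolves to a different integer.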
Thanks for the discourse. I've enjoyed the pushback even though we can't agree.
Edit*
Take a look at the new pack62 compression though!

I'm not selling anything though... it's completely free? What are you talking about?
The reason you might want to take the approach I described is that you can make precise claims about the dataset and the final result, rather than saying "umm... ChatGPT said so."
You realize it's just a database file that you can look at, right? You don't think I've looked at the database?
It's a modern implementation of an ancient form of secure communication that has been used for thousands of years, supercharged by a computer. Not sure why you are so triggered. It does exactly what I'm claiming it does.
I mean, what's the real point you're arguing? I'm happy to include other datasets in the master database. A bigger database is no problem for this schema or for SQLite's limits.
The LLM produced all these things with one or two prompts, and they are all grammatically valid... it's just what I happened to source the initial dataset from.
Will it ever be ready? I've seen this same copy paste a bunch of times in numerous places now. What's the hold up?
I used an LLM to create my database because it is not only a collection of words but also of common phrases. Plus, not only can the LLM format the database so it's interpretable by the program, it can build the database and include the appropriate number of duplicates.
The fact that you completely ignore that simply using a larger RSA key would both be faster and more secure than your approach, doesn't inspire confidence either.
The goal was to not use any modern crypto... Codebooks have been used for a very long time and are secure with proper key management.
This is an attempt at a modern codebook. It tackles most of the shortcomings of previous iterations.
(It's also in python which is basically unusable. )
Haha.
What motivated you to write this program?
Just for fun basically.
I've had the idea for a while, but the problem was always the huge amount of grunt work to get the initial database created. With the use of an LLM I basically mined all the unique entries and common phrases.
I'm not claiming it's the best or anything at all. But by codebook standards... I tried to implement all the things that would make a good codebook:
- Ability to say the same thing over and over and make it look different, to mitigate frequency analysis
- Easy, secure shuffling
- Customizable
- Assisted composing
- Exportable
- Long-term rotating key schema
- Conclusive and established database
- Portable
At the very minimum, it needs multiple symbols mapping to the same strings to achieve ambiguity.
It does this.
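The homophone idea (many symbols per string, picked at random so repeats look different) can be sketched as follows. The table contents and function names here are my own hypothetical illustration, not the program's actual schema; it only shows the principle behind the frequency-analysis point above.

```python
import secrets

# Hypothetical homophone table: each word owns several symbols,
# so the same word encodes differently each time it's used
homophones = {"hello": [10, 47, 93], "world": [5, 61]}

def encode(word):
    # secrets.choice draws from OS entropy, so repeated words
    # produce symbols with no visible pattern
    return secrets.choice(homophones[word])

def decode(symbol):
    # symbols are unique across words, so reversal is unambiguous
    for word, ids in homophones.items():
        if symbol in ids:
            return word
    raise KeyError(symbol)
```

Sending "hello" three times might yield 47, 10, 47; a frequency count over the symbols no longer reveals which word is most common.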
The only conventional cryptography is the shuffle function, which takes entropy from the OS.

Secure Coms on lemmy.ml. I am the original creator; this is a scraped version of it.
I lost mod rights and all the posts were deleted.