RotaryKeyboard

joined 2 years ago
[–] RotaryKeyboard@lemmy.ninja 17 points 2 years ago

This is absolutely brilliant! I’ve tried to get results like this with starter images, but I have never gotten anything as nuanced and subtle as this! Great work!

[–] RotaryKeyboard@lemmy.ninja 3 points 2 years ago

My purpose in life is to be happy. My primary challenge is to find the things that make me happy and to find ways that those things can make other people happy.

[–] RotaryKeyboard@lemmy.ninja 43 points 2 years ago (1 children)

I’m a 15-year user of Reddit. Lemmy right now is very similar to very early Reddit. Reddit’s users were more technical back then, too. I’m betting the early adopters of places like this are usually the technical types.

Another nice thing about Lemmy is that a lot of the low-effort, casual users on Reddit haven’t gotten here yet. Interaction here is definitely a lot more pleasant.

[–] RotaryKeyboard@lemmy.ninja 29 points 2 years ago (4 children)

It's so amazing to see a comment like this. For years and years, tech industry workers were heavily anti-union. I'm glad to see the sentiment turning around.

[–] RotaryKeyboard@lemmy.ninja 10 points 2 years ago (7 children)

Our system of measurement. There can be only one!

[–] RotaryKeyboard@lemmy.ninja 5 points 2 years ago

I’ve just spent a few weeks continually enhancing a script in a language I’m not all that familiar with, exclusively using ChatGPT 4. The experience leaves a LOT to be desired.

The first few prompts are nothing short of amazing. You go from blank page to something that mostly works in a few seconds. Inevitably, though, something needs to change. That’s where things start to go awry.

You’ll get a few changes in, and things will be going well. Then you’ll ask for another change, and the resulting code will eliminate one of your earlier changes. For example, I asked ChatGPT to write a quick Python script that does fuzzy matching. I wanted to feed it a list of filenames from a file and have it find the closest match on my hard drive. I asked for a progress bar, which it added. By the time I was done having it generate code, the progress bar had been removed a couple of times and swapped out for a different progress bar at least three times. (On the bright side, I now know of multiple progress-bar solutions in Python!)
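
For context, the script was something along these lines. This is a minimal sketch of the idea, not ChatGPT’s actual output; the file names, the use of difflib for the matching, and tqdm for the progress bar are all assumptions:

# Minimal sketch, not the actual ChatGPT output. Paths, difflib, and tqdm
# are assumptions.
import difflib
import os

from tqdm import tqdm  # one of the several progress-bar options in Python

def find_closest_matches(wanted_file: str, search_root: str) -> dict[str, str]:
    # Gather every filename under search_root as a match candidate.
    candidates = [name for _, _, names in os.walk(search_root) for name in names]
    with open(wanted_file) as f:
        wanted = [line.strip() for line in f if line.strip()]
    # For each wanted filename, keep the single closest candidate (if any).
    results = {}
    for name in tqdm(wanted, desc="Matching"):
        matches = difflib.get_close_matches(name, candidates, n=1)
        results[name] = matches[0] if matches else ""
    return results

if __name__ == "__main__":
    for wanted, found in find_closest_matches("filenames.txt", "/mnt/media").items():
        print(f"{wanted} -> {found}")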

If you continue on long enough, the “memory” of ChatGPT isn’t sufficient to remember everything you’ve been doing. You get to a point where you need to feed it your script very frequently to give it the context it needs to answer a question or implement a change.

And on top of all that, it often doesn’t implement the best change. In one instance, I wanted it to write a function that would parse a CSV, count up duplicate values in a particular field, and add that count to each row of the CSV. I could tell right away that the first solution was not an efficient way to accomplish the task. I had to ask ChatGPT in another prompt whether it was efficient. (I was soundly impressed that it recognized the problem once I brought it up and gave me something that ended up being quite fast and efficient.)
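
For the curious, the fast version boiled down to counting in a single pass instead of rescanning the file for every row. Here is a sketch of that approach; it is my reconstruction rather than ChatGPT’s actual code, and the field and file names are assumptions:

# Sketch of the efficient approach: tally duplicates in one pass with a
# Counter, then write each row back out with its count. Names are assumptions.
import csv
from collections import Counter

def add_duplicate_counts(in_path: str, out_path: str, field: str) -> None:
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return
    # One pass to count how often each value of `field` appears...
    counts = Counter(row[field] for row in rows)
    # ...and one pass to append that count to every row.
    fieldnames = list(rows[0].keys()) + ["duplicate_count"]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            row["duplicate_count"] = counts[row[field]]
            writer.writerow(row)

add_duplicate_counts("input.csv", "output.csv", "email")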

Moral of the story: you can’t do this effectively without an understanding of computer science.

[–] RotaryKeyboard@lemmy.ninja 2 points 2 years ago

Thanks for posting this! I was going to buy this on Blu-ray very soon after launch, but now I think I'll give it some time. The only thing I worry about is that the incorrect version will be sold to retailers, who will just sell me that copy when I go to buy it next year.

[–] RotaryKeyboard@lemmy.ninja 97 points 2 years ago (16 children)

Oh, man. Can you imagine the misery of being appointed to this post? Literally half of the government would hate and despise you and would look for ways to undercut you just to have an extra talking point while they stand in the hall talking to Fox News. And to top it off, what could you actually do to effect change? I sympathize with the poor workers of this office.

[–] RotaryKeyboard@lemmy.ninja 1 points 2 years ago (2 children)

Have we figured out if this solves the Netflix password sharing limitation yet?

 

Has anyone else noticed their comment score and post score totals getting reset to 0 when their instance updates to 0.18.1? This happened to me today when lemmy.ninja performed its update. My post and comment scores did begin to climb again as people upvoted my content today, however.

Is it just me?

 

Yeah, so, sorry for the Facebook post, but it looks to me like the social media coordinator for Wendy's has gone insane. Certifiably bonkers.

See for yourself

At what point do we intervene?

 

Why YSK: When you cook meat, any water on the surface must first evaporate before much browning can occur. You want to get as much of a Maillard reaction as possible in the limited cooking time you have before the meat reaches the correct internal temperature. Removing the moisture first means that the heat of the cooking surface isn't wasted on evaporation and can instead go toward browning, the Maillard reaction in which the meat's amino acids and sugars combine into complex flavor compounds.

 

June 2023 may be remembered as the start of a big change in the climate system, with many key global indicators flashing red warning lights amid signs that some systems are tipping toward a new state from which they may not recover.

 

We all know that wefwef is the superior choice, but if you're looking for an alternative app to view or interact with Lemmy, then this GitHub project has you covered. It contains a list of pretty much every app project, with automatically updated statuses. Take a look and see if something strikes your fancy.

(And then use wefwef, because it's the best one.)

 

Twitter, also known as X Corp, no longer has a media relations office. Reuters could not immediately reach Twitter’s Australia office.

 

Pornhub now blocks access to users in Arkansas, Louisiana, Mississippi, Montana, Texas, Utah, and Virginia.

1
submitted 2 years ago* (last edited 2 years ago) by RotaryKeyboard@lemmy.ninja to c/town_square@lemmy.ninja
 

We may not be lemmy.world with its 81,000 users, but we like to think of ourselves as just as important as they are in the fediverse. That's why I'm thrilled to announce that Lemmy.ninja has reached 20 users! (That's legitimate users, by the way. No more bots here!)

To welcome our new users, I'd like to take a second to share the best tips that we've learned since launching on June 13, 2023.

  1. If you are a mobile user and you liked Apollo, go try wefwef.app. It's a web app that is trying to achieve feature parity with both Apollo and Lemmy. It is arguably faster than using a Lemmy instance's web UI, and by combining features from Lemmy and Apollo, it creates a better user experience. Try it out! If you are a self-hoster, you can even host your own instance of wefwef, giving you greater privacy and more control.
  2. If you are looking for communities, visit lemmyverse.net and browse.feddit.de to find them. These sites give information about participation levels that is otherwise impossible to see in Lemmy's UI, allowing you to pick the most active communities to participate in.
  3. Talk to other users! The more you comment, the more people you will meet who can share insights on good communities, how to deal with bugs, or whatever else is interesting or troubling you at a given moment. Here in the Ninja Tea Room community, you can introduce yourself to other lemmy.ninja users and get a better picture of who those 20 users are.
  4. Finally, don't sweat it if things don't appear to be working well. Lemmy.world is so big that they're having issues serving up content. Other sites are aggressively updating, but may be running into issues sharing or receiving content from other Lemmy servers. If things don't work, give it a day or two and see if they resolve.

That's all for now, ninjas!

 

cross-posted from: https://lemmy.ninja/post/46230 because the kbin.social proxmox community is still teeny tiny.

I've been wondering why traffic on forwarded ports seems to get through to LXCs and VMs in spite of the Datacenter firewall being active. It's my understanding that the Datacenter firewall has an implicit DROP rule (which I confirmed is set) and that, once active, it drops all traffic for all nodes and for the VMs and LXCs under those nodes.

However, when I port-forward port 32400 from my router to a Plex LXC, traffic gets through. If I forward port 80 from my router to my reverse proxy LXC, traffic gets through on that port.

Right now I have the Datacenter, node, and VM/LXC firewalls enabled. Only the Datacenter firewall has any rules at all, which are (sketched as a config below):

  • Allow traffic to port 8006 from all subnets in my local network.
  • Allow ICMP traffic from all subnets in my local network.
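
For reference, those rules plus the DROP policy correspond roughly to a /etc/pve/firewall/cluster.fw like the sketch below. The subnet is an assumption, and the syntax is from memory, so verify it against the Proxmox firewall docs:

[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
# Allow the web UI (port 8006) from the local network
IN ACCEPT -p tcp -dport 8006 -source 192.168.0.0/16
# Allow ICMP (ping) from the local network
IN ACCEPT -p icmp -source 192.168.0.0/16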

I confirmed that the input policy is DROP on both the Datacenter and LXC firewalls.

(I'm using Proxmox 8.0.3.)

Why is traffic forwarded from my gateway router making it into my LXCs?

Thanks for any help on this.

 

Fans who played NetherRealm’s Mortal Kombat 1 stress test last week discovered a few new details about the Roomba — which is almost assuredly not an official iRobot Roomba vacuum cleaner — including that it will actually try to clean up the blood splattered across Johnny Cage’s nice marble floors.

 

cross-posted from: https://lemmy.ninja/post/30492

Summary

We started a Lemmy instance on June 13 during the Reddit blackout. While we were configuring the site, we accumulated a few thousand bot accounts, leading some sites to defederate with us. Read on to see how we cleaned up the mess.

Introduction

Like many of you, we came to Lemmy during the Great Reddit Blackout. @MrEUser started Lemmy.ninja on the 13th, and the rest of us on the site got to work populating some initial rules and content, learning how Lemmy worked, and finding workarounds for bugs and issues in the software. Unfortunately for us, one of the challenges of getting the site up turned out to be getting email validation to work. So, assuming we were small and beneath notice, we left registration open for a few days until we could figure out whether the problems we were experiencing were configuration-related or software bugs.

In that brief time, we were discovered by malicious actors and hundreds of new bot users were being created on the site. Of course we had no idea, since Lemmy provides no user management features. We couldn't see them, and the bots didn't participate in any of our local content.

Discovering the Bots

Within a couple of days, we discovered some third-party tools that gave us the only insights we had into our user base. Lemmy Explorer and The Federation were showing us that a huge number of users had registered. It took a while, but we eventually tracked down a post that described how to output a list of users from our Lemmy database. It took some investigation, but we were eventually able to see which users were actually registered at lemmy.ninja. Sure enough, there were thousands of them, just like the third-party tools had told us.

Meanwhile...

While we were figuring this out, others in Lemmy had noticed a coordinated bot attack, and some were rightly taking steps to cordon off the sites with bots as they began to interact with federated content. Unfortunately for us, this news never made it to us because our site was still young, and young Lemmy servers don't automatically download all federated content right away. (In fact, despite daily efforts to connect lemmy.ninja to as many communities as possible, I didn't even learn about the lemm.ee mitigation efforts until today.)

We know now that the bots began to interact with other Mastodon and Lemmy instances at some point, because we learned (again, today) that we had been blocked by a few of them. (Again, this required third-party tools to even discover.) At the time, we were completely unaware of the attack, that we had been blocked, or that the bots were doing anything at all.

Cleaning Up

The moment we learned that the bots were in our database, we set out to eliminate them. The first step, of course, was to enable a captcha and activate email validation so that no new bots could sign up. [Note: The captcha feature was eliminated in Lemmy 0.18.0.] Then we had to delete the bot users.

Next we made a backup. Always make a backup! After that, we asked the database to output all the users so we could manually review the data. After logging into the database docker container, we executed the following command:


-- List every local account alongside its person record so we could
-- eyeball which users were real.
select
  p.name,
  p.display_name,
  a.person_id,
  a.email,
  a.email_verified,
  a.accepted_application
from
  local_user a
  join person p on a.person_id = p.id;
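
(In case it helps anyone following along: the backup and the database login usually look something like the lines below. The container, user, and database names are assumptions from a typical Lemmy docker-compose deployment, so check your docker-compose.yml before copying them.)

# Illustrative only; container/user/database names vary by install.
# Take the backup from the host:
docker exec postgres pg_dump -U lemmy lemmy > lemmy-backup.sql
# Open a psql shell inside the Postgres container to run the queries:
docker exec -it postgres psql -U lemmy -d lemmy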

That showed us that yes, every user after #8 or so was indeed a bot.

Next, we composed a SQL statement to wipe all the bots.


BEGIN;
-- Collect the bot accounts: every person_id above our last legitimate user.
CREATE TEMP TABLE temp_ids AS
SELECT person_id FROM local_user WHERE person_id > 85347;
-- Remove them from both tables, then clean up.
DELETE FROM local_user WHERE person_id IN (SELECT person_id FROM temp_ids);
DELETE FROM person WHERE id IN (SELECT person_id FROM temp_ids);
DROP TABLE temp_ids;
COMMIT;

And to finalize the change:


UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;

If you read the code, you'll see that we deleted records whose person_id was > 85347. That's the approach that worked for us. But you could just as easily delete all users who haven't passed email verification, for example. If that's the approach you want to use, try this SQL statement:


BEGIN;
-- Collect every account that never passed email verification.
CREATE TEMP TABLE temp_ids AS
SELECT person_id FROM local_user WHERE email_verified = 'f';
-- Remove them from both tables, then clean up.
DELETE FROM local_user WHERE person_id IN (SELECT person_id FROM temp_ids);
DELETE FROM person WHERE id IN (SELECT person_id FROM temp_ids);
DROP TABLE temp_ids;
COMMIT;

And to finalize the change:


UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;

Even more aggressive mods could put these commands into a nightly cron job, wiping accounts every day if their owners never finish the registration process (a sketch follows). We chose not to do that (yet). Our user count has remained stable with email verification on.
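
Something like this illustrative crontab entry would do it; the file location and container name are assumptions:

# Every night at 03:10, pipe the cleanup SQL (saved on the host as
# /root/wipe-unverified.sql) into psql inside the Postgres container.
10 3 * * * docker exec -i postgres psql -U lemmy -d lemmy < /root/wipe-unverified.sql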

After that, the bots were gone. Third-party tools reflected the change in about 12 hours. We did some testing to make sure we hadn't destroyed the site and found that everything worked flawlessly.

Wrapping Up

We chose to write this up for the rest of the new Lemmy administrators out there who may unwittingly be hosting bots. Hopefully having all of the details in one place will help speed their discovery and elimination. Feel free to ask questions, but understand that we aren't experts; with luck, other, more knowledgeable people can answer them in the comments here.
