783 points (95.8% liked) · submitted 22 Aug 2023 (1 year ago, last edited 1 year ago) by Blaze@discuss.tchncs.de to c/fediverse@lemmy.ml

It is probably due to a number of people who stopped using their alts after some instance hopping.

Plus a few people who came to see what it was like and weren't attracted enough to become regular visitors.

Curious to see at which number we'll stabilize.

The next peak will probably happen after either a major feature release (e.g. exhaustive mod tools that let reluctant communities move off Reddit) or the next Reddit fuck-up (e.g. removing old.reddit).

Stats on each server: https://lemmy.fediverse.observer/list

[-] RoundSparrow@lemmy.ml 4 points 1 year ago* (last edited 1 year ago)

> Of course, that would be an insane amount of work, especially if it would get ignored, but something to consider!

I already did an insane amount of work to populate a Lemmy database with over 10 million posts. It is so incredibly slow out of the box that the normal API would take days to accomplish this, so I had to rewrite the SQL TRIGGER logic to allow bulk inserts.
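
For context, the stock triggers are row-level, roughly this shape (an approximate sketch, not the exact upstream definition):

/*
Approximate shape of the per-row trigger being replaced: the function fires
once per inserted post and does a single-row UPDATE on the aggregate table,
so bulk-loading N rows costs N trigger invocations and N aggregate UPDATEs.
*/
CREATE TRIGGER site_aggregates_post_insert
   AFTER INSERT ON public.post
   FOR EACH ROW
   EXECUTE FUNCTION site_aggregates_post_insert();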

Here is my work on that:

DROP TRIGGER site_aggregates_post_insert ON public.post;


/*
Replace the per-row trigger with a per-statement trigger that handles INSERT only.
*/
CREATE TRIGGER site_aggregates_post_insert
   AFTER INSERT ON public.post
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION site_aggregates_post_insert();


DROP TRIGGER community_aggregates_post_count ON public.post;


/*
Replace the per-row trigger with a per-statement trigger that handles INSERT only.
*/
CREATE TRIGGER community_aggregates_post_count
   AFTER INSERT ON public.post
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION community_aggregates_post_count();


DROP TRIGGER person_aggregates_post_count ON public.post;


/*
Replace the per-row trigger with a per-statement trigger that handles INSERT only.
*/
CREATE TRIGGER person_aggregates_post_count
   AFTER INSERT ON public.post
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION person_aggregates_post_count();



/*
Per-statement replacement function, INSERT only: counts all newly inserted
local posts in a single UPDATE instead of one UPDATE per row.
No Lemmy deletes or SQL DELETEs should be performed while these replacements are in place.
*/
CREATE OR REPLACE FUNCTION public.site_aggregates_post_insert() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
   UPDATE site_aggregates SET posts = posts +
      (SELECT count(*) FROM new_rows WHERE local = true)
      WHERE site_id = 1
      ;

   RETURN NULL;
END
$$;


CREATE OR REPLACE FUNCTION public.community_aggregates_post_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
        UPDATE
            community_aggregates ca
        SET
            posts = posts + p.new_post_count
        FROM (
            SELECT count(*) AS new_post_count, community_id
            FROM new_rows
            GROUP BY community_id
             ) AS p
        WHERE
            ca.community_id = p.community_id;

    RETURN NULL;
END
$$;


/*
Per-statement replacement function, INSERT only.
No Lemmy deletes or SQL DELETEs should be performed while these replacements are in place.
*/
CREATE OR REPLACE FUNCTION public.person_aggregates_post_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
        UPDATE
            person_aggregates personagg
        SET
            post_count = post_count + p.new_post_count
        FROM (
            SELECT count(*) AS new_post_count, creator_id
            FROM new_rows
            GROUP BY creator_id
             ) AS p
        WHERE
            personagg.person_id = p.creator_id;

    RETURN NULL;
END
$$;


/*
***********************************************************************************************
** comment table
*/


DROP TRIGGER post_aggregates_comment_count ON public.comment;


/*
Replace the per-row trigger with a per-statement trigger that handles INSERT only.
*/
CREATE TRIGGER post_aggregates_comment_count
   AFTER INSERT ON public.comment
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION post_aggregates_comment_count();


-- IMPORTANT NOTE: this INSERT trigger logic assumes the published timestamp of every new comment is now(), which was a reasonable assumption for general Lemmy use before federation was added.
CREATE OR REPLACE FUNCTION public.post_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN

        UPDATE
            -- per statement update 1
            post_aggregates postagg
        SET
            comments = comments + c.new_comment_count
        FROM (
            SELECT count(*) AS new_comment_count, post_id
            FROM new_rows
            GROUP BY post_id
             ) AS c
        WHERE
            postagg.post_id = c.post_id;


        UPDATE
            -- per statement update 2
            post_aggregates postagg
        SET
            newest_comment_time = max_published
        FROM (
            SELECT MAX(published) AS max_published, post_id
            FROM new_rows
            GROUP BY post_id
             ) AS c
        WHERE
            postagg.post_id = c.post_id;

        UPDATE
            -- per statement update 3
            post_aggregates postagg
        SET
            newest_comment_time_necro = max_published
        FROM (
            SELECT MAX(published) AS max_published, post_id, creator_id
            FROM new_rows
            WHERE published > (now() - '2 days'::interval)
            GROUP BY post_id, creator_id
             ) AS c
        WHERE
            postagg.post_id = c.post_id
            AND c.creator_id != postagg.creator_id
            ;

    RETURN NULL;
END
$$;


DROP TRIGGER community_aggregates_comment_count ON public.comment;

CREATE TRIGGER community_aggregates_comment_count
   AFTER INSERT ON public.comment
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION public.community_aggregates_comment_count();


CREATE OR REPLACE FUNCTION public.community_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN

        UPDATE
            community_aggregates ca
        SET
            comments = comments + p.new_comment_count
        FROM (
            SELECT count(*) AS new_comment_count, pp.community_id
            FROM new_rows AS nr
            JOIN post AS pp ON nr.post_id = pp.id
            GROUP BY pp.community_id
             ) AS p
        WHERE
            ca.community_id = p.community_id
            ;

    RETURN NULL;

END
$$;


DROP TRIGGER person_aggregates_comment_count ON public.comment;

CREATE TRIGGER person_aggregates_comment_count
   AFTER INSERT ON public.comment
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION public.person_aggregates_comment_count();


CREATE OR REPLACE FUNCTION public.person_aggregates_comment_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN

        UPDATE
            person_aggregates personagg
        SET
            comment_count = comment_count + p.new_comment_count
        FROM (
            SELECT count(*) AS new_comment_count, creator_id
            FROM new_rows
            GROUP BY creator_id
             ) AS p
        WHERE
            personagg.person_id = p.creator_id;

    RETURN NULL;
END
$$;


DROP TRIGGER site_aggregates_comment_insert ON public.comment;

CREATE TRIGGER site_aggregates_comment_insert
   AFTER INSERT ON public.comment
   REFERENCING NEW TABLE AS new_rows
   FOR EACH STATEMENT
   EXECUTE FUNCTION public.site_aggregates_comment_insert();


CREATE OR REPLACE FUNCTION public.site_aggregates_comment_insert() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN

   UPDATE site_aggregates
      SET comments = comments +
         (
            SELECT count(*) FROM new_rows WHERE local = true
         )
      WHERE site_id = 1
      ;

    RETURN NULL;
END
$$;

With this in place, 300,000 posts a minute can be generated, and reaching 5 or 10 million posts doesn't take too long.
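
As a rough illustration (a sketch only; the column names are assumptions about the Lemmy post table, and the referenced person and community ids must already exist), a single multi-row INSERT like the one below fires each aggregate trigger once for the whole batch instead of once per row:

-- Sketch: a synthetic bulk load that each per-statement trigger sees as one invocation.
-- Assumed columns; adjust for the real post table. ap_id must be unique.
INSERT INTO post (name, creator_id, community_id, local, ap_id)
SELECT
   'benchmark post ' || n,
   2,                                        -- assumed existing person id
   3,                                        -- assumed existing community id
   true,
   'http://localhost/post/benchmark/' || n
FROM generate_series(1, 100000) AS n;

Each such statement touches site_aggregates, community_aggregates, and person_aggregates exactly once, which is where the speedup over the per-row triggers comes from.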

[-] anonymoose@lemmy.ca 7 points 1 year ago

That's really cool work! It's a bit beyond my pay grade, so I can't really comment too much about it.

I had a look at the PR you mentioned, and again, while I can't comment on the contents because I am a little out of my depth, may I voice my opinion on the exchange? This is coming from a place of trying to help, since I really do appreciate all the work you've put in and are putting in, and the fediverse can really use your talents, so I hope I don't offend you.

From my reading, it didn't appear that you were being ignored or hazed, and it seemed like the devs would have been open to your improvements. From working on and leading big teams, I've noticed that communication and managing emotions are often much harder than writing code. In the thread, it appeared that communication had broken down on both sides (and that seems to have been the case in prior interactions too). Since you mentioned your struggles with autism in the thread, I wonder if that played a part in the devs' tone perhaps being misinterpreted? This is, of course, only my interpretation, and I could be completely wrong.

Ultimately Lemmy itself is an example of trying to build a community and consensus amongst a broad and diverse group of people, who will often not see eye to eye.

In any case I would like to say I personally appreciate your hard work and really do hope you're able to help make Lemmy better. Thank you!

[-] RoundSparrow@lemmy.ml -1 points 1 year ago

Ok, so let's look at recent changes that they have deployed.... https://github.com/LemmyNet/lemmy/issues/3886

One of which makes an entire tree of comments disappear. Do you see the developers fretting over this and fixing it? Or do you see them ignoring the May 27 PostgreSQL JOIN problem?

How did such a bug go out? Do you see the Lemmy developers actually using Lemmy to test things and noticing these crashes and problems? Have you looked at their posting and comment history? Do they actually log in over at Beehaw and Lemmy.world and see just how terrible the code performance is?

If it isn't hazing, what is it?

It's as if they build a product only for other people to use... and they don't notice any of the constant crashes, the incredibly slow performance, etc. - and they act like nobody in the computer industry has ever heard of Memcache or Redis to solve performance problems. If it isn't extreme hazing going on, then what is it?

[-] erlend_sh@lemmy.world 3 points 1 year ago

I’ve been reading a lot of your exchanges on the Lemmy GitHub and I can tell you with a high degree of confidence that you are not the subject of a hazing ritual. What’s going on is a miscommunication issue; there’s no ill will directed towards you.

The Lemmy devs are under a great deal of stress these days due to the recent influx of activity, both on the big Lemmy instances as well as the Lemmy GitHub. You’ve clearly gone to great lengths to investigate various SQL bottlenecks in Lemmy, and this work does not go unnoticed or unappreciated.

The problem you’re likely running into is that the Lemmy devs are trying to address a wide array of issues, whereas you are zoomed in on some very specific performance problems. Whether or not the core devs are wrong when they say your findings are irrelevant is beside the point. What they are really saying is that they do not have the attention bandwidth to try to see what you are seeing right now.

If you find yourself unable to work with the Lemmy project, there are other fedi projects in Rust, like Mitra or Kitsune, which might be more receptive to your contributions. I’m personally very interested in seeing rudimentary Lemmy (Groups et al.) compatibility in Kitsune.
