technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Hexbear Code-Op (hexbear.net)
submitted 11 months ago* (last edited 11 months ago) by RedWizard@hexbear.net to c/technology@hexbear.net
 
 

Where to find the Code-Op

Wow, thanks for the stickies! Love all the activity in this thread. I love our coding comrades!


Hey fellow Hexbearions! I have no idea what I'm doing! However, born out of the conversations in the comments of this little thing I posted the other day, I have created an org on GitHub that I think we can use to share, highlight, and collaborate on code and projects from comrades here and abroad.

  • I know we have several bots that float around this instance, and I've always wondered who maintains them and where their code is hosted. It would be cool to keep a fork of those bots in this org, for example.
  • I've already added a fork of @WhyEssEff@hexbear.net's Emoji repo as another example.
  • The projects don't need to be Hexbear or Lemmy related, either. I've moved my aPC-Json repo into the org just as an example, and intend to use the code written by @invalidusernamelol@hexbear.net to play around with adding ICS files to the repo.
  • We have numerous comrades looking at mainlining some flavor of Linux and bailing on Windows; maybe we could create some collaborative documentation that helps onboard the Linux-curious.
  • I've been thinking a lot recently about leftist communication online and building community spaces, which will ultimately intersect with self-hosting. Documenting various tools and providing Docker Compose files to easily get people off and running could be useful.
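
To make that last bullet concrete, this is the kind of minimal Compose file such docs could ship. The service name, image, port, and volume below are placeholders I made up for illustration, not a real deployment:

```yaml
# A minimal, hypothetical docker-compose.yml sketch.
# Image name, port, and volume are placeholders, not a real service.
services:
  app:
    image: example/selfhosted-app:latest  # placeholder image
    ports:
      - "8080:8080"
    volumes:
      - app-data:/data
    restart: unless-stopped

volumes:
  app-data:
```

A `docker compose up -d` away from running, which is the whole pitch for pairing the docs with ready-made Compose files.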

I don't know a lot about GitHub Orgs, so I should get on that, I guess. That said, I'm open to all suggestions and input on how best to use this space I've created.

Also, I made (what I think is) a neat emblem for the whole thing:

Todos

  • Mirror repos to both GitHub and Codeberg
  • Create process for adding new repos to the mirror process
  • Create a more detailed profile README on GitHub.

Done


  • ~~Recover from whatever this sickness is the dang kids gave me from daycare.~~

seen-this-one

  1. Lmao.

  2. They were "troubleshooting a basic web app" - this is something you'd hear from someone learning to program, not from someone who runs a YouTube channel trying to build a brand around programming.

  3. In doing so, they needed to "clear their cache" - by which they mean... their browser cache? webserver cache? idk, but something that shouldn't be so difficult that you'd delegate it to an LLM (nor something an LLM should get so horribly wrong).

  4. All that aside, I can see how being some wide-eyed naïve moron would let you believe in the magic for a little bit. Truly, I've been there. What gets me is how they end their reddit thread quoting what appears to be verbatim LLM marketing output about how they were the catalyst for Google putting guardrails on the rm -rf generator, which they're not even paying for. Google really cares about you, my sweet special sunbeam.

Fuck me, AI slop coders are finding out in real time.


This is the future


Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb


The machine learning community has been stuck on the autoregressive bottleneck for years, but a new paper shows that it's possible to use diffusion models on discrete data (text) at scale. The researchers trained two coding-focused models, named Mercury Coder Mini and Small, that completely shatter the current speed-quality tradeoff.

Independent evaluations had the Mini model hitting an absurd throughput of 1109 tokens per second on H100 GPUs, while the Small model reaches 737 tokens per second. They essentially outperform existing speed-optimized frontier models by up to ten times in throughput without sacrificing coding capability. On practical benchmarks and human evaluations like Copilot Arena, the Mini tied for second place in quality against huge models like GPT-4o while maintaining an average latency of just 25 ms. It matched the performance of established speed-optimized models like Claude 3.5 Haiku and Gemini 2.0 Flash Lite across multiple programming languages while decoding many times faster.

The advantage of diffusion relative to classical autoregressive models stems from its ability to perform parallel generation, which greatly improves speed. Standard language models are chained to a sequential decoding process where they must generate an answer exactly one token at a time. Mercury abandons this sequential bottleneck entirely by training a Transformer to predict multiple tokens in parallel. The model starts with a sequence of pure random noise and applies a denoising process that iteratively refines all tokens simultaneously, in a coarse-to-fine manner, until the final text emerges. Because generation happens in parallel rather than sequentially, the algorithm achieves a significantly higher arithmetic intensity that fully saturates modern GPU architectures. The team paired this parallel decoding capability with a custom inference engine featuring dynamic batching and specialized kernels to squeeze out maximum hardware utilization.
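
The coarse-to-fine loop described above can be sketched in toy form. Everything here - the "denoiser", the confidence scores, the commit-top-k schedule - is a stand-in I invented for illustration, not Mercury's actual algorithm:

```python
import random

MASK = "_"

def toy_denoiser(seq, target):
    # Stand-in for the trained Transformer: proposes a token (and a
    # confidence score) for every still-masked position in one parallel
    # "forward pass". Here it just peeks at a fixed target string.
    return {i: (target[i], random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def diffusion_decode(length, target, tokens_per_step=4):
    # Coarse-to-fine decoding: start from all-masked "noise", predict
    # every position in parallel, and commit the most confident few
    # per step instead of exactly one token at a time.
    seq = [MASK] * length
    steps = 0
    while MASK in seq:
        preds = toy_denoiser(seq, target)
        best = sorted(preds.items(), key=lambda kv: -kv[1][1])
        for i, (tok, _conf) in best[:tokens_per_step]:
            seq[i] = tok
        steps += 1
    return "".join(seq), steps
```

The payoff is visible even in the toy: an 11-character sequence finishes in 3 refinement steps instead of 11 sequential ones, and each step is a single batched prediction over all open positions - which is why the real thing keeps GPUs saturated.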


treatlerism stays undefeated


(posting on an alt b/c I'm probably gonna end up doxing myself and i don't want my banger memes tainted by my being a fed irl)

Hi!

I made a lil webapp that I'm looking to get early feedback on.

Basic idea is to have a library of things made up of stuff that individuals are willing to lend out.
Alternatively, it's like craigslist/fb marketplace but for borrowing stuff.

This was inspired by my local org managing the same concept in a spreadsheet, and I thought that a UI on top would make it more usable (and hopefully prompt more folks to join in).

General concepts:

  • Users add things they're willing to lend
  • Users join groups and share things with folks in that group
  • View and request things that other folks have shared

Goals:

  • Help folks save money and (hopefully) build community
  • Easy to start new group (can start with immediate neighbors or friends)
  • Easy to add folks to existing groups (e.g. new members in an org)
  • Visibility controls for different levels of trust (lend your jewelry to friends but not strangers)
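
A minimal sketch of how those group-scoped visibility controls might be modeled - the names and structures here are my assumptions for illustration, not the app's actual schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Thing:
    name: str
    owner: str
    shared_with: frozenset  # names of groups allowed to see this thing

@dataclass
class Library:
    groups: dict = field(default_factory=dict)   # group name -> member usernames
    things: list = field(default_factory=list)

    def visible_to(self, user):
        # A user sees a thing only if they belong to at least one of the
        # groups the owner shared it with - jewelry to friends, not strangers.
        return [t for t in self.things
                if any(user in self.groups.get(g, set()) for g in t.shared_with)]
```

The trust levels fall out of group membership: sharing with a small "friends" group and a larger "org" group gives two tiers without any extra permission machinery.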

How to test:

  • Link: https://mutual-aid-library.vercel.app/
  • Create an account
    • There's no email verification, so feel free to just use a made up email address
    • To add things, you'll need to go to 'my profile' and add name & contact info, then 'add new thing'
    • If you don't want to create an account, you can log in with email hexbear@hexbear.net, password pigpoopballs
  • idk poke around. Create a group. Join a group. Add a thing.
  • Please don't do malicious stuff. It'll probably work, and I'll be sad.

Feedback requested:

  • If nothing else, just a thumbs up/down would be nice
  • Is there something like this already?
  • Is working on this further a waste of time?
  • Would this be something useful to you personally?
  • Do you think others would use it?
  • What should be added/removed/changed?

Some of the best feedback I could get is "don't continue this, for [reason]," or "direct your energy to [project] instead."
I'm fully expecting this to go nowhere outside of my org, so don't worry about hurting my feelings.

Actually, worry a little bit. Like don't call me stupid or something. But you can criticize the app. Constructively.

Thank you!

Known issues

  • Many! The app is pretty shit atm
  • UI is dumb
  • Inefficient as all hell
  • Group admins can leave groups w/o a succession plan
  • Group admins can't kick group members
  • A single 'contact info' field isn't right
  • Likely full of security holes
  • Location stuff needs work - searching, fuzzing, filtering, etc
  • Vibe coded. I don't like it either, but I don't know how to do frontend stuff and it's just a proof of concept.
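
On the location-fuzzing item: one common approach is to jitter coordinates within a radius before display, so listings reveal a neighbourhood rather than a street address. This is a hypothetical sketch of that idea, not what the app currently does:

```python
import math
import random

def fuzz_location(lat, lon, radius_m=500, rng=random):
    # Shift a coordinate by a uniform random offset within radius_m metres.
    r = radius_m * math.sqrt(rng.random())   # sqrt -> uniform over the disc
    theta = rng.uniform(0, 2 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320   # ~metres per degree of latitude
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

One caveat worth noting: the fuzzed point should be computed once and stored, not re-rolled per request, or an attacker can average many samples back to the true location.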

Requires a modern chromium-based browser unfortunately, but this is forgivable for such a ridiculous project.
