technology

24264 readers
442 users here now

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 5 years ago
Hexbear Code-Op (hexbear.net)
submitted 11 months ago* (last edited 11 months ago) by RedWizard@hexbear.net to c/technology@hexbear.net

Where to find the Code-Op

Wow, thanks for the stickies! Love all the activity in this thread. I love our coding comrades!


Hey fellow Hexbearions! I have no idea what I'm doing! However, born out of the conversations in the comments of this little thing I posted the other day, I have created an org on GitHub that I think we can use to share, highlight, and collaborate on code and projects from comrades here and abroad.

  • I know we have several bots that float around this instance, and I've always wondered who maintains them and where their code is hosted. It would be cool to keep a fork of those bots in this org, for example.
  • I've already added a fork of @WhyEssEff@hexbear.net's Emoji repo as another example.
  • The projects don't need to be Hexbear or Lemmy related, either. I've moved my aPC-Json repo into the org just as an example, and intend to use the code written by @invalidusernamelol@hexbear.net to play around with adding ICS files to the repo.
  • We have numerous comrades looking at mainlining some flavor of Linux and bailing on Windows; maybe we could create some collaborative documentation that helps onboard the Linux-curious.
  • I've been thinking a lot recently about leftist communication online and building community spaces, which will ultimately intersect with self-hosting. Documenting various tools and providing Docker Compose files to easily get people off and running could be useful.
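
On the self-hosting point, a Compose file can be as small as this. This is a hedged sketch only: Forgejo is picked purely as an example service, and the image tag, port, and volume path are illustrative, not recommendations.

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin a version tag, not :latest
    ports:
      - "3000:3000"                         # web UI on the host
    volumes:
      - ./forgejo-data:/data                # persists repos/config across restarts
```

A `docker compose up -d` next to a file like this is about as low-friction as onboarding gets, which is the whole point of shipping Compose files alongside the docs.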

I don't know a lot about GitHub Orgs, so I should get on that, I guess. That said, I'm open to all suggestions and input on how best to use this space I've created.

Also, I made (what I think is) a neat emblem for the whole thing:

Todos

  • Mirror repos to both GitHub and Codeberg
  • Create process for adding new repos to the mirror process
  • Create a more detailed profile README on GitHub.
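
The first two todos could be scripted together. A minimal sketch, assuming each repo just needs a `git push --mirror` to every remote in a list; the org and repo names in the example are placeholders, not the real mirror setup:

```python
import subprocess

def mirror_push(repo_dir, mirrors, dry_run=False):
    """Mirror-push every ref in repo_dir to each remote URL in `mirrors`.

    Returns the git commands as lists, so the plan is inspectable
    before anything touches the network.
    """
    cmds = [["git", "-C", repo_dir, "push", "--mirror", url] for url in mirrors]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # raises if a push fails
    return cmds

# example: one repo, two mirrors (URLs are placeholders)
plan = mirror_push(
    "emoji",
    ["git@github.com:hexbear-code-op/emoji.git",
     "git@codeberg.org:hexbear-code-op/emoji.git"],
    dry_run=True,
)
```

"Adding a new repo to the mirror process" then just means appending one entry to whatever list of (repo, mirrors) pairs drives this loop, e.g. from a cron job or CI workflow.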

Done


  • ~~Recover from whatever this sickness is the dang kids gave me from daycare.~~

I recently got booted from the project I was working on (Aether). I wasn't able to work for two weeks after my car was impounded and I was arrested (the pigs took my laptop too), and it seems like that triggered some assessment I had to take, which I promptly failed.

I need to take some skill assessments so I can get on another project and get back to work. I was wondering if anyone here has taken these assessments, and can help me get an idea of what I need to do to prepare for them.

In particular, I'm looking to take the Generalist assessment, one of the coding assessments (HTML, CSS, maybe JavaScript, React, C, or something), and maybe Git/Docker.

All of these are things I'm fairly newb at.


cross-posted from: https://news.abolish.capital/post/31069

An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were "sobering."

"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."

Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."

"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."

Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.

"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."

Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.

While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.

"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."

Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.

“It is possible the issue goes beyond the absence of emotion,” he explained. "More fundamentally, AI models may not understand ‘stakes’ as humans perceive them."

The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.

As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.

If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.


From Common Dreams via This RSS Feed.


seen-this-one

  1. Lmao.

  2. They were "troubleshooting a basic web app" - this is something you'd hear from someone learning how to program, not someone who runs a YouTube channel trying to build a brand around programming.

  3. In doing so, they needed to "clear their cache" - by which they mean... their browser cache? webserver cache? idk, but something that shouldn't be so difficult that you'd delegate it to an LLM (nor something an LLM should get so horribly wrong).

  4. All that aside, I can see how being a wide-eyed naïve moron would let you believe in the magic for a little bit. Truly, I've been there. What gets me is how they trail off their Reddit thread quoting what appears to be verbatim LLM marketing output about how they were the catalyst for Google putting guardrails on the `rm -rf` generator, which they're not even paying for. Google really cares about you, my sweet special sunbeam.

Fuck me, AI slop coders are finding out in real time.


This is the future


Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb


treatlerism stays undefeated


(posting on an alt b/c I'm probably gonna end up doxing myself and i don't want my banger memes tainted by my being a fed irl)

Hi!

I made a lil webapp that I'm looking to get early feedback on.

Basic idea is to have a library of things made up of stuff that individuals are willing to lend out.
Alternatively, it's like craigslist/fb marketplace but for borrowing stuff.

This was inspired by my local org managing the same concept in a spreadsheet, and I thought that a UI on top would make it more usable (and hopefully prompt more folks to join in).

General concepts:

  • Users add things they're willing to lend
  • Users join groups and share things with folks in that group
  • View and request things that other folks have shared

Goals:

  • Help folks save money and (hopefully) build community
  • Easy to start new group (can start with immediate neighbors or friends)
  • Easy to add folks to existing groups (e.g. new members in an org)
  • Visibility controls for different levels of trust (lend your jewelry to friends but not strangers)
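
The visibility model in that last goal could look something like this. A toy sketch only, not the app's actual schema; every name here is made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    owner: str
    shared_with: set[str] = field(default_factory=set)  # group names

def visible_things(things, user_groups):
    """Things a user can see: anything shared with a group they belong to."""
    return [t for t in things if t.shared_with & user_groups]

# example: jewelry only goes to 'friends'; the ladder is shared more widely
things = [
    Thing("jewelry", "alice", {"friends"}),
    Thing("ladder", "alice", {"friends", "block-12"}),
]
print([t.name for t in visible_things(things, {"block-12"})])  # -> ['ladder']
```

Tagging each thing with the groups it's shared to (rather than a single public/private flag) is what lets one inventory serve both "immediate neighbors" and "whole org" audiences at different trust levels.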

How to test:

  • Link: https://mutual-aid-library.vercel.app/
  • Create an account
    • There's no email verification, so feel free to just use a made up email address
    • To add things, you'll need to go to 'my profile' and add name & contact info, then 'add new thing'
    • If you don't want to create an account, you can log in with email hexbear@hexbear.net, password pigpoopballs
  • idk poke around. Create a group. Join a group. Add a thing.
  • Please don't do malicious stuff. It'll probably work, and I'll be sad.

Feedback requested:

  • If nothing else, just a thumbs up/down would be nice
  • Is there something like this already?
  • Is working on this further a waste of time?
  • Would this be something useful to you personally?
  • Do you think others would use it?
  • What should be added/removed/changed?

Some of the best feedback I could get is "don't continue this for reason," or "direct your energy to project instead."
I'm fully expecting this to go nowhere outside of my org, so don't worry about hurting my feelings.

Actually, worry a little bit. Like don't call me stupid or something. But you can criticize the app. Constructively.

Thank you!

Known issues

  • Many! The app is pretty shit atm
  • UI is dumb
  • Inefficient as all hell
  • Group admins can leave groups w/o a succession plan
  • Group admins can't kick group members
  • A single 'contact info' field isn't right
  • Likely full of security holes
  • Location stuff needs work - searching, fuzzing, filtering, etc
  • Vibe coded. I don't like it either, but I don't know how to do frontend stuff and it's just a proof of concept.

The machine learning community has been stuck on the autoregressive bottleneck for years, but a new paper shows that it's possible to use diffusion models on discrete text at scale. The researchers trained two coding-focused models, Mercury Coder Mini and Mercury Coder Small, that shatter the current speed/quality tradeoff.

Independent evaluations had the Mini model hitting an absurd throughput of 1109 tokens per second on H100 GPUs, with the Small model reaching 737 tokens per second. They essentially outperform existing speed-optimized frontier models by up to ten times in throughput without sacrificing coding capability. On practical benchmarks and human evaluations like Copilot Arena, the Mini tied for second place in quality against huge models like GPT-4o while maintaining an average latency of just 25 ms. The models matched the performance of established speed-optimized models like Claude 3.5 Haiku and Gemini 2.0 Flash Lite across multiple programming languages while decoding several times faster.

The advantage of diffusion relative to classical autoregressive models stems from its ability to perform parallel generation, which greatly improves speed. Standard language models are chained to a sequential decoding process where they must generate an answer exactly one token at a time. Mercury abandons this sequential bottleneck entirely by training a Transformer model to predict multiple tokens in parallel. The model starts with a sequence of pure noise and applies a denoising process that iteratively refines all tokens simultaneously, in a coarse-to-fine manner, until the final text emerges. Because generation happens in parallel rather than sequentially, the algorithm achieves a significantly higher arithmetic intensity that fully saturates modern GPU architectures. The team paired this parallel decoding capability with a custom inference engine featuring dynamic batching and specialized kernels to squeeze out maximum hardware utilization.
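
As a toy illustration of the parallel, coarse-to-fine control flow (nothing like Mercury's actual model): start with every position masked, and at each step reveal several positions at once, so a sequence finishes in a handful of parallel steps instead of one step per token. The "denoiser" below just copies from a target string, where a real model would predict tokens:

```python
import random

MASK = "_"

def denoise_step(tokens, target, k):
    """Reveal up to k masked positions at once (a stand-in for one
    parallel denoising step; a real model predicts, not copies)."""
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    for i in random.sample(masked, min(k, len(masked))):
        tokens[i] = target[i]
    return tokens

target = list("print('hi')")
tokens = [MASK] * len(target)
steps = 0
while MASK in tokens:                      # a few parallel steps instead of
    denoise_step(tokens, target, k=4)      # len(target) sequential ones
    steps += 1
print("".join(tokens), steps)              # 11 tokens finish in 3 steps
```

The speedup in the real system comes from exactly this shape: each step touches many tokens, so the work per step is wide and GPU-friendly, and the number of steps is far smaller than the sequence length.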


Requires a modern chromium-based browser unfortunately, but this is forgivable for such a ridiculous project.
