V0ldek

joined 2 years ago
[–] V0ldek@awful.systems 10 points 2 days ago* (last edited 1 day ago) (1 children)

The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

When I read HPMOR (which was years ago, before I knew who tf Yud was, back when I thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert), my favourite part was Hermione's death. Harry then falls into grief he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking, acting individuals. He quite literally goes insane as his world ends, both his friend and his illusion of being the smartest and always in control of the situation.

Of course, now in hindsight, I know this is just me inventing a much better character and story, and that Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely while probably thinking he'd written a turborational guy completely in control of his own feelings.

[–] V0ldek@awful.systems 8 points 2 days ago

To say you are above tropes means you don’t live and exist.

To say you are above tropes is actually a trope

[–] V0ldek@awful.systems 7 points 2 days ago (1 children)

Also, from this guide you can explain in one step why people with working bullshit detectors tend to immediately clock LLM output, while the executive class, whose whole existence is predicated on not discerning bullshit, are its greatest fans. A lot of us have seen A Guy In A Suit do this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this - never give specifics, always say "revolutionary technology, future, here to stay", and quickly run away if anyone tries to ask a question.

[–] V0ldek@awful.systems 5 points 1 week ago

Jesus what an opening shot

I find myself periodically queried for my thoughts on artificial intelligence. On the one hand, this is very silly because I’m a humanities PhD who mostly writes about comic books. On the other hand, we’re talking about a field in which one of the most influential thinkers is a Harry Potter fanfic writer who never attended high school, so it’s not like I’ve got imposter syndrome.

Now I gotta read the rest

[–] V0ldek@awful.systems 8 points 1 week ago (2 children)

Help, I asked AI to design my bathroom and it came with this, does anyone know where I can find that wallpaper?

it's the doom bathroom

[–] V0ldek@awful.systems 9 points 3 weeks ago (1 children)

can we cancel Mozilla yet

Sure! Just build a useful browser not based on chromium first and we'll all switch!

[–] V0ldek@awful.systems 9 points 3 weeks ago

Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it

[–] V0ldek@awful.systems 9 points 3 weeks ago

Here's a little lesson in trickery
This is going down in history
If you wanna be a sneerer number one
You have to chase a lesswronger on the run!

[–] V0ldek@awful.systems 8 points 3 weeks ago (2 children)

Guess who's #1

Taylor Swift

[–] V0ldek@awful.systems 11 points 3 weeks ago

I mean, if you've ever toyed around with neural networks or similar ML models, you know it's basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.

There's a whole branch of ML about explainable or white-box models, because it turns out you need to take extra care and design the system around explainability in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, focusing instead on cool-looking outputs they can shove into a presser.

In other words, "engineers don't know how it works" can mean two things. One: they're hitting computers with wrenches, hoping for the best with no rhyme or reason. Two: they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it's not really possible to figure out which specific training data it comes from, or how to stop the model from producing it on a fundamental level.

The former is demonstrably false and almost a strawman; I don't know who actually believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it'd be a major achievement everyone would be talking about.
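A toy demonstration of the "weights are unreadable" point, as a hedged sketch (the architecture, seed, and hyperparameters here are all invented for illustration): train a tiny MLP on XOR, a four-row problem you can verify in your head, and then look at what it learned. Even at this scale the weight matrices are just grids of floats; nothing about them reads as "this is XOR".

```python
import numpy as np

# Invented toy setup: a 2-4-1 sigmoid MLP trained on XOR with plain
# full-batch gradient descent on squared error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # manual backprop for squared error through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(W1)                       # a grid of floats with no discernible meaning
print(losses[0], losses[-1])    # the loss went down, but the "why" is nowhere readable
```

The loss drops, the behaviour is right, and the learned `W1` still tells you nothing by inspection. Now scale that opacity up by eleven orders of magnitude of parameters.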

[–] V0ldek@awful.systems 5 points 4 weeks ago (1 children)

Thank you for your service o7

[–] V0ldek@awful.systems 17 points 4 weeks ago (3 children)
 

This is a nice post, but it has such an annoying sentence right in the intro:

At the time I saw the press coverage, I didn’t bother to click on the actual preprint and read the work. The results seemed unsurprising: when researchers were given access to AI tools, they became more productive. That sounds reasonable and expected.

What? What about it sounds reasonable? What about it sounds expected given all we know about AI??

I see this all the time. Why do otherwise skeptical voices always feel the need to put in a weakening statement like this? "For sure, there are some legitimate uses of AI", or "Of course, I'm not claiming AI is useless" - like, why are you not claiming that? You probably should be claiming that. All of this garbage is useless until proven otherwise! "AI does not increase productivity" is the null hypothesis! It's the only correct skeptical position! Why do you feel the need to extend the benefit of the doubt here? Seriously, I cannot explain this in any way.

 

An excellent post by Ludicity as per usual, but I need to vent two things.

First of all, I only ever worked in a Scrum team once and it was really nice. I liked having a Product Owner that was invested in the process and did customer communications, I loved having a Scrum Master that kept the meetings tight and followed up on Retrospective points, it worked like a well-oiled machine. Turns out it was a one-of-a-kind experience. I can't imagine having a stand-up for one hour without casualties involved.

A few months back a colleague (we're both PhD students at TU Munich) was taking the piss about how you can enroll in a Scrum course as an elective for our doctoral school. He was making fun of the methodology in general, but using words I'd never heard before in my life. "Agile Testing". "Backlog Grooming". "Scrum of Scrums". I was like "dude, none of those words are in the bible", went to the Scrum Guide (which, as far as I understood, was the only document that actually defined what "Scrum" meant) and Ctrl+F-ed to prove my point that literally none of that shit is in there. Really, where the fuck does any of that come from? Is there a DLC to Scrum that I was never shown? Was the person who first uttered "Scrumban" already drawn and quartered, or is justice yet to be served?

Aside: the funniest part of that discussion was that our doctoral school has an exemption that carves out "credits for Scrum and Agile methodology courses" as being worthless towards your PhD, so at least someone sane is managing that.

The second point I wanted to make is that I was having a perfectly happy holiday, and then I read the phrase "Agile 2", and now I am crying into an ice-cream bucket. God help us all. Why. Ludicity, you fucking monster, there was a non-zero chance I would've gone through my entire life without knowing that existed. I hate you now.

 

Turns out software engineering cannot be easily solved with a ~~small shell script~~ large language model.

The author of the article appears to be a genuine ML engineer, although some of his takes aged like fine milk. He seems to be shilling Google a bit too much for my taste. However, the sneer content is good nonetheless.

First off, the "Devin solves a task on Upwork" demo is 1. cherry-picked, and 2. not even correctly solved.

Second, and this is the absolutely fantastic golden nugget here, to show off its "bug solving capability" it creates its own nonsensical bugs and then reverses them. It's the ideal corporate worker, able to appear busy by creating useless work for itself out of thin air.

It also takes over 6 hours to perform the task, which would be reasonable for an experienced software engineer, except an experienced software engineer's workflow doesn't include burning a small nuclear explosion's worth of energy while coding and then not actually solving the task. We don't drink that much coffee.

The next demo is a bait-and-switch again. In this case I think the author of the article fails to sneer as much as it deserves -- the task the AI solves is writing test cases for computing the Least Common Multiple modulo a number. Come on, that task is fucking trivial, all those tests are one-liners! It's famously much easier to verify modular arithmetic than it is to actually compute it. And it takes the AI an hour to do it!
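To be concrete about how trivial "tests for LCM modulo a number" is, here's a hedged sketch (the function name `lcm_mod` and the specific cases are invented for illustration, not taken from the demo) of what such a test suite amounts to:

```python
from math import gcd

# Hypothetical function under test: lcm(a, b) modulo m.
# Dividing by gcd first keeps the intermediate product small.
def lcm_mod(a, b, m):
    return (a // gcd(a, b)) * b % m

# The "test suite": each case is a one-liner with small numbers
# where you can verify the expected answer in your head.
assert lcm_mod(4, 6, 10**9 + 7) == 12
assert lcm_mod(21, 6, 10**9 + 7) == 42
assert lcm_mod(3, 5, 7) == 1      # lcm(3, 5) = 15, and 15 mod 7 = 1
```

That's the entire intellectual content of the task: pick inputs, know the answer, write the assertion.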

It is a bit refreshing though that it didn't turn out DEVIN is just Dinesh, Eesha, Vikram, Ishani, and Niranjan working for $2/h from a slum in India.

 

I'm not sure if this fully fits into TechTakes mission statement, but "CEO thinks it's a-okay to abuse certificate trust to sell data to advertisers" is, in my opinion, a great snapshot of what brain worms live inside those people's heads.

In short, Facebook wiretapped Snapchat by routing data through their VPN company, Onavo. Installing it on your machine would add their certificates as trusted. Onavo would then intercept all communication to Snapchat and pretend the connection was TLS-secure by forging a Snapchat certificate signed with its own.

"Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them," Facebook CEO Mark Zuckerberg wrote in a 2016 email to Javier Olivan.

"Given how quickly they're growing, it seems important to figure out a new way to get reliable analytics about them," Zuckerberg continued. "Perhaps we need to do panels or write custom software. You should figure out how to do this."

Zuckerberg ordered his engineers to "think outside the box" to break TLS encryption in a way that would allow them to quietly sell data to advertisers.

I'm sure the brave programmers who came up with and implemented this nonsense were very proud of their service. Jesus fucking cinnamon crunch Christ.
