[-] Zalack@startrek.website 7 points 1 year ago

They're talking about the visions of sci-fi authors, filmmakers, and artists. The tech bros are the ones being drawn toward those artists' visions.

[-] Zalack@startrek.website 7 points 1 year ago

First thing I thought of as well: https://youtu.be/rBQhdO2UxaQ

[-] Zalack@startrek.website 6 points 1 year ago* (last edited 1 year ago)

Why shouldn't we, as engineers, be entitled to a small percentage of the profits that are generated by our code? Why are the shareholders entitled to it instead?

I worked in Hollywood before becoming a programmer, and even as a low-level worker, IATSE still got residuals from union shows that went to our healthcare and pension funds. My healthcare was 100% covered by that fund for a top-of-the-line plan, and I got contributions to both a pension AND a 401K that were ON TOP of my base pay rather than deducted from it.

Lastly, we were paid hourly, which means overtime, but also had a weekly minimum. Mine was 50 hours. So if I was asked to work at all during a week I was entitled to 50 hours of pay unless I chose to take days off myself.

Unions fucking rock and software engineers work in a field that is making historic profits off of our labor. We deserve a piece of that.

[-] Zalack@startrek.website 7 points 1 year ago

IMO it's a good feature and it's a good thing it's required. I remember the days when I would boot up a game and never be sure whether my system had crashed or not.

This requirement forces the game to give you feedback before you start wondering whether you should do a power cycle.

[-] Zalack@startrek.website 7 points 1 year ago* (last edited 1 year ago)

Is !lostlemmings a thing anywhere?

[-] Zalack@startrek.website 7 points 1 year ago* (last edited 1 year ago)

I think the problem is that there is less often something to be said if you agree. Every now and then you might have something to add that fleshes out the idea or adds additional context, but generally if I totally agree with a comment I just upvote it.

On the other hand, when you disagree with something your response will, by logical necessity, be different from the parent comment.

So if you want to prioritize "adding something novel" there's a logical bias towards comments that disagree since only some percentage of agreement will tick that box.

Otherwise you end up with a bunch of comments that literally or figuratively add up to "this".

[-] Zalack@startrek.website 6 points 1 year ago

More good options are always a good thing.

[-] Zalack@startrek.website 7 points 1 year ago

Federation isn't opt-in though. It would be VERY easy to spin up a bunch of instances with millions or billions of fake communities and use them to DDoS a server's search function.

Searching current active subscriptions helps mitigate that vector a little.
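A minimal sketch of that mitigation, assuming hypothetical in-memory data and a made-up `search` helper (the real Lemmy implementation differs):

```python
# Hypothetical illustration: restrict search to communities this instance
# already subscribes to, rather than querying everything federated peers
# advertise (which an attacker could flood with fake communities).

federated_communities = [
    "startrek@startrek.website",
    "fake0@spam.example",  # attacker-created noise
    "fake1@spam.example",
]
subscriptions = {"startrek@startrek.website"}

def search(query: str) -> list[str]:
    """Only match communities someone here actually subscribes to."""
    return [c for c in federated_communities
            if c in subscriptions and query in c]

print(search("trek"))  # the fake communities never enter the search space
```

The attacker's millions of fake communities still exist on the network, but they never enter the local search space, so they can't be used to swamp the query path.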

[-] Zalack@startrek.website 8 points 1 year ago* (last edited 1 year ago)

While that's true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.

I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and -- maybe more importantly -- start layering specialized models on top of each other that handle specific tasks, then hand the result back to another model, creating feedback loops. I'm imagining a neural network that is trained on something extremely abstract: figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.
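That "executive function" idea can be sketched as a router that picks a specialist for each input and then feeds the result back through itself. Everything here is hypothetical scaffolding -- the router and specialists would really be trained networks, not the hand-written stubs below:

```python
# Hypothetical sketch: an "executive" router dispatching raw input to
# specialist sub-models, then looping the result back through the router.
from typing import Callable, Dict

# Stand-ins for trained specialist networks.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": lambda x: f"math-result({x})",
    "text": lambda x: f"text-result({x})",
}

def route(data: str) -> str:
    """Stub executive: decide which specialist should handle the data.
    In the real idea this would itself be a trained model."""
    return "math" if any(c.isdigit() for c in data) else "text"

def executive(raw_input: str, passes: int = 2) -> str:
    """Dispatch to a specialist, then hand the result back to the
    router for refinement -- the feedback loop described above."""
    data = raw_input
    for _ in range(passes):
        data = SPECIALISTS[route(data)](data)
    return data

print(executive("hello", passes=1))
```

This is loosely the shape of mixture-of-experts routing, except layered so that each pass can choose a different specialist based on the previous pass's output.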

Could something like that become conscious without realizing it's "communicating" with us? The program executing the LLM might reflexively process data without any concept that it's text, but still be emergently complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn't realize the data represents a link to other conscious beings.

As a metaphor, you could teach a very smart dog how to respond to certain basic arithmetic problems. They would get stuff wrong the moment you prompted them with something outside their training, and they wouldn't understand they were doing math even when they got it "right", but they would still be sentient, if not sapient, despite that.

It's the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.

But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it's executing a program, the same way we aren't consciously aware of the chemical reactions our brains are executing to make us think.

I don't believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven't started to be heavily layered and interconnected the way I think they'll end up.

At the very least it makes for a fun sci-fi premise.

[-] Zalack@startrek.website 7 points 1 year ago

Lol, Texas and Florida are doing a good enough job of knocking themselves down without help from me.

[-] Zalack@startrek.website 7 points 1 year ago

I joined the Star Trek instance solely because I like startrek.website being in my handle.

[-] Zalack@startrek.website 8 points 1 year ago

You can customize both those options in Sync. I had the same initial issues, but you can switch comment collapse to single tap as well as increase font size.

Sync is very very customizable.
