
Because more and more each day I’m seeing people who were once considered experts now operating on the same level as an untrained person who knows how to type. And I’m really hoping LLMs reach a tipping point, because otherwise you’re just adding another nauseating element to the rat race.

I’m no stranger to the “adapt or get left behind” comments, and it’s very funny when people tell me this but then can no longer tell me how a for loop works or what constitutes an outer join.
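(Since both of those keep coming up later in the thread, here is a minimal sketch of the two concepts in Python; the table contents and variable names are invented purely for illustration.)

```python
# A for loop: run the body once per element of a sequence.
total = 0
for price in [3, 5, 7]:
    total += price
print(total)  # 15

# A full outer join: combine two tables on a key, keeping rows from BOTH
# sides and filling in None where the other side has no matching key.
# (Table contents and column meanings are made up for this example.)
employees = {1: "Ana", 2: "Bo"}        # id -> name
salaries = {2: 50_000, 3: 60_000}      # id -> salary

all_ids = sorted(employees.keys() | salaries.keys())
outer_join = {i: (employees.get(i), salaries.get(i)) for i in all_ids}
print(outer_join)
# {1: ('Ana', None), 2: ('Bo', 50000), 3: (None, 60000)}
```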

lol hate to say it but this shithole needs to be humbled by an act of war that brings down the power grid; capitalist realism is quickly turning into economic and technological realism (if it hasn’t already).

top 50 comments
[-] hello_hello@hexbear.net 2 points 3 days ago* (last edited 3 days ago)

LLMs and the AI hype are just an imperialist cover for the very real environmental war that is being waged on the Global South each day. The job of LLMs is not to take jobs or threaten the positions of the imperial core proletariat (that is simply a distraction created by imperial core mass media, even in leftist circles); it is to stimulate artificial demand for production to combat the falling rate of profit. Climate change is an imperialist-caused calamity meant to destroy Global South nations, it's no coincidence we see the worst ecological disasters in the most victimized parts of the planet, and LLMs' high compute cost for comparatively little return makes them perfect for promoting it. ChatGPT is just one tool in the belt of imperialist powers for controlling the narrative around technology and ensuring continued extraction at even greater scale.

This isn't unique to LLMs; the crypto craze served the same purpose, as computing power (GPUs) was scalped to all hell, creating even more artificial demand (and thus continuing the supply chain cycle of unequal exchange). Proprietary software is one of the root maintainers of the imperialist hold over technology. Apple and Microsoft (not the only companies, but speaking generally), both deeply entwined with US hegemony (and the MIC; these guys arm Zionists), maintain their positions through a distorted sense of importance, as all of their discoveries and achievements are locked away or undocumented, and they employ propaganda departments to sell the USAmerican luxury brand of technology.

All the discourse surrounding whether AI is "useful" or not is manufacturing consent for plundering even more resources to be hoarded by western powers. Of course AI is useful; most computer programs were made to serve a purpose in some capacity, and it's not worth discussing (especially considering the companies themselves publish their own benchmarks in highly choreographed publicity stunts), but the push for AI serves a deeper purpose than anything MSM would ever present.

Anyone who discusses AI in the context of "wow the youth's brains are being melted, you better make sure to safeguard your job" is missing the bigger picture (especially in the Global North). All your silicon is bloodied with the hardship and sacrifice of the Global Majority and neo-colonial extraction, and we should maintain that as the main leftist position on AI, which can and will go further than crypto ever did.

Just as a side note, using Linux and free software is praxis actually, it's not because people are contrarian.

[-] DengistDonnieDarko@hexbear.net 39 points 1 week ago* (last edited 1 week ago)

i see it going the way of bitcoin. that is, not going anywhere and in fact increasing to absurd levels of value, much to my chagrin.

[-] Orcocracy@hexbear.net 23 points 1 week ago

The only thing either are good for is money laundering and cheating on schoolwork. They’ve made life easier for thieves and liars and the rest of us just have to fucking put up with it as if this is the price of progress. The luddites were right, these machines need to be broken.

[-] Sauerkraut@discuss.tchncs.de 1 points 5 days ago

I half agree. ChatGPT helps me brainstorm or respond to awkward emails in a professional manner. It is legitimately helpful for some tasks.

[-] Orcocracy@hexbear.net 1 points 4 days ago

I receive a lot of AI-written emails and reports and they are almost always terribly unprofessional. AI writing is competent on a purely surface level, and only if you quickly skim-read and move on to something else. But if you need real information or specific details, AI writing is usually incoherent and an insulting waste of time.

[-] stigsbandit34z@hexbear.net 17 points 1 week ago

Right there with you.

The fact that the true idea of “value” has become so obfuscated that a meme coin is now technically worth more than the quarter leads me to believe nothing matters.

I often think back to the Christman vlog where he imagines a post-climate-apocalypse future and nails exactly how it will all go down; a post-LLM future would be similarly easy to predict.

If there’s one thing ChatGPT is great at, it’s showing how much humans love patterns, and those patterns are especially noticeable in the current economic climate. Hell, just look at any all-hands meeting in the past 10 years.

[-] PKMKII@hexbear.net 25 points 1 week ago

I think there’s a “time is a flat circle” aspect to this. In the early days LLMs would spit out useless but entertaining content. Like that early AI art: it was this chaotic jumbled mess with a very surrealistic tether to whatever the prompt was. Neither useful as a product nor decent art, but entertaining in its absurdity. Then it got to the point where, as long as people put at least a little effort into massaging the prompts, it’ll spit out something believable at first glance. You’ve got to break out the photo tools to tell, and most people aren’t going to zoom in to see if there’s a sudden jump between jpeg and png edges.

But the technical finesse has brought a plasticness. My wife’s boss, the absolute walking definition of failing-upwards McKinsey brain, uses ChatGPT for everything. She showed me one of his emails; it looks professionally written and all. But good Christ it’s painful to read. It doesn’t look like something a human being would earnestly write, more like a satire of corporate speak that would be considered too obvious to work as satire.

And that’s where the circle comes around: all this AI text, art, movies, etc. has this feel to it that makes you aware that it’s AI-generated. That breaks the spell, it makes you aware that it’s a plastic facsimile of creative output, and that devalues it. Which will conversely create value for creative output that feels distinctly not AI.

I’m not sure if there’s an exact 1-to-1 when it comes to “programmers” who don’t really know how to code, they just know how to ask ChatGPT for a function. But I’d have to imagine that they’re in a position where they’re fine if things are running smoothly, but the moment things get FUBAR’d and ChatGPT can’t spit out a solution, or the solution is so convoluted and messy that it makes more problems than it fixes, that will separate the chaff from those who actually know how to wrangle the code.

[-] MaoTheLawn@hexbear.net 12 points 1 week ago* (last edited 1 week ago)

I'm currently working with a small TikTok singer (500k monthly on Spotify). They blew up at some point (the music is awful), and it's easily the biggest job I've had so far as a videographer. The other day I went into a meeting with their agents and the marketing team.

They'd ChatGPT'd every single idea for me to make into a video. Not an ounce of talent or original thought in the room. Just filling social media with slop content. It might as well not exist at all.

Creative media is in a very depressing place right now. Even adverts are a million times worse: they've cracked the code that it's more cost-effective to make some flashy short bullshit than it is to try to have any sort of earnest identity.

[-] Sauerkraut@discuss.tchncs.de 1 points 5 days ago

Adverts have always been bullshit. Adverts only exist to sell stuff that people don't want or need. I think all advertising should be banned.

[-] PKMKII@hexbear.net 8 points 1 week ago

I think that’s the result of the few-transmitters-many-receivers model being replaced with many-transmitters-many-receivers, but with The Algorithm between the two. It all becomes about gaming whatever metric The Algorithm values at the moment. The result is this homogeneous game of “ask this algorithm to spit out content the other algorithm approves of.”

[-] stigsbandit34z@hexbear.net 5 points 1 week ago

but the moment things get FUBAR’d and ChatGPT can’t spit out a solution, or the solution is so convoluted

Unfortunately it seems that they keep adding tokens/burning down the environment to ensure that this is never the case. It’s starting to feel like the only option is to adapt because anything else is futile

Also would like to say that this frustration is extremely symptomatic of the fantastical pursuit of endless growth, and it’s becoming quite hard to turn this part of my brain off as it just keeps smacking me in the face every damn day.

[-] PKMKII@hexbear.net 6 points 1 week ago

Just layering spaghetti code on top of spaghetti code, hoping nothing ever goes truly wrong because no one is going to be able to figure out how it’s all connected?

[-] iByteABit@hexbear.net 4 points 1 week ago

Very spot on. It can be very useful as a way to Google things in a personalized way, but that's pretty much it atm. It's very good at what an LLM was always designed to do from the beginning: parse and predict a fitting response out of the infinite pile of trash it remembers. Any other use case eventually just highlights how much better a human would do the job, even the things that it is fit to do, like first-level customer service. I have a special hatred for AI "art"; out of all its use cases, this is the one that I really wish could be abolished.

[-] GoodGuyWithACat@hexbear.net 18 points 1 week ago

Like any technology in the past 20 years. It will ride high off of tech loans until investors realize how little profit there is in it, at which point they'll enshittify it worse than it already is.

[-] OnlineBrainworms@hexbear.net 5 points 1 week ago

It's like Uber, but for all kinds of jobs instead of just taxi drivers. I think you're right.

[-] Frank@hexbear.net 17 points 1 week ago

I foresee some horrifying disaster because a subcontractor of a subcontractor of a subcontractor used the Google analytical nonsense generator to write code for a nuke plant or an air traffic control system.

[-] FourteenEyes@hexbear.net 15 points 1 week ago

Chat-GPT will eventually run out of funding when it becomes clear it's not actually capable of doing anything useful with enough accuracy to not require paying somebody to follow behind it and clean up its mess. And if nobody realizes that, they'll realize it's not making any fucking money and pull the plug for pure want of preserving their capital.

All of this AI horseshit is so much hot air. They made a chat program that can Google shit and string words together easier. And a bunch of fucking morons are treating it like the ship computer from Star Trek. The fallout of this will be another economic bubble burst. Nothing more exotic than that.

[-] ThomasMuentzner@hexbear.net 15 points 1 week ago

That's an intriguing and open-ended question! If we’re thinking about how ChatGPT might evolve and its potential consequences, a few scenarios seem likely, each depending on a combination of technological, ethical, and societal factors. Here’s how I imagine the trajectory might unfold:

1. Enhanced Communication and Assistance

One optimistic outcome is that AI like ChatGPT continues to evolve into a powerful tool for communication, learning, and personal assistance. It could become seamlessly integrated into many aspects of life: education, work, healthcare, and even entertainment. This would ideally be accompanied by improved transparency, trustworthiness, and adaptability. In this scenario:

AI as a Personalized Assistant: People could rely on AI as a tutor, therapist, life coach, or business advisor. It might help people make more informed decisions, be more productive, and improve mental well-being.

Efficient Information Processing: It could also accelerate problem-solving in scientific research, creativity, and data analysis, working alongside humans to solve complex problems in fields like medicine, climate change, and space exploration.

2. Coexistence with Human Jobs

While there is the potential for AI to automate many tasks traditionally done by humans, there’s also the possibility that it will work in tandem with human workers, leading to a more symbiotic relationship. Instead of completely replacing jobs, AI could assist with repetitive tasks, leaving humans to focus on more creative, strategic, or empathetic roles.

New Job Creation: Fields like AI ethics, AI training, and AI-human collaboration would likely become more prominent. Additionally, sectors related to AI development (coding, cybersecurity, etc.) would keep growing.

Enhanced Collaboration: Rather than fully displacing workers, AI might enhance collaboration, allowing people to focus on tasks where human insight and intuition are irreplaceable, while automating tedious tasks.

3. Ethical Challenges and Misuse

On the flip side, as AI becomes more capable, ethical concerns and risks could become more pressing.

Bias and Fairness: AI systems, including ChatGPT, can inadvertently perpetuate bias if not carefully managed. This could lead to discrimination in decision-making processes, such as hiring or legal matters.

Misinformation: While AI is great at generating human-like text, it can also be used to spread misinformation or manipulate public opinion. This is particularly dangerous in contexts like political elections or public health crises.

Surveillance and Privacy: Widespread AI adoption could be used for surveillance, potentially infringing on privacy rights. Governments or corporations might exploit AI to track citizens or users in ways that might be intrusive or oppressive.

4. AI Autonomy and Control

Looking even further down the road, the evolution of AI may lead to the development of systems with higher levels of autonomy, intelligence, and self-direction. This could open up possibilities but also serious risks.

Superintelligent AI: Some predict that AI might eventually reach a point where it surpasses human intelligence (referred to as Artificial General Intelligence or AGI). In such a scenario, questions around control, ethics, and alignment with human values would become critical. Would a superintelligent AI work in humanity’s best interest, or would it pursue its own goals, potentially without regard for human welfare?

Regulation and Control: The push for regulating AI would grow more urgent. We may see global discussions around setting up international AI oversight bodies or frameworks to ensure AI development remains ethical and aligned with human values.

5. Human-AI Integration

As AI systems become more advanced, there could be a shift towards direct integration between human biology and AI, especially with advancements in brain-computer interfaces (BCIs).

Cognitive Enhancement: Imagine using AI directly to enhance human memory, creativity, or even decision-making. The boundaries between human and machine could blur, raising philosophical and ethical questions about what it means to be human.

AI Augmentation: Rather than replacing humans, AI could work with individuals to augment their abilities, creating a future where human-AI hybrids become commonplace. For example, AI could provide real-time data visualization for professionals, or even assist in surgical operations with pinpoint precision.

6. Social and Psychological Impact

The increasing reliance on AI could have profound social and psychological effects. While it could bring people closer together through improved communication tools, it could also foster isolation if people lean too heavily on AI for companionship and interaction.

Disconnection: The more we rely on AI for emotional or intellectual support, the more we may risk losing genuine human connections. There’s a balance to be struck between using AI for productivity and maintaining authentic relationships.

Identity and Consciousness: As AI becomes more capable of mimicking human interaction, people may start to question the nature of consciousness and what it means to be "alive." This could have profound philosophical, psychological, and even religious implications.

Conclusion: The Path Forward

The future of ChatGPT and similar AI models is not set in stone. It could lead to a utopia of human-AI collaboration and progress, or it could result in dystopian outcomes, depending on how we manage its development, integration, and ethical governance. To guide it in a positive direction, there will need to be careful consideration of:

Ethical standards and transparency in AI training.

Regulations and oversight to prevent misuse.

Public awareness and education about AI’s capabilities and limits.

Human-centric values in AI design to ensure the technology remains aligned with human welfare.

In the end, it’s about balance: embracing the immense potential of AI while remaining mindful of the challenges it brings.

[-] stigsbandit34z@hexbear.net 1 points 6 days ago

a-guy

And now, a human

Computational, digital systems should never have been exposed to the general public, because it’s clear that they cannot produce anything of actual value other than pure gluttony/endless consumption of media/information to no tangible end. But now we’ve given the general public access to AI and expect something good to come of it? The extremely propagandized and downtrodden citizens of the United States?

I personally believe AI will follow the continued cycle of any other tech boom in the last 20 years or so and will only be stifled by real-world occurrences (*cough cough* things which actually matter) such as economic crisis and/or continued environmental degradation. Of course, in the case of the latter, capital will ensure that these AI systems receive more rights than humans (but most importantly minorities), and the resources that you and I need to survive will instead be spent on the next LLM startup that promises “AGI.” This new LLM startup will be valued at nearly a trillion dollars while the lives of Americans remain completely unchanged, with the number of those on the periphery of complete economic devastation growing each year (sounds familiar).

But my true concern is the clear and obvious continued disconnection and further isolation that will certainly follow. It seems quite ridiculous to me (and nearly impossible) to set aside the fact that humans in 2024 are varying levels of “extremely online,” in that mass information is disseminated through a few “authoritative” hubs that carefully censor and distort real information: information from the physical world. Instead of going out into the real world and questioning the facts of reality, people will instead choose the convenience of an anthropomorphic Markov chain.

The only way any of this can be prevented is by the most tenuous regulation but lol

We’re fucked. We’re so fucked. Modern day technology is akin to a pacifier for a baby. If this world made any fucking sense, we wouldn’t be so quick to allow tools to become controlling systems that don’t even offer an opportunity to opt-out. It has become so infused with everyday life that you’re strapped in for the ride whether you like it or not

Can’t believe I was born in this age

[-] TheBroodian@hexbear.net 29 points 1 week ago
[-] ThomasMuentzner@hexbear.net 11 points 1 week ago

I'm sorry that the summary wasn't what you were looking for. If you'd like, I can try a different approach or make it more concise or focused on what interests you most. Just let me know how you'd like me to adjust!

[-] BodyBySisyphus@hexbear.net 9 points 1 week ago

That's a lot of words, can someone ask ChatGPT to write a summary?

[-] ThomasMuentzner@hexbear.net 6 points 1 week ago

That's a lot of words, can someone ask ChatGPT to write a summary?

Sure! Here's a quick summary:

The future of ChatGPT and AI could unfold in several ways:

Integration into daily life: AI becomes seamlessly embedded in society, enhancing productivity but possibly leading to job displacement.

Regulation and ethics: Governments impose regulations to ensure AI is used responsibly, slowing progress but ensuring safety.

AI overreach: AI could grow too powerful, leading to concerns about control and autonomy, possibly sparking resistance.

Specialization: AI may become more focused on specific tasks (e.g., healthcare, law), improving its effectiveness in those areas.

Obsolescence: Newer, more advanced technologies might make ChatGPT obsolete, leading to new forms of AI.

Public backlash: If AI is misused, there could be public resistance, leading to stricter controls or an ethical overhaul.

In short, ChatGPT's future depends on how society manages its integration, ethical concerns, and technological progress.

[-] naom3@hexbear.net 9 points 1 week ago

oooaaaaaaauhhh not posting!

[-] miz@hexbear.net 9 points 1 week ago
[-] ThomasMuentzner@hexbear.net 4 points 1 week ago

Ah, I see what you're getting at! You're looking for something that captures the essence of the current age in a more material or tangible sense, similar to how the "Stone Age" or "Iron Age" represent eras defined by the materials people used.

Given that we live in an age where advanced materials, technology, and energy sources are key, here are a few ideas for a name:

  1. "Silicon Age" Why: Silicon is the backbone of modern electronics, from computers to smartphones. It’s the material that powers much of our current technological infrastructure.
  2. "Carbon Age" Why: This could represent the dual nature of carbon in our age—on one hand, it's central to life and organic compounds; on the other, it’s heavily tied to climate change and fossil fuels. This name reflects the complex relationship we have with carbon.
  3. "Composite Age" Why: We’re increasingly using composite materials—blends of metals, plastics, carbon fibers, etc.—to build everything from airplanes to smartphones. This name could symbolize the diverse mix of materials shaping the modern world.
  4. "Plastic Age" Why: Plastics, in their many forms, have revolutionized industries from packaging to medicine. They are a dominant material in everyday life, despite growing concerns about their environmental impact.
  5. "Lithium Age" Why: Lithium powers our batteries, from smartphones to electric vehicles, and is integral to energy storage systems. As we transition to cleaner energy, lithium’s role is becoming more central.
[-] context@hexbear.net 14 points 1 week ago* (last edited 1 week ago)

chatgpt and its relatives are fundamentally limited by the fact that linear increases in performance require exponentially increasing amounts of data and processing power, whereas it's getting implicitly marketed as if it were the other way around.

so the inevitability of decreasing return on investment means it will be a race to the bottom to deploy cheap, shitty versions of this stuff anywhere it might be "good enough", and over time the ecosystem will evolve such that "good enough" is a depreciating standard even as it expands into more and more niches. everything will get worse in ways we haven't even imagined yet.
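To put a rough number on the “linear gains need exponential compute” point: published neural scaling-law fits (e.g. Kaplan et al., 2020) model loss as a power law in training compute, and under any such fit a fixed improvement costs multiplicatively more compute. A minimal sketch, using an illustrative exponent rather than an exact published figure:

```latex
% Power-law scaling sketch: loss L as a function of training compute C.
% The exponent alpha is illustrative (published fits are of this order,
% but the exact value used here is an assumption).
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05

% Compute multiplier needed to halve the loss under this fit:
\frac{C_{\text{new}}}{C_{\text{old}}} = 2^{1/\alpha} \approx 2^{20} \approx 10^{6}
```

Under a curve like that, each marginal improvement costs orders of magnitude more compute, which is the diminishing-returns dynamic the comment describes.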

[-] stigsbandit34z@hexbear.net 6 points 1 week ago* (last edited 1 week ago)

“Good enough” is how every company I’ve worked for since college has structured their day-to-day operations agony-shivering

It’s a stimulant-based world, people are just trying to speed everything up (to grow more of course) while sacrificing all aspects of their personal lives without a second thought.

Really curious if we are doing our damned best to emulate the controlled substances and drugs that give people heart attacks

[-] supplier@hexbear.net 12 points 1 week ago

Ch*t gee pee tea and the like are just in a marketing bubble. They've got all this investor money, but the research and development can't improve as quickly as it needs to, so all that money just gets dumped into marketing. But it literally cannot go on forever, and industries will suffer from integrating it.

You won't even remember it after the collapse coming in '28

[-] supafuzz@hexbear.net 9 points 1 week ago* (last edited 1 week ago)

'28 is generous I think, this shit is so expensive to run and if there's no profitable use case it'll crash sooner and harder than that

the real concern is that this is the tech industry's last shot, they got nothin' after this. all of the government research from the 80s is used up and there is no next product

[-] GrouchyGrouse@hexbear.net 12 points 1 week ago

Warning: vibes based post

So the way I see it, the US economy is propped up by 3 major legs: military, tech, and banking (real estate can be sorta folded into the banking category via a Venn diagram).

Despite the importance of the tech leg, it is also the least stable because of how it has been pivoted toward wholly digital "products" like AI or crypto or internet platforms. Digital advertising is in this weird quasi-real category in that you need a physical device (phone, TV, etc.) to see it and you (mostly) need a real end product to be selling (cars, medicines, etc.), but as the rise of digital goods and services accelerates, the products are slipping into the ether.

So one of the major reasons that money is being dumped into tech is that it represents one of the only sectors where you can winkle money out of thin air. You don't need a bomb factory or a plot of land. That means it has growth potential. But it's just growth for the sake of growth. It doesn't really translate into real-world things like other industries do. Like you said, this is one of their last shots, because other than these specious digital products they aren't making anything new. They're innovating (wink!) within the narrow confines of technology that was laid out decades ago. There's still progress being made, of course, but that real nuts-and-bolts R&D is now peripheral to the giant pump-and-dump scheme most tech is now operating on.

What does all this mean? It means there is a definite shelf life to all this. It will stagnate. And when it does plateau it will collapse like a black hole because the "growing for the sake of growing" was all it had. Take that away and there's nothing left behind the curtain. There will be no steady decline. The bubble will burst and it will be catastrophe.

[-] AOCapitulator@hexbear.net 9 points 1 week ago* (last edited 1 week ago)

The same way a house ends when it's being battered by a hurricane and then a window finally breaks and lets the wind inside.

[-] laziestflagellant@hexbear.net 9 points 1 week ago

Most LLM/image generators are overhyped, but imo there is enough of a use case and a value case (if you want to cut down on paying people) that they will significantly erode the power of labor once more companies understand how to actually use them beyond typing prompts into websites.

Voice acting will become an even more dire industry than it already is, with voice-to-voice AI filters making it so a studio only needs to pay 1-2 voice actors to voice a cast of dozens.

Art teams will be halved or decimated once more legal teams give the go-ahead for using image-generator finetunes to build special-purpose generators that pump out their exact desired style with only minor manual touch-ups needed. The outcomes would be borderline indistinguishable from their usual art, but now they have to pay far fewer people for the same work.

Artist contractors will similarly be racing to the bottom against the bottom threshold of 'literally free' (they already are atm really)

Data entry jobs will be eroded.

Translation work will be heavily eroded.

Lower level modeling and filming jobs will be heavily eroded.

Higher-level programming work will still be in high demand, but entry-level work will be undervalued and hard to find.

All of that pushes more people into other fields looking for work, and thus erodes the labor power of those as well. Things get shittier and we get a lot of ugly things to look at.

[-] hello_hello@hexbear.net 3 points 3 days ago* (last edited 3 days ago)

This will never happen, because the world required to produce the computing power for it would have already burned to ash. Americans already overuse energy astronomically compared to the rest of the world, and every American now using an environmental calamity to produce ad-libbed slop is just part of the imperialist plan to further climate change and pummel the Global Majority.

Literally none of the AI techbro chuds will tell you what's required to keep a room of 4090s chugging when the same task could be performed with a well-written shell script. They only talk about how AI is going to change everything the same way a Zionist talks about how Eretz Israel will change everything; just all vibes to conceal the environmental war that starts with illegal mining in the Global South.

[-] stigsbandit34z@hexbear.net 3 points 6 days ago

Sounds like the road to barbarism

[-] OgdenTO@hexbear.net 4 points 6 days ago

For voice acting, look back to cartoons of the 50s and 60s where there were only 1-2 people doing all the voices for a cartoon. The same actors were re-used everywhere.

The only reason there is a 1:1 character-to-actor relationship now is marketing: cartoons are now big movie money makers, and producers think that hiring big names will sell more tickets. If instead the stars are ChatGPT, I don't think that will sell tickets, and it won't work the way they think it will.

[-] Owl@hexbear.net 6 points 1 week ago

During the Bush years, the US government had finally gotten a proper handle on controlling broadcast news, and the new 24-hour news cycle started bombarding people with propaganda. Meanwhile, the internet was a niche thing that mostly only hobbyists and teens understood, which gave people both uncontrolled access to information and a better way to access it than libraries. Anybody who knew how to Google, which was uncommon and new, could answer as much as the most dedicated trivia nerd. Normies were completely unaware of the internet, but the internet people won. Companies made by internet people took over the world, and governments had to scramble to get control over them again.

Now governments have figured out how to control social media, and are using that to bombard people with propaganda all day. Meanwhile, the internet is slowly losing value as an information source, because it's flooded with bots spewing nonsense. Some people, probably teens and hobbyists again, will learn how to navigate the botless, propaganda-less portions of the internet, so they will still have better access to information than everybody else. Instead of a moment where information access increased and those who followed it came out ahead, we'll have one where information access decreases, and those who avoid the slop come out ahead. But the results will still be the same: normies will be completely unaware, and the people who figured out how to avoid reading AI slop will win. Hopefully they won't just use that to make profitable companies and get recuperated by the government again.

[-] CrawlMarks@hexbear.net 5 points 1 week ago* (last edited 1 week ago)

People will just get bored and move on while some dudes you never hear about have fun playing with it as a cool toy. Maybe we get a few Photoshop plug-ins that no one really thinks about, also. Like how no one cares that Photoshop put all the matte painters out of work.

[-] DragonBallZinn@hexbear.net 4 points 1 week ago

Porky will squeal in delight about how many fewer employees he needs, that’s for sure.

I’m sure he’ll smugly advise everyone to start their own businesses. That’s the new narrative: jobs are going to be a thing of the past, so everyone needs to become a capitalist.

[-] eldavi@lemmy.ml 4 points 1 week ago

... it’s very funny when people tell me this but can no longer tell me how a for loop works or what constitutes an outer join.

the funnier part is that llms are best at doing exactly these sorts of things; but we continue to insist on people doing them during interviews for some reason.

[-] stigsbandit34z@hexbear.net 5 points 1 week ago

It’s very elementary though and should be the same as writing a sentence

[-] eldavi@lemmy.ml 4 points 1 week ago

i used to think so too; but i stopped about 15 years ago or so, partially because i realized that not everyone works in the same way; but mostly because english teachers/professors/instructors/etc. generally don't believe that most people can write a proper sentence either.

i learned the hard way that looking for other people who work like i do is a folly and will leave you looking for a very long time unless you're ivy league.

[-] stigsbandit34z@hexbear.net 1 points 6 days ago

Yeah well there are TikToks of Gen Zers whose brains are not cooked echoing the same sentiment. This shit is turning people into idiots; I truly hope to see a massive power grid failure that shows how it was all a mistake.

[-] miz@hexbear.net 3 points 1 week ago
[-] AernaLingus@hexbear.net 2 points 1 week ago

it’s very funny when people tell me this but can no longer tell me how a for loop works

Wait...I'm having trouble even comprehending this. Do you mean like not understanding what's happening at the instruction level with conditional branch instructions? Or do you mean what some generic for loop does in your codebase?
