this post was submitted on 18 Jan 2024
206 points (93.6% liked)

Technology


Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

[–] LWD@lemm.ee 112 points 2 years ago (2 children)

OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”

Good job, Sam Altman. Saying one thing and doing the opposite. There are already enough conspiracy theories about Davos running the world, and the creepy eye-scanning orb guy isn't helping.

[–] hai@lemmy.ml 21 points 2 years ago* (last edited 2 years ago)

Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.

What a perfect sentence to sum up 2023 with.

[–] ItsAFake@lemmus.org 11 points 2 years ago (1 children)

Mr Altman, who founded Open AI which built chat bot ChatGPT, says he hopes the initiative will help confirm if someone is a human or a robot.

That last line kinda creeps me out.

[–] LWD@lemm.ee 11 points 2 years ago (2 children)

The whole thing is creepy. The name, the orb, scanning people's eyes with it, specifically targeting poor Kenyan people (the "unbanked") like a literal sci-fi villain.

[–] ItsAFake@lemmus.org 8 points 2 years ago* (last edited 2 years ago) (1 children)

Yeah, that's the most sci-fi-dystopian article I've read in a while.

The line where one of the people waiting to get their eyes scanned says "I don't care what they do with the data, I just want the money" is, well, eye-opening. This is why they want us poor: we need money so badly that we'll hand over everything that makes us who we are.

But we already happily hand over our DNA to private corporations, so what's an eye scan gonna do...

[–] LWD@lemm.ee 3 points 2 years ago

We hand over our DNA to ancestry companies for some obscene vanity reason and then pay them for the privilege of keeping it

[–] Bipta@kbin.social 42 points 2 years ago

That's why they just removed the military limitations in their terms of service I guess...

[–] deegeese@sopuli.xyz 33 points 2 years ago

I also want to sell my shit for every purpose but take zero responsibility for consequences.

[–] Sludgehammer@lemmy.world 32 points 2 years ago (1 children)

Considering what we've decided to call AI can't actually make decisions, that's a no-brainer.

[–] nyakojiru@lemmy.dbzer0.com 2 points 2 years ago

The term "AI" implies we humans are the no-brainers

[–] fidodo@lemmy.world 19 points 2 years ago (2 children)

Shouldn't, but there's absolutely nothing stopping it, and lazy tech companies absolutely will. I mean we live in a world where Boeing built a plane that couldn't fly straight so they tried to fix it with software. The tech will be abused so long as people are greedy.

[–] TwilightVulpine@lemmy.world 5 points 2 years ago (1 children)

So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.

[–] fidodo@lemmy.world 6 points 2 years ago

More than just that, they're shielded from repercussions. The execs involved with ignoring all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people's lives.

[–] monkeyslikebananas2@lemmy.world 4 points 2 years ago* (last edited 2 years ago)

They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided they would gracefully offer it for free.

[–] homesweethomeMrL@lemmy.world 17 points 2 years ago (1 children)

Has anyone checked on the sister?

OpenAI went from interesting to horrifying so quickly, I just can't look.

[–] LWD@lemm.ee 14 points 2 years ago (1 children)

The only difference between a beloved tech mogul and a deservedly hated one is time.

[–] AVincentInSpace@pawb.social 1 points 2 years ago

People still like Steve Jobs.

Ugh. There's time yet.

[–] Nei@lemmy.world 13 points 2 years ago (2 children)

OpenAI went from an interesting and insightful company to a horrible, weird one in very little time.

[–] TurtleJoe@lemmy.world 5 points 2 years ago

People only thought it was the former before they actually learned anything about them. They were always this way.

[–] AVincentInSpace@pawb.social 4 points 2 years ago

Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?

Hah, good times.

[–] mriormro@lemmy.world 11 points 2 years ago

I’m tired of dopey white men making the world so much worse.

[–] los_chill@programming.dev 7 points 2 years ago

Agreed, but also one doomsday-prepping capitalist shouldn't be making AI decisions. If only there was some kind of board that would provide safeguards that ensured AI was developed for the benefit of humanity rather than profit...

[–] cosmicrookie@lemmy.world 6 points 2 years ago

AI shouldn't make any decisions

[–] iAvicenna@lemmy.world 5 points 2 years ago* (last edited 2 years ago)

I am sure Zuckerberg is also claiming that they are not making any life-or-death decisions. Let's see you in a couple of years when the military gets involved with your shit. Oh wait, they already did, but I guess they will just use AI to improve soldiers' canteen experience.

[–] nymwit@lemm.ee 4 points 2 years ago

So just like shitty biased algorithms shouldn't be making life changing decisions on folks' employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn't the only "life-or-death" choice that will be (is!) automated.

[–] TheFriar@lemm.ee 4 points 2 years ago* (last edited 2 years ago) (1 children)

Ummm…no fucking shit. Who was thinking that was a good idea?

[–] sus@programming.dev 7 points 2 years ago

probably about half of the executives this guy talks to

[–] chemicalwonka@discuss.tchncs.de 4 points 2 years ago

This is exactly what AI will do in the near future (and it's not dystopia).

[–] captainastronaut@seattlelunarsociety.org 3 points 2 years ago (16 children)

But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?

Too little too late, Sam. 

[–] autotldr@lemmings.world 2 points 2 years ago

This is the best summary I could come up with:


ChatGPT is one of several generative AI systems that can create content in response to user prompts and which experts say could transform the global economy.

But there are also dystopian fears that AI could destroy humanity or, at least, lead to widespread job losses.

AI is a major focus of this year’s gathering in Davos, with multiple sessions exploring the impact of the technology on society, jobs and the broader economy.

In a report Sunday, the International Monetary Fund predicted that AI will affect almost 40% of jobs around the world, “replacing some and complementing others,” but potentially worsening income inequality overall.

Speaking on the same panel as Altman, moderated by CNN’s Fareed Zakaria, Salesforce CEO Marc Benioff said AI was not at a point of replacing human beings but rather augmenting them.

As an example, Benioff cited a Gucci call center in Milan that saw revenue and productivity surge after workers started using Salesforce’s AI software in their interactions with customers.



[–] OutrageousUmpire@lemmy.world 2 points 2 years ago (1 children)

Fair enough. I do think AI will become a valuable tool for doctors, etc., who do make those decisions

[–] cosmicrookie@lemmy.world 9 points 2 years ago

Using AI to inform a decision is different from letting it make the decision

[–] northendtrooper@lemmy.ca 2 points 2 years ago

And yet it persuades people to let it choose for them.

[–] Thedogspaw@midwest.social 1 points 2 years ago

When there's no human to blame because the robot made the decision, the CEO should carry all the blame

[–] TimeSquirrel@kbin.social 1 points 2 years ago* (last edited 2 years ago)

We've been putting our lives in the hands of automated, programmed decisions for decades now if y'all haven't noticed. The traffic light that keeps you from getting T-boned. The autopilot that keeps your plane straight and level and takes workload off the pilots. The scissor lift that prevents you from raising the platform if it's too tilted. The airbag making a nanosecond-level decision on whether to deploy or not. And many more.
