submitted 11 months ago by L4s@lemmy.world to c/technology@lemmy.world

Over half of all tech industry workers view AI as overrated::undefined

[-] Boozilla@lemmy.world 73 points 11 months ago

I think it will be the next big thing in tech (or "disruptor" if you must buzzword). But I agree it's being way over-hyped for where it is right now.

Clueless executives barely know what it is; they just know they want to get ahead of it in order to remain competitive. Marketing types reporting to those executives oversell it (because that's their job).

One of my friends is an overpaid consultant for a huge corporation, and he says they are trying to force-retrofit AI onto things where it barely makes any sense... just so they can say it's "powered by AI".

On the other hand, AI is much better at some tasks than humans. That AI skill set is going to grow over time. And the accumulation of those skills will accelerate. I think we've all been distracted, entertained, and a little bit frightened by chat-focused and image-focused AIs. However, AI as a concept is broader and deeper than just chat and images. It's going to do remarkable stuff in medicine, engineering, and design.

[-] bassomitron@lemmy.world 25 points 11 months ago

Personally, I think medicine will be the field most impacted by AI. Medicine has already been implementing AI in more and more areas, and as the tech continues to mature, I'm optimistic it will have a tremendous effect. Already there are many studies confirming AI's ability to outperform leading experts in early cancer and disease diagnoses. Just think what kind of impact that could have in developing countries once the tech is affordably scalable. Then you factor in how much it can speed up treatment research, and it's pretty exciting.

That being said, it's always wise to remain cautiously skeptical.

[-] dustyData@lemmy.world 23 points 11 months ago

AI's ability to outperform leading experts in early cancer and disease diagnoses

It does, but it also has a black box problem.

A machine learning algorithm tells you that your patient has a 95% chance of developing skin cancer on his back within the next 2 years. Ok, cool, now what? What, specifically, is telling the algorithm that? What is actionable today? Do we start oncological treatment? According to what, attacking what? Do we just ask the patient to aggressively avoid the sun and use liberal amounts of sunscreen? Do we start monthly screening, bi-monthly, yearly? For how long do we keep it up? Should we only focus on the part that shows high risk, or everywhere? Should we use the ML every single time? What is the most efficient and effective use of the tech? We know it's accurate, but is it reliable?

There are a lot of moving parts to a general medical practice, and AI has to find a proper role in it, which requires not just an abstract statistic from an ad-hoc study but a systematic approach to healthcare. Right now it doesn't have that, because the AI model can't tell its handlers what it is seeing, what it means, and how it fits into a holistic view of human health. We can't just blindly trust it when there are human lives on the line.

As you can see, this seems to relegate AI to a research role for the time being, not a diagnostic capacity yet.

[-] SkyeStarfall@lemmy.blahaj.zone 5 points 11 months ago

You are correct, and this is a big reason why "explainable AI" is becoming a bigger thing now.
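
To give a concrete flavor of what that means in practice: instead of only getting a probability out of a black-box model, you ask which inputs pushed it there. Here's a minimal sketch using scikit-learn plus the SHAP library; the "patient" features and data are entirely made up for illustration.

```python
# Minimal explainable-AI sketch: attribute a black-box prediction back to
# its input features. The dataset below is synthetic, purely for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical patient features: [age, UV exposure score, mole count]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# SHAP breaks one patient's prediction down into per-feature contributions,
# which is closer to something a clinician could actually act on.
explainer = shap.Explainer(model, X)
explanation = explainer(X[:1])
print(explanation.values)  # contribution of each feature to this prediction
```

That still doesn't answer all the "now what?" questions above, but it's the direction the field is pushing: make the model show its work instead of just handing over a number.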

[-] randon31415@lemmy.world 3 points 11 months ago

There is a very complex algorithm for determining your risk of skin cancer: Take your age ... then add a percent symbol after it. That is the probability that you have skin cancer.
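
For the literal-minded, the whole thing implemented in Python:

```python
def skin_cancer_risk(age: int) -> str:
    """Take your age, then add a percent symbol after it."""
    return f"{age}%"

print(skin_cancer_risk(45))  # "45%"
```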

[-] agent_flounder@lemmy.world 5 points 11 months ago

Like you say, "AI" isn't just LLMs and image generation. We have previously seen, for example, expert systems, speech recognition, natural language processing, computer vision, and machine learning, and now LLMs and generative art.

The earlier technologies have gone through their own hype cycles and come out the other end to be used in certain useful ways. AI has no doubt already done remarkable things in various industries. I can only imagine that will be true for LLMs some day.

I don't think we are very close to AGI yet. Current AI like LLMs and machine vision require a lot of manual training and tuning. As far as I know, few AI technologies can learn entirely on their own, and those that do are limited in scope. I'm not even sure AGI is really necessary to solve most problems. We may do AI "à la carte" for many years, and one day someone will stitch a bunch of things together, et voilà.

[-] Boozilla@lemmy.world 5 points 11 months ago

Thanks.

I'm glad you mentioned speech. Tortoise-TTS is an excellent text-to-speech AI tool that anyone can run on a GPU at home. I've been looking for a TTS tool that can generate a more natural-sounding voice for several years. Tortoise is somewhat labor-intensive to use for now, but to my ear it sounds much better than the more expensive cloud-based solutions. It can clone voices convincingly, too (which is potentially problematic).
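
If anyone wants to try it, this is roughly what generating a clip looks like with the Python API. I'm going from memory of the project's README here, so treat the exact names as approximate and check the repo before copying:

```python
# Rough sketch of using Tortoise-TTS from Python (based on the project's
# README; details may have changed, so check the repo).
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()  # downloads the models on first run; wants a decent GPU

# 'tom' is one of the voices bundled with the repo; you can also drop a few
# short WAV clips into a voices folder to clone a new voice.
voice_samples, conditioning_latents = load_voice('tom')

gen = tts.tts_with_preset(
    "Over half of all tech industry workers view AI as overrated.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset='fast',  # trades quality for speed; 'standard' sounds better
)
torchaudio.save('generated.wav', gen.squeeze(0).cpu(), 24000)
```

The "labor intensive" part is mostly the generation time and curating good reference clips for voice cloning, not the code itself.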

[-] agent_flounder@lemmy.world 2 points 11 months ago

Ooh thanks for the heads up. Last time I played with TTS was years ago using Festival, which was good for the time. Looking forward to trying Tortoise TTS.

[-] thedeadwalking4242@lemmy.world 2 points 11 months ago

Honestly, I believe AGI is currently more a compute resource problem than a software problem. A paper came out a while ago showing that individual neurons in the human brain display behavior like decently sized deep learning models. If this is true, the number of nodes required for artificial neural nets to even come close to human-like intelligence may be astronomically higher than predicted.

[-] NightAuthor@lemmy.world 3 points 11 months ago

That's my understanding as well: our brain is just an insane composition of incredibly simple mechanisms, compositions of compositions of compositions ad nauseam. We are manually simulating billions of years of evolution, using ourselves as a blueprint. We can get there... it's hard to say when, but it'll be interesting to watch.

[-] thedeadwalking4242@lemmy.world 0 points 11 months ago

Exactly. Plus, human consciousness might not be the most effective way to do it; there might be easier, less resource-intensive ways.
