[-] model_tar_gz@lemmy.world 26 points 16 hours ago

… come to think of it now, I would have played ball with them if they’d just been transparent about the situation upfront. It was good interview practice and in retrospect prepared me well for the interviews at my current role. And I’m way happier with this company than I would’ve been there.

The Universe does funny things.

[-] model_tar_gz@lemmy.world 41 points 18 hours ago

I took an interview like this before. I checked the vast majority of the boxes for the technologies they used, and had experience with a specific type of model processing prior to deployment. Thought it was bagged and tagged, mine for the taking. Four rounds of interviews: two technical rounds and a system design.

Asked me some hyper-specific question about X and wanted a hyper-specific implementation of Z technology to solve the problem. The way I solved it would have worked, but it wasn’t the X they were looking for.

Turns out the guy interviewing me in the second tech round was the manager of the guy he already wanted in the role. That guy was already working for him and had founded the startup that commercialized X; they just needed to check a box for corporate saying they'd done their due diligence looking for a relevant senior engineer.

That fucking company put me through the wringer for that bullshit. 4 rounds of interviews.

Never again.

[-] model_tar_gz@lemmy.world 1 points 21 hours ago

The funny thing here is that there are many good distributions based on Ubuntu. I'm a Pop!_OS fanboy; many of my colleagues enjoy Mint. Yet almost everyone I know in the Linux world despises Ubuntu.

[-] model_tar_gz@lemmy.world 28 points 1 day ago

Fun fact: the scientist’s name was Adam, and the soup was primordial.

[-] model_tar_gz@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

If I want to wear my sunglasses while I'm watching a movie in the cinema because I have a light-sensitivity condition, and usage of the sunglasses alters my perception of the film without changing the permanent media storage of the film, am I cheating and subject to copyright infringement action?

[-] model_tar_gz@lemmy.world 2 points 3 days ago

Check your pocket for the fourth time. Might actually be there this time.

[-] model_tar_gz@lemmy.world 1 points 4 days ago

Kneel dat Ass, my Son

[-] model_tar_gz@lemmy.world 3 points 4 days ago* (last edited 4 days ago)

Reminds me of this trad classic in Lumpy Ridge @ Estes Park, CO: Magical Chrome Plated Semi-Automatic Enema Syringe

The novelty of the view is more fun than the climb/crux itself, but the overall route is enjoyable and worth doing.

[-] model_tar_gz@lemmy.world 3 points 5 days ago

It’s Hammer time!

[-] model_tar_gz@lemmy.world 3 points 6 days ago

Some say we’ll see Armageddon soon

[-] model_tar_gz@lemmy.world 87 points 1 month ago

I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.

Five leading root causes of the failure of AI projects were identified:

  • First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
  • Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
  • Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
  • Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
  • Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.

4 & 2 —> 1: If they even have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. Handling that is a specialized engineering discipline (data engineer). So is deploying and managing the infra that models need; a specialized discipline has emerged for that aspect too (ML engineer). Often they sit at the same desk.
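
For a sense of what that second discipline covers at its absolute simplest, here's a minimal serving sketch (FastAPI wrapping a saved scikit-learn model; the file name, endpoint, and feature schema are all my own illustrative assumptions, not from the article). Everything around it, like versioning, monitoring, and retraining, is where the real work lives:

```python
# Minimal serving sketch only; "model.joblib" and the flat feature
# vector are illustrative assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # a previously trained scikit-learn estimator

class Features(BaseModel):
    values: list[float]  # assumes a flat numeric feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    x = np.asarray(features.values).reshape(1, -1)
    return {"prediction": model.predict(x).tolist()}
```

Run it with `uvicorn main:app` and you have the toy version of what an ML engineer productionizes for real.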

1 & 5 —> 2: Stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don't realize is that AI can be a really awesome specialist tool that really sucks in scenarios it hasn't been trained on. Transfer learning is a thing, but it requires fine-tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of that really sharp specialization. So without a really clear understanding of what AI can do really well, and perhaps more importantly, which problems are a poor fit for AI solutions, of course they're destined to fail.
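
To make the transfer-learning point concrete, here's a minimal PyTorch sketch (the pretrained backbone and class count are illustrative assumptions, not anyone's actual project): freeze the pretrained features, swap in a new head, and train only that. Even this cheap adaptation still needs labeled in-domain data, which loops right back to cause #2.

```python
# Minimal transfer-learning sketch; num_classes=4 is an arbitrary stand-in.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on a broad task (ImageNet).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the backbone; its general features transfer across tasks.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new, narrower problem.
num_classes = 4
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters get updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training loop elided: it still requires labeled in-domain data.
```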

3 —> 3: This isn't a problem with just AI; it's all shiny new tech. Standard Gartner hype cycle stuff. Remember how they were saying we'd have crypto-refrigerators back in 2016?

[-] model_tar_gz@lemmy.world 92 points 4 months ago* (last edited 4 months ago)

No, this is incompetent management.

Senior engineers write enabling code and scaffolding, review code, and mentor juniors. They also write feature code.

Lead engineers code and lead dev teams.

Principal engineers code, and talk about tech in meetings.

Senior Principal engineers, and distinguished technologists/fellows talk about tech, and maybe sometimes code.

Good managers go to meetings and shield the engineers from the stream of exec corporate bs. Infrequently they may rope in one of the engineers in this chain to explain the decisions made along the way.

Bad managers bring engineers into these meetings frequently.

Terrible managers make the engineering decisions and push those to the engineers.

