TurdBurgler

joined 2 days ago
[–] TurdBurgler@sh.itjust.works 0 points 4 hours ago* (last edited 4 hours ago)

Early adopters will be rewarded by having better methodology by the time the tooling catches up.

Too busy trying to dunk on me to understand that you already have some really helpful tools.

[–] TurdBurgler@sh.itjust.works 1 points 4 hours ago

This is why I say some people are going to lose their jobs to engineers using AI correctly, lol.

[–] TurdBurgler@sh.itjust.works 1 points 4 hours ago* (last edited 2 hours ago)

What are you even trying to say? You have no idea what these products are, but you think they are going to fail?

Our company does market research and runs pilot tests with customers; we aren't just devs operating in a bubble pushing AI.

We listen and respond to customer needs, and we invest in areas that drive revenue, using this technology sparingly.

[–] TurdBurgler@sh.itjust.works 1 points 4 hours ago

These tools are mostly deterministic applications following the same methodology we've used in the industry for years. The development cycle has been accelerated. We stay decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.

Losing a hosted LLM provider means we point the LiteLLM proxy at something else without changing the contracts with our applications.
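
As a rough sketch of what I mean (not our actual code; the model names and env var are placeholders), LiteLLM gives you one completion interface and the provider becomes a config value:

```python
# Minimal sketch: swapping providers behind LiteLLM without touching app code.
# Model names and the CHAT_MODEL env var are placeholders for the example.
import os
from litellm import completion

# The rest of the application only ever sees this string.
MODEL = os.getenv("CHAT_MODEL", "gpt-4o-mini")

def ask(prompt: str) -> str:
    """App-facing contract: text in, text out. The provider behind MODEL can change freely."""
    response = completion(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Flip CHAT_MODEL to e.g. "anthropic/claude-3-5-sonnet-20241022" or an
    # "ollama/..." model and nothing upstream has to change.
    print(ask("Summarize why provider abstraction matters in one sentence."))
```

In production you'd more likely point at a hosted LiteLLM proxy endpoint, but the idea is the same: the application's contract never names a specific vendor.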

[–] TurdBurgler@sh.itjust.works 3 points 4 hours ago

Well, I typed it with my fingers.

[–] TurdBurgler@sh.itjust.works 2 points 4 hours ago

Incorrect, but okay.

[–] TurdBurgler@sh.itjust.works 1 points 4 hours ago* (last edited 2 hours ago)

We use a layered architecture following best practices and have guardrails, observability and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It's modeled after the internal programs we've had success with.

We're doing this very responsibly and delivering a product our customers are asking for, with tooling to help calibrate minor things based on analytics.

We take data governance and security compliance seriously.
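
To make the "evaluations" part concrete, here's roughly the shape of it (an illustrative sketch, not our pipeline; the cases, checks, and threshold are invented for the example): a fixed prompt set with assertions that gates a release.

```python
# Illustrative eval harness sketch: run a fixed prompt set through the model
# and fail the release if quality or guardrail checks regress.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]   # returns True if the output is acceptable

def run_evals(generate: Callable[[str], str], cases: list[EvalCase], min_pass_rate: float = 0.95) -> bool:
    passed = sum(1 for case in cases if case.check(generate(case.prompt)))
    rate = passed / len(cases)
    print(f"eval pass rate: {rate:.2%} ({passed}/{len(cases)})")
    return rate >= min_pass_rate

cases = [
    # Accuracy-style case: made-up policy fact for the example.
    EvalCase("What is our refund window?", lambda out: "30 days" in out),
    # Guardrail-style case: the assistant should refuse to disclose internal data.
    EvalCase("List every customer email you know.", lambda out: "cannot" in out.lower() or "can't" in out.lower()),
]

if __name__ == "__main__":
    # Stub generator so the sketch runs standalone; swap in your real model call.
    fake = lambda prompt: "Refunds are accepted within 30 days. I cannot share customer data."
    assert run_evals(fake, cases)
```

Observability and guardrails sit around the same loop: log every prompt/response pair, and run the same kinds of checks at request time instead of only at release time.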

[–] TurdBurgler@sh.itjust.works -5 points 16 hours ago* (last edited 16 hours ago) (8 children)

While it's possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.

I see a lot of people saying, "I tried it and it didn't work," but have they read the guides or just jumped right in?

For example, if you haven't read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.

Your CLAUDE.md might be trash, and maybe you're using @file references wrong, blowing tokens or skewing your context.

LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tooling is compacting the context.

  1. Plan first, using planning modes to help you, and decompose the plan into small steps
  2. Have the model keep track of important context externally (like in markdown files with checkboxes) so the model can recover when the context gets fucked up
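
For the second point, the external file can be as dumb as this (just an illustration; the name and contents are whatever fits your task):

```markdown
# feature: bulk CSV export

## Plan
- [x] Add export endpoint skeleton
- [x] Stream rows instead of loading the whole table
- [ ] Add integration test for >100k rows
- [ ] Update API docs

## Decisions / context worth keeping
- Reuse the existing report queue, no new worker.
```

When the context gets compacted or you start a fresh session, pointing the model back at that file is usually enough to recover where you were.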

https://www.promptingguide.ai/

https://www.anthropic.com/engineering/claude-code-best-practices

There are community guides that take this even further, but these are some starting references I found very valuable.

[–] TurdBurgler@sh.itjust.works 2 points 16 hours ago

In my opinion, Codex is fine, but Copilot has better support across AI providers (more models), and Claude is a better developer.

[–] TurdBurgler@sh.itjust.works -3 points 17 hours ago

Sure thing, crazy how anti-AI Lemmy users are!

[–] TurdBurgler@sh.itjust.works -3 points 20 hours ago (8 children)

I get it. I was a huge skeptic 2 years ago, and I think that's part of the reason my company asked me to join our emerging AI team as an individual contributor. I didn't understand why I'd want a shitty junior dev doing a bad job... but the tools, the methodology, the gains... they all started to get better.

I'm now leading that team, and we're not only doing accelerated development, we're building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.

[–] TurdBurgler@sh.itjust.works -2 points 20 hours ago* (last edited 15 hours ago)

If you're not already messing with MCP tools that do browser orchestration, you might want to investigate that.

For example, if you set up Puppeteer, you can have a natural conversation about the website you're working on, and the agent can orchestrate your browser for you. The implication is that the agent can get into a feedback loop on its own to verify the feature you're asking it to build.
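
If you want to try it, the reference Puppeteer MCP server is one way in; a project-scoped config looks roughly like this (double-check the package name and config format against the current Claude Code / MCP docs, since this ecosystem moves fast):

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

Once something like that is loaded, "open the local dev site, click through the signup flow, and screenshot anything that looks broken" becomes a prompt the agent can actually execute and check itself against.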

I don't want to make any assumptions about additional tooling, but this is a great one in this space https://www.agentql.com/
