LiamMayfair

joined 2 years ago
[–] LiamMayfair@lemmy.sdf.org 37 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

IT guy here. The CLI is not something I'd expect the average computer user to use at all. However, for power users and professionals it's a force multiplier at the very least, and often a prerequisite.

There are several reasons for this. Firstly, IT system and server administration, whether in the cloud or on your own hardware, is often done via the CLI. It's neither common nor convenient to hook every server in a rack up to a monitor just to click on stuff. Dialling in remotely via SSH, or even a serial port, to perform bootstrapping procedures, troubleshooting and even routine management tasks is quick, easy and reliable.
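To make that concrete, here's a minimal sketch of the remote-admin workflow. The hostname is made up, and the actual `ssh` invocation is only echoed so the sketch is safe to dry-run:

```shell
# Placeholder host; in real use this is your server's SSH alias or user@host.
HOST="admin@rack-node-03"

# Run a one-off command on the remote box without an interactive session.
# A real version would execute: ssh "$HOST" "$@"
remote() {
    echo "ssh $HOST -- $*"
}

remote uptime                        # quick health check
remote journalctl -u apache2 -n 50   # pull recent service logs
```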

The other main reason is automation. If I buy 10 servers to power my website, they all need a whole bunch of software installed and configured, e.g. an Apache web server, DNS, SQL, Active Directory, AV, firewall, networking, and a host of other services. Now imagine doing all of that by hand. You don't even need to be a professional sysadmin installing server racks for a living for this to matter. Even if you just run a couple of desktops/servers/Raspberry Pis/NAS boxes at home, they'll need updating, upgrading or replacing every once in a while. Having to click your way through everything each time you need to (re)configure them gets old very quickly.

GUIs are extremely poor at providing a consistent, predictable, automatable way to do things. They force you to do almost everything manually and be present to supervise the whole thing. With the CLI you can script out pretty much any task and let it run in the background while you go do other things. I really don't see CLIs going anywhere anytime soon; if anything, it's the opposite. PowerShell was Microsoft's way of acknowledging this very fact years ago. The primitive Windows Batch scripting language wasn't cutting it for anyone, especially Windows Server users who had to painstakingly configure every install by hand through a GUI wizard.
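The automation argument is easy to sketch. Hostnames below are placeholders and the real `ssh` line is commented out so the loop is dry-run safe; a production setup would pull hosts from an inventory file or use a tool like Ansible:

```shell
# Placeholder fleet; in practice this list comes from an inventory file.
HOSTS="web01 web02 db01"

# Apply the same upgrade to every host, unattended.
upgrade_fleet() {
    for h in $HOSTS; do
        # The real call would be something like:
        #   ssh "$h" 'sudo apt-get update && sudo apt-get -y upgrade'
        echo "upgrading $h"
    done
}

upgrade_fleet
```

Once the task is a script, it runs the same way on 2 machines or 200, with nobody babysitting it.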

[–] LiamMayfair@lemmy.sdf.org 3 points 3 weeks ago (1 children)

I installed Zorin OS on two family laptops today. Hope it works out. They also run Ubuntu Cinnamon on another one, and I was amazed to see a crusty 2005 laptop, which I'd last booted in 2018 to install Debian, start up just fine for the first time in 7 years. The thing just bloody worked, no drama.

[–] LiamMayfair@lemmy.sdf.org 20 points 1 month ago

Asking for source on the internet? We don't do that round these parts

[–] LiamMayfair@lemmy.sdf.org 23 points 1 month ago

Because Signal works and Matrix doesn't.

[–] LiamMayfair@lemmy.sdf.org -2 points 2 months ago (2 children)

Given enough data, you can prove anything. Correlation doesn't imply causation folks!

[–] LiamMayfair@lemmy.sdf.org -2 points 2 months ago (4 children)

I mean sure, OP's title is somewhat clickbaity, but it's kind of true. YouTube have broken compatibility with all existing unofficial clients. It's good that yt-dlp are managing to work around it. I expect many other clients will follow suit, but some of them may be unable to install the additional dependencies and will remain broken.

Ultimately, what is the last straw that will break the camel's back? Every client/consumer will have their breaking point.

[–] LiamMayfair@lemmy.sdf.org 2 points 2 months ago (1 children)

Writing tests is the one thing I wouldn't get an LLM to write for me right now. Let me give you an example. Yesterday I came across some new unit tests someone's agentic AI had written recently. The tests were rewriting the code they were meant to be testing in the test itself, then asserting against that. I'll say that again: rather than calling out to some function or method belonging to the class/module under test, the tests were rewriting the implementation of said function inside the test. Not even a junior developer would write that nonsensical shit.

The code those unit tests were meant to be testing was LLM written too, and it was fine!

So right now, getting an LLM to write some implementation code can be ok. But for the love of god, don't let them anywhere near your tests (unless it's just to squirt out some dumb boilerplate helper functions and mocks). LLMs are very shit at thinking up good test cases right now. And even if they come up with good scenarios, they may pull these stunts on you like they did to me. Not worth the hassle.
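For illustration, here's the anti-pattern in miniature (the function and names are made up, not the actual code from that PR): the bad test re-implements the function and asserts against its own copy, so it can never catch a regression, while the good test calls the real implementation.

```shell
# Code under test: turn "Hello World" into "hello-world".
slugify() { echo "$1" | tr 'A-Z ' 'a-z-'; }

# Anti-pattern: the "test" rewrites slugify inside itself and asserts
# against its own copy -- it keeps passing even if slugify is broken.
test_slugify_bad() {
    copy=$(echo "$1" | tr 'A-Z ' 'a-z-')   # reimplementation in the test!
    [ "$copy" = "$2" ]
}

# What it should do: call the real function under test.
test_slugify_good() {
    [ "$(slugify "$1")" = "$2" ]
}

test_slugify_bad  "Hello World" "hello-world" && echo "bad test passed"
test_slugify_good "Hello World" "hello-world" && echo "good test passed"
```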

[–] LiamMayfair@lemmy.sdf.org 17 points 3 months ago

If kids have learned to run their own Minecraft private servers, hosting a VPN should be child's play... Pun maybe intended.

[–] LiamMayfair@lemmy.sdf.org 13 points 3 months ago (3 children)

I tried GPT-5 to write some code the other day and was quite unimpressed with how lazy it is. For every single thing, it needed nudging. I'm going back to Sonnet and Gemini. And even so, you're right. As it stands, LLMs are useful at refactoring and writing boilerplate and repetitive code, which does save time. But they're definitely shit at actually solving non-trivial problems in code and designing and planning implementation at a high level.

They're basically a better IntelliSense and automated refactoring tool, but I wouldn't trust them with proper software engineering tasks. All this vibe coding and especially agentic development bullshit that people (mainly uneducated users and the AI vendors themselves) are shilling these days, I'm going nowhere near.

I work in a professional software development team in a business that is pushing AI coding really hard. Many of my coworkers now routinely use agentic development tools to do most (if not all) of their work for them. And guess what: in every other PR that goes in, random features that had been built and were working get removed entirely, so then we have to do extra work to literally rebuild things that one of these AI agents ripped out. smh

[–] LiamMayfair@lemmy.sdf.org 6 points 4 months ago (1 children)

In certain translations of the anime, they're referred to as cousins. Quite the cousins they are

[–] LiamMayfair@lemmy.sdf.org 7 points 4 months ago (9 children)

This hits the nail right on the head. The point of cloud services is to take away all the overheads of building and delivering software solutions that have nothing to do with the actual business problem I'm trying to solve.

If I want to get a new product to market, I want to spend most of my time making my core product better, more marketable, more efficient. I don't want to divert time and resources to just keeping the lights on, like hiring a whole bunch of people whose only job is to provision and manage servers and IT infrastructure (or nurse a Kubernetes cluster, for that matter). Managing Kubernetes or physical tin servers is not what my business is about. All this tech infrastructure is a means to an end, not the end itself.

That's why cloud services are such a cost-efficient proposition for 98% of businesses. Hell, if I could run everything on a serverless model (not always possible or cost-effective), I'd gladly do it.

[–] LiamMayfair@lemmy.sdf.org 5 points 5 months ago* (last edited 5 months ago)

I was using Signal's "note to self" too when taking notes during talks and conferences. Taking quick pictures of the slides in context was also a key thing for me. As you say, exporting these unstructured notes into a useful notes archive is a pain, especially when there's media involved.

I caught myself doing this so often that I ended up building myself an app for this specific workflow. It's rather simple, just an MVP if you will, but it works well for me. Taking notes works exactly like Signal's "note to self" but it has some QoL stuff on top of that like separate notebooks and exporting notes and pictures to a single PDF archive. I can then import the PDF archive into Notion, which is my main notes repository. Notion can now parse PDF files and import them as regular Notion pages, which closes the loop for me rather nicely. YMMV ofc

I haven't published it to any app stores yet (might do in the future) but the source code is available here if you're technically savvy and happy to build and install it yourself.


With evidence mounting on the failure to limit global warming to 1.5C, do you think global carbon emissions will be low enough by 2050 to at least avoid the most catastrophic climate change doomsday scenarios forecast by the turn of the century?

I am somewhat hopeful most developed countries will get there but I wonder if developing countries will have the ability and inclination to buy into it as well.
