[-] vampatori@feddit.uk 8 points 11 months ago

Often the question marked as a duplicate isn't actually a duplicate; the person marking it as such just didn't spend the time to properly understand the question and realise how it differs. I also see lots of answers that misunderstand the question, or that try to force the asker down the answerer's own particular preference, and they get tons of votes whilst doing it.

Don't get me wrong, some questions are definitely useful - and some go above-and-beyond - but on average the quality isn't great these days and hasn't been for a while.

[-] vampatori@feddit.uk 7 points 11 months ago

Containers can be based on operating systems that are different to your computer's.

Containers utilise the host's kernel - which is why you have to jump through some hoops to run Linux containers on Windows (a VM/WSL).

That's one of the key differences between VMs and containers. VMs virtualise all the hardware, so you can have totally different guest and host operating systems; whereas because a container uses the host kernel, it must use the same kind of operating system and it accesses the host's hardware through the kernel.

The big advantage of that approach over VMs is that containers are much more lightweight and performant because they don't have a virtual kernel/hardware/etc. I find it's best to think of them as a process wrapper, kind of like chroot for a specific application - you're just giving the application you're running a box to run in - but the host OS is still doing the heavy lifting.

[-] vampatori@feddit.uk 8 points 11 months ago* (last edited 11 months ago)

As always, it depends! I'm a big fan of "the right tool for the job" and I work in many languages/platforms as the need arises.

But for my "default" where I'm building up the largest codebase, I've gone for the following:

  • TypeScript
    • Strongly-typed (ish) which makes for a nice developer experience
    • Makes refactoring much easier/less error-prone.
    • Runs on back-end (node) and front-end, so only one language, tooling, codebase, etc. for both.
  • SvelteKit
    • Svelte as a front-end reactive framework is so nice and intuitive to use, definitely the best there is around atm.
    • Its hybrid SSR/CSR is amazing, so nice to use.
    • As the back-end it's "OK" and needs a lot more work IMO, but I do like it for a lot of things - and I can choose not to use it where necessary.
  • Socket.IO
    • For any real-time/stream-based communication I use this over plain WebSockets as it adds so much and is so easy to use (see the sketch after this list).
  • PostgreSQL
    • Really solid database that I love more and more the more I use it (and I've used it a lot, for a very long time now!)
  • Docker
    • Easy to use container management system.
    • Everything is reproducible, which is great for development, testing, bug-fixing, and disaster recovery.
    • Single method to manage all services on all servers, regardless of how they're implemented.
  • Traefik
    • Reverse proxy that can be set to auto-configure based on configuration data in my docker compose files.
    • Automatic configuration takes a pain point out of deploying (and allows me to fully automate deployment).
    • Really fast, nice dashboard, lots of useful middleware.
  • Ubuntu
    • LTS releases keep things reliable.
    • Commercial support available if required.
    • Enough name recognition that it reassures clients when they ask.
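
To give a feel for the Socket.IO point above, here's a rough sketch of a typed server and client - the "chat" event and the port are made up for illustration, not from a real project:

import { Server } from "socket.io";
import { io as connect } from "socket.io-client";

// Server: relay every "chat" message to all connected clients.
const server = new Server(3000);
server.on("connection", (socket) => {
    socket.on("chat", (message: string) => {
        server.emit("chat", message);
    });
});

// Client: connect, listen, and send a message.
const client = connect("http://localhost:3000");
client.on("chat", (message: string) => console.log("received:", message));
client.emit("chat", "hello");
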
[-] vampatori@feddit.uk 5 points 1 year ago

This is a truly excellent pair of articles, brilliantly written.

It explains the problem, shows the solution iterating step by step so we start to build an intuition about it, and goes as far as most people actually need for their applications.

[-] vampatori@feddit.uk 7 points 1 year ago

Containerised everything is the future of computing and I think we'll look back on how we do things currently with horror!

But yes, I am slowly starting to use more containerised desktop applications. Server-wise, everything I deploy is now in containers.

[-] vampatori@feddit.uk 6 points 1 year ago

I think all the flexibility and distributed nature of open source is simultaneously its greatest strength and its greatest weakness. It allows us to do so much, to tailor it to our specific needs, to remix and share, and to grow communities around common goals. But at the same time, those communities so rarely come together to agree on standards, we reinvent the wheel over and over, and so we can flounder against big corporations with more clearly defined leadership. Flexibility and options seem to lead to an inability to compromise.

But also I think open source and standards have become a battleground for Big Tech, with different mega-corps looking to capitalise on their own ideas and hinder those of their competitors. Microsoft trying to push TypeScript into the ECMAScript standard, Google trying to force AMP down our throats, Apple saying fuck off to web standards/applications, the whole Snap/Flatpak/AppImage thing, WebAssembly not having access to the DOM, etc.

I think one of the great things open source does is that it effectively puts the code in people's hands, and it's up to them to get value out of it however they can. But so often now it's these mega-corps that can extract the most value from it - they can best market their offerings, collect the most data to drive the software, bring to bear the most compute power, buy up and kill any threats to their business, and ultimately tip the balance very firmly in their favour.

Open source software needs contributors, without them it's nothing - sure you can fork the codebase, but can you fork the team?

Most people do the work because they love it - it's not even that they particularly want to use the software they create; the act of creating it is what's fun and engaging for them. But I wonder if perhaps we're starting to cross a threshold where more restrictive licenses could gain popularity - to bring back some semblance of balance to the relationship between community contributors and mega-corps.

[-] vampatori@feddit.uk 5 points 1 year ago

He's pushing for a decentralised web; he's specifically focussed on personally owned data through his Solid project. But it feels like maybe this month or so could be a tipping point, so it would be great to get his input and/or for him to see how we all work away at it!

[-] vampatori@feddit.uk 7 points 1 year ago

Tim Berners-Lee would be interesting I think, given the direction he's gone into personal ownership/control of data.

[-] vampatori@feddit.uk 5 points 1 year ago

This is really interesting, I've never heard of such an approach before; clearly I need to spend more time reading up on testing methodologies. Thank you!

[-] vampatori@feddit.uk 6 points 1 year ago

I use it for basic 2D animation - overlays for videos (captions, title sequences, etc.) and animated diagrams - it works really well once you get used to it (mastering the curves editor is essential!). If you're going to composite what you do onto video outside of Blender (I use Resolve), you need to export as an image sequence in a format that supports transparency (e.g. PNG).

For more complex 2D work, Marco Bucci has an interesting three-part series here (the third part goes over animation specifically).

[-] vampatori@feddit.uk 5 points 1 year ago

In the original XCOM, my brother and I didn't realise you needed to collect and research everything. We thought it was like a horde-survival game, when in fact it could be completed. Learning this years after we started playing was one of my best gaming experiences - I came back to my parents' for the weekend just to blow my brother's mind!

[-] vampatori@feddit.uk 4 points 1 year ago

The issues with LLMs for coding are numerous - they don't produce good results in my experience, and there are plenty of articles on their flaws.

But.. they do highlight something very important that I think we as developers have been guilty of for decades.. a large chunk of what we do is busy work: the model definitions, the API to wrap the model, the endpoint to expose the model, the client to connect to the endpoint, the UI that links to the client, the server-side validation, the client-side validation, etc. On and on.. so much of it is just busy work. No wonder LLMs can offer up solutions to these things so easily - we've all been re-inventing the wheel over and over and over again.

Busy work is the worst and it played a big part in why I took a decade-long break from professional software development. But now I'm back running my own business and I'm spending significant time reducing busy work - for profit but also for my own personal enjoyment of doing the work.

I have two primary high-level goals:

  1. Maximise reuse - As much as possible should be re-usable both within and between projects.
  2. Minimise definition - I should only use the minimum definition possible to provide the desired solution.

When you look at projects with these in mind, you realise that so many "fundamentals" of software development are terrible and inherently lead to busy work.

I'll give a simple example.. let's say I have the following definition for a model of a simple blog:

User:
  id: int generate primary-key
  name: string

Post:
  id: int generate primary-key
  user_id: int foreign-key(User.id)
  title: string
  body: string

Seems fairly straightforward, we've all done this before - it could be in SQL, Prisma, etc. But there are some fundamental flaws right here:

  1. We've tightly coupled Post to User through the user_id field. That means Post is instantly far less reusable.
  2. We've forced an id scheme that might not be appropriate for different solutions - for example a blogging site with millions of bloggers with a distributed database backend may prefer bigint or even some form of UUID.
  3. This isn't true for everything, but is for things like SQL, Prisma, etc. - we've defined the model in a data-definition language that doesn't support many reusability features like importing, extending, mixins, overriding, etc.
  4. We're going to have to define this model again in multiple places.. our API that wraps the database, any clients that consume that API, any endpoints that serve that API up, the UI, the validation, and so on (see the quick illustration below).
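
To make point 4 concrete, here's roughly the kind of re-declaration I mean - the same Post shape written out again by hand for the API layer and for validation (names purely illustrative):

// The same Post shape, re-declared by hand (illustrative only).

// 1. A TypeScript interface for the API/client code.
interface Post {
    id: number;
    user_id: number;
    title: string;
    body: string;
}

// 2. A hand-written validator for incoming requests.
function validatePost(input: Partial<Post>): Post {
    if (typeof input.title !== "string" || input.title.length === 0) {
        throw new Error("title is required");
    }
    if (typeof input.body !== "string") {
        throw new Error("body is required");
    }
    return input as Post;
}

// ...and again in SQL, again in the UI form, again client-side, and so on.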

Now this is just a really simple, almost superficial example - but even then it highlights these problems.

So I'm working on a "pattern" to help solve these kinds of problems, but with a reference implementation in TypeScript. Let's look at the same example above in my reference implementation:

export const user = new Entity({
    name: "User",
    fields: [
        new NameField(),
    ],
});

export const post = new Entity({
    name: "Post",
    fields: [
        new NameField("title", { maxLength: 100 }),
        new TextField("body"),
    ],
});

export const userPosts = new ContentCreator({
    name: "UserPosts",
    author: user,
    content: post,
});

export const blogSchema = new Schema({
    relationships: [
        userPosts,
    ],
});

So there's several things to note:

  1. Entities are defined in isolation without coupling to each other.
  2. We have sane defaults, no need to specify an id field for each entity (though you can).
  3. You can't see it here because of the above, but there are abstract id field definitions: IDField and AutoIDField. It's the specific implementation of this schema where you specify the type of ID you want to use, e.g. IntField, BigIntField, UUIDField, etc.
  4. Relationships are defined separately and used to link together entities.
  5. Relationships can bestow meaning - the ContentCreator relationship just extends OneToMany, but adds meta-data from which we can infer things in our UI, authorization, etc.
  6. Fields can be extended to provide meaning and to abstract implementations - for example the NameField extends TextField, but adds meta-data so we know it's the name of this entity, and that it's unique, so we can therefore have UI that uses that for links to this entity, or use it for a slug, etc. (a rough sketch of this idea follows the list).
  7. Everything is a separately exported variable which can be imported into any project, extended, overridden, mixed in, etc.
  8. When defining a relationship, sane defaults are used so we don't need to explicitly specify the entity fields we're using to make the link, though we can if we want.
  9. We don't need to explicitly add both our entities and relationships to our schema (though we can) as we can infer the entities from the relationships.
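
To give a flavour of points 3 and 6, here's a rough sketch of how such field classes could look - this is illustrative only, not the actual implementation:

// Illustrative sketch only - not the actual implementation.
abstract class Field {
    // Free-form meta-data that the UI, authorization, code generation, etc. can inspect.
    readonly meta: Record<string, unknown> = {};
    constructor(public readonly name: string) {}
}

class TextField extends Field {
    constructor(name: string, public readonly options: { maxLength?: number } = {}) {
        super(name);
    }
}

// A NameField is just a TextField plus meta-data: it names the entity and is
// unique, so the UI can use it for links to the entity, for a slug, etc.
class NameField extends TextField {
    constructor(name = "name", options: { maxLength?: number } = {}) {
        super(name, options);
        this.meta.isName = true;
        this.meta.unique = true;
    }
}

// Abstract ID fields - the concrete type (IntField, BigIntField, UUIDField, ...)
// is only chosen in the specific implementation of the schema.
abstract class IDField extends Field {}
abstract class AutoIDField extends IDField {}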

There is another layer beyond this, where you define an Application which lets you specify code-generation components to do all the busy work for you, settings like the ID scheme you want to use, etc.
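
Purely as an illustration of that layer (the option names and generators below are made up, not the real API), an Application definition might look something like:

// Made-up names, for illustration only - not the real API.
export const blogApp = new Application({
    schema: blogSchema,
    idScheme: "uuid",                      // pick the ID strategy per application
    generators: [
        new TypeScriptTypesGenerator(),    // interfaces/classes
        new PostgresGenerator(),           // table definitions
        new RestApiGenerator(),            // endpoints + client
    ],
});

blogApp.generate("./generated");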

It's early days, I'm still refining things, and there is a ton of work yet to do - but I am now using it in anger on commercial projects and it's saving me time - generating types/interfaces/classes, database definitions, APIs, endpoints, UI components, etc.

But it's less about this specific implementation and more about the core idea - can we maximise reuse and minimise what we need to define for a given solution?

There are so many things that come off the back of it - so much config that isn't reusable (e.g. docker compose files), so many things that could be automatically determined from data (e.g. database optimisations), so many things that could be abstracted (e.g. deployment/scaling strategies).

So much busy work needs to be eliminated - and doing so would let us give LLMs a run for their money!
