ayaya

joined 2 years ago
[–] ayaya 46 points 2 years ago* (last edited 2 years ago) (5 children)

I have been using the same Arch installation for about 8 years. The initial installation/configuration is the only time consuming part. Actual day-to-day usage is extremely easy.

Maybe this is no longer the case but I previously used Ubuntu and it was actually much more annoying in comparison, especially when upgrading between major revisions or needing to track down sources/PPAs for packages not in the main repos. Or just when you want something more up-to-date than what they're currently shipping.

The rolling release model + the AUR saves so much time and prevents a lot of headaches.

[–] ayaya 6 points 2 years ago* (last edited 2 years ago) (1 children)

You don't actually need to be Durge. If you never interact with the portal Gale is in, you never meet him again and all of the content related to him is skipped. I don't think there's a way to do the same with Wyll, though; he just randomly shows up if you don't talk to him in the grove.

[–] ayaya 2 points 2 years ago* (last edited 2 years ago)

You're overestimating the power of a PS5. Its GPU is roughly equivalent to an RX 6600 XT, which can be found for ~$200. You could build a full system around it for about $600, and you'd break even in just over 2 years.

[–] ayaya 3 points 2 years ago (2 children)

Wow, that is incredibly unfortunate timing.

[–] ayaya 7 points 2 years ago* (last edited 2 years ago) (1 children)

As long as you still have access to the CLI it should be fixable. If you still want to try to get to Plasma 6, make sure you also enable the core-testing and extra-testing repos in addition to kde-unstable, as per the wiki:

"If you enable any other testing repository listed in the following subsections, you must also enable both core-testing and extra-testing."

I missed that little snippet when I first swapped over.
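
For reference, the relevant part of /etc/pacman.conf ends up looking roughly like this (repo names from the wiki; the testing/unstable sections sit above their stable counterparts, and the usual mirrorlist Include is assumed):

    # /etc/pacman.conf (excerpt) -- unstable/testing entries above the stable ones
    [kde-unstable]
    Include = /etc/pacman.d/mirrorlist

    [core-testing]
    Include = /etc/pacman.d/mirrorlist

    [core]
    Include = /etc/pacman.d/mirrorlist

    [extra-testing]
    Include = /etc/pacman.d/mirrorlist

    [extra]
    Include = /etc/pacman.d/mirrorlist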

If you do yay kf6 you can install all of the framework-related packages, which might also help fill out some missing dependencies; for me that was 1-71. You can do the same with yay plasma and then choose the ones from kde-unstable (122-194 for me), but you will have to manually avoid the ones with conflicts like plasma-framework.
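
In practice it looks something like this (the exact numbers will differ on your system, so treat them as placeholders):

    yay kf6
    # at the package-selection prompt, a range like 1-71 pulls in all the matching Frameworks packages
    yay plasma
    # here only pick the kde-unstable entries (122-194 in my case) and skip conflicting ones such as plasma-framework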

But if you want to try to revert, in theory simply removing the testing and unstable repos and doing another sudo pacman -Syu should get you back onto the older versions.
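
Roughly like this (note the extra u: plain -Syu won't actually downgrade anything, so you need -Syuu to drop back to the stable versions):

    # comment out or delete the [kde-unstable], [core-testing], and [extra-testing] sections in /etc/pacman.conf, then:
    sudo pacman -Syuu   # -uu allows installed packages to be downgraded to the versions in the stable repos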

[–] ayaya 7 points 2 years ago (9 children)

You can do sudo pacman -Syudd where the dd is for ignoring dependencies to force it through. But be aware this is basically asking for things to break. Some packages haven't been updated to the latest versions yet; for example, dolphin wouldn't launch, so I had to switch to dolphin-git from the AUR.
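
In other words (the dolphin-git swap is just what worked for me, not something you'll necessarily need):

    sudo pacman -Syudd   # -dd skips dependency checks entirely, so expect breakage
    yay -S dolphin-git   # AUR build I switched to when the repo dolphin stopped launching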

[–] ayaya 3 points 2 years ago* (last edited 2 years ago) (2 children)

I'm honestly not sure what you're trying to say here. If by "it must have access to information for reference" you mean it has access while it is running, it doesn't. Like I said, that information is only available during training. Either you're trying to make a point I'm just not getting, or you are misunderstanding how neural networks function.

[–] ayaya 7 points 2 years ago

I think you are confused; how does any of that make what I said a lie?

[–] ayaya 21 points 2 years ago* (last edited 2 years ago) (14 children)

The important distinction is that this "database" would be the training data, which it only has access to during training. It no longer has access to that data once it is actually deployed and running.

It is easy to think of it like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off of what you remember. Sure, you might remember bits and pieces, but that is not the same thing as being able to directly pull from any textbook you want at any time.

It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.

[–] ayaya 16 points 2 years ago* (last edited 2 years ago) (6 children)

And even then there is no "database" that contains portions of works. The network only stores the weights between tokens, basically groups of words and/or phrases and their likelihood of appearing next to each other. So if it is able to replicate anything verbatim it is just overfitted. Ironically, the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
