antimidas

joined 2 years ago
[–] antimidas@sopuli.xyz 1 points 4 months ago* (last edited 4 months ago) (4 children)

A couple of things I can think of that could lead to this sort of behavior:

  1. Using them in a cold environment, like -20 to -30 degrees Celsius.
  2. The headphones dropping down to an older standard for some reason, connecting e.g. via BT 4.x, or using a suboptimal codec (which would explain why one of them drains so much faster) – this should be possible to check in the Bluetooth settings.
  3. The headphones just having a bad battery – I've run into multiple headphones that simply turn off once the battery reaches 50-60 %, especially when it's cold out.

But these are just suggestions and speculation, I'm not really an expert on the subject.

[–] antimidas@sopuli.xyz 1 points 4 months ago

You mean it doesn't support WiFi 6 on MikroTik? I'm currently running OpenWrt on some bottom-shelf Asus routers and WiFi 6 works just fine.

[–] antimidas@sopuli.xyz 2 points 4 months ago

Yep, got myself a Jääkäri S after getting fed up with backpacks breaking all the time. This time it actually seems like it can stand up to the test of time and lugging two laptops around everywhere.

Whatever the brand, one thing to keep in mind is the material. Nylon (polyamide) can take much more abuse than e.g. polyester. It's also good if the bottom of the bag is as continuous as possible instead of being held together by seams. Savotta additionally reinforces the bottom so it doesn't wear as much under weight.

If you happen to be in Finland, the Jääkäri S is currently on sale at Motonet for 90 € – not sure if they ship elsewhere in Europe, though.

[–] antimidas@sopuli.xyz 2 points 5 months ago

Good link, that – I'll have to add those flags to my list of aliases.

[–] antimidas@sopuli.xyz 23 points 5 months ago (3 children)

The more frustrated you are when running git blame, the more likely the command turns out to be a mirror.

[–] antimidas@sopuli.xyz 7 points 5 months ago (2 children)

Wouldn't be surprised if he'd take the comparison as a compliment

[–] antimidas@sopuli.xyz 1 points 6 months ago

You select the active hob and set the desired timer. Usually the timer is limited to a single hob at a time; on more premium models you might be able to set one for each hob individually.

I've only ever seen this in separate cooktops though, not in stoves.

[–] antimidas@sopuli.xyz 11 points 6 months ago* (last edited 6 months ago)

Actually it is – we do use both network cells and other public beacons for navigation when GPS is unavailable. It's just not available everywhere: you need a map of cell locations, which usually means an open dataset that companies can use. As a personal anecdote, navigation works underground in e.g. the Helsinki metro. Strict triangulation isn't needed underground, since the cells are already so small – in practice the metro tunnel is filled with picocells (coverage areas less than 200 m across).

We also use the cell network to push rough satellite locations to phones via A-GPS – or more generally A-GNSS, as the same functionality is available for the other satellite systems as well. This way the phone can find the required satellites much faster, which is the main reason you get such a quick, accurate reading so soon after checking your location.

Edit: AFAIK location services also enrich this with databases of publicly visible WiFi networks, using their visibility as a beacon. Scanning for WiFi hotspots typically consumes less power than getting a GPS fix of comparable accuracy, and it's often more reliable in urban settings and at high latitudes, where the satellites sit lower on the horizon (though the constellations have enough satellites nowadays that this isn't nearly as bad as it used to be).
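To illustrate the basic idea, here's a deliberately naive sketch in Python of the simplest flavor of beacon positioning, a signal-strength-weighted centroid. All coordinates and RSSI values are made up, and real location services use far more sophisticated models:

```python
# Toy beacon positioning: estimate position as the centroid of known
# beacon locations, weighted by received signal strength (RSSI).
# All numbers below are invented for illustration.

# Beacons seen in a scan: (latitude, longitude, RSSI in dBm), with the
# locations looked up from a beacon database.
observed = [
    (60.1699, 24.9384, -55),  # strong signal, probably close by
    (60.1710, 24.9410, -75),
    (60.1688, 24.9360, -85),  # weak signal, probably farther away
]

# Convert dBm (logarithmic) to a linear weight: stronger signal, bigger weight.
weights = [10 ** (rssi / 10) for _, _, rssi in observed]
total = sum(weights)
est_lat = sum(w * lat for (lat, _, _), w in zip(observed, weights)) / total
est_lon = sum(w * lon for (_, lon, _), w in zip(observed, weights)) / total
print(f"estimated position: {est_lat:.4f}, {est_lon:.4f}")
```

With small cells like the metro picocells, even this crude approach lands you close enough, since seeing the beacon at all already pins you inside its coverage area.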

[–] antimidas@sopuli.xyz 17 points 6 months ago (1 children)

The cursed Linux alternative to this is usually putting things directly in the home folder – I used to do this until I got better. The desktop is easy to keep clean when your "desktop environment" doesn't have one by default.

Some people who used the classic Mac OS before OS X dump everything into the root of the filesystem out of habit. As a file management strategy it works about as poorly as you'd expect, albeit better than putting everything on the desktop. Not sure how common it is, but I've known multiple people who do that.

[–] antimidas@sopuli.xyz 6 points 7 months ago* (last edited 7 months ago)

And this is because audiophiles don't understand why the audio master is 96 kHz, or more often 192 kHz. You actually can easily hear the difference between 48, 96 and 192 kHz signals – just not in the way people usually think, and not after the audio has been recorded – because the main difference is latency while recording and editing. Digital sound processing works in terms of samples, and a certain number of them have to be buffered to transform the signal between the time and frequency domains. The higher the sample rate, the shorter the buffer, and if there's one thing humans are good at hearing (relatively speaking), it's latency.

Digital instruments start being usable at 96 kHz, as the latency with 256 samples buffered gets short enough that there's no distracting delay from key press to sound. 192 kHz gives you even more headroom to add effects and make the pipeline longer. A higher sample rate also makes frequency manipulation, like pitching a signal down, simpler, as there's more data to work with.
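To put numbers on that, a quick back-of-the-envelope in Python, using the 256-sample buffer from above:

```python
# Latency contributed by one audio buffer: samples / sample rate.
BUFFER_SAMPLES = 256

for rate_hz in (44_100, 48_000, 96_000, 192_000):
    latency_ms = BUFFER_SAMPLES / rate_hz * 1000
    print(f"{rate_hz:>7} Hz: {latency_ms:5.2f} ms per {BUFFER_SAMPLES}-sample buffer")
```

At 48 kHz a single buffer already adds over 5 ms; at 96 kHz it drops to about 2.7 ms, and at 192 kHz to about 1.3 ms – which is exactly the headroom that lets you chain more processing before the delay becomes distracting.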

But after the editing is done, there's absolutely no reason not to cut the published recording down to 48 or 44.1 kHz. Human ears can't hear the difference, and whatever equipment you're using will probably refuse to play anything above 25 kHz anyway, as e.g. the speaker coils aren't designed to pass higher-frequency signals. It's not like visual information, where equipment still can't match the dynamic range of the eye and we're only just reaching pixel densities where we can no longer tell DPIs apart.
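For that final downsample, something like SciPy's polyphase resampler does the job – a minimal sketch, with the 192 kHz master stood in by a generated tone:

```python
import numpy as np
from scipy.signal import resample_poly

rate_in, rate_out = 192_000, 48_000

# Stand-in for the 192 kHz master: one second of a 440 Hz sine.
t = np.arange(rate_in) / rate_in
master = np.sin(2 * np.pi * 440 * t)

# Polyphase resampling applies an anti-aliasing low-pass filter before
# decimating, so content above the new Nyquist limit (24 kHz) is removed
# instead of folding back down as audible aliasing.
release = resample_poly(master, up=1, down=rate_in // rate_out)
```

The anti-aliasing filter is the important part: naively throwing away three out of every four samples would alias any ultrasonic content from the master down into the audible band.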

[–] antimidas@sopuli.xyz 12 points 7 months ago (1 children)

There's an overabundance of competent-ish frontend developers. You most likely need to pay the devs less than someone writing it in e.g. C++, and finding people with relevant experience takes less time. You also get things like a ready-made sandbox and the ability to reuse UI components from other web services, which simplifies application development. So my guess is that this is done to save money.

Also, the more things run in an embedded browser, the more reasons M$ has to bake Edge into the OS without raising eyebrows as to why they provide it as a default (look, it's a system tool as well, not just a browser).
