[-] Veraxis@lemmy.world 3 points 1 day ago
  1. I don't know much about GNOME, sorry!

  2. The main issues to watch out for are driver issues related to certain peripherals like fingerprint scanners, SD card readers, and certain oddball wifi chipsets. Hybrid graphics with both integrated CPU graphics and a dedicated GPU can also lead to poor battery life in some systems, such as many gaming laptops. In my experience, Linux has run fine on every laptop I have tried it on, including two with hybrid Nvidia graphics. I'm also 2 for 2 on SD card readers and 3 for 3 on wifi cards, despite no prior research on my part.

  3. Arch Linux sounds like the closest to what you are describing. Or try one of the more preconfigured Arch-based distros like EndeavourOS or ArcoLinux, as the install process for vanilla Arch can be a bit involved for someone new to Linux.

  4. Usually not difficult, so long as it is not a hard dependency of some other piece of software. Running something as root in Linux is as simple as typing "sudo" before a command and entering your password (see the quick example after this list).

  5. No. Per the above, elevated user privileges are permitted as a normal part of using Linux and do not require you to hack or bypass the OS's security mechanisms like in Android or iOS.

  6. If you install more than one, then depending on your login manager, it is usually as simple as picking the DE you want from a dropdown menu on the login screen.

  7. Wayland is a display protocol, i.e. part of the plumbing underneath the GUI on Linux. It has been getting a lot of discussion lately because the Linux community is gradually shifting to it from the longstanding X11 system, which now sees little development beyond maintenance. As a new user, you probably don't need to worry about it.
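
For item 4, a quick example of what using sudo looks like in practice (the commands and file path here are just illustrations):

    # edit a system-owned config file with elevated privileges; sudo asks for your own password
    sudo nano /etc/fstab
    # or run a one-off admin command, e.g. a full system update on an Arch-based distro
    sudo pacman -Syu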

[-] Veraxis@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

A mix of factors for me: privacy concerns, settings reverting themselves after updates, and the looming threat of Windows 11 if I ever got a new PC. Add to that stuttery performance on my then 3-year-old laptop (I still use the same laptop; it is now 6 years old and still runs great with Linux), plus general bloat, driver problems, and instability.

I did not make the switch all at once, but thankfully my laptop has two NVMe slots, which made dual booting easier while I got more used to using Linux as my daily driver. Within about a year, I was booting into Windows less and less, and eventually hardly ever once I found ways to use Linux for everything I needed.

[-] Veraxis@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Daily Arch user here. The process of configuring an Arch install is perhaps not as difficult or mystical as you are imagining. I would say it is more like your first analogy: picking what off-the-shelf parts you want for a system and then putting them into a case. I think what you are describing is more like Linux from Scratch.

Installing Arch is effectively taking the steps that the installer ISO of any other distro performs for you and doing them manually with CLI commands. You partition the drive, create filesystems, install a basic set of packages, then chroot into the new system and use the package manager to install the rest of the packages you want. Aside from editing a couple of config files, there is zero coding involved. The exact steps vary from guide to guide, but a basic outline of what I do is as follows (with a rough command sketch after the list):

  • First, I download the Arch iso and write it to a USB.

  • Once I boot the install USB, I use iwctl to connect to my wifi so that I can download the packages I will need.

  • Then I use fdisk to partition the target drive with an EFI partition and a Linux filesystem partition (you might also make a swap partition at this step, but I typically just use a swap file on the root filesystem).

  • Then I use mkfs to create filesystems on the EFI and Linux partitions.

  • Then I mount the new filesystems under /mnt and use pacstrap to install the base packages, including pacman itself.

  • Then I use genfstab to generate the /etc/fstab file and arch-chroot into the new system.

  • From there, I basically use pacman to install all the packages I need, including the kernel (I use linux-zen), the DE (I use KDE), the boot manager (I use rEFInd), and everything else. There are a few cleanup steps like setting the locale and time zone, etc., but that is about it.
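
Roughly, the commands look like this (a sketch only; device names, partition numbers, and package choices are illustrative and will differ per system):

    # connect to wifi from the live ISO (interface name and SSID are placeholders)
    iwctl station wlan0 connect "MyNetwork"
    # partition the target drive: one EFI partition, one Linux filesystem partition
    fdisk /dev/nvme0n1
    # create the filesystems
    mkfs.fat -F 32 /dev/nvme0n1p1
    mkfs.ext4 /dev/nvme0n1p2
    # mount them and install the base packages
    mount /dev/nvme0n1p2 /mnt
    mkdir -p /mnt/boot
    mount /dev/nvme0n1p1 /mnt/boot
    pacstrap /mnt base linux-zen linux-firmware
    # generate fstab, then chroot in and install everything else with pacman
    genfstab -U /mnt >> /mnt/etc/fstab
    arch-chroot /mnt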

I suggest watching a guide on YouTube, which is how I learned, or installing something like ArcoLinux or EndeavourOS, which simplify the installer into a series of checkboxes for selecting which DE you want, etc.

[-] Veraxis@lemmy.world 12 points 3 days ago

I am only a few pages in, but speaking as a Linux user in the 2020s, I am skeptical of the claim that Linux in 1999 would "never, ever break down."

[-] Veraxis@lemmy.world 9 points 4 days ago

I would say that loose-leaf green tea is a big step up from teabags. A strainer basket that sits in a mug is a very low-fuss way to make loose-leaf tea, as they are easy to clean and reusable. Loose-leaf Japanese sencha was a game-changer for what I thought green tea could be.

I am personally not a huge fan of the mesh balls as tea tends to escape out from the gap along the middle, and for green tea especially, too much particulate can be bitter.

I also have a teapot with a strainer which functions much the same way, but for when I want to brew more than one cup.

[-] Veraxis@lemmy.world 13 points 1 month ago* (last edited 1 month ago)

Two old HP thin client PCs configured as 4TB SFTP file servers using vsftpd on Debian. Each one uses software RAID 1 across an internal NVMe and SATA SSD, and the two sit in separate locations with a cron job that syncs one to the other every 24 hours.
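
The nightly sync is conceptually just a cron entry that mirrors one box to the other. A minimal sketch of that kind of job (the hostname and paths are placeholders, and rsync over SSH is one way to do it rather than a record of my exact setup):

    # /etc/cron.d/mirror-sync (illustrative only)
    # every night at 03:00, mirror the data directory to the second box over SSH
    0 3 * * * root rsync -a --delete /srv/files/ backup-box:/srv/files/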

People who actually know what they are doing will probably find this silly, but I had fun and learned a lot setting it up.

[-] Veraxis@lemmy.world 16 points 1 month ago

I think you would need to provide more detail to know what you have. Does it have a model number on it anywhere?

[-] Veraxis@lemmy.world 11 points 4 months ago

The Arch installation tutorial I originally followed advised using LVM to have separate root and home logical volumes. However, after some time my root volume started getting full, so I figured I would take 10GB off of my home volume and add it to the root one. Simple, right?

It turns out that lvreduce --size 10G volgroup0/lv_home doesn't reduce the size by 10GB; it sets the absolute size to 10GB, and since I had way more than 10GB of data in that volume, it corrupted my entire system.
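
For anyone who finds this later, the difference is the sign prefix (a sketch using the same volume names as above):

    # what I actually ran: sets lv_home to an absolute size of 10G
    lvreduce --size 10G volgroup0/lv_home
    # what I meant to do: shrink lv_home BY 10G and resize the filesystem along with it
    lvreduce --size -10G --resizefs volgroup0/lv_home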

There was a warning message, but it seems my past years of Windows use still have me trained to reflexively ignore dire warnings, and so I did it anyway.

Since then I have learned enough to know that I don't really need anything LVM offers, nor do I see much benefit to separate root/home partitions for desktop Linux use, so I reinstalled without LVM the next time around. This is, to date, the first and only time I have irreparably broken my Linux install.

[-] Veraxis@lemmy.world 24 points 6 months ago

Blah blah blah blah blah...

tl;dr the author never actually gets to the point stated in the title about what the "problem" is with the direction of Linux and/or how knowing the history of UNIX would allegedly solve this. The author mainly goes off on a tangent listing out every UNIX and POSIX system in their history of UNIX.

If I understand correctly, the author sort of backs into the argument that, because certain Chinese distros like Huawei EulerOS and Inspur K/UX were UNIX-certified by the Open Group, Linux therefore is a UNIX and not merely UNIX-like. The author seems to be indirectly implying that all of Linux needs to be made fully UNIX-compatible at a native level and not just via translation layers.

Towards the end, the author argues that Wayland doesn't comply with UNIX principles because its graphics stack does not follow the "everything is a file" principle, despite having previously admitted that basically no graphics stack, X11 and macOS's included, has ever done this.

Help me out if I am missing something, but all of this fails to articulate why any of this is a "problem" which will lead to some kind of dead-end for Linux or why making all parts of Linux UNIX-compatible would be helpful or preferable. The author seems to assume out of hand that making systems UNIX-compatible is an end unto itself.

33
submitted 6 months ago by Veraxis@lemmy.world to c/spiders@lemmy.world

I apologize for the sub-optimal lighting in a slightly dark corner of my living room.

Does anyone have any thoughts on what this might be? The location is North Carolina, USA. I'm no expert, but looking around at some photos, my best guess might be a grass spider of the genus Agelenopsis. Hopefully this isn't too mundane of a spider for this community.

I would estimate the size at around 15mm. Fortunately, they were a very cooperative photography subject and did not move while I went and grabbed a ruler for the last image below.

[-] Veraxis@lemmy.world 11 points 1 year ago

Not my preference personally, but cool.

2
submitted 1 year ago* (last edited 1 year ago) by Veraxis@lemmy.world to c/linuxquestions@lemmy.zip

I have a new install of Debian 12 Bookworm, and I have added the non-free firmware sources to my sources list.

However, when I run apt search firmware-linux, I see three options:

    firmware-linux
    firmware-linux-free [installed, automatic]
    firmware-linux-nonfree

I would like to use the non-free firmware, but I am confused by that first option. What does firmware-linux include or not include that is different from firmware-linux-nonfree? Which should I install?
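
I assume the package metadata would show how the three relate to each other, e.g. something like the commands below, but I am not sure how to interpret the output:

    # show each package's description and dependencies
    apt show firmware-linux
    apt show firmware-linux-nonfree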

1

To clarify, I am not talking about making installation media. My installation USB works just fine. What I want to do is install Debian 12 Bookworm to a second USB drive to use as the permanent boot drive for a machine.

As for why I want to do this: I have a small HP EliteDesk 800 G3 mini PC. It has both an NVMe drive and a 2.5" SATA drive. I want to turn it into a file server with RAID 1 between the NVMe and SATA drives, and a USB drive in the back as the boot drive (yes, I know about the wear-out issues of running an OS from a USB drive; I am okay with this).

My procedure so far has been simple: insert both the installation USB and the target USB. The installer detects the target USB and installs the OS to it without issue. The system then reboots and I am able to log into the OS from the USB drive (performance depends a lot on the speed of the USB drive being used; I have tried a few different types and settled on an unusually fast one which performs pretty well as far as I can tell).

However, as soon as I shut down from that first boot and remove the install USB, the next time I boot, the BIOS says "boot device not found" as though it cannot detect any OS. And after that I am completely unable to boot into that drive ever again. I have gone into the BIOS and changed as many settings as I can think of, such as turning off secure boot, turning off fast boot, verifying that the boot order is set to boot from USB. Nothing so far has worked.

Does anyone have any thoughts on what could be wrong? I know booting from a USB drive is sometimes treated differently from booting from an internal drive, but I am unclear on the exact details of this.
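
If it matters, one thing I have not dug into yet is the firmware's EFI boot entries. I assume something like the command below, run from the installed system or the live USB, would show whether an entry for the target USB drive even exists, but I am not sure what to make of it:

    # list the UEFI boot entries the firmware currently knows about
    efibootmgr -v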

Any help would be much appreciated.

[-] Veraxis@lemmy.world 44 points 1 year ago

Electrical engineer here who also does hobby projects. I'm with you. I think some of the reason may be that modern GaN-type green or blue LEDs are absurdly efficient, so only a couple mA of drive current is enough to make them insanely bright.

When I build LEDs into my projects, for a simple indicator light, I might run them at maybe only a tenth of a milliamp and still get ample brightness to tell whether it is on or not in a lit room. Giving them the full rated 10 or 20mA would be blindingly bright. I also usually design most things with a hard on/off switch so they can be turned all the way off when not in use.
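
For a sense of the numbers (assuming a 5V supply rail and a typical ~2V forward drop for a green or blue LED, both of which vary by design and by part): a 0.1mA indicator current works out to a series resistor of roughly (5V - 2V) / 0.1mA = 30kΩ, versus only about 150Ω for the full 20mA.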

Of the things I own, I also have two power strips with absurdly bright LEDs that indicate the surge protection is working. They light up my whole living room when the lights are off. If I had to have something like that in my bedroom, I would probably open it up and disconnect the LEDs in some way, or maybe modify the resistor values to run them at the lowest current I could get away with.

I feel like designers have lost sight of the fact that these lights are meant to be indicators only-- i.e. a subtle indication of the status of something and not trying to light a room-- and yet they default to driving them at full blast as if they were the super dim older-gen LEDs from 20+ years ago.

