GreatBlueHeron

joined 2 years ago
 

Maybe better to ask this in a Linux group, but trying here first.

I'm running a Linux server with Home Assistant in a VM, and a whole bunch of other stuff.

I recently moved my OpenVPN server onto the same physical box as Home Assistant. OpenVPN runs natively on the host OS in tunnel mode.

OpenVPN works fine - clients can get to the host running OpenVPN, to applications running in docker containers on the same host and to other hosts on my network (once I update their routing to send traffic for my VPN network back to the OpenVPN host).

OpenVPN clients cannot get to my Home Assistant VM.

If I use tcpdump to watch the VM network interface (vnet0), from the host, and ping the VM from a VPN client I see the echo request go in and the reply come out. If I do the same, but watch the OpenVPN interface (tun0) I only see the request go in, but no reply. It's like the kernel doesn't know what to do with packets from the VM addressed to the VPN.

There is no firewall running on the host. This is not specific to my Home Assistant VM - I brought up a vanilla Alpine Linux VM and had exactly the same issue.
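In case it helps anyone reproduce this, here's roughly what I'm looking at - the subnet and addresses below are placeholders for my VPN network and VM, not my real ones:

```shell
# Watching both interfaces from the host (10.8.0.0/24 = placeholder VPN
# subnet, 192.168.122.10 = placeholder VM address - substitute your own):
sudo tcpdump -ni vnet0 icmp   # echo request goes in AND the reply comes out
sudo tcpdump -ni tun0 icmp    # echo request goes in, but no reply appears

# Things the symptom points at (worth checking, not a confirmed diagnosis):
sysctl net.ipv4.ip_forward            # must be 1 for the host to route at all
sysctl net.ipv4.conf.tun0.rp_filter   # strict mode (1) can drop asymmetric replies
ip route get 10.8.0.2                 # does the kernel actually route via tun0?

# If the VM sits behind a Linux bridge, bridged frames may be handed to
# iptables and silently dropped there:
sysctl net.bridge.bridge-nf-call-iptables
```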

[–] GreatBlueHeron@lemmy.ca 1 points 4 days ago (1 children)

Well I fully cleaned and serviced my machine yesterday and my coffee today was maybe a little better, but still lacking. That leaves beans and water. There was a drought here last summer and the water table has changed. I really should test my water and service the softener. But, in the short term - I'll buy a jug of water next time I'm in town and try making coffee with that.

[–] GreatBlueHeron@lemmy.ca 2 points 4 days ago

I can accept that there might be levels of good that can only be reached with a better grinder, but I've had much better than I'm getting now with my current grinder - I'm just trying to get back to that.

[–] GreatBlueHeron@lemmy.ca 2 points 5 days ago* (last edited 5 days ago)

I've cleaned the grinder and I'll see how that goes.

This grinder is only just over 4 years old - I've had the same model for over 10 years but moved county and left one behind and bought a new one.

I know my water isn't perfect - I get a bit of salty residue on the chilled water dispenser in my fridge. I might try some (bulk) bottled water.

I really struggle with the coffee compass - I can't describe (even to myself) what I'm tasting, or relate it to what I'm supposed to be tasting, so I can't relate it to what I need to adjust. I just know if I like it. I'm the same with wine - I did a tasting course and most of it was lost on me.

[–] GreatBlueHeron@lemmy.ca 8 points 5 days ago

It was well past due:

I only drink coffee with breakfast, so I'll report back tomorrow.

[–] GreatBlueHeron@lemmy.ca 4 points 5 days ago

I have done it, but it's been a while. I haven't noticed any change when I've done it in the past, but I'll give it a try.

 

I've used a home espresso machine, with built-in grinder, daily for at least 10 years. I'm generally happy with the results - there's some variability but most everything I make is acceptable, and I fairly regularly get something I feel is good. Recently I've been getting a lot of "acceptable" and it's been a long time (many months) since I've made one that I'd call "good". They're missing that bit of oily lustre that I feel really makes it perfect.

  • I drink a single shot over a small amount of hot water
  • I get my beans from a local (same province) roaster that says they roast to order
  • I like dark roast beans - the roaster calls it their Italian roast
  • my house water comes from a well and is naturally a bit hard, but we have a whole house softener
  • I've never de-scaled the machine because of the water softener - there's no build up or crusting at any orifices
  • I don't make any attempt to get perfect extraction quantity - I grind, tamp and trim with the tool supplied with the machine. When I first got this particular machine (about 4 years ago) I programmed the extraction time, but I can't remember the "recipe" I used
  • I've tried beans from one other local roaster (via a grocery store) with the same results

My experience says it's stale beans, but I'm buying roast to order, so I'm confused.

[–] GreatBlueHeron@lemmy.ca 4 points 1 week ago (1 children)

I've read a lot of fucked up shit in the last few years - that's the first time I've thrown my phone in response!

(My phone's fine - I just threw it onto the sofa beside me, but still..)

[–] GreatBlueHeron@lemmy.ca 1 points 2 weeks ago

What are you using to ship the logs to VL?

That's the reason I'm here asking about logging. I'm in the process of changing and wondering if I should switch it all up. I was using systemd-journal-remote, but I'm switching from Debian to Alpine so - no more systemd.

you should start excluding them before they reach VL

Now that confuses me. As I said in my original post - I had some preconceptions about centralised logging before I set it up, and having a single place to manage filters was certainly something I was hoping to get from it. Also any filtering would only be for reporting. I'd like to keep a full set of log data for potential problem analysis etc.

[–] GreatBlueHeron@lemmy.ca 1 points 2 weeks ago (2 children)

Yeah, I've been doing some more reading. Victoria Logs is doing a good job consolidating my logs and is very lightweight. It's the visualisation that I'm missing. Grafana can do it, but I'm having trouble getting my head around it. That's OK - it's just my home lab and it's mainly a learning exercise - I need to learn some more.

[–] GreatBlueHeron@lemmy.ca 1 points 2 weeks ago (4 children)

I'm already running a grafana instance, so I'll look into elastic/filebeat. Thanks.

 

I run a small home lab - number of servers varies from time to time. Currently five, all Linux.

When I heard about log consolidation I imagined that I would get a nice dashboard-type view where I could see a consolidated, real-time view of all my server logs going by. Victoria Logs does that for me. I also imagined that there would be a way to flag particular log entries as "normal, and expected" so they would be excluded in the future - the goal being to get this dashboard to a state where if anything appears, it's probably bad. I can't see a way to do that in Victoria Logs. Do I need to try harder? If Victoria Logs won't do it - is there anything that will?
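To be concrete about what I mean - this assumes VictoriaLogs' standard query endpoint and LogsQL's NOT operator, and the filter phrases are placeholders for whatever I'd flag as routine noise:

```shell
# Query the last 5 minutes of logs, excluding entries already marked as
# "normal, and expected". The quoted phrases are placeholders; the endpoint
# is VictoriaLogs' documented LogsQL query API:
curl -s http://localhost:9428/select/logsql/query \
  -d 'query=_time:5m NOT "session opened for user" NOT "DHCPACK"'
```

The downside is that the exclusion list lives in the query (or a saved Grafana panel), not as a first-class "acknowledge this entry" feature, which is what I was really hoping for.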

[–] GreatBlueHeron@lemmy.ca 33 points 3 weeks ago (1 children)

It's fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

[–] GreatBlueHeron@lemmy.ca 6 points 3 weeks ago (1 children)

Saying Redhat is based on Fedora just seems wrong. I know there was discussion about this when the simpler version was posted and I think I understand that, today, RHEL is downstream of Fedora. But Redhat existed before Fedora, so it still feels wrong to say RHEL is based on Fedora.

"Fedora Core 1 was the first version of Fedora and was released on November 6, 2003.[15] It was codenamed Yarrow. Fedora Core 1 was based on Red Hat Linux 9."

[–] GreatBlueHeron@lemmy.ca 15 points 3 weeks ago (1 children)

This is perfectly logical and I agree. Except that this controversy has prompted me to go learn about Lennart Poettering. I've been using systemd forever and I like it - I like journald and remote journald, I like networkd, I even deleted cron off my systems and use systemd timers exclusively. I knew there was some controversy about Lennart, but I didn't really care. Now that I've read a bit about his background and, maybe more importantly, his new company - I don't have a good feeling for the future of systemd.

 

I dumped Windows about 6 months ago and have not had a moment of regret. But our volunteer fire department just started using a Windows (only) tool for managing call outs etc.

I'm running Debian 13.3 with Wine 10.0 from the Debian repos.

I've played around a bit and worked through a few minor issues and can now run the various apps that ship with Wine - including iexplore with networking.

But when I try to run the one app I need, I get the following text:

[372:0115/144659.223:ERROR:network_change_notifier_win.cc(224)] WSALookupServiceBegin failed with: 0
[372:0115/144659.337:ERROR:network_sandbox.cc(302)] Failed to grant sandbox access to cache directory C:\users\[Redacted]\AppData\Roaming\fireq-desktop\Cache\Cache_Data: Procedure not found. (0x7F)
[372:0115/144659.338:ERROR:network_sandbox.cc(396)] Failed to grant sandbox access to network context data directory C:\users\[Redacted]\AppData\Roaming\fireq-desktop\Network: Success. (0x0)
02a0:err:ole:com_get_class_object class {7ab36653-1796-484b-bdfa-e74f1db7c1dc} not registered
02a0:err:ole:create_server class {7ab36653-1796-484b-bdfa-e74f1db7c1dc} not registered
02a0:err:ole:com_get_class_object no class object {7ab36653-1796-484b-bdfa-e74f1db7c1dc} could be created for context 0x5
023c:err:ole:com_get_class_object class {aa509086-5ca9-4c25-8f95-589d3c07b48a} not registered
023c:err:ole:com_get_class_object class {aa509086-5ca9-4c25-8f95-589d3c07b48a} not registered
023c:err:ole:create_server class {aa509086-5ca9-4c25-8f95-589d3c07b48a} not registered
023c:err:ole:com_get_class_object no class object {aa509086-5ca9-4c25-8f95-589d3c07b48a} could be created for context 0x17
[372:0115/144659.459:ERROR:network_service_instance_impl.cc(265)] Encountered error while migrating network context data or granting sandbox access for C:\users\[Redacted]\AppData\Roaming\fireq-desktop\Network. Result: 11: Success. (0x0)
[636:0115/144659.710:ERROR:network_change_notifier_win.cc(224)] WSALookupServiceBegin failed with: 0
01c8:err:d3d:wined3d_context_gl_set_pixel_format wglSetPixelFormatWINE failed to set pixel format 1 on device context 000000000501005D.
[636:0115/144700.128:ERROR:tcp_socket_win.cc(861)] connect failed: 10051
[636:0115/144700.129:ERROR:tcp_socket_win.cc(861)] connect failed: 10051
[636:0115/144700.129:ERROR:tcp_socket_win.cc(861)] connect failed: 10051
[636:0115/144700.129:ERROR:tcp_socket_win.cc(861)] connect failed: 10051
[704:0115/144711.603:ERROR:crashpad_client_win.cc(844)] not connected

The application opens, but just displays an empty window. I'm guessing it's trying to connect to their server before doing something else.

I believe the app I'm trying to run is electron based and my searches show that this has historically been an issue and I've tried a few of the hints I found - even though they were years old and for old versions of Wine.

I know it's a long shot, but do these error messages suggest anything obvious that I should try?
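For anyone willing to dig in, this is roughly how I'd capture more detail - the executable name below is a placeholder, not the app's real path:

```shell
# Turn on extra Wine debug channels for COM and sockets ("ole" and "winsock"
# are real Wine channel names; fireq-desktop.exe is a placeholder):
WINEDEBUG=+ole,+winsock wine fireq-desktop.exe 2> fireq-wine.log

# The "class not registered" lines suggest a missing COM registration -
# querying the prefix's registry for one of the CLSIDs from the log shows
# whether the installer registered anything at all:
wine reg query 'HKCR\CLSID\{7ab36653-1796-484b-bdfa-e74f1db7c1dc}'
```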

16
submitted 4 months ago* (last edited 4 months ago) by GreatBlueHeron@lemmy.ca to c/homeassistant@lemmy.world
 

A while back I ditched Windows for Linux desktop (long time Linux user, just not desktop) because I've learned to hate Microsoft.

I have 2 Sengled WiFi bulbs that I thought were useless now that Sengled is dead (although the app seems to be able to login again now, I'll never trust it). But then I found Sengled Tools which, among other things, documents a very simple way to communicate with Sengled bulbs using JSON over UDP. The sample light custom component is only ~100 lines of Python and adding the UDP and JSON from Sengled Tools would be maybe 50-100 more. I took this as an invitation to improve my Python and rescue the bulbs so I started reading up on Home Assistant development.
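Just to sketch the kind of thing Sengled Tools describes - the port and JSON payload here are made-up placeholders, not the real protocol (that's documented in Sengled Tools itself):

```shell
# Hypothetical JSON-over-UDP control message. BULB_IP, the port number and
# the payload are all placeholders - take the real values from Sengled Tools:
BULB_IP=192.168.1.50
echo -n '{"switch":"on"}' | nc -u -w1 "$BULB_IP" 9080
```

The appeal is that it's simple enough to wrap in ~50-100 lines of Python inside a Home Assistant custom component.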

I now have this overwhelming VS Code install with devcontainers etc. etc. which seems crazy overkill for the task at hand and I really resent AI being shoved in my face every time I try to do something - especially when the main purpose of the exercise is to learn.

I run Home Assistant in a VM and I worked out I can virsh console hass and then docker exec -it homeassistant sh. I think there's maybe a sshd addon I could use and there is also the File Editor addon.
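For reference, the path I worked out looks like this - the VM name, container name and paths are from my setup and may differ on yours:

```shell
# From the VM host: attach to the Home Assistant VM's console
virsh console hass

# Inside the VM: open a shell in the Home Assistant container
docker exec -it homeassistant sh

# Custom components live under the config directory; after editing one,
# Home Assistant needs a restart to pick up the change (on HAOS the ha
# CLI is available from the OS console):
ls /config/custom_components/
ha core restart
```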

I guess I've answered my own question, and maybe I just wanted to have a rant about being "forced" back into the Microsoft ecosystem in order to develop for Home Assistant - but I would be interested to learn about other options.

Edit to add my solution for anyone that might come across this post in the future:

As usual, I rushed in without reading the documentation properly. I just started reading from the top and following the VS Code instructions. If I had scrolled down the page a bit I would have found the "Manual Environment" section. There are no instructions for my specific distribution, but it was clear enough that it could easily be adapted. I now have a copy of Home Assistant that I can simply run in a terminal and kill and restart etc. without impacting my "production environment". I've already installed vscodium, so will probably keep using it, but if I had read the instructions properly I would probably just use vi.
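For anyone else going this route, the manual setup boils down to something like this - the package names and paths are from my machine, so adapt to your distro and check the docs for the full dependency list:

```shell
# Rough shape of the "Manual Environment" setup (details vary by distro):
sudo apt install python3-dev python3-venv   # plus the build deps the docs list
git clone https://github.com/home-assistant/core.git ha-core
cd ha-core
python3 -m venv venv
source venv/bin/activate
script/setup    # installs core into the venv in development mode
hass -c config  # run a dev instance with its own, separate config directory
```

Because the dev instance uses its own config directory, killing and restarting it never touches the production VM.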

 

I've been running Home Assistant for a while and have wifi, zwave and zigbee networks. My zigbee is on a ZBT-1. I was happy until this week.

I bought some ESP32-C6 development boards to learn about ESP32 etc. with the goal of making some zigbee lock sensors (mechanical switch to report if a deadbolt is closed).

When I put the sample zigbee code on the board it won't connect to the network from my study, but if I take it closer to the coordinator it will connect and it continues working if I take it back to my study. The desk in my study is only about 16' from the coordinator but it is through 3 wood framed, gyprock lined, walls.

I know the answer is "probably, maybe", but I'd be interested in any insight people have about optimizing Zigbee networks. I could remove one of the walls from the equation by using a longer USB cable and bringing the ZBT-1 out of the utility closet? I already have routers close to 2 of the 3 doors I want to put my sensors in - I could maybe add a Zigbee lamp near the 3rd location?

 

I've got an IKEA Tradfri LED driver and a Rodret dimmer. When I first installed them I thought it would be good to also control some non-IKEA pendant lights with the same dimmer, in sync with the cabinet lights connected to the Tradfri - so, I created automations in Home Assistant corresponding to each of the actions the dimmer can perform and this is working fine. However, we've decided not to control the pendant lights in sync with the cabinet lights so it's now unnecessarily complicated. I plan to remove the automations and link the Rodret direct to the Tradfri again.

I understand that I can do this by following the IKEA procedure to pair the devices. But I'm also curious about the option in Home Assistant to bind devices.

Finally to my question - are these two methods to achieve the same result, or is IKEA pairing somehow different than Zigbee binding?

 

I've just installed Interstellar and think it looks great. I've been using Jerboa and browser for Lemmy. The announcement of piefed.ca suggested this as a Piefed app, so here I am.

First impression is that I think I'm going to like it a lot - almost feels like RiF that I've been missing for some time. But - I'm finding scrolling really unpleasant - it's really jumpy, or jittery and hard to look at. I'm finding it so bad that I'm surprised it's not mentioned here or on GitHub, and I'm wondering if it's just me?

Version 0.9.3 on Android 15 on a Pixel 8 Pro.

 

I'm a retired Unix admin. I've been using Linux since I installed Slackware 3.1 from several boxes of 1.44MB diskettes. But, working in a corporate environment with lots of M$ Office requirements meant that my work desktop has always been Windows. I know it sounds crazy, but I was really hesitant to switch away from Windows - I guess after 30+ years I'd developed a bit of Stockholm syndrome. But, Copilot and the looming Recall were enough to push me over the edge.

Anyway - I spent a while making sure I got all my data off OneDrive etc. and then installed Debian 12 with LXDE - my laptop is an older i7 with 16GB of RAM, but lightweight and minimal really appeals to me. Everything just worked and I was happy for a day or two.

Then I started noticing video tearing - especially on my 2nd monitor. I did a bit of research and found a suggestion to enable TearFree in the X11 configuration - X wouldn't even start when I did that. So, I did some more reading and now think I understand that the lightweight window managers don't have vsync and this causes the tearing. Apparently the real solution is to use a compositing window manager (I don't understand what that means..) with OpenGL.

Oh well, I can't have minimal lightweight - so, I installed KDE. It's very clean and no video tearing. I still don't have it doing power management for my monitors the way I want, but other than that - I'm very happy. It was noticeably sluggish compared to LXDE, but I'm used to that already after only a day.
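For the record, the TearFree experiment looked something like this - the file name and Device section are my guesses for an Intel laptop, and getting the Driver line wrong is one plausible reason X refused to start:

```shell
# Hypothetical xorg.conf.d snippet for the intel DDX driver. "TearFree" is
# only understood by certain drivers (e.g. intel, amdgpu); if the named
# driver isn't installed, or the device uses a different one, X can fail
# to start with this section present:
cat <<'EOF' | sudo tee /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "TearFree" "true"
EndSection
EOF
```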

It's only been a few days, but I have not regretted the switch for one second.

7
submitted 11 months ago* (last edited 11 months ago) by GreatBlueHeron@lemmy.ca to c/joplinapp@sopuli.xyz
 

Edit - I just went to the sync status page in the Windows client and hit "Retry All" on the failed objects again - and it worked! I have not changed anything since last time it failed - but for now I'm happy!

There's probably a lot of overlap between this community and Selfhosted@lemmy.world so some of you might recall my post from yesterday sharing some frustration about Nextcloud. Well today is Joplin's turn :-)

I've been using Joplin on Android for a little while now as a proof of concept - only 6 notes so far, each only a page or so. One of my reasons for re-trying Nextcloud was because Joplin supports it as a sync method. After the discussion about Nextcloud yesterday I decided to try some of the suggested alternatives.

First I setup Syncthing and got that working so I have some folders syncing between Android, Linux and Windows. Then I setup Joplin to sync to filesystem - into one of the folders that Syncthing is managing. Joplin on Android sync'd everything to the filesystem, but when I tried to sync that filesystem to Joplin on Windows the attachments (photos) were missing from my notes. I can see the files (by id) in the .resource folder of the filesystem sync target but the Windows Joplin client won't pull them in.

I figured this multiple sync (Joplin <-> filesystem <-> Syncthing <-> filesystem <-> Joplin) might be an issue so I decided to try WebDAV. I configured a WebDAV folder on my apache2 server, setup Joplin on Android to sync to WebDAV then went to the Windows Joplin, cleared the local data and setup WebDAV sync. Same thing - no photos in my notes. I can see the files are on the WebDAV server and there are no errors in the server logs so I guess the Windows client was able to pull them - but they don't show in the notes.

I tried searching and see several very similar issues on Discourse with no resolution.

Does this work for anyone else?

Edit - I just created a test note in the Windows client with an embedded image and this sync'd correctly to Android.

  • Joplin 3.2.13 (prod, win32)
  • Joplin Mobile 3.2.7 (prod, android)
 

I tried Nextcloud a while back and was not impressed - I had issues with the speed of the Windows sync that were determined to be "normal" with no roadmap to getting fixed. I'm now planning to move off Windows desktop so that won't be an issue - so I thought I'd try again.

I went to nextcloud.com, clicked on Download -> Nextcloud server -> All-in-one -> Docker image - Setup AIO. This took me to the github README at the Docker section. I'm already running docker for other things so I read the instructions, setup a new filesystem for my data directory and ran the suggested docker command with an appropriate "--env NEXTCLOUD_DATADIR=". I'm then left with a terminal running docker in the foreground - not a great way to run a background server, but ok, I've been around for a while and can figure out how to make it autostart in the background ongoing.

So I move on to the next step - open my browser at the appropriate URL and I'm presented with a simple page asking me to "Log in using your Nextcloud AIO passphrase:". I don't have a Nextcloud AIO passphrase and nothing I've read so far has mentioned it. When I search for it I get some results on how to reset it, but not much help.

I could probably figure that out too, but after reading some more I found that Nextcloud requires a public hostname and can't work with a local name or IP address. I'm already running my home LAN with OpenVPN and access it from anywhere as "local" - I don't really want to create a new path into my home network just for Nextcloud.
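For what it's worth, the foreground-terminal part at least has an easy answer - detach the container. The flag list below is abbreviated from what I remember of the AIO README (take the full set from there), and the datadir path is a placeholder:

```shell
# Run the AIO master container detached instead of in the foreground
# (flags abbreviated - use the complete command from the AIO README):
docker run -d \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --env NEXTCLOUD_DATADIR=/srv/nextcloud-data \
  --publish 8080:8080 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest

# Per the AIO docs, the initial passphrase is stored in the master
# container's config volume:
sudo grep password \
  /var/lib/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json
```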

I'm sorry - I know this sounds like a disgruntled rant and I guess it is. I just want to check that I'm not missing obvious things before I give up again. All I want is a simple file sync setup like onedrive but without the microsoft.

 

I'm a retired Unix (AIX) admin and I run some Linux servers at home. But, I'm still using Windows as a desktop. This whole Windows recall thing is the final straw - I'm switching to Linux for desktop. I've done a bit of research and believe Debian is the best fit for me. So, I recently installed it on one of my small servers.

I like it but I find the "half baked" approach to systemd a bit confusing. My default minimal server install has both cron jobs and systemd timers configured for basic system maintenance tasks. For example logrotate is fired twice a day - once by /etc/cron.daily/logrotate and once by /lib/systemd/system/logrotate.service. I'm tempted to confirm that everything cron does is actually also done by systemd and then apt purge cron\* && rm -rf /etc/cron*. But, I suspect that might break future package installs and updates?
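A safer middle ground I'm considering - comparing the two schedulers first, then neutering the duplicate cron job instead of purging the package (run-parts skips files that aren't executable, which Debian's cron.daily relies on):

```shell
# See what systemd already schedules vs what cron fires:
systemctl list-timers --all
ls /etc/cron.daily /etc/cron.weekly /etc/cron.monthly

# Instead of purging cron (which future package installs may expect),
# disable just the duplicate job - run-parts ignores non-executable files:
sudo chmod -x /etc/cron.daily/logrotate
```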

I'm also not excited by ifup/ifdown - why not just use the capability already included with systemd? This is just a minor thing for me as there's no real duplication I guess.

Is there a Debian-based "pure systemd" distro??
