[-] BlueKey@fedia.io 82 points 2 weeks ago

Turned out that the bug occurred randomly. On the first tries I just had the "luck" that it only happened when the breakpoints were on.
Fixed it by now btw.

[-] Skullgrid@lemmy.world 91 points 2 weeks ago

bug occurred randomly.

Fixed it by now btw.

someone's not sharing the actual root cause.

[-] BlueKey@fedia.io 59 points 2 weeks ago

I'm new to Go and wanted to copy some text data from a stream into the output stream of the HTTP response. I was copying the data to and from a []byte with a single Read() and Write() call and expected everything to be copied, as the buffer is always the size of the whole data. Turns out Read() sometimes fills the whole buffer and sometimes doesn't.
Now I'm using io.Copy().

[-] dfyx@lemmy.helios42.de 51 points 2 weeks ago

Note that this isn't specific to Go. Reading from stream-like data, be it TCP connections, files or whatever, always comes with the risk that not all data is present in the local buffer yet. The vast majority of read operations return the number of bytes that could be read, and you should call them in a loop. Same for write operations, actually, if you're writing to a stream-like object, as the write buffers may be smaller than what you're trying to write.

[-] bl_r@lemmy.dbzer0.com 13 points 2 weeks ago

I've run into the same problem with an API server I wrote in Rust. I noticed this bug 5 minutes before a demo and panicked, but fixed it with a 1-second sleep. Eventually, I implemented a more permanent fix by changing the simplistic io calls to ones better designed for streams.

[-] dfyx@lemmy.helios42.de 7 points 2 weeks ago

The actual recommended solution is to just read in a loop until you have everything.

[-] xthexder@l.sw0.com 3 points 2 weeks ago

Ah yes... several years ago now I was working on a tool called Toxiproxy that (among other things) could slice up the stream chunks into many random small pieces before forwarding them along. It turned out to be very useful for testing applications for this kind of bug.

[-] dunz@feddit.nu 15 points 2 weeks ago

I had a bug like that today. A system showed a 404, but only about 50% of the time. Turns out I had two vhosts with the same name, and requests hit them roughly evenly 😃

[-] vortexsurfer@lemmy.world 2 points 2 weeks ago

Had a similar thing at work not long ago.

A newly deployed version of a component in our system was only partially working, and the failures seemed to be random. It's a distributed system, so the error could be in many places. After reading the logs for a while I realized that only some messages were coming through (via a message queue) to this component, which made no sense. The old version (on a different server) had been stopped, I had verified it myself days earlier.

Turns out that the server with the old version had been rebooted in the meantime, therefore the old component had started running again, and was listening to the same message queue! So it was fairly random which one actually received each message in the queue 😂

Problem solved by stopping the old container again and removing it completely so it wouldn't start again at the next boot.

this post was submitted on 03 Sep 2024
833 points (99.4% liked)

Programmer Humor
