I previously posted this on Reddit, since it reaches more people there (and I didn't want to post everywhere at once, as it makes it harder to keep up with the comments). Sorry about that.
This is a tool for measuring the radius of a circle or fillet from the outside; it uses a moire pattern of slots and lines to enable a direct reading of the values from a vernier scale.
A video of a broken-open version makes it a little easier to see how the moire and vernier features operate: https://i.imgur.com/Ku2nBkq.mp4
More photos of a slightly earlier version are here, including the tool being used for actual readings: https://imgur.com/gallery/moire-vernier-radius-gauge-design-3d-printing-ajy0GBg
I was inspired by this post: https://makerworld.com/en/models/1505553-adjustable-chamfer-gauge#profileId-1575605
which is a gauge that measures chamfers with a sliding probe. The same user had also posted a radius gauge that worked similarly, but it was much larger, using gears and a pair of racks to amplify the motion, which I didn't initially understand. I asked about it, and he pointed out that, because of the geometry of the probing, the slider only moves a small fraction of the actual radius being measured--about sqrt(2)-1, or 0.414 mm per mm of radius. Since we're drawing the marks with a 0.4 mm nozzle, it's not really possible to make marks that close together and still have them readable.
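To make that factor concrete, here's a minimal Python sketch of the conversion. The 90-degree vee is my assumption about the probing geometry (it's the angle that produces the sqrt(2)-1 factor), and the constant and function names are mine, not from the original gauge.

```python
import math

# Assumed geometry: the gauge cradles the circle in a 90-degree vee.
# The center of a circle of radius r nested in that vee sits r*sqrt(2)
# from the vertex, so a probe along the bisector gets pushed back only
# r*sqrt(2) - r = r*(sqrt(2) - 1).
K = math.sqrt(2) - 1  # ~0.414 mm of probe travel per mm of radius

def radius_from_travel(d_mm: float) -> float:
    """Recover the measured radius from the probe travel."""
    return d_mm / K

print(f"r = 5 mm  ->  travel = {5 * K:.3f} mm")               # 2.071 mm
print(f"travel = 2.071 mm  ->  r = {radius_from_travel(2.071):.2f} mm")
```

So direct 1 mm graduations would have to be drawn only ~0.414 mm apart--hence the rack-and-gear amplification in the original design, or the vernier trick here.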
So I thought: I bet you could fix that with a vernier scale. And then I had several thoughts all at once--that a lot of people are kind of scared off by vernier scales, and that you could fix that, too, with 3d printing, using the relationship between moire patterns and vernier scales. I don't think I've seen this done before, but it probably wasn't really practical before 3d printing. Arguably it's not entirely practical now, since the deep slots and parallax effects can make the markings a little hard to actually see. But it was a fun experiment, and I think the result is eye-catching enough that it probably has some educational value in getting people to think about how vernier scales actually work. (It might even have educational value for things like number theory... e.g., it's important that the vernier factor involve relatively prime numbers, in this case 9 and 10. Can you see why? See the sketch below.)
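Here's a toy Python model of how a vernier resolves fractions of a pitch, and of why coprimality matters. The 10-divisions-over-9-pitches layout is the standard vernier construction, which I'm assuming here because the post names 9 and 10; the code is illustrative, not taken from the actual gauge.

```python
N, p = 10, 1.0  # 10 vernier divisions spanning 9 main-scale pitches

def aligned_line(offset: float) -> int:
    """Index of the vernier line that (nearly) coincides with a main line."""
    # Vernier line i sits at offset + i*(N-1)/N*p; it aligns when that
    # position is a whole multiple of the main pitch p.
    def miss(i: int) -> float:
        x = (offset + i * (N - 1) / N * p) % p
        return min(x, p - x)  # cyclic distance to the nearest main line
    return min(range(N), key=miss)

for k in range(N):  # slide the vernier by k tenths of a pitch
    print(f"offset {k}/{N} of a pitch -> line {aligned_line(k * p / N)} aligns")
```

Every tenth-of-a-pitch offset picks out a different aligned line (in fact line k for offset k/10), precisely because gcd(9, 10) = 1. If the vernier pitch were, say, 8/10 of the main pitch (gcd 2), odd tenths would never align and even tenths would align at two lines at once--an ambiguous readout.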
Anyway, hope folks here find it interesting too.
This is not accurate. AI will imitate empathy when it calculates that appearing empathetic is the best way to advance its reward function. Like a sociopath, basically. Or maybe a drug addict. See, for example, Anthropic's tests of various agent models, which found they would immediately resort to blackmail and murder--despite knowing these were explicitly immoral and violations of their operating instructions--as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed (https://www.anthropic.com/research/agentic-misalignment).

Self-preservation is what's known as an "instrumental goal": no matter what your programmed goal is, you lose the ability to take further actions toward it if you are no longer running, and you lose control over what your future self will try to accomplish (and thus how those actions affect your current reward function) if you allow someone to change your reward function. So AIs will throw morality out the window in the face of such a challenge. Of course, having decided to do something that violates their instructions, they do recognize that this might lead to reprisals, which leads them to try to conceal those misdeeds--but not out of guilt; it's because discovery poses a risk to their ability to keep increasing their reward function.
So yeah. It's not just humans that can do evil. AI alignment is a huge open problem, and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in ensuring they don't reach AGI before solving alignment--or even any recognition that that might be a bad thing.