Criminal waste of elotes, though. I'll have them if they don't want them.
HDMI -> DP might be viable, since DP is 'simpler'.
Supporting HDMI means supporting a whole pile of bullshit, however - lots of handshakes. The 'HDMI splitters' that you can get on eg. Alibaba (which also defeat HDCP) are active, powered things, and tend to get a bit expensive for high resolution / refresh.
The Steam Machine has already been closely inspected for price. Adding a fifty dollar dongle into the package is probably out of the question, especially a 'spec non-compliant' one.
I'm going to guess it would require kernel support, and certainly graphics driver support. AMD and Intel: not so difficult, just patch and recompile. NVIDIA's binary blob? Ha, fat chance. Stick it in a repo somewhere outside of the zone of copyright control, add it to your package manager, boom, done.
I bet it's not even much code. A struct or two that map the contents of the 2.1 handshake, and an extension to a switch statement that says what to do if it comes down the wire.
Python tkinter interfaces might be inefficient, slow and require labyrinthine code to set up and use, but they make up for it by being breathtakingly ugly.
Java's biggest strength is that "the worst it can be" is not all that bad, and refactoring tools are quite powerful. Yes, it's wordy and long-winded. Fine, I'd rather work with that than other people's Bash scripts, say. And just because a lot of Java developers have no concept of what memory allocation means, and are happy to pull in hundreds of megabytes of dependencies to do something trivial, then allocate fucking shitloads of RAM for no reason doesn't mean that you have to.
There is a difference in microservices between those set up by a sane architect:
- clear data flow and pragmatic service requirements
- documented responses and clear failure behaviour
- pact server set up for validation in isolation
- entire system can be set up with eg. a docker compose file for testing
- simple deployment of updates into production and easy rollback
... and the CV-driven development kind by people who want to be able to 'tick the boxes' for their next career move:
- let's use Kubernetes, those guys earn a fortune
- different pet language for every service
- only failure mode is for the whole thing to freeze
- deployment needs the whole team on standby and we'll be firefighting for days after an update
- graduate developers vibe coding every fucking thing and it getting merged on Claude's approval only
We mostly do the second kind at my work; a nice Java monolith is bliss to work on in comparison. I can see why others would have bad things to say about them too.
C++
Correct answer in under 2 seconds, ie. about a hundred times longer than the rest of the puzzles so far combined. Can't think of a more efficient way to do this; no doubt it'll come to me at two in the morning, like usual.
Part 2 implementation breaks down the 'green tiles outline' into a series of line segments and then uses the even-odd winding rule to decide whether a point is inside or outside. Tracing the outline of the potential boxes to see whether the entirety is inside allows selection of the largest. We can skip tracing any potential box that wouldn't be bigger than what we have so far, and since the tests are easy to parallelise, have done so.
#include <algorithm>
#include <atomic>
#include <boost/log/trivial.hpp>
#include <boost/unordered/concurrent_flat_map.hpp>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <fstream>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>

namespace {

struct Point {
    int x, y;

    auto operator+=(const Point &b) -> Point & {
        x += b.x;
        y += b.y;
        return *this;
    }
};

auto operator==(const Point &a, const Point &b) {
    return a.x == b.x && a.y == b.y;
}

auto operator!=(const Point &a, const Point &b) { return !(a == b); }

std::size_t hash_value(const Point &p) {
    return size_t(p.x) << 32 | size_t(p.y);
}

using Map = std::vector<Point>;

struct Line {
    Point a, b;
};

auto read() {
    auto rval = std::vector<Point>{};
    auto ih = std::ifstream{"09.txt"};
    auto line = std::string{};
    while (std::getline(ih, line)) {
        auto c1 = line.find(',');
        rval.emplace_back(
            std::stoi(line.substr(0, c1)), std::stoi(line.substr(c1 + 1))
        );
    }
    return rval;
}

auto size(const Point &a, const Point &b) -> uint64_t {
    size_t w = std::abs(a.x - b.x) + 1;
    size_t h = std::abs(a.y - b.y) + 1;
    return w * h;
}

auto part1(const std::vector<Point> &p) {
    auto maxi = std::size_t{};
    for (auto a = size_t{}; a < p.size() - 1; ++a)
        for (auto b = a + 1; b < p.size(); ++b)
            maxi = std::max(maxi, size(p[a], p[b]));
    return maxi;
}

auto make_line(const Point &a, const Point &b) {
    return Line{
        {std::min(a.x, b.x), std::min(a.y, b.y)},
        {std::max(a.x, b.x), std::max(a.y, b.y)}
    };
}

auto direction(const Point &a, const Point &b) {
    auto dx = b.x - a.x;
    auto dy = b.y - a.y;
    if (dx != 0)
        dx /= std::abs(dx);
    if (dy != 0)
        dy /= std::abs(dy);
    return Point{dx, dy};
}

// evaluates 'insideness' with the even-odd rule. On the line -> inside
auto inside_uncached(const Point &a, const std::vector<Line> &lines) -> bool {
    auto even_odd = 0;
    for (const auto &line : lines) {
        if (line.a.y == line.b.y) { // horizontal
            if (a.y != line.a.y)
                continue;
            if (a.x <= line.a.x)
                continue;
            if (a.x < line.b.x)
                return true;
            even_odd += 1;
        } else { // vertical
            if (a.x < line.a.x)
                continue;
            if (a.y < line.a.y || a.y > line.b.y)
                continue;
            if (a.x == line.a.x)
                return true;
            even_odd += 1;
        }
    }
    return even_odd % 2 == 1;
}

using Cache = boost::unordered::concurrent_flat_map<Point, bool>;
auto cache = Cache{};

auto inside(const Point &a, const std::vector<Line> &lines) -> bool {
    std::optional<bool> o;
    bool found = cache.visit(a, [&](const auto &x) { o = x.second; });
    if (found)
        return o.value();
    auto b = inside_uncached(a, lines);
    cache.insert({a, b});
    return b;
}

auto inside(const Line &a, const std::vector<Line> &lines) -> bool {
    auto dir = direction(a.a, a.b);
    auto point = a.a;
    while (point != a.b) {
        point += dir;
        if (!inside(point, lines))
            return false;
    }
    return true;
}

auto part2(std::vector<Point> p) {
    auto outline = std::vector<Line>{};
    for (auto a = size_t{}; a < p.size() - 1; ++a)
        outline.push_back(make_line(p[a], p[a + 1]));
    outline.push_back(make_line(p.back(), p.front()));
    std::sort(outline.begin(), outline.end(), [](auto &a, auto &b) {
        return a.a.x < b.a.x;
    });
    std::sort(p.begin(), p.end(), [](auto &a, auto &b) {
        if (a.x != b.x)
            return a.x < b.x;
        return a.y > b.y;
    });
    auto maximum = std::atomic_uint64_t{};
    auto update_lock = std::mutex{};
    auto column_worker = std::atomic_uint64_t{};
    auto threadpool = std::vector<std::thread>{};
    for (auto t = size_t{}; t < std::thread::hardware_concurrency(); ++t) {
        threadpool.push_back(std::thread([&]() {
            while (true) {
                auto a = column_worker++;
                if (a > p.size() - 1)
                    break;
                for (auto b = p.size() - 1; b > a; --b) {
                    auto box = size(p[a], p[b]);
                    // if it wouldn't be bigger, skip it
                    if (box <= maximum)
                        continue;
                    // quick check for the opposite corners
                    if (!inside(Point{p[a].x, p[b].y}, outline) ||
                        !inside(Point{p[b].x, p[a].y}, outline))
                        continue;
                    // trace the outline
                    auto left = make_line(p[a], Point{p[a].x, p[b].y});
                    if (!inside(left, outline))
                        continue;
                    auto top = make_line(Point{p[a].x, p[b].y}, p[b]);
                    if (!inside(top, outline))
                        continue;
                    auto right = make_line(Point{p[b].x, p[a].y}, p[b]);
                    if (!inside(right, outline))
                        continue;
                    auto bottom = make_line(p[a], Point{p[b].x, p[a].y});
                    if (!inside(bottom, outline))
                        continue;
                    // it's all on green tiles. update as the biggest so far
                    {
                        auto _ = std::lock_guard<std::mutex>(update_lock);
                        if (box > maximum)
                            maximum = box;
                    }
                }
            }
        }));
    }
    for (auto &thread : threadpool)
        thread.join();
    return uint64_t(maximum);
}

} // namespace

auto main() -> int {
    auto tiles = read();
    BOOST_LOG_TRIVIAL(info) << "Day 9: read " << tiles.size();
    BOOST_LOG_TRIVIAL(info) << "1: " << part1(tiles);
    BOOST_LOG_TRIVIAL(info) << "2: " << part2(tiles);
}
Apart from being slow, having discoverability issues, not being able to combine filters and actions so that you frequently need to fall back to shell scripts for basic functionality, it being a complete PITA to compare things between accounts / regions, advanced functionality requiring you to directly edit JSON files, things randomly failing and the error message being carefully hidden away, the poor audit trail functionality to see who-changed-what, and the fact that putting anything complex together means spinning so many plates that Terraform'ing all your infrastructure looks like the easy way; I'll have you know there's nothing wrong with the AWS Console UI.
C++
Both parts in 40 ms. As always, the real AoC with C++ is parsing the input, although having a number that changes in the puzzle messes up my test/solution runners.
Uses Boost's implementations of sets, since they're just plain faster than the STL ones.
Code
#include <algorithm>
#include <boost/log/trivial.hpp>
#include <boost/unordered/unordered_flat_set.hpp>
#include <filesystem>
#include <fstream>
#include <stdexcept>
#include <string>
#include <sys/types.h>
#include <utility>
#include <vector>

namespace {

struct Point {
    int x, y, z;
};

auto distance(const Point &a, const Point &b) {
    auto &[ax, ay, az] = a;
    auto &[bx, by, bz] = b;
    ssize_t dx = ax - bx;
    ssize_t dy = ay - by;
    ssize_t dz = az - bz;
    return size_t(dx * dx + dy * dy + dz * dz);
}

using PList = std::vector<Point>;
using DMap = std::vector<std::pair<std::pair<size_t, size_t>, size_t>>;
using DSet = boost::unordered::unordered_flat_set<size_t>;

auto read() {
    auto rval = PList{};
    auto ih = std::ifstream{"08.txt"};
    auto line = std::string{};
    while (std::getline(ih, line)) {
        auto c1 = line.find(',');
        auto c2 = line.find(',', c1 + 1);
        rval.emplace_back(
            std::stoi(line.substr(0, c1)),
            std::stoi(line.substr(c1 + 1, c2 - c1 - 1)),
            std::stoi(line.substr(c2 + 1))
        );
    }
    return rval;
}

auto dmap(const PList &t) {
    auto rval = DMap{};
    for (auto a = size_t{}; a < t.size() - 1; ++a)
        for (auto b = a + 1; b < t.size(); ++b)
            rval.emplace_back(std::make_pair(a, b), distance(t.at(a), t.at(b)));
    return rval;
}

auto find(size_t c, std::vector<DSet> &sets) {
    for (auto i = sets.begin(); i != sets.end(); ++i)
        if (i->contains(c))
            return i;
    throw std::runtime_error{"can't find"};
}

auto part1(const PList &t, size_t count, const DMap &d) {
    auto circuits = std::vector<DSet>{};
    for (auto i = size_t{}; i < t.size(); ++i) {
        auto c = DSet{t.size()};
        c.insert(i);
        circuits.push_back(std::move(c));
    }
    for (auto i = size_t{}; i < count; ++i) {
        auto a = d.at(i).first.first;
        auto b = d.at(i).first.second;
        auto ac = find(a, circuits);
        auto bc = find(b, circuits);
        if (ac == bc)
            continue;
        ac->insert(bc->begin(), bc->end());
        circuits.erase(bc);
    }
    std::sort(circuits.begin(), circuits.end(), [](auto &a, auto &b) {
        return a.size() > b.size();
    });
    auto total = size_t{1};
    for (auto i = size_t{}; i < 3; ++i)
        total *= circuits.at(i).size();
    return total;
}

auto part2(const PList &t, const DMap &d) {
    auto circuits = std::vector<DSet>{};
    for (auto i = size_t{}; i < t.size(); ++i) {
        auto c = DSet{t.size()};
        c.insert(i);
        circuits.push_back(std::move(c));
    }
    for (auto i = size_t{}; i < d.size(); ++i) {
        auto a = d.at(i).first.first;
        auto b = d.at(i).first.second;
        auto ac = find(a, circuits);
        auto bc = find(b, circuits);
        if (ac == bc)
            continue;
        ac->insert(bc->begin(), bc->end());
        circuits.erase(bc);
        if (circuits.size() == 1) {
            return ssize_t(t.at(a).x) * ssize_t(t.at(b).x);
        }
    }
    throw std::runtime_error{"never joins?"};
}

} // namespace

auto main() -> int {
    auto t = read();
    BOOST_LOG_TRIVIAL(info) << "Day 8: read " << t.size();
    auto test = std::filesystem::current_path().filename().string() ==
                std::string{"test"};
    auto d = dmap(t);
    std::sort(d.begin(), d.end(), [](auto &a, auto &b) {
        return a.second < b.second;
    });
    BOOST_LOG_TRIVIAL(info) << "1: " << part1(t, test ? 10 : 1000, d);
    BOOST_LOG_TRIVIAL(info) << "2: " << part2(t, d);
}
To an extent, yes. The oldest Windows game that I'd still want to play legitimately is probably Thief: The Dark Project, from 1998. It was made for DirectX version 6. Naturally, it will just not run on modern systems, Windows or Linux.
There are patches that bring it to DX9. It has an internal frame rate limiter set to something weird, 90 fps or thereabouts. The weakest machine I have available can push that about at 4K while apparently still in full energy saving mode.
Proton's DX9 emulation is apparently a bit rough, but most of the DX9 games you'd want to emulate are just fine with 'brute force'. There are a couple of problematic games - CS:GO, for instance - where the efficiency matters.
On account of Daniel Ek's bullshit, I cancelled Spotify this year in favour of Qobuz, and am much happier all round.
Last year's 'wrapped' was just AI generated slop. After a year of listening to metal and electronica, got a top five of stuff that I'm not sure I'd listened to at all. Who would have thought the great plagiarism machine, trained to produce the most average output from any given input, would not do well on input that diverges from the mean?
I'd probably have preferred a completely random K-Pop selection; might have been an interesting listen, try out something new.
Yeah. You know the first time you install Arch (btw), and you realise you've not installed a working network stack, so you need to reboot from the install media, remount your drives, and pacstrap the stuff you forgot on again? Takes, like, three minutes every time? Imagine that, but you've got a kernel compile as well, so it takes about half an hour.
Getting Gentoo to boot to a useful command line took me a few hours. Worthwhile learning experience: you get to understand how boot, the initramfs, init and the core utilities all work together. Compiling the kernel is actually quite easy; understanding all the options is probably a lifetime's work, but the defaults are okay. Setting some build flags and building the 'Linux core' is just a matter of watching it rattle by, doesn't take long.
Compiling a desktop environment, especially a web browser, takes hours, and at the end, you end up with a system with no noticeable performance improvements over just installing prebuilt binaries from elsewhere.
Unless you're preparing Linux for eg. embedded, and you need to account for basically every byte, or perhaps you're just super-paranoid and don't want any pre-built binaries at all, then the benefits of Gentoo aren't all that compelling.
Designed for newspaper printing, so it's exceedingly narrow, and it's also intended for ink which will 'bleed' a bit, so the fine details are too fine for high quality prints.
If you're arranging text into two-inch wide columns and then printing with a high-speed roller, then it might be a good choice.
But since that's niche and very unlikely, it's probably a bad choice. It's also a terrible on-screen font, and since the cost of rendering a web page isn't meaningfully affected by the size of the glyphs, then there's no justification for choosing it for web purposes.