abstracting away determinism /s
This part from the article supports this sentiment:
In a pleasant surprise, reactions have been positive. Throttled organizations were "surprised and apologetic," mistaking issues for malice rather than "ignorance, unawareness."
I sneakily changed our pipeline to pull from the in-house Docker registry, and to pull from package repos only when lockfiles changed. Our CI is faster than every other team's, but nobody noticed.
So yeah, charge the companies! Please!
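A rough sketch of that pipeline change in GitHub Actions syntax (the commenter doesn't name their CI system; the registry host, image name, and npm-based stack are all made-up placeholders):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      # Base image pulled from the in-house registry, not a public one
      image: registry.internal.example/team/build:latest
    steps:
      - uses: actions/checkout@v4
      # Restore dependencies keyed on the lockfile hash: the package repo
      # is only hit when the lock actually changed (i.e. on a cache miss)
      - id: deps
        uses: actions/cache@v4
        with:
          path: node_modules
          key: deps-${{ hashFiles('package-lock.json') }}
      - if: steps.deps.outputs.cache-hit != 'true'
        run: npm ci
```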
How come this isn't an obvious improvement opportunity that materializes in other teams too, and visibly so, rather than staying "sneakily" hidden?
Isn't it better not only for performance but also for reliability?
Yeah, copying that over for the quote was quite irritating 😅
The article doesn't even mention this critical risk and history. Huge gap.
Think about whether TODOs will actually be revisited, and how you can guarantee that. What do you gain and lose by replacing warnings with TODOs?
In my projects and work projects, I advocate for:
- Warnings and TODOs are fine only during initial development, before release/stability, and in feature branches while work is in progress
- TODOs are almost never revisited, so document state and information instead of hypotheticals: document opportunities over TODOs, document known shortcomings and risks, etc.
- If there is good reason to keep and ignore warnings, document the reasoning, and we can update our CI/Jenkins quality gate to a new baseline of accepted warnings instead of suppressing them (this pretty much never happens)
.NET warning suppression attributes have a Justification property. In .editorconfig, severity changes, disabling, and suppressions can carry a comment.
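For concreteness, a sketch of both mechanisms (the rule IDs are real analyzer rules, but the class, method, and justification texts are made-up examples):

```csharp
using System.Diagnostics.CodeAnalysis;

public class ReportJob
{
    // Suppression with documented reasoning, visible right at the site
    [SuppressMessage("Performance", "CA1822:Mark members as static",
        Justification = "Instance method required by the scheduler interface.")]
    public void Run() { }
}
```

```ini
# Accepted baseline: structured-logging rewrite not worth the churn in this legacy area
dotnet_diagnostic.CA1848.severity = none
```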
If it's your own project and you know when and how you will revisit it, what do you gain by dropping the warning? A warning-free build, but then you have TODOs with the same uncertainties?
I do. But I'm very selective and critical in choosing and trusting the right ones. They're also not my only source.
I don't think YouTube reviews are any worse than other forms of reviews. There are plenty of bad text reviews out there, too.
It's a fund you donate to; they invest the money, then fund open source with the investment gains.
I posted a comment on this other post that summarizes the most relevant parts (because it wasn't clear to me either, and as a note/explanation to myself too).
Data-driven grant model. There’s no perfect model for distributing OSS grants. Our approach is an open, measurable, algorithmic (but not automatic) model, […] We’re finalizing the first version of the selection model after the public launch, and its high-level description is at osendowment/model.
The fund invests all donations in a low-risk portfolio and uses only the investment income for grants, making it independent of annual budgets and market volatility. Even a modest $10M fund at this rate would generate ~$500K every year — enough for $10K grants to 50 critical open source projects.
Currently standing at $700k.
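The quoted numbers imply roughly a 5% annual yield, which isn't stated explicitly in this excerpt; a quick check of the arithmetic:

```python
fund = 10_000_000   # "modest $10M fund" from the quote
rate = 0.05         # implied annual yield (assumption: 500K / 10M)
grant = 10_000      # per-project grant from the quote

income = fund * rate          # annual investment income
projects = int(income // grant)
print(income, projects)       # 500000.0 income, 50 projects per year
```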
Regarding the model:
We aim to focus our support on the core of open-source ecosystems — like ~1% of packages accounting for 99% of downloads and dependencies. Our model shall be a data-driven approximation of the global usage of the open-source supply chain, helping to detect its most critical but underfunded elements.
Screenshots of both, "Classic" and "New": [images not reproduced]
Well, you can see I use a dark color scheme, which apparently got lost. Guess how much I like that.
It's not my full monitor width because of vertical browser tabs, but even then the horizontal distance between the left nav bar and the top-right nav toolbar is horrendous.
The spacing is wasteful, the sizing unnecessarily big.
It's worse in every way: less accessible, less readable, less scannable, less of an overview.
I wish they would simply drop their new design draft completely.
For anyone visiting the site thinking "looks like before for me" like I did, at the top there's a link to "try out the new site".
Their blog post, research blog post, previous community feedback, feedback form.
We onboarded our team with VS integrated Copilot.
I regularly use inline suggestions. I sometimes use the suggestions that go beyond what VS offered before the Copilot license. I am regularly annoyed at suggestions displacing code, at greyed-out text sometimes being ambiguous with actual code (grey commas and semicolons), and at Ctrl conflicting with basic cursor navigation (Ctrl+Right arrow).
I am very selective about where I use Copilot. Even for simple systematic changes, I often prefer my own editing, quick actions, or multi-cursor, because they are deterministic and don't require a focused review that takes the same amount of time but has a worse mental effect.
Probably even more than my IDE "AI", I use AI search to get information. I have the knowledge to assess results, and I know when to check sources anyway, in addition, or instead.
My biggest issue with our AI use is the code some of my colleagues produce and give me for review: I don't and can't know how much they themselves thought about the issues and the solution at hand. A missing description, or worse, an AI-generated summary, makes that even harder to judge.



Glad to see them mention that dialog improvements are in proposal. If popover covers more accessibility ground than dialog, that seems like a significant, surprising, and obvious shortcoming. Surely there are technical and/or historical reasons for it, but still.
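For anyone unfamiliar with the two, a minimal sketch of what's being compared (the ids and content are placeholders; the attributes and method are per the current HTML spec):

```html
<!-- Light-dismiss overlay via the popover attribute -->
<button popovertarget="menu">Open menu</button>
<div id="menu" popover>…</div>

<!-- dialog element; modality and focus handling come from showModal() -->
<dialog id="confirm">…</dialog>
<script>
  document.getElementById("confirm").showModal();
</script>
```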