Kissaki

joined 2 years ago

Uiua is a general-purpose array-oriented programming language with a focus on simplicity, beauty, and tacit code.

Uiua lets you write code that is as short as possible while remaining readable, so you can focus on problems rather than ceremony.

The language is not yet stable, as its design space is still being explored. However, it is already quite powerful and fun to use!

Uiua uses special characters for built-in functions that remind you what they do!

⚂ # Random number
⇡8 # Range up to 8
⇌ 1_2_3_4 # Reverse
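For readers unfamiliar with the glyphs, here are roughly equivalent Python expressions (illustrative only; Uiua's array semantics differ):

```python
import random

random.random()     # ⚂ — a random number
list(range(8))      # ⇡8 — range up to 8: [0, 1, 2, 3, 4, 5, 6, 7]
[1, 2, 3, 4][::-1]  # ⇌ 1_2_3_4 — reverse: [4, 3, 2, 1]
```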

cross-posted from: https://programming.dev/post/46403010

A Fibonacci sample:

⍥◡+9∩1 computes Fibonacci numbers in this language
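A sketch of one plausible reading of that tacit expression in Python (seed a pair with 1s, then repeat "add while keeping the previous value" nine times; the exact Uiua glyph semantics may differ, and the names here are invented):

```python
def fib_pair_steps(steps: int = 9) -> int:
    # Start from two 1s, then repeatedly replace the pair (a, b)
    # with (a + b, a) — each step produces the next Fibonacci number.
    a, b = 1, 1
    for _ in range(steps):
        a, b = a + b, a
    return a

print(fib_pair_steps())  # 89, the Fibonacci number after 9 additions
```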


Commenter maegul writes on the Programming community post:

I tried to go through the tutorial a year or so ago.

I can’t recall when, but there’s a point at which doing something normal/trivial in an imperative language requires all sorts of weirdness in Uiua. But they try to sell it as especially logical while to me they came off as completely in a cult.

It’s this section, IIRC: https://www.uiua.org/tutorial/More%20Argument%20Manipulation#-planet-notation-

When they declare

And there you have it! A readable syntax juggling lots of values without any names!

For

×⊃(+⊙⋅⋅∘|-⊃⋅⋅∘(×⋅⊙⋅∘)) 1 2 3 4

Which, if you can’t tell, is equivalent to

f(a,b,c,x) = (a+x)(bx-c)

With arguments 1, 2, 3, 4.
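Spelled out as a named Python function for comparison (a direct translation of the formula above, not Uiua):

```python
def f(a: int, b: int, c: int, x: int) -> int:
    # f(a, b, c, x) = (a + x) * (b * x - c)
    return (a + x) * (b * x - c)

print(f(1, 2, 3, 4))  # (1 + 4) * (2 * 4 - 3) = 5 * 5 = 25
```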

I wanted to like this, and have always wanted to learn APL or J (clear influences). But I couldn’t take them seriously after that.

[–] Kissaki@programming.dev 4 points 7 hours ago* (last edited 7 hours ago)

Data-driven grant model. There’s no perfect model for distributing OSS grants. Our approach is an open, measurable, algorithmic (but not automatic) model, […] We’re finalizing the first version of the selection model after the public launch, and its high-level description is at osendowment/model.

The fund invests all donations in a low-risk portfolio and uses only the investment income for grants, making it independent of annual budgets and market volatility. Even a modest $10M fund at this rate would generate ~$500K every year — enough for $10K grants to 50 critical open source projects.
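The quoted numbers imply roughly a 5% annual yield; a quick sanity check of the arithmetic (illustrative only, the rate is inferred, not stated):

```python
fund = 10_000_000              # "a modest $10M fund"
yield_rate = 0.05              # implied by ~$500K/year on $10M
annual_income = fund * yield_rate
grants = annual_income / 10_000  # $10K grants

print(round(annual_income))  # 500000
print(int(grants))           # 50 projects
```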

Currently standing at $700k.

Regarding the model:

We aim to focus our support on the core of open-source ecosystems — like ~1% of packages accounting for 99% of downloads and dependencies. Our model shall be a data-driven approximation of the global usage of the open-source supply chain, helping to detect its most critical but underfunded elements.

[–] Kissaki@programming.dev 5 points 1 day ago* (last edited 1 day ago) (1 children)

Screenshots of both:

| "Classic" | "New" | |


|


| | | |

Well, you can see I use dark color scheme, which apparently got lost. Make a guess how much better I like that.

It's not my full monitor width because of vertical browser tabs, but even then the horizontal distance between left nav bar and top right nav toolbar is horrendous.

The spacing is wasteful, the sizing is unnecessarily big.

It's worse in every way. Less accessible, less readable, less scannable, less overview.

I wish they would simply drop their new design draft completely.


For anyone visiting the site thinking "looks like before for me" like I did, at the top there's a link to "try out the new site".

Their blog post, research blog post, previous community feedback, feedback form.

[–] Kissaki@programming.dev 3 points 1 day ago* (last edited 1 day ago)

We onboarded our team with VS integrated Copilot.

I regularly use inline suggestions. I sometimes use the suggestions that go beyond what VS suggested before the Copilot license. I am regularly annoyed at the suggestions shifting code around, at greyed-out suggestion text sometimes being ambiguous with real grey text like commas and semicolons, and at the controls conflicting with basic cursor navigation (Ctrl+Right arrow).

I am very selective about where I use Copilot. Even for simple systematic changes, I often prefer my own editing, quick actions, or multi cursor, because they are deterministic and don't require a focused review that takes the same amount of time but with worse mental effect.

Probably more than my IDE "AI", I use AI search to get information. I have the knowledge to assess results, and know when to check sources anyway, in addition, or instead.

My biggest issue with our AI use is the code some of my colleagues produce and hand me for review, where I don't and can't know how much they themselves thought about the issue and solution at hand. A lack of description, or worse, AI-generated summaries, compound that problem.

/edit: Here is my comment on the post four months ago.

[–] Kissaki@programming.dev 10 points 1 day ago

And it's so popular! It must be good!

[–] Kissaki@programming.dev 1 points 2 days ago

Many times I've used PieFed and written a comment, some longer, some shorter, and without fail it denied posting after I had written it out, without telling me specifically why I couldn't post. Just no permission. Consequently, it never stuck with me.

[–] Kissaki@programming.dev 2 points 2 days ago* (last edited 2 days ago)

I’ve been using TortoiseGit since the beginning, but it's Windows-only.

In TortoiseGit, the Log view is my single entry point to all regular and semi-regular operations.

Occasionally, I use native git CLI to manage refs (archive old tags into a different ref path, mass-remote-delete, etc).

Originally, it was a switch from TortoiseSVN to TortoiseGit, and from then on, no other GUI or TUI met my needs and wants. I explored and tried many alternative GUIs and TUIs over the years, but none felt as intuitive, gave as much overview, or offered as many capabilities. Whenever I'm in Visual Studio and use git blame, I'm reminded that it is lacking: in its blame view you can't blame the previous version to navigate backwards through history within a code view. I can do that in TortoiseGit.

I've also tried out GitButler and jj, which are interesting in that they're different. Ultimately, they couldn't win me over for regular use when git works well enough, and additional tooling can introduce new complexities and issues when you don't make a full switch. I remember GitButler adding refs that made plain git use impractical. jj had a barrier to entry, in understanding and following its concepts and process, which I simply haven't passed yet, so I can't give a more accurate assessment.

I also explored TUIs as no-install-required fallback alternatives, but in practice, I never needed them. When I do use the console, I'm familiar enough with native git to cover my needs. On a remote shell: native git; locally: Nushell on top of native git for mass queries and operations.

[–] Kissaki@programming.dev 3 points 2 days ago* (last edited 2 days ago)

pen and paper is decentralized storage too, but the push and fetch sync protocols are a lot of work

[–] Kissaki@programming.dev 3 points 3 days ago

I expected alpha to become beta, but the download has no such label at all. Is it considered stable now?

Their news doesn't say much about dropping the label or about the new status either.

The Release 28 is our first release without the Alpha label: our development process has matured, our releases are more frequent, and our commitment to quality has never been higher.

[–] Kissaki@programming.dev 1 points 5 days ago

This doesn't seem programming-related. Am I missing something?

[–] Kissaki@programming.dev 1 points 5 days ago

When I was researching keyboards recently, I stumbled over a pro-gamer (I believe) YouTuber who was quite vocal that pretty much all gear marketed as "gaming gear" is overpriced marketing bullshit. Apparently, they had tested dozens of keyboards, mice, and headsets over the years. It certainly matched my impression from reading product tests previously.

"Gamer" chairs are racecar chairs meant to keep you from sliding sideways, not being fit for long sitting sessions on a PC. Prefer a good or decent office chair. "Gamer" headsets are worse and more expensive than other headsets. Keyboards and mice are mostly marketing. etc.

Regarding input, they made a point about physical human limitations and state like sleep and caffeine intake having much more of an effect than the hardware you use.

2022 update

So this article is quite old. There are keyboard switches now that actuate as soon as you press the key, and that can recognize lift and press without passing a fixed trigger point. If you want that kind of edge, those are the top performers right now. I'd be more interested in the technology and maybe the playful capabilities than in the performance they add.

I'm always way too thorough when researching products before buying…

 

The main reason behind this rise in latency is that systems have become more and more complex and developers often don't know or don't understand each part that can impact latency.

This website has been made to help developers and consumers better understand the latency issues and how to tackle them.

[–] Kissaki@programming.dev 6 points 1 week ago (4 children)

I thought I remembered a standardized metadata file format you can place on your website, but I can't find it.

GitHub defines FUNDING

The Brave web browser attempted something like that with Brave Rewards, but through ads, and basically collected for itself until websites actually signed up for Brave Rewards.

I remember Flattr.

[–] Kissaki@programming.dev 2 points 1 week ago

Claims that it can, but no evidence or anecdotal examples of how it worked in practice.

 

After working on my weird shooter game for 5 years, I realized I'm never going to be finishing this project. In this video I explain why I've decided to quit my game and what is next.

 

From the README:

What is KORE?

KORE is a self-hosting programming language that combines the best ideas from multiple paradigms:

| Paradigm | Inspiration | KORE Implementation |
| --- | --- | --- |
| Safety | Rust | Ownership, borrowing, no null, no data races |
| Syntax | Python | Significant whitespace, minimal ceremony |
| Metaprogramming | Lisp | Code as data, hygienic macros, DSL-friendly |
| Compile-Time | Zig | comptime execution, no separate macro language |
| Effects | Koka/Eff | Side effects tracked in the type system |
| Concurrency | Erlang | Actor model with message passing |
| UI/Components | React/JSX | Native JSX syntax, components, hot reloading |
| Targets | Universal | WASM, LLVM native, SPIR-V shaders, Rust transpilation |

Example

// Define a function with effect tracking
fn factorial(n: Int) -> Int with Pure:
    match n:
        0 => 1
        _ => n * factorial(n - 1)

// Actors for concurrency
actor Counter:
    var count: Int = 0

    on Increment(n: Int):
        count = count + n

    on GetCount -> Int:
        return count

fn main():
    let result = factorial(5)
    println("5! = " + str(result))
 

By streaming CSS updates/appends through an open HTTP connection
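The trick relies on the browser applying CSS rules as they arrive over a chunked response that is never closed. A minimal sketch of the server-side idea in Python (hypothetical selector names; a real server would send these chunks with `Transfer-Encoding: chunked` and flush after each one):

```python
import time

def css_frames():
    """Yield CSS chunks for one long-lived text/css response.
    Each later rule overrides the earlier one, so the page changes
    without JavaScript for as long as the connection stays open."""
    yield "#box { background: red; }\n"
    time.sleep(0.01)  # in a real server: wait for the next state change
    yield "#box { background: blue; }\n"

chunks = list(css_frames())
print(len(chunks))  # 2 chunks streamed over time
```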

 

Girard's insight was that communities resolve internal conflict through scapegoating: the selection of a victim to bear collective guilt, whose expulsion or destruction restores social cohesion. The scapegoat need not be guilty of the crime attributed to it; it need only be acceptable as a target.

Some dangerous individuals, however, institutionalize such ritualistic practices into what I call Casus Belli Engineering: the use of perceived failure as pretext to replace established systems with one's preferred worldview. The broken feature is the crisis that demands resolution. The foundation becomes the scapegoat, selected not for its actual guilt but for its vulnerability and the convenience of its replacement. And in most cases, this unfolds organically, driven by genuine belief in the narrative.

The danger is not the scapegoating itself; humans will scapegoat. The danger lies in those who have learned to trigger the mechanism strategically, who can reliably convert any failure into an opportunity to destroy what exists and build what they prefer.

The linked article title is “Casus Belli Engineering: The Sacrificial Architecture”, which I didn't find particularly descriptive. I used the second headline, “The Scapegoat Mechanism”. It doesn't include the architecture or strategy aspects, but serves well as a descriptor and entry point in my eyes.

 

There exists a peculiar amnesia in software engineering regarding XML. Mention it in most circles and you will receive knowing smiles, dismissive waves, the sort of patronizing acknowledgment reserved for technologies deemed passé. "Oh, XML," they say, as if the very syllables carry the weight of obsolescence. "We use JSON now. Much cleaner."

 

In our previous post “Reinventing how .NET Builds and Ships”, Matt covered our recent overhaul of .NET’s building and shipping processes. A key part of this multi-year effort, which we called Unified Build, is the introduction of the Virtual Monolithic Repository (VMR) that aggregates all the source code and infrastructure needed to build the .NET SDK. This article focuses on the monorepo itself: how it was created and the technical details of the two-way synchronization that keeps it alive.

 

Users are not allowed to create Issues directly in this repository - we ask that you create a Discussion first.

Unlike some other projects, Ghostty does not use the issue tracker for discussion or feature requests. Instead, we use GitHub discussions for that. Once a discussion reaches a point where a well-understood, actionable item is identified, it is moved to the issue tracker. This pattern makes it easier for maintainers or contributors to find issues to work on since every issue is ready to be worked on.

This approach is based on years of experience maintaining open source projects and observing that 80-90% of what users think are bugs are either misunderstandings, environmental problems, or configuration errors by the users themselves. For what's left, the majority are often feature requests (unimplemented features) and not bugs (malfunctioning features). Of the feature requests, almost all are underspecified and require more guidance by a maintainer to be worked on.

Any Discussion which clearly identifies a problem in Ghostty and can be confirmed or reproduced will be converted to an Issue by a maintainer, so as a user finding a valid problem you don't do any extra work anyway. Thank you.

 

On January 1, 2026, GitHub will reduce the price of GitHub-hosted runners by up to 39% depending on the machine type used. The free usage minute quotas will remain the same.

On March 1, 2026, GitHub will introduce a new $0.002 per minute GitHub Actions cloud platform charge that will apply to self-hosted runner usage. Any usage subject to this charge will count toward the minutes included in your plan, as explained in our GitHub Actions billing documentation.
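As a back-of-the-envelope illustration of that charge (the usage figure here is hypothetical, not from the announcement):

```python
rate_usd_per_min = 0.002  # announced per-minute charge for self-hosted runners
minutes = 50_000          # hypothetical monthly usage beyond the included quota
cost = minutes * rate_usd_per_min

print(round(cost, 2))  # 100.0 USD per month
```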

Runner usage in public repositories will remain free. There will be no changes in price structure for GitHub Enterprise Server customers.

We are increasing our investment into our self-hosted experience to ensure that we can provide autoscaling for scenarios beyond just Linux containers.

Historically, self-hosted runner customers were able to leverage much of GitHub Actions’ infrastructure and services at no cost.
