[-] Vorpal@programming.dev 2 points 5 months ago* (last edited 5 months ago)

Hm, that is a fair point. Perhaps it would make sense to produce a table of checks: indicate which checks each dependency fails or passes, and then colour-code them by severity.
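
For what it's worth, here is a minimal sketch of the kind of per-dependency data model I'm imagining (all names are made up for illustration, not taken from the actual tool):

    /// How serious a failed check is considered (hypothetical scale).
    enum Severity {
        Info,
        Warning,
        Critical,
    }

    /// One row in the table: which checks a single dependency passed or failed.
    struct DependencyRow {
        name: String,
        passed: Vec<String>,             // names of checks that passed
        failed: Vec<(String, Severity)>, // failed checks, each with a severity
    }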

Some experimentation on real-world code is probably needed. I plan to try this tool on my own projects soon (after I have manually verified that your crate matches your git code (hah! Bootstrap problem); I already reviewed your code on GitHub and it seemed to do what it claims).

[-] Vorpal@programming.dev 2 points 5 months ago

Sure, but my point was that such a C ABI is a pain. There are some crates that help:

  • Rust-C++: cxx and autocxx
  • Rust-Rust: stabby or abi_stable

But without those, and with just plain bindgen, it is a pain to transfer any types that can't easily be made repr(C), and there are quite a few such types: enums with data, for example, or anything using the built-in collections (HashMap, etc.) or any other complex type you don't have direct control over yourself.
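
To make that concrete, here is a rough sketch (the names are made up for illustration) of what exposing even a plain HashMap over a C ABI tends to turn into: an opaque handle plus hand-written create/insert/free functions, with raw pointers and C-string conversion at every call:

    use std::collections::HashMap;
    use std::ffi::CStr;
    use std::os::raw::c_char;

    /// Opaque handle: the C side only ever sees a pointer to this.
    pub struct StringMap(HashMap<String, String>);

    #[no_mangle]
    pub extern "C" fn stringmap_new() -> *mut StringMap {
        Box::into_raw(Box::new(StringMap(HashMap::new())))
    }

    /// Safety: `map`, `key` and `value` must be valid and NUL-terminated.
    #[no_mangle]
    pub unsafe extern "C" fn stringmap_insert(
        map: *mut StringMap,
        key: *const c_char,
        value: *const c_char,
    ) {
        // Every call has to convert and copy the C strings by hand.
        let key = CStr::from_ptr(key).to_string_lossy().into_owned();
        let value = CStr::from_ptr(value).to_string_lossy().into_owned();
        (*map).0.insert(key, value);
    }

    #[no_mangle]
    pub unsafe extern "C" fn stringmap_free(map: *mut StringMap) {
        drop(Box::from_raw(map));
    }

And that is the easy direction: getting the map's contents back out to the C side needs yet more functions and some iteration protocol.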

So my point still stands: FFI with just bindgen/cbindgen is a pain, and the lack of a stable ABI means you need to use FFI between Rust and Rust (when loading dynamically).

In fact, FFI is a pain in most languages (apart from C itself, where it is business as usual... oh wait, that is the same as pain, never mind), since you are limited to the lowest common denominator for types except in a few specific cases.

[-] Vorpal@programming.dev 2 points 6 months ago

By native code I mean machine code. That is indeed usually produced by C or C++, though there are some other options too: notably, Rust and Go both also compile to native machine code rather than some sort of bytecode. In contrast, Java, C# and Python all compile to various bytecode representations (which are usually much higher level and thus easier to figure out).

You could of course also have hand-written assembly code, but that is rare these days outside a few specific critical functions like memcpy or media encoders/decoders.

I basically learnt as I went, googling things I needed to figure out. I was goal-oriented in this case: I wanted to figure out how some particular drivers worked on a particular laptop so I could implement the same thing on Linux. I had heard of and briefly used Ghidra before (during a capture-the-flag security competition at university). I didn't really want to use it here though, to ensure I could be fully in the clear legally, so I focused on tracing instead.

I did in fact write up what I found out. Be warned, it is a bit on the vague side and mostly focuses on the results I found. I did plan a follow-up blog post with more details on the process, as well as more things I figured out about the laptop, but never got around to it. In particular, I did eventually figure out power monitoring and how to read the fan speed. Here is a link to what I did write, if you are interested: https://vorpal.se/posts/2022/aug/21/reverse-engineering-acpi-functionality-on-a-toshiba-z830-ultrabook/

[-] Vorpal@programming.dev 2 points 6 months ago

I would go with the Arch-specific https://aur.archlinux.org/packages/aconfmgr-git instead of Ansible, since it can save the current system state as well. I use it and love it. See another reply on this post for a slightly deeper discussion of it.

[-] Vorpal@programming.dev 2 points 6 months ago

I can second this: I use aconfmgr and love it. It is especially useful for managing multiple computers (desktop, laptop, an old computer doing other things, etc.).

Though I'm currently planning to rewrite it, since it doesn't seem maintained any more and I want a multi-distro solution (because I also want to use it on my Pis, where I run Raspbian). The rewrite will be in Rust, and I'm currently deciding on what configuration language to use. I'm leaning towards rhai (because it seems easy to integrate from the Rust side, and I'm not getting too angry at the language when reading its docs). Oh, and one component for it is already written and published: https://github.com/VorpalBlade/paketkoll is a fast Rust replacement for paccheck (which is used internally by aconfmgr to find files that differ).
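
For the curious, embedding rhai really is roughly this simple on the Rust side (a minimal sketch; the actual configuration API for the rewrite is still undecided):

    use rhai::Engine;

    fn main() {
        let engine = Engine::new();
        // Evaluate a tiny script; a real config layer would register
        // custom functions and types on the engine first.
        let answer = engine.eval::<i64>("40 + 2").expect("script failed");
        println!("answer = {answer}");
    }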

[-] Vorpal@programming.dev 2 points 6 months ago

I went ahead and implemented support for filtering packages (just made a new release: v0.1.3).

Paketkoll is of course still faster. Here are two examples that show a small package (where it doesn't really matter that much) and a huge package (where it makes a massive difference). Excuse the strange paths; this is straight from the development tree.

Let's check on pacman itself, and let's include config files too (not sure if pacman even has that option?). Including config files or not doesn't make a measurable difference though:

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll --config-files=include pacman" "pacman -Qkk pacman"
Benchmark 1: ./target/release/paketkoll --config-files=include pacman
  Time (mean ± σ):      14.0 ms ±   0.2 ms    [User: 21.1 ms, System: 19.0 ms]
  Range (min … max):    13.4 ms …  14.5 ms    216 runs
 
  Warning: Ignoring non-zero exit code.
 
Benchmark 2: pacman -Qkk pacman
  Time (mean ± σ):      20.2 ms ±   0.2 ms    [User: 11.2 ms, System: 8.8 ms]
  Range (min … max):    19.9 ms …  21.1 ms    147 runs
 
Summary
  ./target/release/paketkoll --config-files=include pacman ran
    1.44 ± 0.02 times faster than pacman -Qkk pacman

Let's check on davinci-resolve as well, which is massive (5.89 GB):

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll --config-files=include pacman davinci-resolve" "pacman -Qkk pacman davinci-resolve"
Benchmark 1: ./target/release/paketkoll --config-files=include pacman davinci-resolve
  Time (mean ± σ):     770.8 ms ±   4.3 ms    [User: 2891.2 ms, System: 641.5 ms]
  Range (min … max):   765.8 ms … 778.7 ms    10 runs
 
  Warning: Ignoring non-zero exit code.
 
Benchmark 2: pacman -Qkk pacman davinci-resolve
  Time (mean ± σ):     10.589 s ±  0.018 s    [User: 9.371 s, System: 1.207 s]
  Range (min … max):   10.550 s … 10.620 s    10 runs
 
  Warning: Ignoring non-zero exit code.
 
Summary
  ./target/release/paketkoll --config-files=include pacman davinci-resolve ran
   13.74 ± 0.08 times faster than pacman -Qkk pacman davinci-resolve

What about some midsized packages (vtk 359 MB, linux 131 MB)?

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll vtk" "pacman -Qkk vtk"
Benchmark 1: ./target/release/paketkoll vtk
  Time (mean ± σ):      46.4 ms ±   0.6 ms    [User: 204.9 ms, System: 93.4 ms]
  Range (min … max):    45.7 ms …  48.8 ms    65 runs
 
Benchmark 2: pacman -Qkk vtk
  Time (mean ± σ):     702.7 ms ±   4.4 ms    [User: 590.0 ms, System: 109.9 ms]
  Range (min … max):   698.6 ms … 710.6 ms    10 runs
 
Summary
  ./target/release/paketkoll vtk ran
   15.15 ± 0.23 times faster than pacman -Qkk vtk

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll linux" "pacman -Qkk linux"
Benchmark 1: ./target/release/paketkoll linux
  Time (mean ± σ):      34.9 ms ±   0.3 ms    [User: 95.0 ms, System: 78.2 ms]
  Range (min … max):    34.2 ms …  36.4 ms    84 runs
 
Benchmark 2: pacman -Qkk linux
  Time (mean ± σ):     313.9 ms ±   0.4 ms    [User: 233.6 ms, System: 79.8 ms]
  Range (min … max):   313.4 ms … 314.5 ms    10 runs
 
Summary
  ./target/release/paketkoll linux ran
    9.00 ± 0.09 times faster than pacman -Qkk linux

For small packages where neither tool performs much work, the majority of the time is spent on fixed overheads that both tools have (loading the binary, setting up glibc internals, parsing the command line arguments, etc.). For medium sizes paketkoll pulls ahead quite rapidly, and for large sizes pacman is painfully slow.

Just for laughs I decided to check an empty meta-package (base, 0 bytes). Here pacman actually beats paketkoll, slightly. Not a useful scenario, but for full transparency I should include it:

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll base" "pacman -Qkk base"
Benchmark 1: ./target/release/paketkoll base
  Time (mean ± σ):      13.3 ms ±   0.2 ms    [User: 15.3 ms, System: 18.8 ms]
  Range (min … max):    12.8 ms …  14.1 ms    218 runs
 
Benchmark 2: pacman -Qkk base
  Time (mean ± σ):       8.8 ms ±   0.2 ms    [User: 2.8 ms, System: 5.8 ms]
  Range (min … max):     8.4 ms …  10.0 ms    327 runs
 
Summary
  pacman -Qkk base ran
    1.52 ± 0.05 times faster than ./target/release/paketkoll base

I always start a thread pool regardless of whether I have any work to do (and changing that would slow down the case I actually care about). That is the most likely cause of this slightly larger fixed overhead.
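
For context, the eager startup I mean looks roughly like this (assuming a rayon-style global pool; this is a sketch of the idea, not the actual paketkoll code):

    use rayon::ThreadPoolBuilder;

    fn main() {
        // Spin up the worker threads at startup, even if the requested
        // package turns out to need almost no work. This is the kind of
        // fixed overhead that shows up in the empty meta-package benchmark.
        ThreadPoolBuilder::new()
            .build_global()
            .expect("failed to initialise the global thread pool");

        // ... the actual file checking would be dispatched onto the pool here ...
    }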

[-] Vorpal@programming.dev 2 points 7 months ago

It all depends on what part you want to work with. But some understanding of the close-to-hardware aspects of Rust wouldn't hurt; it comes in handy for debugging and optimising.

But I say that as someone who has a background (and job) in hard real-time C++ (writing control software for industrial vehicles). We recently did our first Rust project as a test at work though! I hope there will be more. But the question then becomes how to teach 200+ devs (gradually, over time, presumably). For now it is just three or so of us who know Rust and are pushing for this, plus a few more who are interested.

[-] Vorpal@programming.dev 2 points 1 year ago

Seems rather limited: it only targets some high-level languages. Now, if this could also generate C++ bindings I would be very interested.

[-] Vorpal@programming.dev 2 points 1 year ago

Doesn't really help: what if you typo the namespace instead? Exact same issue. Namespaces are useful for other things, but not for security.

[-] Vorpal@programming.dev 2 points 1 year ago

There are existing approaches: GNU gettext and Mozilla Fluent come to mind. I would try to use one of those. I understand that Mozilla Fluent has good support for the web (unsurprisingly).

[-] Vorpal@programming.dev 2 points 1 year ago

Oh, after looking into it, this seems to be specific to the SQL framework you use. I thought it was a general Rust question about function parameters, sorry. Unfortunately I'm not familiar with that SQL framework (or any other; I'm an embedded developer, not a web dev).

Hope you find someone who knows Diesel, and sorry again that I couldn't help you.

[-] Vorpal@programming.dev 2 points 1 year ago

Not really: lib.rs is a different website frontend to the same old crates.io, presenting the data in a better way.
