On 2023-02-17 at 21:38:34, Emily Shaffer wrote:
> For example, I seem to remember you saying during the SHA-256 series
> that the next hashing algorithm would also be painful to implement;
> would that still be true if the hashing algorithm is encapsulated well
> by a library interface? Or is it for a different reason?

Right now, most of the code for a future hash algorithm wouldn't be too
difficult to implement, I'd think, because we can already support two of
them.  If we decide, say, to implement SHA-3-512, we basically just add
that algorithm, update all the entries in the tests (which is kind of a
pain since there's a lot of them, but not really difficult), and then
move on with our lives.

The difficulty is dealing with interop work, which is basically
switching from dealing with just one algorithm to rewriting things
between the two on the fly.  I think _that_ work would be made easier by
library work because sometimes it involves working with submodules, such
as when updating the submodule commit, and being able to deal with both
object stores more easily at the same time would be very helpful in that
regard.

I can imagine there are other things that would be easier as well, and I
can also imagine that we'll have better control over memory allocations
and leak less, which would be nice.  If we can get leaks low enough, we
could even add CI jobs to catch them and fail, which I think would be
super valuable, especially since I find even after over two decades of C
that I'm still not very good about catching all the leaks (which is one
of the reasons I've mostly switched to Rust).  We might also be able to
make nicer steps on multithreading our code as well.

Personally, I'd like to see some sort of standard error type (whether
integral or not) that would let us do more bubbling up of errors and
less die().  I don't know if that's in the cards, but I thought I'd
suggest it in case other folks are interested.

-- 
brian m. carlson (he/him or they/them)
Toronto, Ontario, CA
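
To make the error-type idea a bit more concrete, here is a minimal
sketch of one shape it could take.  Every name in it is hypothetical;
nothing like this exists in Git today, and it could just as easily be a
plain integral code instead of a struct:

    #include <stdio.h>

    /*
     * Hypothetical sketch only: one possible shape for a shared error
     * type that callers can pass back up the stack instead of calling
     * die().  None of these names exist in Git; they're illustrative.
     */
    struct example_error {
            int code;       /* 0 for success, non-zero for failure */
            char msg[128];  /* human-readable detail for the caller */
    };

    static struct example_error lookup_setting(const char *key, int *out)
    {
            struct example_error err = { 0, "" };

            if (!key || !*key) {
                    err.code = -1;
                    snprintf(err.msg, sizeof(err.msg), "empty key");
                    return err;     /* bubble up instead of die() */
            }

            *out = 42;              /* stand-in for the real lookup */
            return err;
    }

    int main(void)
    {
            int value;
            struct example_error err = lookup_setting("", &value);

            if (err.code)
                    fprintf(stderr, "error: %s\n", err.msg);
            return err.code ? 1 : 0;
    }

Whether it ends up looking like this or like a simple integral code
matters less than the fact that the caller, not the leaf function, gets
to decide what to do with the failure.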