jack: (Default)
[personal profile] jack
I was pleasantly surprised by how easy it was to contribute to Rust. It seems like there's a combination of things that make it work.

I don't exactly know who the driving forces are in the project, but I think several people are employed by Mozilla to work on Rust development, which means there is some full-time work going on, not just moments snatched here and there.

There seems to be a genuine commitment to providing an easy on-ramp. Everything on GitHub seems fairly up-to-date, which makes it a lot easier to get an idea of what's what. Bugs/issues are sorted into many categories, including ones flagged as easy and suitable for newcomers, which is very welcoming.

There is a bot which takes pull requests and assigns each one to a reviewer, so most don't just languish with no-one accepting or rejecting them. The reviewer is chosen randomly from a pool appropriate to the component, and can reassign the request if someone else would be a better fit.

Even just spending a couple of days pushing the equivalent of a "hello world" patch through (what is the term for "the effort to make a one-line change with no significant code content"?), it felt like I was part of a project, with ongoing activity around my contribution, not like someone screaming well-meaning suggestions into a void.

This isn't Rust-specific, but it was the first time I'd used GitHub for much more than browsing, and it was interesting to see how all the pieces -- code history, pull requests, etc. -- interacted in practice.

Rust itself has an interesting merge model. A reviewer posts an approval on the pull request. *Then* a bot runs the tests on each approved request, in descending order of priority, and merges it only if the tests pass.

That means that if a candidate fails a test on some platform, nothing needs to be rolled back -- it simply isn't merged, and further pull requests continue to be tested and merged (assuming they don't pick up any conflicts). And master is always guaranteed to pass the tests.

Currently patches are either tested individually, or ones with inconsequential risk (documentation changes and the like) are tested in a batch. It seems to work well. It relies on the idea that most patches are independent -- that they can be merged in any order -- which usually seems to be true.
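The model above can be sketched in a few lines of Python. This is only an illustration of the idea, not the real bot: the function and parameter names are mine, and the actual testing and merging are stand-in callables.

```python
# Hypothetical sketch of a bors-style merge queue: approved pull
# requests are tested one at a time, in descending priority order,
# as "current master plus this candidate". Master only ever advances
# to a tree that passed the tests, so it stays green by construction.

def run_merge_queue(master, approved, run_tests, merge):
    """Process approved PRs by descending priority.

    master    -- the current known-good tree
    approved  -- list of (priority, pr) pairs
    run_tests -- callable(tree) -> bool
    merge     -- callable(tree, pr) -> candidate tree
    """
    failed = []
    for _, pr in sorted(approved, key=lambda pair: -pair[0]):
        candidate = merge(master, pr)
        if run_tests(candidate):
            master = candidate   # master advances; still guaranteed green
        else:
            failed.append(pr)    # nothing to roll back; just report it
    return master, failed
```

Note that a failing candidate costs nothing but its own test run; the queue just moves on to the next approved request, which matches the "never roll back master" property described above.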

If you took the idea further, you could imagine ways of making it less of a bottleneck. Rather than only batching patches which happen to be submitted at the same time, you could imagine a tier system. Maybe by priority. Or maybe have a quick first stage (e.g. just checking that everything compiles, plus some basic tests of the functionality known to have changed) to gate things through and surface problems quickly, and then a second stage which catches obscure errors but can safely test multiple patches at once, because it doesn't usually fail.
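That two-stage idea could be sketched like this -- again purely hypothetical names and stand-in callables, just to make the shape concrete:

```python
# Hypothetical two-stage gate: a cheap per-patch check filters out the
# obvious breakage quickly, then the expensive suite runs over batches
# of survivors. If a batch fails, fall back to testing its patches one
# by one so the culprit is isolated rather than the whole batch lost.

def two_stage_gate(patches, quick_check, full_suite, batch_size=4):
    """quick_check: callable(patch) -> bool   (fast, per patch)
       full_suite:  callable(list_of_patches) -> bool  (slow, batched)"""
    survivors = [p for p in patches if quick_check(p)]   # stage 1
    merged = []
    for i in range(0, len(survivors), batch_size):       # stage 2
        batch = survivors[i:i + batch_size]
        if full_suite(batch):
            merged.extend(batch)
        else:
            # batch failed: retest individually to find the bad patch
            merged.extend(p for p in batch if full_suite([p]))
    return merged
```

The payoff depends on the assumption in the post: the slow stage rarely fails, so the individual-retest fallback is the uncommon path and the batched run amortises the expensive tests across several patches.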

In fact, I can't imagine working *without* such a system. At work we have a nightly build, but it would have been easy to add a tag for "most recent working version", and that never quite occurred to me, even as I suggested other process improvements.

Date: 2017-01-30 06:13 pm (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
Yeah, at work easy tests run automatically on every GitHub PR once opened, and PRs from those outside our organization need a label added to be included in the merge build for the daily heavyweight integration testing. We also have separate merge-test systems for larger-scale/disruptive changes so they don't mess up the regular stuff (my current work's off on one of those at the moment as I'm fiddling deep in the server's permissions system).

We're looking a little at GitLab as an alternative, incidentally.

Date: 2017-01-31 12:07 pm (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
Yes, all that is automated: a morning chore is to check what failed and assign accordingly; the jobs can even include website link-checking for docs, etc. We've also now written a bunch of Ansible stuff to fire up a whole multi-VM Jenkins-based CI in OpenStack that tests our software -- https://github.com/openmicroscopy/infrastructure -- so we can duplicate the most popular parts of the process for rolling out these "separate merge-test systems" at the push of a button. As linked from http://www.openmicroscopy.org/site/support/contributing/continuous-integration.html our main Jenkins stuff is still on an older system at https://ci.openmicroscopy.org/ -- the *-merge jobs that merge in PRs use https://github.com/openmicroscopy/snoopycrimecop to drive the GitHub API accordingly from Python. As you'll see from our PRs, we're using Travis for the "simple test" stuff -- typically: build, run unit tests, run code-style checks.

Date: 2017-02-01 08:57 am (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
It certainly sucks up a lot of staff time in creation and tending. Personally I would have strung it all together a little more manually, with less third-party stuff and more shell scripts, but I think the prevailing feeling is that it's worth the effort, and it's become very unlikely that we accidentally ship a version that's fundamentally broken. (Even for our file-format reading stuff, we continually run the latest merge build over a large corpus of old files to check that there are no surprises in how they are now read.) Happy to provide any further detail if I know it!

Date: 2017-02-11 03:26 pm (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
That I don't know, for I've not been sufficiently involved in researching alternatives. If you have scripting expertise in-house, whether zsh or Python or whatever, I'd say go ahead and use that if you find you can do a lot of what you want without the scripts getting too awful. If not, I'd take a serious look at products like Concourse CI and GitLab CI, and if you end up falling back to Jenkins then look at things like Jenkins Job Builder for managing job configurations. Plan for it to be an ongoing overhead, though.

Date: 2017-02-01 08:52 am (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
And, whoops, GitLab just seem to have had a major falling-over of incompetence!