jack: (Default)
[personal profile] jack
I tweeted this in one sentence, but I thought it deserved revisiting.

In many ways, this is obvious, but I didn't have it all laid down in my head together till now.

When you write a program, there's any number of ways it might change. You might have included a provisional "fail when the input is invalid" behaviour but later need to handle invalid input more gracefully. You might have hard-coded an integer size and need to expand it later. You might need to change it from a stand-alone program to a library that other programs can link against, or vice versa.

The point is that for any of these cases, there's a spectrum of choices. You can ignore the problem, you can write the program so that the expansion is easy to add on later, or you can add the expansion in right now.

Sometimes "do it right now" is right. Sometimes the 'right' way is just as easy to write as the 'wrong' way, and equally clear even to inexperienced programmers, so you should just do it always.

I've previously talked about the times when the middle path is right. If you're not sure, it's usually a good call.

And there's always a trade-off between quickness now and maintenance cost.

But today, I want to talk about ignoring the problem. If the cost of making it extensible is real, and the chance of needing it is low, ignoring the problem is likely correct. Think carefully before tying yourself to 16-bit integers, if they may one day overflow and lead to much pain. But accepting 32-bit integers is, a lot of the time, fine, and the cost of making each integer into a type which could hold larger integers is that a lot of code becomes less clear (and in many languages, slower).
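A minimal, made-up C sketch of what accepting the fixed size, but keeping the decision in one place, might look like: the width is still hard-coded, so none of the surrounding code gets any less clear, but widening it later is a one-line change rather than a hunt through the whole program.

    #include <stddef.h>
    #include <stdint.h>

    /* One line to change if 32 bits ever becomes too small. */
    typedef int32_t record_count;

    record_count count_records(const record_count *batch_sizes, size_t n)
    {
        record_count total = 0;
        for (size_t i = 0; i < n; i++)
            total += batch_sizes[i];
        return total;
    }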

Mark Dominus made this point about both the code and the documentation of "small" libraries, especially ones provided as part of a language's standard set of libraries. The cost of any change, even just explaining that something isn't there, is that all users of the library take *slightly* longer to understand it. And if left unchecked, eventually all the "you should clarify this" and "this change is really small, why not?" requests mount up and turn a small, lightweight library into a heavy, general-purpose library. And then maybe the cycle starts again.

He also pointed out that when he was writing small command line utilities mostly for his own use, he often added extra command line options, because the options were clearer in his mind while he was originally writing the tool. But he didn't *implement* them, because he'd probably never use them.
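Something like this, say (a hypothetical C sketch, not Dominus's actual code): the option is parsed and named while the design is fresh, but the body is only a stub.

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int reverse = 0;

        for (int i = 1; i < argc; i++) {
            if (strcmp(argv[i], "--reverse") == 0)
                reverse = 1;   /* name reserved while it's clear in my head */
        }

        if (reverse) {
            fprintf(stderr, "--reverse: recognised but not implemented yet\n");
            return 1;
        }

        puts("normal behaviour goes here");
        return 0;
    }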

I was thinking of the habits formed by writing coding exercises. My instinct always used to be to look for the "right" solution. But actually, the right solution depends not on the current state of the program, but on its future evolution. If it's ACTUALLY not likely to change, leaving it clearly imperfect may be the right solution. If it will acquire many more users, spending developer effort on making the interface cleaner may become worthwhile. If much future code will be built to interface with it, get the interface right NOW, or it will become locked in.

It's not just a matter of what TO do, but equally what NOT to do.

Date: 2018-09-23 01:51 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
Another case of this is when you abstract something out wrongly. You spot that a piece of code is duplicated in five places, turn it into a subroutine or sub-object or some other kind of centralised facility, and replace the five duplicates with calls to the centralised version. But it turns out that four of them are definitionally equivalent and always want to be done the same way, whereas the fifth just happened to look the same in the current state of the world but will need to diverge from the other four as soon as some as-yet-unseen circumstance arises – and now when that circumstance does come up, your first job is to partially undo your de-duplication so that you can decouple the subsets that need to change separately.

tl;dr: use the same function call in two places not just because they do the same thing now, but because you're confident they'll want to continue doing the same thing in future.
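A made-up C illustration of the trap: four call sites really do share a definition, while the fifth only coincidentally looked the same until its rule changed.

    #include <math.h>
    #include <stdio.h>

    /* Four of the call sites really are the same operation by definition. */
    static void format_price(char *buf, size_t n, double price)
    {
        snprintf(buf, n, "%.2f", price);
    }

    /* The fifth only looked the same; once its rounding rule changes it has
     * to be un-shared again, partially undoing the de-duplication. */
    static void format_tax_line(char *buf, size_t n, double price)
    {
        snprintf(buf, n, "%.2f", floor(price * 100) / 100);  /* round down, not to nearest */
    }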

Date: 2018-09-23 02:42 pm (UTC)
seekingferret: Two warning signs one above the other. 1) Falling Rocks. 2) Falling Rocs. (Default)
From: [personal profile] seekingferret
I don't really program but face very similar questions in laying out 3D CAD models, so I always appreciate your posts on the subject. The idea of planning out future changes but not implementing them is a helpful way of framing some things for me.

Date: 2018-09-23 03:42 pm (UTC)
mtbc: photograph of me (Default)
From: [personal profile] mtbc
Often there's a low-cost halfway house that makes it less of a pain in the ass to extend or grandify the code in the future. A simple example: maybe you don't yet want to make some parameter an external, documented configuration option, but you can at least put it in an obvious limited-scope variable that is referenced in multiple places, rather than copying its value into several places that ought to be taking the same value. A big step is simply being aware of what your shortcuts are making difficult for the future, even if all that means is adding comments that will help someone else do that extra work someday.
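A hypothetical C sketch of that halfway house: not yet a documented configuration option, but at least a single named value rather than the same magic number pasted into several places.

    #include <stdio.h>

    /* TODO: may become a runtime/config option one day */
    static const int max_retries = 3;

    static int try_connect(void)
    {
        return -1;  /* stand-in for the real work */
    }

    int main(void)
    {
        for (int attempt = 0; attempt < max_retries; attempt++) {
            if (try_connect() == 0)
                return 0;
            fprintf(stderr, "attempt %d of %d failed\n", attempt + 1, max_retries);
        }
        return 1;
    }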

It's also important to avoid the trap of coding something in a more complex way for an imagined benefit. E.g., the naive implementation might be imperceptibly slower on your data than some cunning algorithm or data structure: in which case, just do it simply, so that the code is at least more obviously correct. If it does need fixing later, at least what's already there will be easily understood, so the fix can be made with more confidence that it won't break existing behavior.
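For instance (a made-up C sketch): an obviously correct quadratic duplicate check, written simply on the assumption that n is small; a cleverer algorithm or data structure can replace it later if profiling says so.

    #include <stdbool.h>
    #include <stddef.h>

    /* Obviously-correct O(n^2) check; easy to swap for sorting or a hash
     * table later if it ever turns out to matter. */
    static bool has_duplicate(const int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] == a[j])
                    return true;
        return false;
    }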

Date: 2018-09-25 04:38 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
Although that makes me twitch immediately, because I've done that kind of thing too many times and later found I'd referred to the wrong one of those two names in one or two locations, and of course when they initially have the same value, no amount of debugging will spot it.

My standard example of this is 'always make a new puzzle-collection game default to a rectangular rather than square grid'. If you do all your early testing on the 10×10 setting, you'll never spot the bugs in which you absentmindedly wrote the wrong one of width and height. Test on 12×10 and you catch them much faster!
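A hypothetical C sketch of the kind of slip this catches (not taken from any real puzzle code): iterating a grid stored row by row. On a 10×10 grid, writing the wrong bound would be invisible; on 12×10 it reads past the end of each row or visibly misses cells, so the bug shows up straight away.

    void clear_grid(int *cells, int width, int height)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)   /* writing 'height' here is the classic mistake */
                cells[y * width + x] = 0;
    }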
