jack: (Default)
[personal profile] jack
Many of us grow up with rules that may later become outdated. When is it better to stick with them, and when is it better to abandon them?

1. If the habit doesn't take any time now that you've absorbed it, and it's not obviously harmful, just keep it and don't bother re-examining it.

2. If it DOES take time, or it COULD be harmful, or if it increasingly generates friction with other people who have the opposite instinct, it may be worth considering whether it's still valid, and making a positive decision to stick with the rule or abandon it.

I find it satisfying to decide, and to lose the nagging sense that I'm "not quite doing the right thing" whatever I do. For instance, IIRC, two friends had a disagreement about whether lightbulbs on dimmer switches still drew maximum power when they were dimmed (or maybe it was whether to turn the light off when they left the room, or something else). But rather than go on nagging each other, each attributing the other's habit to intransigence rather than to the same lingering sense of obligation to a different internalised rule, they looked up the answer. (IIRC, dimmer switches used to waste the energy, and now they don't.)

3. Does it provide a small potential benefit? Then you might as well keep it.

If you already do small to medium arithmetic problems in your head, it's probably worth keeping the habit: sometimes it's useful, and if you're good at it, it's as reliable as using a calculator (just with different sorts of errors).

If you don't, it's probably not worth learning. Everyone would benefit from being able to do "6x7", or "20% of $150", or "to the nearest hundred, what's 27*54" in their head. But the benefits of calculating "67*43" or "345/72 to 2dp" mentally are small. It's easy to learn, and learning it is nice, but it's probably not actually going to save you much time if you normally have a computer handy.
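
(For the estimation case: 27*54 ≈ 30*50 = 1500, since rounding 27 up roughly cancels rounding 54 down; the exact answer, 1458, rounds to 1500 anyway.)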

4. Does it provide an illusory benefit that's likely to make you keep the rule even at the expense of clarity? This is the difficult one, but it's best to let the rule go.

In the old days, computers did integer arithmetic much faster than floating point arithmetic. Now, on a modern computer, this isn't really true. Using integers is often still useful, because you can be sure your answers are exact, but if what you really mean is floating point numbers, you should understand and use floating point numbers.
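
As a minimal sketch of the exactness point (my illustration, not anything from the original discussion): summing money in integer cents is exact, while summing the same amounts as 0.1-dollar doubles is not, because 0.1 has no finite binary representation.

#include <stdio.h>

int main(void) {
  int cents = 0;
  double dollars = 0.0;
  for (int i = 0; i < 10; i++) {
    cents += 10;    /* integer arithmetic: exact */
    dollars += 0.1; /* 0.1 is not exactly representable in binary */
  }
  printf("cents == 100? %d\n", cents == 100);     /* prints 1 */
  printf("dollars == 1.0? %d\n", dollars == 1.0); /* prints 0 */
  printf("dollars = %.17g\n", dollars);           /* 0.99999999999999989 */
}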

Now, software engineers are having similar introspection about more modern rules. For instance, many people still follow performance rules of thumb (count the arithmetic operations, ignore the memory accesses) that may be invalid now that computers have large memory caches.
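
As a concrete (invented) illustration: the two loops below do exactly the same additions, but the second one strides through memory, so on a machine with caches it typically runs several times slower, even though an operation-counting rule of thumb says they're identical.

#include <stdio.h>

#define N 2048
static double a[N][N]; /* 32MB: much bigger than any cache level */

int main(void) {
  double sum = 0;
  /* Row order: consecutive accesses are adjacent in memory,
     so each cache line fetched gets fully used. */
  for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
      sum += a[i][j];
  /* Column order: the same four million additions, but each access
     jumps N*sizeof(double) bytes, so most of them miss the cache. */
  for (int j = 0; j < N; j++)
    for (int i = 0; i < N; i++)
      sum += a[i][j];
  printf("%f\n", sum);
}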

The difficulty is deciding whether the benefit is real or not. If you have a lot of effort invested in something, it's always tempting to think the rule is a benefit, even if a small one. And the knowledge is always good, even if not immediately useful. But sometimes you need to decide whether to keep doing it or not.

Controversially, I think this category includes the rule against mixing Greek and Latin roots. What benefit does this provide? The benefit is that if there were people in Britain who spoke ancient Greek or Latin, but didn't even know the common English-adopted prefixes of the other, they could understand some non-mixed words. But there are no such people. It's still somewhat elegant to keep the borrowings separate, because it compartmentalises words, which is NEATER whether or not it's ever used. But this has to be balanced against the benefit of mixing roots, which is that it often gives useful words. While I appreciate both benefits, I think the usefulness of words like "television" definitely outweighs the other.

It's possible there's some other benefit to keeping the old rule, but if you want to argue that, I think you have to know what it is. It's no longer enough to simply say "it's correct", or "it's normal", or "arbitrary rules are good because they let us measure people's progress in learning"; you have to make a positive decision about what you are gaining.

Date: 2010-09-15 07:59 am (UTC)
ewx: (Default)
From: [personal profile] ewx
Your crossposting to LJ seems to have stopped happening.

fp vs int

Date: 2010-09-15 08:08 am (UTC)
ewx: (Default)
From: [personal profile] ewx

Seems to me that FP is still noticeably slower.

chymax$ cat t.c
int main() {
  for(int i = 0; i < 1000000000; ++i)
    ;
}
chymax$ cc -o t -std=c99 t.c
chymax$ time ./t

real	0m2.670s
user	0m2.667s
sys	0m0.003s
chymax$ cat u.c
int main() {
  for(double i = 0; i < 1000000000; ++i)
    ;
}
chymax$ cc -o u -std=c99 u.c
chymax$ time ./u

real	0m3.765s
user	0m3.760s
sys	0m0.003s

You might say: is it because I'm using 32-bit ints and 64-bit doubles? Rather unlikely on a 64-bit CPU, but we're talking about intuitions being wrong, so:

chymax$ cat tt.c
#include <assert.h>

int main() {
  assert(sizeof (long) == 8);
  for(long i = 0; i < 1000000000; ++i)
    ;
}
chymax$ cc -o tt -std=c99 tt.c
chymax$ time ./tt

real	0m2.683s
user	0m2.679s
sys	0m0.002s

(Obviously if you use 32-bit floats you never get an answer at all :-)

mixing greek and latin roots

Date: 2010-09-15 08:11 am (UTC)
ewx: (Default)
From: [personal profile] ewx
I thought that was a made-up excuse to be grumpy about TV, not something anyone seriously proposed as a rule. Given that (i) the Romans happily borrowed Greek words, and (ii) nobody complains about mixing (say) Latin and Germanic roots, it would be a remarkably stupid thing to propose as a general rule.

Re: fp vs int

Date: 2010-09-15 10:23 am (UTC)
ewx: (Default)
From: [personal profile] ewx

1000000000 will be converted to float accurately (it only needs 21 significant bits). The reason the loop never terminates is that a 32-bit float has a 24-bit significand, so once i reaches 2^24 = 16777216, adding 1 rounds straight back to the same value:

$ cat t.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  float i = atoi(argv[1]);
  ++i;
  return printf("%d\n", (int)i);
}
$ cc -o t t.c
$ ./t 16777215
16777216
$ ./t 16777216
16777216