I *want* it to be a bit random: 'this lock is a bit harder than it looked from the outside', or 'you get a bit of lockpick jammed in it'.
The interesting thing for me is that these two failure modes are very different sources of randomness.
In the first one, the underlying model is that locks vary independently at random, rather than attempts; so you can try as many times as you like at the same door, but you'll get the same result every time, so there's no point trying more than once. You could imagine dealing with this using a sort of random-oracle model in which the first time you find a given door the GM rolls to generate the stats for its particular lock, but once they're generated, they won't change on future encounters with that same lock.
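That roll-once-and-cache behaviour can be sketched in a few lines of Python. This is only an illustration of the memoisation idea; the d20-style difficulty range and the names `lock_dc` and `attempt` are made up, not taken from any particular ruleset:

```python
import random

_rng = random.Random(0)   # the GM's dice (seeded here for reproducibility)
_lock_dcs = {}            # lock id -> difficulty, rolled lazily

def lock_dc(lock_id):
    # First encounter with this lock: roll its stats and remember them.
    # Every later encounter with the same lock sees the same roll.
    if lock_id not in _lock_dcs:
        _lock_dcs[lock_id] = _rng.randint(10, 25)
    return _lock_dcs[lock_id]

def attempt(lock_id, skill):
    # The outcome is deterministic given the lock, so retrying at the
    # same door with the same skill can never change the result.
    return skill >= lock_dc(lock_id)
```

Calling `attempt` twice with the same arguments always agrees, which is exactly why there's no point trying more than once.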
But in the second, the model is that the attempt actually makes matters worse – if a mediocre lockpicker breaks off a piece of pick in the lock, then not only do they fail to open it, not only is it pointless for them to try again, but they've now messed up the lock to the point where perhaps even a very skilled lockpicker would fail to open it, when they could have succeeded had the mediocre person never touched it.
Put another way, in the second case, the Markov chain of what happens in repeated attempts has more than one absorbing state. The 'take 10' rule makes sense if the only absorbing state is the 'success, lock is now open' state; in that case it really is true that with enough repeated attempts you'll inevitably get there in the end, and it makes sense to have a crude model of roughly how long it will take you (which might be a mathematically justifiable model of probabilistic expectation, or might be the really simple 'just don't try it in the middle of mortal combat', which is all the gameplay requires in practice). But if there are two absorbing states, then what needs to be modelled is not just the expected time to hit one of them but also the relative probabilities of which one you hit. So you get to do a single roll that models your whole series of attempts at the task, and the outcome of that roll is that you either eventually end up opening the lock or eventually end up jamming it beyond recovery. And again, you could do the actual Markov-chain theory to figure out a mathematically justifiable account of what the distribution ought properly to be for the length of 'eventually' and the choice of absorbing state, or you could go with a crude approximation if that seems more sensible.
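For the simplest such chain – each attempt independently opens the lock with probability p, jams it with probability q, and otherwise leaves it pickable – the Markov-chain theory does come out in closed form: the lock eventually opens rather than jams with probability p/(p+q), and the number of attempts until one or the other is geometric with mean 1/(p+q). A quick sketch, with the probabilities made up purely for illustration:

```python
import random

def absorption_stats(p, q):
    # Each attempt: open with prob p, jam with prob q, else try again.
    # Returns (probability of eventually opening rather than jamming,
    # expected number of attempts until one or the other happens).
    assert 0 < p + q <= 1
    return p / (p + q), 1 / (p + q)

def simulate(p, q, rng):
    # Play out one 'keep trying until something absorbs' episode.
    attempts = 0
    while True:
        attempts += 1
        u = rng.random()
        if u < p:
            return True, attempts    # lock opened
        if u < p + q:
            return False, attempts   # lock jammed beyond recovery

rng = random.Random(1)
p, q, n = 0.10, 0.05, 100_000        # illustrative numbers only
runs = [simulate(p, q, rng) for _ in range(n)]
opened = sum(ok for ok, _ in runs) / n
mean_attempts = sum(a for _, a in runs) / n
p_open, e_attempts = absorption_stats(p, q)
# opened should be close to p_open = 2/3, and
# mean_attempts close to e_attempts = 1/0.15, about 6.7
```

So the single roll described above only needs to be weighted p/(p+q) in favour of 'eventually opens', with a separate crude estimate of how long 'eventually' takes if the table cares.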
Date: 2017-05-03 09:26 am (UTC)