Vlad Miller writes from Russia (translated by Daniel Nagy):
We can invent any algorithm, develop any protocol, build any system, but no matter how secure and reliable they are, the human making the final decision remains the last link in the security chain. And, given the peculiarities of human nature, it is also the least reliable link, the one that limits the security of the entire system. All of this has long been an axiom, but I would like to share a curious case that serves as yet another confirmation of it.
We all visit banks. Banks, in addition to being financial organizations that attract and invest their clients' funds, are complex systems of informational, physical and economic defenses for the deposited cash and account funds. Economic defenses rest on procedures for confirming and controlling transactions; informational defenses on measures and procedures guarding information about transactions and about personal, financial and other data; while physical defenses comprise the building and maintenance of a secure physical perimeter around the guarded objects: buildings, rooms and valuable items.
Yet, regardless of how well-coordinated the whole process is, final decisions are always made by humans: the guard decides whether or not to let the employee who forgot his ID through the checkpoint; the teller decides whether a person is indeed the owner of the passport and the account he claims to own; the cashier decides whether or not there is anything suspicious in the presented order. A failure can occur at any point, and not only as a consequence of fraudulent activity, but also due to carelessness or inattention on the part of a bank employee, a link in the security system.
Not too long ago, I was at my bank to deposit some cash into my account. The teller checked my passport, compared my face to the photo inside, took my passbook and signed a deposit order for the given amount. The same data were entered into the bank's information system, and the order, together with my passbook, was passed on to the cashier. Meanwhile, I was given a token with the transaction number, which I was to present to the cashier so that she could process the corresponding order. Everyone is familiar with this procedure; it may differ a bit from bank to bank, but the general principles are the same.
Walking over to the cashier, I executed my part of the protocol by handing over the token (but I did not put the cash into the drawer before being asked to do so). She looked at my order, affixed her signature to it and to my passbook, and ... took a few stacks of banknotes out of the safe and started feeding them into the counting machine. Curious how long it would take the young lady to realize the error in her actions, I did not interrupt her noble effort. Only when she turned around to put the cash into the drawer did I delicately remark that I had not expected such a present for March 8, and that I had come to deposit cash, not to withdraw it. For a few seconds the young lady gave me a confused look; then, after glancing at the order and crossing herself, she thanked me for saving her from being fired.
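The counter protocol above can be sketched as a small program: the teller creates an order tied to a token, and the cashier must read the direction of the cash movement from the order itself before touching the safe. This is only an illustrative sketch; the `Order` type, its fields, and `process_at_cash_desk` are hypothetical names, not the bank's actual procedure.

```python
# A minimal sketch, assuming a token-based counter protocol like the
# one described above. All names here are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Order:
    token: int
    kind: str      # "deposit" or "withdrawal"
    amount: int

def process_at_cash_desk(order: Order, presented_token: int) -> str:
    # Step 1: the presented token must match the order the teller created.
    if presented_token != order.token:
        raise ValueError("token does not match order")
    # Step 2: the direction of the cash movement must be read from the
    # order itself -- this is exactly the check the cashier skipped.
    if order.kind == "deposit":
        return f"accept {order.amount} from client"
    if order.kind == "withdrawal":
        return f"pay out {order.amount} to client"
    raise ValueError("unknown order kind")
```

In this sketch, the cashier's near-miss corresponds to branching on "withdrawal" without ever inspecting `order.kind`; the explicit check makes the failure impossible rather than merely unlikely.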
The banking system relies a great deal on governmental mechanisms of prevention, control and reaction. Had I not, in computer-speak, interrupted the execution of the miscarried protocol, but instead left the bank with double the money, it would not have led to anything except the confiscation of my "unfounded enrichment". The last link of security is unreliable: it fails at random and is highly vulnerable to various interferences and influences. This is why control and reaction are no less important than prevention of attacks and failures.

Posted by iang at October 3, 2006 11:36 AM