Your account is probably compromised

"Dear Customer, your computer is probably compromised and several criminals probably have full access to your all your accounts. Neither me, nor my company are particularly concerned. It almost certainly doesn't matter, and will never affect you if we do our job properly."

This is the conversation everyone who does online finance would, if they were honest, eventually have with their customers. It's one we avoid, which is hardly surprising! Customers have no conception of 'somewhat secure', 'mostly secure', or 'secure subject to conditions'. Things are secure or they are not. Either they have full control over an account, or none at all. As anyone who's tried to use their credit card in an "odd" country knows, this conception is simply wrong. Retail works on a calculated-risk basis. Every customer instruction carries a fraud risk: the overwhelming majority carry a negligible one, and a few carry a high one. Low-risk transactions are never checked; high-risk ones might simply be denied. But it's a can of worms we all hate to open.

[Image: Percunix HYIP warning]

Despite the blizzard of high-security phrases thrown at customers in real life ["high-grade SSL encryption", for example], currently the only realistic best practice is 'limit the damage to something you're willing to pay compensation for'. Most of the actual security can never be in the technology. It is in the business processes that control value transfers.

Responsible security means identifying exploitable functions, hardening access to them, and having fall-backs outside of normal operations. For example, at a brokerage house, creating false trades in S&P 500 companies won't allow an attacker to extract value from an account, so little security ought to be applied to it. Withdrawals to the user's own bank account should be controlled [they might have lost control over it], but a one-day delay and an SMS notification should stop most attacks. Large withdrawals to third-party accounts would be so exploitable that they probably shouldn't even be offered. If they are allowed, they should trigger a series of offline money-laundering checks. It's about converting a tiny number of catastrophic breaches [life savings wired to Latvia, law suits, bad press] into a large number of ops incidents [phone calls to check money shouldn't go to Latvia, worrying customers who use un-patched browsers, etc].
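To make the tiering concrete, here's a minimal sketch of what such a routing rule might look like. The Instruction and Action names, and the £10,000 threshold, are purely illustrative - nothing here comes from a real brokerage's systems.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"              # negligible risk: process immediately
    DELAY_AND_NOTIFY = "delay"   # hold for a day and send an SMS alert
    MANUAL_REVIEW = "review"     # route to offline money-laundering checks
    DENY = "deny"                # too exploitable to offer at all

@dataclass
class Instruction:
    kind: str          # e.g. "trade" or "withdrawal"
    amount: float      # value in GBP
    destination: str   # "own_account" or "third_party"

def route(instr: Instruction) -> Action:
    """Map a customer instruction onto a risk tier, as in the brokerage example."""
    if instr.kind == "trade":
        # A false trade can't extract value from the account, so let it through.
        return Action.ALLOW
    if instr.kind == "withdrawal":
        if instr.destination == "own_account":
            # The customer may have lost control of that bank account too,
            # so give them a day and an SMS to spot anything odd.
            return Action.DELAY_AND_NOTIFY
        if instr.amount > 10_000:   # illustrative threshold
            return Action.DENY
        return Action.MANUAL_REVIEW
    # Anything unrecognised gets a human look by default.
    return Action.MANUAL_REVIEW

print(route(Instruction("withdrawal", 500.0, "own_account")))  # Action.DELAY_AND_NOTIFY
```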

This works because criminals are amoral rather than evil. They're not interested in hurting you, they're not interested in hurting providers; they just want money and don't care. It's not good, but there are worse things a person could be. An attacker will only try to spoof transactions that can benefit him. The limit on fraud losses isn't the availability of cracked financial accounts (which cost $250 at the time of writing), but the limited capacity of channels to get that money "out" - something criminals are having to come up with some very inventive schemes for. (The scarcity of these channels means that the most common use of a hijacked computer is still sending spam.)

And spoofing transactions from a customer's computer often isn't hard. The black-market price of a hijacked PC is barely 16p. Most regular email users aren't aware that the 'From' line on an email is arbitrary - anyone can attempt to send an email from obama@whitehouse.gov - and I doubt they ever will be. (There are checks that can stop this, but they are not used by every mail provider.) Even educated users can be caught by trick emails tailored specifically to them. I could write several essays on how bad it all is. Basically, any instruction from a home PC is suspect.
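The checks in question are things like SPF (and DKIM), where the receiving server asks the sender's domain who is allowed to send mail for it. As a rough sketch of the idea, using the dnspython package (a real mail server does considerably more than this):

```python
import dns.resolver  # pip install dnspython

def spf_policy(domain: str):
    """Return the domain's published SPF record, if it has one."""
    try:
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass
    return None

# A server receiving mail claiming to be from obama@whitehouse.gov can check
# whether the connecting IP appears in whitehouse.gov's SPF record before
# believing the 'From' line.
print(spf_policy("whitehouse.gov"))
```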

[Image: Bonzai Buddy]

Often this is the customer's own "fault" - home PCs are badly maintained. But perceived responsibility always stops at the edge of someone's understanding, not their actual control, which puts providers in the awkward position of being expected to take on risks they can't hope to control or monitor. I'm (relatively) unlikely to be hacked compared to a first-time user with a second-hand Windows 95 laptop. But that has little correlation to what I'm worth as a customer. And if you block that Windows 95 user from your site, they're going to expect your help desk to spend 3 hours helping them upgrade that PC. For the cost of 3 hours of tech support @ $35/hour, it might make more sense (financially) to just leave the risk out there. If your non-tech safeguards are good, they'll put a low limit on that risk.

Credit card companies excel at understanding this. Actual control over your credit account is diffused to every human being who handles, or has access to, your card number over its 3-year lifespan. That could easily be thousands of people! Your account itself is barely controlled (making it more convenient for you to spend); all the security is applied to vetting individual transactions. Card companies build profiles of legitimate transactions that they use to filter new ones. Most transactions are repeats, are with people the customer is tied to [employer, local corner shop, etc], or simply aren't exploitable for gain. Card companies know that they're responsible for clearing any fraud up, and, for the most part, customers are happy. Each transaction is a gamble to the card company. They take 2-4% of a card transaction, and lose 0.5% on average to fraud. So if halving expected fraud on a £100 transaction (a saving of 25p) costs more than 25p, it's too much.
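Written out, the break-even is just this (the percentages are the ones quoted above):

```python
transaction = 100.00                    # £ value of the transaction
fee = 0.03 * transaction                # card firm keeps 2-4%; call it 3% = £3.00
expected_fraud = 0.005 * transaction    # 0.5% average loss = £0.50

# A control that halves expected fraud is only worth having if it costs
# less than the loss it prevents.
saving = expected_fraud / 2             # £0.25
print(f"Spend at most £{saving:.2f} per £{transaction:.0f} transaction")
```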

These are the facts of life in the industry right now. It isn't callous indifference that makes providers weigh up risks this way. If they didn't, a huge range of services would be available only to customers profitable enough to warrant dedicated terminals. The "evils" are the forgivable sins of omission in trying to avoid "scaring" customers with risks, and the less forgivable sins of dumping the clean-up costs of fraud on customers. (Like the time a statement bounced from my postal address, causing my bank to freeze all my accounts without contacting me. Or clearing $700k in random transfers from a local school's accounts without realising something was wrong.)

Going back to our credit card company: say that a cloned card generates £700 of fake transactions before being cancelled. That fraud will cause £500 in losses to stores in charge-backs, £100 of losses to the card firm, £10 of support time in contacting the customer, and £5 re-issuing the card - so a total of £115 to the provider. The customer will lose access to their credit card for a week [say £5 of inconvenience], and spend half a day of their time @ £100/day sorting out the mess - another £55. So the cost to provider and customer together is nearly 50% higher than the provider's £115 alone. Shying away from the company's "fair" share of this cost is the evil bit, not making the calculation itself.
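Spelling those sums out (the half-day at £100/day is the same rough figure as above):

```python
# Card firm's (the provider's) share of the £700 cloned-card fraud
card_firm_loss = 100          # transactions written off by the card firm
support_time   = 10           # contacting the customer
card_reissue   = 5
provider_cost  = card_firm_loss + support_time + card_reissue   # £115

store_chargebacks = 500       # falls on the stores, not the card firm

# Customer's share
inconvenience = 5             # a week without the card
time_cost     = 0.5 * 100     # half a day at £100/day
customer_cost = inconvenience + time_cost                        # £55

extra = customer_cost / provider_cost
print(f"£{provider_cost} to the provider, £{customer_cost:.0f} to the customer "
      f"({extra:.0%} on top)")
```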

That leads to the question of what the provider's fair share of mitigating fraud should be. A large body of experience (and case law) exists for the "paper world". We know that not reporting a missing cheque book for a week makes us reckless, that providers don't usually check signatures but should, etc. So there's a common standard for how careful, and what sort of careful, we're expected to be. This doesn't exist on the internet yet - not as a social consensus. If you were to get a dozen 20-something technology graduates together, they'd probably reach a consensus, but it would be a very different one to their mums'. Which, interestingly, means that my domain registrar's website [name.com] offers stronger security than my broker. Name.com can assume I understand the implications of restricting my account to certain IP addresses; my brokerage can't. Most brokerage customers would just lock themselves out of their accounts (generating expensive support calls) if they were offered that feature!

[Image: Customer 404]

As the cost of inducting and equipping users for secured channels is usually too high to make sense for a single service, I can see a big business in providing them as infrastructure. That's how name.com provide their 2-factor fobs - and how "edgier" sites handle DDoS attacks. There are a few firms trying to sell highly secure OpenID services [OpenID being a web standard for shared logins]. It's hard to say today what form that industry is going to take, but the economics are compelling.

For now, incumbents are stuck between their desire to push risk onto their unsophisticated customers, and customers' desire to avoid risks they can't understand. And even though the current state of affairs might be a low point, it isn't the first low and won't be the last. People have been forging letters of credit, bouncing cheques, impersonating brokers, tying extra knots in the braid, etc, since before writing existed. It's always been dealt with. The problems that are happening now stem from incumbents neglecting to transfer the skills they've built up in the off-line world to the online one. When a brokerage can't take responsibility for their accounts, it's right to question what they're being paid for.

The current state - billions of dollars running through malware-infested computers, with responsibility for that bounced between clients and providers - might be manageable (it's being managed, so it's manageable). But hopefully it's just too inefficient to be anything other than a transitional phase.

There are some techniques that are going to help make customers more secure:

  • IP Location: You can make a good guess at which ISP and country a user is logging in from, so someone who has apparently travelled from Brighton to Odessa in 3 hours ought to raise some questions (see the sketch after this list).
  • Telephone and mail out-of-band confirmations/alerts: They cost money. But an automated call is only a few pence with a good VoIP provider. A postcard is only 30p. It's worth it to check important things.
  • Secure boot disks: If a customer can cope with rebooting their computer, a secure boot disk can cut risk by loading only enough software (and therefore only its vulnerabilities) to run a particular service.
  • Dedicated terminals: Edging toward being cost-effective (a low-end netbook is less than £70), especially if multiple providers shared the costs.
  • 2-factor fobs: Offer limited protection [a hijacked PC can switch the authorisation from a good transaction to a bad one], but they stop fully automated attacks. To the banking industry's credit, most of them are now rolling these out.
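As a sketch of the IP-location idea from the first bullet: the geolocate() lookup below is a stand-in for whatever GeoIP database you'd actually use, the IPs and coordinates are made up for the Brighton/Odessa example, and the 500 km/h cut-off is just "faster than plausible door-to-door travel".

```python
import math
from datetime import datetime

MAX_KMH = 500  # faster than plausible door-to-door travel, airports included

def geolocate(ip: str) -> tuple[float, float]:
    """Stand-in for a real GeoIP lookup - the IPs here are made up."""
    return {"81.2.69.142": (50.83, -0.14),       # Brighton
            "195.64.225.1": (46.48, 30.73)}[ip]  # Odessa

def km_between(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle (haversine) distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def suspicious(prev_ip: str, prev_time: datetime, ip: str, now: datetime) -> bool:
    """Flag a login implying impossible travel since the previous one."""
    hours = max((now - prev_time).total_seconds() / 3600, 0.1)
    speed = km_between(geolocate(prev_ip), geolocate(ip)) / hours
    return speed > MAX_KMH

# Brighton at 9am, Odessa at noon: ~2,300 km in 3 hours, so flag it.
print(suspicious("81.2.69.142", datetime(2009, 6, 1, 9, 0),
                 "195.64.225.1", datetime(2009, 6, 1, 12, 0)))  # True
```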

To most companies, the cost of these improvements is training consumers and staff to use them. So the bigger the organisation, the greater the reluctance to make improvements. That's why Paypal pioneered CAPTCHAs and automatic fraud detection; WebMoney in Russia allows users to set cash limits on their exposure to each other; Percunix can lock accounts to specific internet connections; etc.