Ricardo is now cloud-enabled.
Which, I hasten to add, is not the same thing as cloud-based, if your head is that lofty. Not the same thing at all, no sir; feet firmly planted on planet earth!
Here's the story. Apologies in advance for this self-indulgent rant, but if you are not a financial cryptographer, the following will appear to be just a lot of mumbo-jumbo and your time is probably better spent elsewhere... With that warning, let's get our heads up in the clouds for a while.
As a client-server construction, much like a web arrangement, and like Bitcoin in that the client is in charge, the client is of course vulnerable to loss and theft. So a backup of some form is required. Much analysis revealed that the backup had to be complete, off-client, and system-provided.
That work has now taken shape and is delivering backups under bench conditions. The client can back up its entire database into a server's database using the same point-to-point security protocol and the same mechanics as the rest of the model. The client also now has a complete encrypted object database, using ChaCha20 as the stream cipher and Poly1305 as the object-level authentication layer. This gets arranged into a single secured stream, which is then uploaded dynamically to the server; the server, for its part, offers a service that allows a stream to be built up over time.
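In sketch form, the object-level seal step looks something like the following. This is a toy illustration only: the keystream here is derived with SHA-256 and the tag with HMAC-SHA256, standing in for the real ChaCha20 and Poly1305, and the key handling, nonce sizes and function names are all my own assumptions, not Ricardo's.

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy XOR keystream derived by hashing key || nonce || counter.
    Stands in for ChaCha20 -- illustration only, not for real use."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_record(enc_key: bytes, mac_key: bytes, nonce: bytes, record: bytes) -> bytes:
    """Stream-encrypt one object and append an authentication tag
    (HMAC-SHA256 standing in for Poly1305). Nonce assumed 12 bytes."""
    ct = bytes(a ^ b for a, b in zip(record, keystream(enc_key, nonce, len(record))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_record(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the tag first, then decrypt -- any tampered object is rejected."""
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("object authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))
```

The point of the verify-then-decrypt order is that the client never acts on ciphertext the server (or anyone in between) could have modified.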
Consider how a client works: Do a task? Make a payment? Generate a transaction!
Remembering always it's only a transaction when it is indeed transacted, this means that the transaction has to be recorded into the database. Our little-database-that-could now streams that transaction onto the end of its log, which is now stream-encrypted, and a separate thread follows the appends and uploads additions to the server. (Just for those who are trying to see how this works in a SQL context, it doesn't. It's not a SQL database, it follows the transaction-log-is-the-database paradigm, and in that sense, it is already stream oriented.)
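The transaction-log-is-the-database idea can be sketched roughly as below. The names and framing are mine, not Ricardo's, and for clarity an explicit cursor stands in for the separate uploader thread: appends go onto the end of the log, and the uploader ships only the bytes added since its last pass.

```python
class AppendOnlyLog:
    """Sketch of the transaction-log-is-the-database paradigm: every
    change is a record appended to the log, and a background uploader
    follows the appends, sending only the new bytes to the server."""

    def __init__(self):
        self.data = bytearray()
        self.uploaded = 0  # byte offset the uploader has already shipped

    def append(self, record: bytes) -> None:
        # Length-prefix each record so the stream can be re-parsed later.
        self.data += len(record).to_bytes(4, "big") + record

    def pending(self) -> bytes:
        """Return the bytes appended since the last upload and advance
        the cursor -- what the uploader thread would send next."""
        delta = bytes(self.data[self.uploaded:])
        self.uploaded = len(self.data)
        return delta
```

The server only ever concatenates the deltas it receives, rebuilding a byte-identical copy of the stream.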
To prove this from client to server and from beginning to end, there is a hash confirmation over the local stream and over the server's file. When they match, we're golden. It is not a perfect backup because the backup trails by some number of seconds; it is not, therefore, /transactional/. People following the latency debate over Bitcoin will find that amusing, but I think transactional replication is possibly a step too far at our current stage of development; a backup that lags by a minute or so is probably OK for now, and I'm not sure we want to attempt transactional replication for phone users.
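The hash confirmation can be as simple as the following sketch. The post doesn't name the hash, so SHA-256 is my assumption: the client hashes its local stream chunk by chunk, the server hashes the file it has assembled, and matching digests mean the upload is complete and intact.

```python
import hashlib

def stream_hash(chunks) -> str:
    """Hash a stream incrementally, chunk by chunk -- the digest depends
    only on the concatenated bytes, not on how they were chunked."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()
```

Because the digest is chunking-independent, the client's many small appends and the server's one big file hash to the same value exactly when the bytes agree.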
This is a big deal for many reasons. One is that it was a quite massive project, and it brought our tiny startup to a complete standstill on the technical front. I've done nothing but hack for about three months now, which makes it a more difficult project than, say, rewriting the entire crypto suite.
Second is the reasoning behind it. Our client-side asset management software is now going to be used in a fashion quite contrary to our earlier design anticipations. It is going to manage the entire asset base of what is in effect a financial institution (FI), or thousands of them. Yet, it's going to live on a bog-standard Android phone, probably in the handbag of the Treasurer as she roves around the city from home to work and other places.
Can you see where this is going? Loss, theft, software failure, etc. We live in one of the most crime ridden cities on the planet, and therefore we have to consider that the FI's entire book of business can be stolen at any time. And we need to get the Treasurer up and going with a new phone in short order, because her customers demand it.
Add in some discussions about complexity, and transactions, and social networking in the app, etc etc, and we can also see pretty easily that just saving the private keys will not cut the mustard. We need the entire state of the phone to be saved, and recovered, on demand.
But wait, you say! Of course the solution is cloud, why ever not?
No, because cloud is insecure. Totally. Any FI that stores its customer transactions in the cloud is in a state of sin, and indeed in some countries it is illegal to even consider it. Further, even if the cloud is run locally by the institution, internally, this exposes the FI and the poor long-suffering customer to fantastic opportunities for insider fraud. What I failed to mention earlier is that my user base considers corruption to be a daily event, and is exposed to frauds continually, including from their FIs. Which is why Ricardo fills the gap.
When it comes to insider fraud, cloud is the same as fog. Add in corruption and it's now smog. So, cloud is totally out, or, cloud just means you're being robbed blind like you always were, so there is no new offering here. Following the sense of Digicash from 2 decades earlier, and perhaps Bitcoin these days, we set the requirement: the server or center should not be able to forge transactions. This has been a long-standing requirement (insert digression here into end-to-end evidence and authentication designs leading to triple entry and the Ricardian Contract, and/or recent cases backing FIs doing the wrong thing).
To bring these two contradictory requirements together, however, was tricky. To resolve them, I needed a now time-honoured technique theorised by the capabilities school, and popularised by, amongst others, Pelle's original document service at wideword.net and Zooko's Tahoe-LAFS: the data that is uploaded over UDP is encrypted to keys known only to the clients.
And that is what happens. As my client software database spits out data in an append-only stream (that's how all safe databases work, right??) it stream-encrypts this and then sends the stream up to the server. So the server simply has to offer something similar to the Unix file metaphor: create, read, write, delete *and append*. Add in a hash feature to confirm, and we're set. (It's similar enough to REST/CRUD that it's worth a mention, but different enough to warrant a disclaimer.)
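A minimal sketch of that server-side service follows, with hypothetical names of my own choosing: the five Unix-ish verbs the post lists over named opaque streams, plus a hash verb for the confirmation step. The server holds no keys, so every stream is just ciphertext to it.

```python
import hashlib

class StreamStore:
    """Server-side sketch: named opaque streams with create, read,
    write, delete and append, plus a hash verb so the client can
    confirm an upload. The server never sees decryption keys."""

    def __init__(self):
        self._streams = {}

    def create(self, name: str) -> None:
        self._streams[name] = bytearray()

    def read(self, name: str) -> bytes:
        return bytes(self._streams[name])

    def write(self, name: str, data: bytes) -> None:
        self._streams[name] = bytearray(data)

    def delete(self, name: str) -> None:
        del self._streams[name]

    def append(self, name: str, data: bytes) -> None:
        self._streams[name] += data

    def hash(self, name: str) -> str:
        return hashlib.sha256(self._streams[name]).hexdigest()
```

The append verb is the one that matters for backup: the client's uploader calls it with each new delta, and the hash verb lets the client check that the server's copy matches its own log byte for byte.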
A third reason this is a big deal is because the rules of the game have changed. In the 1990s we were assuming a technical savvy audience, ones who could manage public keys and backups. The PGP generation, if you like. Now, we're assuming none of that. The thing has to work, and it has to keep working, regardless of user foibles. This is the Apple-Facebook generation.
This benchmark also shines an adverse light on Bitcoin. That community struggles to deal with theft, lurching from hack to bug to bankruptcy. As a result of their obsession with The Number One Criminal (aka the government) and with avoiding control at the center, they are blinded to the costly reality of criminals 2 through 100. If Bitcoin were, hypothetically, to meet the user-friendly goal of keeping going in the face of normal 2010s user demands and failure modes, it'd be game over. As it stands, they have to handwave such things away as 'user responsibility', but that doesn't work any more. The rules of the game have changed, we're not in the 1990s anymore, and a comprehensive solution is required.
Finally, once you can do things like cloud, it opens up the possibilities for whole new features and endeavours. That of course is what makes cloud so exciting for big corporates -- to be able to deliver great service and features to customers. I've already got a list of enhancements we can now start to put in, and the only limitation I have now is capital to pay the hackers. We really are at the cusp of a new generation of payment systems; crypto-plumbing is fun again!

Posted by iang at March 6, 2014 08:28 AM | TrackBack