Subject: Re: cgd and replay
To: Pawel Jakub Dawidek <>
From: Daniel Carosone <>
List: tech-security
Date: 08/22/2005 11:20:58
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

You're still missing the point about transactions; you can't overwrite
(and thus invalidate) any currently valid data with partial new data
(such as updating a MAC block). You need to be able to have the end
result be all-or-nothing.

If you can't get individual atomic updates from the underlying
hardware, you need to add some indirection, so that you can recover
and revalidate either the old or the new version of the data at least
until the transaction is fully complete.
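To make the indirection idea concrete, here is a minimal sketch (not
cgd's actual design; all names and the two-slot layout are my own
illustration) of a shadow-copy scheme: the new (data, MAC) pair is
written to a spare slot, and only a final single-word "active" flip
commits it, so a crash mid-write leaves the old validated pair intact:

```python
# Illustrative sketch only: two slots per block so that either the old
# or the new (data, MAC) pair survives an interrupted update.
import hmac, hashlib

KEY = b'example-key'  # hypothetical key for the sketch

def mac(data: bytes) -> bytes:
    return hmac.new(KEY, data, hashlib.sha256).digest()

class ShadowBlock:
    """Two slots; 'active' flips only after the spare slot is complete."""
    def __init__(self, data: bytes):
        self.slots = [(data, mac(data)), (None, None)]
        self.active = 0  # assumption: this one-word update is atomic

    def write(self, data: bytes):
        spare = 1 - self.active
        self.slots[spare] = (data, mac(data))  # may be torn by a crash
        self.active = spare                    # commit point: all-or-nothing

    def read(self) -> bytes:
        data, tag = self.slots[self.active]
        if not hmac.compare_digest(tag, mac(data)):
            raise ValueError("MAC mismatch: corrupt or substituted block")
        return data
```

If a crash happens anywhere before the final flip, recovery simply
revalidates the still-active old pair; after the flip, the new pair
validates.  The cost is the extra space and the cascading updates
needed to make the "flip" itself durable on real hardware.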

There are many implementation models: journal with replay (as in many
journalled filesystems); log-structured storage with transaction
counters and garbage collection (LFS, or PostgreSQL's MVCC); and
copy-on-write with cascading index updates (NetApp's WAFL, and perhaps
Sun's ZFS, though I've not looked at that; it may have yet another
solution).  As always, each brings different performance and other
attributes to the tradeoff.
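Of the models above, journal-with-replay is perhaps the simplest to
sketch.  The following toy (an assumption of mine, not any particular
filesystem's format) logs the intended update with a checksum before
applying it; on recovery, only complete journal entries are replayed,
and a torn entry is discarded, leaving the old data valid:

```python
# Illustrative journal-with-replay sketch: log intent, apply, retire.
import hashlib

def checksum(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest()

class JournaledStore:
    def __init__(self):
        self.data = {}      # block number -> payload
        self.journal = []   # pending (block_no, payload, checksum) records

    def write(self, block_no: int, payload: bytes):
        # 1. Log the intent (checksummed so a torn record is detectable).
        self.journal.append((block_no, payload, checksum(payload)))
        # 2. Apply the update to the data area.
        self.data[block_no] = payload
        # 3. Retire the journal entry once the update is durable.
        self.journal.pop()

    def recover(self):
        # Replay entries whose checksum validates (fully written);
        # discard torn entries, so the old data remains the valid version.
        for block_no, payload, ck in self.journal:
            if ck == checksum(payload):
                self.data[block_no] = payload
        self.journal.clear()
```

Either way the crash falls, recovery ends with a consistent,
revalidatable state: the update is all-or-nothing, never a partially
overwritten block whose MAC no longer matches.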

I don't think (unsubstantiated) probabilistic arguments and manual
repair are of much interest to your potential users, especially in the
face of a threat model where cryptographic replay integrity is
important.  How does a user know whether the block data they're
"repairing" is correct, or whether the failure they saw was a
deliberate disruption from an attacker whose substitute data they're
now accepting?
