Subject: Storage Security (was Re: NetBSD iSCSI HOWTOs)
To: Bill Studenmund <wrstuden@netbsd.org>
From: Daniel Carosone <dan@geek.com.au>
List: current-users
Date: 03/02/2006 11:59:40

The problem with iSCSI from my perspective, as is often the case with
things that rely on a loose layering of various (individually good)
components, is that the configuration and administration are harder
than they should be, and that there is little assurance that things
are working as intended (if users actually get that far).

If the standard mandates that iSCSI implementations include IPsec
implementations as well, and vendors sell one without the other,
that's a clear problem, and those vendors deserve all the business
they don't get as a result.  Leave them aside and let's restrict
ourselves to offerings that include the full complement of iSCSI and
IPsec mechanisms.

As an illustration (not a criticism of our current target): I can
configure the NetBSD target to respond to iSCSI requests. I can
configure whatever host OS is running the target to use IPsec for
that traffic.  I can configure whatever client OS / HBA is running the
initiator to also use IPsec.  Hopefully it all works at that point,
and I don't give up before I get there. However, even then, there's
little assurance that this is working, because the bindings between
the application and IPsec are very loose: I can't configure the
target serving up the data to use the relevant socket controls to
check that the traffic actually was IPsec-protected, let alone use
any of the other information from the SA for stronger restrictions on
access by (say) authenticated hostname.
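
To make that concrete: the kind of per-socket binding I'd want looks
roughly like the sketch below, using the KAME ipsec_set_policy(3) /
IP_IPSEC_POLICY interface NetBSD ships.  It's illustrative only (the
function name is mine, and a real target would wire this into its
listener setup), but it shows the "require ESP or drop it" control
the target can't currently be told to apply:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet6/ipsec.h>
    #include <err.h>
    #include <stdlib.h>
    #include <string.h>

    /* Require ESP transport mode in both directions on this socket,
     * so the kernel refuses cleartext instead of silently passing it. */
    static void
    require_esp(int s)
    {
        const char *pol[] = {
            "in ipsec esp/transport//require",
            "out ipsec esp/transport//require",
        };
        size_t i;

        for (i = 0; i < 2; i++) {
            char *buf = ipsec_set_policy((char *)pol[i],
                strlen(pol[i]));
            if (buf == NULL)
                errx(1, "ipsec_set_policy: %s", ipsec_strerror());
            if (setsockopt(s, IPPROTO_IP, IP_IPSEC_POLICY,
                buf, ipsec_get_policylen(buf)) == -1)
                err(1, "setsockopt(IP_IPSEC_POLICY)");
            free(buf);
        }
    }

Even that only asserts "some SA exists"; it still gives the target no
way to ask which authenticated identity is on the other end.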

This is true of a lot of applications with IPsec, made worse because
every platform and OS has its own way to configure IPsec policy.  The
problem isn't relying on the IPsec transport so much as it is the
administration.
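
On NetBSD, for instance, the policy half lives in setkey(8) syntax;
something like the following on the target (addresses hypothetical,
3260 being the iSCSI port), with the mirror image on each initiator,
plus racoon(8) or manual keying to actually establish the SAs:

    spdadd 192.0.2.20[any] 192.0.2.10[3260] tcp
        -P in ipsec esp/transport//require;
    spdadd 192.0.2.10[3260] 192.0.2.20[any] tcp
        -P out ipsec esp/transport//require;

Every other platform and HBA in the picture expresses the same intent
in some entirely different dialect, and nothing cross-checks that
they agree.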

It's fine for iSCSI (and other applications) to defer to IPsec
transport, rather than duplicate implementation, as Thor described.
But it's inadequate for them to just ignore transport security
entirely and make it somebody else's problem by relying on IPsec
without at least defining some useful interface bindings between the
two (because that almost guarantees people won't use it, or will use
it incorrectly).

iSCSI has several forms of names already, but no form where the name
can be assured by a corresponding IPsec SA.  I understand that IPsec
is not used for iSCSI authentication, but the inability to establish
any kind of correlation between the two limits the usefulness of
both.

> What exactly is wrong with CHAP?

For one, the reflection/oracle attack described.  For two, the fact
that the authentication has no persistence beyond the initial
exchange: nothing binds it to the TCP session that follows, leaving
that session vulnerable to hijacking.
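
To spell out the first: the RFC 1994 response is a pure function of
(identifier, secret, challenge), roughly the computation below
(sketched here with NetBSD's libc MD5 interface; the function name is
mine).  If the same secret is used in both directions of
bidirectional CHAP, a party handed a challenge can open a second
connection, present that same challenge to the peer, and reflect the
peer's own answer back, without ever knowing the secret:

    #include <md5.h>        /* NetBSD libc MD5Init/MD5Update/MD5Final */
    #include <stddef.h>
    #include <stdint.h>

    static void
    chap_response(uint8_t id, const uint8_t *secret, size_t slen,
        const uint8_t *chal, size_t clen, uint8_t digest[16])
    {
        MD5_CTX ctx;

        MD5Init(&ctx);
        MD5Update(&ctx, &id, 1);          /* CHAP identifier */
        MD5Update(&ctx, secret, slen);    /* shared secret */
        MD5Update(&ctx, chal, clen);      /* challenge from peer */
        MD5Final(digest, &ctx);
    }

Nothing in that digest ties it to this connection, this direction, or
anything that follows.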

Say I have an iSCSI target, serving different LUNs to different
initiators. They shouldn't see each other's LUNs, but each can
attempt to hijack the other's iSCSI sessions, even inside the IPsec I
have carefully set up with each. It's all too loose and haphazard to
be really worth doing, let alone provide much assurance.

However, with all that said, I'm still not sure that building stronger
pipes is really the right place to solve the problem anyway:
concentrating on the transport and data-in-motion aspects misses
several other issues.  It deals with trust of the path to the storage
server, but not with trust issues I might have with the storage server
itself.

I may rely on the vault and network in between for availability;
network and storage designers and service teams do at least understand
and know how to provision for availability, and might even get that
part right sometimes, especially if I can stop confusing and burdening
them with privacy and integrity responsibilities they can't fully
discharge anyway. Or I might mirror between vaults to try to gain
some further protection there, too (e.g., across sites, and/or across
SAN administrative domains).

If I want privacy of my data held on some managed-by-the-storage-team
vault in the data centre (or out on the internet), I'm going to use a
mechanism like cgd(4) to ensure that I only give them encrypted blocks
(just like I do with local disks that might remap sectors and prevent
me scrubbing them later).
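
For reference, that's only a couple of commands on NetBSD (device
names here are placeholders for whatever the SAN presents):

    # one-time: generate parameters and key for the backing disk
    cgdconfig -g -o /etc/cgd/sd0e aes-cbc 256
    # thereafter: layer the encrypted device over it and use that
    cgdconfig cgd0 /dev/sd0e
    disklabel -e -I cgd0
    newfs /dev/rcgd0a

The vault only ever sees the ciphertext on sd0e.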

If I want integrity protection (including against replay) some time
later when I fetch the blocks back from the vault, I need a
block-level (or fs-level) integrity mechanism as well, which I
currently lack (below the application layer).
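
What I have in mind is something of roughly this shape (a sketch of a
mechanism we don't have, not an existing facility; OpenSSL's HMAC and
all the names are purely illustrative): tag each block over (block
number, write generation, data), keep the tags and generation
counters where the vault can't forge them, and verify on read so
stale or transplanted blocks are detected:

    #include <stdint.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    #define BLKSIZE 4096

    /* Tag to store out-of-band for block "blkno"; "gen" increases on
     * every write, so a replayed older copy of the block can't verify. */
    static void
    blk_tag(const uint8_t key[32], uint64_t blkno, uint64_t gen,
        const uint8_t data[BLKSIZE], uint8_t tag[32])
    {
        uint8_t msg[16 + BLKSIZE];

        memcpy(msg, &blkno, 8);       /* bind the tag to its location */
        memcpy(msg + 8, &gen, 8);     /* and to its freshness */
        memcpy(msg + 16, data, BLKSIZE);
        HMAC(EVP_sha256(), key, 32, msg, sizeof(msg), tag, NULL);
    }

The hard part is where those tags and counters live; that's exactly
why it wants to be a block- or filesystem-level mechanism rather than
an afterthought.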

Because much of this stuff has been designed by the storage vendors
from the disk-side perspective, trust in the storage system is largely
assumed.  That's fine, but these kinds of systems highlight the
deficiencies in relying-party mechanisms from the OS and filesystem's
perspective.  Suns's ZFS is a good partial illustration of the kinds
of improvements necessary at the other end, and at least steps in the
right direction.  It's also clear that, in the meantime, the storage
vendors' attempts to help implement compensating controls for the
real world we inhabit are feeble and over-sold.

In storage, end-to-end security shouldn't be between host and disk; it
should be between write and read, from the host via the disk and back
again.

--
Dan.
