tech-pkg archive


Re: Deciding on which variant(s) of the OpenBLAS library to install



On 03/02/18 03:05, Dr. Thomas Orgis wrote:
I really would like to get going with implementing something, as people
are waiting for updated software …

On Tue, 27 Feb 2018 09:18:05 -0600,
Jason Bacon <bacon4000%gmail.com@localhost> wrote:

As long as I can make a dependent package use any of the available BLAS
implementations and I can install them all on the same cluster, I'm happy.

Is it enough for you to be able to override a global default in your
install of a dependent package (mk.conf), or do we have a hard
disagreement here about dependent packages defaulting to different
BLAS libs?

I hope I made my point clear in the other lengthy mails. Regarding
compatibility of different BLAS implementations with dependent packages:
disregarding parallelization choices, where IMHO we should never impose
a multithreaded default on the user, as it depends too much on the use
case (a slight change from my initial opinion, considering that HPC
installations are _not_ the norm), any compatibility issues come down to
telling the package to use the correct library name. The BLAS and LAPACK
APIs are so stable, with straightforward mathematical correctness tests,
that we really should not consider a package incompatible with a certain
BLAS once we have hacked a generic choice of BLAS LDFLAGS into the
build. Most builds explicitly offer such a choice, precisely because the
long-term practice in HPC is to provide the BLAS linking flags for the
system at hand.

If we still disagree, I am eagerly awaiting your arguments to the
contrary, but I would prefer to finally get on with adding the switches
to pkgsrc and building a refreshed software stack with a consistent BLAS
for our users.

I'll forget about the OpenBLAS CMake files and such for now. I'll
happily start with the openblas-devel package and add the second
parallel variant to it.


Alrighty then,

Thomas

Sorry, I've been completely inundated for the past week, with emails coming in twice as fast as I can answer them...

I'm OK with any setup that allows all BLAS implementations to be installed simultaneously and lets dependent packages select a non-default BLAS.  I think a switchable default is fine, as long as it doesn't lock every dependent package into the same implementation.
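To make that concrete, something along these lines in mk.conf is what I have in mind.  This is only a sketch; the variable names are invented for illustration, not an existing pkgsrc interface:

    # Sketch only -- hypothetical variable names.
    # Global default BLAS for all dependent packages:
    PKGSRC_BLAS_TYPES?=     openblas

    # Override for a single dependent package, e.g. math/R,
    # without touching anything else:
    .if ${PKGPATH} == "math/R"
    PKGSRC_BLAS_TYPES=      netlib
    .endif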

My main concern is the ability to work around inevitable regressions quickly without risk to other dependent packages.

Yes, the API is very stable, but bugs will exist in some implementations and not others.
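On the mechanics, I agree with your earlier point that this mostly comes down to handing each build the right linker flags.  A hypothetical mk fragment, with all names again purely illustrative, could reduce to something like:

    # Hypothetical blas.mk fragment -- illustrative names only.
    .if !empty(PKGSRC_BLAS_TYPES:Mopenblas)
    BLAS_LIBS=      -lopenblas      # OpenBLAS bundles LAPACK as well
    LAPACK_LIBS=    -lopenblas
    .else
    BLAS_LIBS=      -lblas          # Netlib reference implementation
    LAPACK_LIBS=    -llapack
    .endif

    # A dependent package would then just forward the flags, e.g.:
    CONFIGURE_ARGS+=        --with-blas=${BLAS_LIBS:Q}
    CONFIGURE_ARGS+=        --with-lapack=${LAPACK_LIBS:Q}

Stable linking mechanics don't remove the bug problem, though: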

For example, while one dependent package may work well with OpenBLAS today, another may hit an obscure bug and need a quick workaround to keep research moving.  Time is critical and man-hours are in short supply in HPC; waiting for the bug to be properly fixed may mean missing a grant deadline.  Switching the global default could break another dependent package, worst of all in the middle of a long-running parallel job.  The ability to recompile that one dependent package against another BLAS implementation would avoid impacting anything else and make pkgsrc look like a godsend to researchers.  The inability to do so would make it look like a ball and chain.
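In that scenario the workaround could be a one-liner from the affected package's directory, using the same hypothetical variable as above:

    # Rebuild one dependent package against the reference BLAS,
    # leaving the rest of the installed stack untouched:
    $ cd /usr/pkgsrc/math/R
    $ make replace PKGSRC_BLAS_TYPES=netlib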

The most common fear I hear from researchers unfamiliar with pkgsrc is losing flexibility.  They're used to doing caveman installs, having total freedom, and don't want to sacrifice that, despite the huge cost in time and, in many cases, outright failure to get their software installed.  They're only familiar with binary package managers like Yum and the Debian package tools, and the whole idea of a package manager building from source with multiple build options is usually foreign to them.

If we can show that pkgsrc reduces software deployment man-hours by an order of magnitude *and* allows users a great deal of freedom to control builds, then it will be accepted and become popular in HPC.

As BLAS is a centerpiece in much of HPC, how we implement it will be critical to how pkgsrc is perceived.

Thanks again for all your hard work exploring this.

