Re: RFC: NUMA support
On Mon, Nov 24, 2008 at 12:47:41PM -0600, David Young wrote:
> Are pci0 and pci8 and the other peripheral buses more properly attached
> to a NUMA node?
> Currently, numa0..numaN are just aggregations of CPUs, at least as far
> as pmf(9) is concerned. AFAICT, a NUMA node is a real physical entity
> with RAM attached. Looking ahead, what will it mean for a NUMA node
> to be suspended? Will the system vacate that node's RAM and turn off
> DRAM refresh? What, for that matter, will it mean to detach a NUMA node?
It is instructive to look at the SGI Origin 3000 and Altix and consider
the range of possible system configurations. Although these systems have
a logical architecture basically identical to the old Origin 2000's, their
physical architecture is completely modular, so you can actually build
any strange logical configuration you can think of.
They are divided into "bricks": "C-bricks" have processors and memory;
"P-bricks" have PCI slots; "M-bricks" have only memory; "R-bricks" are
the routers for the NUMA interconnect. You can build a system with all
your memory right next to the PCI buses, interconnect-wise, and all your
processors way out at the other end -- if that's what you want. Of course,
power management wasn't much of a concern when these systems were designed.
But it does point out that there are situations where you really do want
to know which PCI device is local to which memory and which CPU. This is
also the case on a lot of embedded multicore designs, albeit with respect
to L2 cache, not main memory. We should find a way to express and ideally
to use this kind of topology information, if we can.
Thor Lancelot Simon
"Even experienced UNIX users occasionally enter rm *.* at the UNIX
prompt only to realize too late that they have removed the wrong
segment of the directory structure." - Microsoft WSS whitepaper