Subject: Re: Should Alpha PCI code manage latency timers?
To: None <firstname.lastname@example.org>
From: List Mail User <track@Plectere.com>
Date: 01/24/2005 14:58:41
>From email@example.com Mon Jan 24 12:53:06 2005
>Date: Mon, 24 Jan 2005 15:52:47 -0500
>From: Thor Lancelot Simon <firstname.lastname@example.org>
>To: List Mail User <track@Plectere.com>
>Cc: email@example.com, firstname.lastname@example.org
>Subject: Re: Should Alpha PCI code manage latency timers?
>On Mon, Jan 24, 2005 at 08:05:50AM -0800, List Mail User wrote:
>> This is from my *very* old PCI 1.0 and 1.1 drafts and specs, but...
>> The latency timer, min and max grant and irq fields in the PCI config
>> space are all reserved for either the BIOS or OS which does the initial
>> resource allocation. It is/was strictly a violation of the (original)
>Of course, the problem is that we run on lots of platforms where the
>firmware does only partial or even no "initial resource allocation".
Yes, this is the typical BIOS behavior with "PnP OS" enabled on most x86
PCs. And what has happened is that the subset the BIOS configures does *not*
qualify as initialization, i.e. the OS really does still have to walk the
bus(es) and potentially reallocate all the resources.
>Even on many modern x86 PCs, it is not uncommon these days to find that
>the BIOS has only done enough configuration work to find some small set
>of what it considers to be plausible boot devices, e.g. interrupt,
>memory, and latency settings for devices on the primary PCI bus and
>nothing at all for devices behind bridges.
>In such cases it's a little difficult to see what counts as "initial";
>if it's the first time the latency timer has been written to since bus
>reset, it seems to me what the isp driver is doing isn't, strictly
>speaking, wrong (or, at least, no more wrong than it would be to
>consistently set latency for all devices in the MD code for PCI buses
>on this port).
	Here I think you've almost hit on the correct answer. In fact, the
MD code *should* set the latency for all the devices (this is not merely
acceptable, but one of the original design goals - from my memory of
meetings almost 14 years ago).
>Complicating matters further is that it seems that a warm reboot of
>this machine doesn't reset the PCI bus -- since the user found that
>after a reboot back to his original kernel (which didn't set the
>latency timer for the IDE device) the latency value his test kernel
>had set had "stuck". How _can_ we do "the right thing" in such a
	This is because the specific fields I mentioned are intended to be
read-only after allocation; a warm reboot should *not* change them - the
values *should* stick.
>One thing I'm curious about: is a latency timer value of 0x00 legal?
>If so, what does it mean? In a similar vein, Reinoud's machine seems
>to have a device in it which claims maximum latency 0x00 but which
>powers up after bus reset with latency timer value 0xff (this seems
>to violate the specification, to say the least). How should we
>handle such illegal cases, in your opinion?
	A latency timer value of zero is indeed legal; it means the device
has no particular requirements of its own. Still, it needs to be updated to
account for the other devices on the bus (actually, if the device doesn't
have any bus-master capabilities, it likely just ignores any value anyway).
	On the other hand, I believe that the original specs had a maximum
allowed value of 0xC0 (or some other value less than 0xFF), and a device
which reads 0xFF likely doesn't implement the register as required by the
specs (a simple test would be to write a different value and check what you
get when reading back afterwards). The register is always supposed to be
read/write, but can only (safely/properly) be written at an early point
during initialization; individual devices are supposed to initialize the
register to reflect their own particular requirements at power on, and
system software (i.e. BIOS or OS) is supposed to update every device to the
maximum value that any particular device requires (exactly to avoid
contention issues and to allow devices to dynamically determine buffer size
needs).
	Once CardBus was added into the mix, there should have been some
clarification or changes, but I don't know what (if anything) they were;
hot-plug PCI has the same issue.
	In a similar vein, the "interrupt line" field is supposed to be
entirely unused by the chip and exists just for the sake of a driver writer.
(It is supposed to power up to zero, but many old devices only implemented
the lower 4 bits - incorrect, but in a PC-centric, pre-APIC world, it
worked.) I remember, at an early PCI-SIG plug-fest, Sun telling us that ours
was one of the few devices which actually implemented the full 8 bits - that
was either '91 or '92, before any released Sun products used PCI
instead of Sbus.
	I'm not certain whether a read-only value of zero would be legal for
a device with no bus-master capabilities (it seems it should be, but I'd
have to go back and check the exact wording). With any luck the 0xFF device
falls into this category and can be treated as if it had a value of zero (I
don't know exactly what device we are talking about though, so it is not
obvious what the intent of the designer(s) was).
	An example of an old device which relied on this is the old Adaptec
2940, which would walk the entire PCI tree and update all the devices
(because there were too many busted BIOSes back in the early to mid '90s).
This was one of the reasons those old Adaptec boards would add about a
minute to the machine's boot time (the actual code can be seen by
disassembling the ROM on one of them). Without the update, they would
regularly hang the bus (I tested this with a scope and probe, by disabling
part of the PCI ROM, in that time frame).
Hopefully, someone on this list has more recent copies of the specs
than mine, or has current access to the PCI-SIG ftp server.