tech-kern archive
Re: x86 MD MSI/MSI-X implementation
Hi,
On 2015/04/10 1:59, Matt Thomas wrote:
>> On Apr 9, 2015, at 3:44 AM, Kengo NAKAHARA <k-nakahara%iij.ad.jp@localhost> wrote:
>> I implemented x86 MD MSI/MSI-X support code. Here is the patch:
>> http://www.netbsd.org/~knakahara/md-msi-msix/x86-md-msi-msix.patch
>>
>> Furthermore, here is the usage example of if_wm:
>> http://www.netbsd.org/~knakahara/md-msi-msix/if_wm-msi-msix.patch
>>
>> I believe this MD implementation will help decide the MI APIs.
>> One MI MSI/MSI-X API proposal is dyoung@n.o's bus_msi(9),
>> http://mail-index.netbsd.org/tech-kern/2014/06/06/msg017209.html
>> another is matt@n.o's API,
>> http://mail-index.netbsd.org/tech-kern/2011/08/05/msg011130.html
>> The other is mine; the above patch includes my API manual.
>>
>> I want feedback from various device driver MSI/MSI-X implementations
>> on points such as usability, portability, performance, and potential
>> issues. So, I would like to commit the above patch. If no one objects,
>> I will commit it after one or two weeks.
>> # Of course, I will commit it split into separate components.
>>
>> Could you comment on this implementation?
>
> PCI_MSI_MSIX should be __HAVE_PCI_MSI_MSIX
OK, I will change the PCI_MSI_MSIX kernel option to a __HAVE_PCI_MSI_MSIX
build flag.
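For illustration, here is a minimal sketch of how a driver could consume such
a build flag; wm_setup_msix() and wm_setup_legacy() are hypothetical helper
names, not functions from the patch.
====================
/*
 * Sketch only: guarding a driver's MSI/MSI-X path once
 * __HAVE_PCI_MSI_MSIX is a build flag rather than a kernel option.
 * wm_setup_msix() and wm_setup_legacy() are hypothetical helpers.
 */
static int
wm_setup_interrupts(struct wm_softc *sc)
{
#ifdef __HAVE_PCI_MSI_MSIX
	/* The port provides the MD MSI/MSI-X backend; prefer MSI-X. */
	if (wm_setup_msix(sc) == 0)
		return 0;
#endif
	/* Fall back to classic INTx. */
	return wm_setup_legacy(sc);
}
====================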
> void *vih;
>
> That should be pci_msix_handle_t or pci_msi_handle_t unless you want to
> still use pci_intr_handle_t.
>
> + mutex_enter(&cpu_lock);
> + error = intr_distribute(vih, affinity, NULL);
> + mutex_exit(&cpu_lock);
>
>
> Why the cpu_lock? shouldn't intr_distribute handle that itself?
intr_distribute() updates multiple cpu_info structures, so it requires
cpu_lock. Furthermore, taking cpu_lock outside of intr_distribute() is
required to keep the same style as intrctl(8).
# intrctl(8) is near-future work
intrctl(8) needs to get the handle for intr_distribute() before calling
intr_distribute(), since userland processes know only the interrupt name.
To prevent race conditions among multiple intrctl(8) processes, cpu_lock
is required, as in the code below:
====================
static int
intr_set_affinity_sysctl(kcpuset_t *newcpuset, kcpuset_t *oldcpuset)
{
	void *ich;	/* interrupt handle, same type as the driver-side vih */
	int error;

	mutex_enter(&cpu_lock);
	/* Look up the handle by the interrupt name passed from userland. */
	ich = intr_get_handler("interrupt name from userland");
	error = intr_distribute(ich, newcpuset, oldcpuset);
	mutex_exit(&cpu_lock);

	return error;
}
====================
To stay consistent with the above code, cpu_lock is taken outside of
intr_distribute().
> This should be a pci_intr_distribute and be done before the establish, not after.
The reason is similar to the above description: to keep the same style as
intrctl(8), which runs after the device is established.
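As a concrete illustration, here is a minimal sketch of the driver-side
ordering I have in mind; wm_bind_queue_intr() is a hypothetical helper name,
vih is the handle returned by the establish call, and the kcpuset routines
are the standard kernel ones.
====================
/*
 * Sketch only: this runs after the interrupt has been established, so
 * vih is already valid, and it takes cpu_lock around intr_distribute()
 * the same way the planned intrctl(8) sysctl path will.
 */
static int
wm_bind_queue_intr(void *vih, struct cpu_info *ci)
{
	kcpuset_t *affinity;
	int error;

	/* Build an affinity set containing only the target CPU. */
	kcpuset_create(&affinity, true);
	kcpuset_set(affinity, cpu_index(ci));

	mutex_enter(&cpu_lock);
	error = intr_distribute(vih, affinity, NULL);
	mutex_exit(&cpu_lock);

	kcpuset_destroy(affinity);
	return error;
}
====================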
> Why do you assume you will only allocate one slot per msi/msix call?
> Some devices will use multiple contiguous slots, so you have to allocate
> them as a set.
Well..., could you tell me what you mean by "slot"? The MSI-X table entry?
If it means the MSI-X table entry, pci_msix_alloc() and pci_msix_alloc_exact()
allocate contiguous MSI-X table entries. In contrast, pci_msix_alloc_map()
can allocate sparse MSI-X table entries (see the sketch below).
MSI always uses contiguous vector numbers.
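To illustrate the difference, here is a rough sketch of the two allocation
styles. The prototypes (argument order and the pci_intr_handle_t handle type)
are assumptions; the authoritative ones are in the manual page included in
the patch, so please read this as pseudocode rather than the exact API.
====================
/*
 * Pseudocode sketch; the real prototypes are in the patch's manual
 * page, and the handle type itself is still under discussion
 * (pci_intr_handle_t vs. pci_msix_handle_t).
 */
static int
example_msix_alloc(const struct pci_attach_args *pa, pci_intr_handle_t **ihsp)
{
	/* Sparse case: bind vectors to MSI-X table entries 0, 2 and 5. */
	u_int map[] = { 0, 2, 5 };
	int error;

	/* Contiguous case: four consecutive MSI-X table entries (0..3). */
	error = pci_msix_alloc_exact(pa, ihsp, 4);
	if (error == 0)
		return 0;

	/* Fall back to caller-chosen, non-contiguous table entries. */
	return pci_msix_alloc_map(pa, ihsp, map, __arraycount(map));
}
====================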
Thanks,
--
//////////////////////////////////////////////////////////////////////
Internet Initiative Japan Inc.
Device Engineering Section,
Core Product Development Department,
Product Division,
Technology Unit
Kengo NAKAHARA <k-nakahara%iij.ad.jp@localhost>