NetBSD-Bugs archive


Re: kern/45917: Unsupported VT6415 PCIE IDE single-channel controller



The following reply was made to PR kern/45917; it has been noted by GNATS.

From: Matthew Mondor <mm_lists%pulsar-zone.net@localhost>
To: gnats-bugs%NetBSD.org@localhost
Cc: 
Subject: Re: kern/45917: Unsupported VT6415 PCIE IDE single-channel
 controller
Date: Tue, 21 Feb 2012 16:03:34 -0500

 On Fri,  3 Feb 2012 02:45:00 +0000 (UTC)
 Matthew Mondor <mm_lists%pulsar-zone.net@localhost> wrote:
 
 > ppb1 at pci0 dev 28 function 0: vendor 0x8086 product 0x1c10 (rev. 0xb5)
 > ppb1: unsupported PCI Express version
 > pci2 at ppb1 bus 2
 > pci2: i/o space, memory space enabled, rd/line, wr/inv ok
 > ppb2 at pci0 dev 28 function 4: vendor 0x8086 product 0x1c18 (rev. 0xb5)
 > ppb2: unsupported PCI Express version
 > pci3 at ppb2 bus 3
 > pci3: i/o space, memory space enabled, rd/line, wr/inv ok
 > pciide0 at pci3 dev 0 function 0
 > pciide0: vendor 0x1106 product 0x0415 (rev. 0x00)
 > pciide0: bus-master DMA support present, but unused (no driver support)
 > pciide0: primary channel wired to native-PCI mode
 > pciide0: using ioapic0 pin 16 for native-PCI interrupt
 > atabus0 at pciide0 channel 0
 > pciide0: secondary channel wired to native-PCI mode
 > atabus1 at pciide0 channel 1
 
 Another interesting consequence of using PIO was that, as transfers
 were happening (at no more than 1.2MB/s), a lot of CPU time appeared
 to accumulate on any active process.  Of course, the system was also
 much less responsive during such transfers.
 
 Fortunately, for now I could enable DMA by forcing the 0x0001 flag to
 pciide(4), and transfers are much more stable and efficient on the
 PATA interface.  The SATA interfaces work well.  Ideally viaide(4)
 should be fixed, but at least I have a working system.
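 In case it is useful to others, the kernel configuration line looks
 something like the following (a sketch; pciide(4) documents the exact
 flag semantics, and the wildcard locators are assumptions for a
 generic config):

```
# Kernel config sketch: force bus-master DMA on an otherwise
# unrecognized controller by passing flags 0x0001 to pciide(4).
pciide* at pci? dev ? function ? flags 0x0001
```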
 
 There is a new issue that I discovered when using the disks with DMA
 this way.  For certain operations, such as using dd and cgd to scrub a
 drive (/dev/zero being fed through aes with a random key), performance
 was decent at first (12MB/s) yet steadily decreased down to about
 2MB/s, at which point the whole system became less responsive.  If I
 suspended and resumed the dd process, the drive would write out its
 cache at full speed, and performance would be decent again at 12MB/s,
 only to drop gradually once more.  I used a script to stop/restart the
 process at regular intervals for the procedure to finally complete.
 It's as if there were a cache-related issue where general system
 performance drops as the cache fills.  I have been wondering whether
 this issue could in any way be related to the one I had with re(4)
 (kern/45928), though unlike with re(4) the system remained usable.
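 The stop/restart script was along these lines (a sketch, not the
 exact script I used; the run and pause intervals are guesses, and the
 dummy job in the example stands in for the real dd):

```shell
#!/bin/sh
# Work-around sketch: periodically suspend the given process so the
# drive can drain its write cache, then resume it.  Intervals are
# assumptions, not measured values.
throttle() {
    pid=$1 run=$2 pause=$3
    while kill -0 "$pid" 2>/dev/null; do
        sleep "$run"                   # let dd run until throughput sags
        kill -STOP "$pid" 2>/dev/null  # suspend; cache flushes at full speed
        sleep "$pause"
        kill -CONT "$pid" 2>/dev/null  # resume the transfer
    done
}

# Example with a dummy job and short intervals:
sleep 3 &
throttle $! 1 1
echo "job finished"
```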
 
 I have yet to do more testing with ACPI disabled.
 -- 
 Matt
 

