Re: [PATCH] ARM: VIPT cache coherency fix
On Wed, Aug 06, 2008 at 03:48:22PM -0700, Matt Thomas wrote:
>
> On Aug 6, 2008, at 2:55 AM, Imre Deak wrote:
>
>> Currently page loaning (at least) is broken on ARM VIPT platforms.
>> The reason is that possible dirty cache lines are not taken into
>> account at the moment; the following is a fix for this.
>>
>> The fix is validated against the latest breakage related to the socket
>> page loaning reported earlier on the list.
>
> This is close to what I checked in, but it misses a few corner
> cases. The version I checked in works for the always-failing
> scp sdp:/lib/libc.so .
I have a boot-time crash with your patch; see a follow-up mail about it.
I checked your patch from yesterday, and I have the following questions
about it:
diff --git a/sys/arch/arm/arm32/pmap.c b/sys/arch/arm/arm32/pmap.c
index 2b62477..a23e0aa 100644
--- a/sys/arch/arm/arm32/pmap.c
+++ b/sys/arch/arm/arm32/pmap.c
[...]
@@ -1833,130 +1853,177 @@ pmap_vac_me_harder(struct vm_page *pg, pmap_t pm, vaddr_t va)
[...]
bad_alias = true;
}
pg->mdpage.pvh_attrs |= PVF_WRITE;
+ if (!bad_alias)
+ pg->mdpage.pvh_attrs |= PVF_DIRTY;
+ } else {
+ pg->mdpage.pvh_attrs &= ~PVF_WRITE;
}
/* If no conflicting colors, set everything back to cached */
if (!bad_alias) {
- PMAPCOUNT(vac_color_restore);
+#ifdef DEBUG
+ if ((pg->mdpage.pvh_attrs & PVF_WRITE)
+ || ro_mappings < 2) {
+ SLIST_FOREACH(pv, &pg->mdpage.pvh_list, pv_link)
+ KDASSERT(((tst_mask ^ pv->pv_va) & arm_cache_prefer_mask) == 0);
+ }
+
+#endif
+ if (ro_mappings > 1
+ && (pg->mdpage.pvh_attrs & PVF_DIRTY))
+ pmap_flush_page(pg, false);
Why is this necessary? We got here with PVF_DIRTY initially unset; it may
have been set again above when re-enabling caching for an r/w mapping.
Thus we'll always do a flush if we have an r/w mapping and more than one
r/o mapping, although we're just switching caching back on, so there can't
be any dirty cache lines.
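To make that concrete, here is a condensed sketch of the path as I read it
(the comments are my interpretation, not part of your patch):

	/* entry: page was uncached, PVF_DIRTY clear */
	if (rw_mappings > 0) {
		...
		pg->mdpage.pvh_attrs |= PVF_WRITE;
		if (!bad_alias)
			pg->mdpage.pvh_attrs |= PVF_DIRTY;	/* set just now */
	}
	if (!bad_alias) {
		/*
		 * Caching is only being re-enabled here, so no cached
		 * write can have happened yet, but the PVF_DIRTY set
		 * just above still forces a flush.
		 */
		if (ro_mappings > 1
		    && (pg->mdpage.pvh_attrs & PVF_DIRTY))
			pmap_flush_page(pg, false);
	}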
[...]
return;
} else if (!pmap_is_page_colored_p(pg)) {
/* not colored so we just use its color */
+ KASSERT(pg->mdpage.pvh_attrs & (PVF_WRITE|PVF_DIRTY));
I think for a first r/o mapping this shouldn't hold.
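If that's right, a relaxed assertion along these lines (just a sketch, not
tested) would still catch the interesting cases without tripping on a
first r/o mapping:

	/* Sketch: only require WRITE/DIRTY once writable mappings exist. */
	KASSERT(rw_mappings == 0 ||
	    (pg->mdpage.pvh_attrs & (PVF_WRITE|PVF_DIRTY)));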
PMAPCOUNT(vac_color_new);
pg->mdpage.pvh_attrs &= PAGE_SIZE - 1;
pg->mdpage.pvh_attrs |= PVF_COLORED
- | (va & arm_cache_prefer_mask);
- if (pg->mdpage.urw_mappings + pg->mdpage.krw_mappings > 0)
- pg->mdpage.pvh_attrs |= PVF_WRITE;
- KASSERT((pg->mdpage.urw_mappings + pg->mdpage.krw_mappings == 0) == !(pg->mdpage.pvh_attrs & PVF_WRITE));
+ | (va & arm_cache_prefer_mask)
+ | (rw_mappings > 0 ? PVF_WRITE : 0);
+ KASSERT((rw_mappings == 0) == !(pg->mdpage.pvh_attrs & PVF_WRITE));
Wouldn't it be a better idea to set PVF_DIRTY in this (and only this)
function, together with PVF_WRITE? The same goes for PVF_WRITE itself,
which I believe would make the code more understandable.
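Something like the following is what I have in mind (a sketch only;
pmap_set_write_attrs() is a made-up name):

	/*
	 * Hypothetical helper: the one place that derives PVF_WRITE and
	 * PVF_DIRTY from the mapping counts.  PVF_DIRTY is deliberately
	 * not cleared here, since dirty lines may still sit in the cache
	 * after the last r/w mapping goes away.
	 */
	static inline void
	pmap_set_write_attrs(struct vm_page *pg, u_int rw_mappings)
	{
		if (rw_mappings > 0)
			pg->mdpage.pvh_attrs |= PVF_WRITE | PVF_DIRTY;
		else
			pg->mdpage.pvh_attrs &= ~PVF_WRITE;
	}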
return;
} else if (((pg->mdpage.pvh_attrs ^ va) & arm_cache_prefer_mask) == 0) {
bad_alias = false;
- if (pg->mdpage.urw_mappings + pg->mdpage.krw_mappings > 0) {
+ if (rw_mappings > 0) {
/*
* We now have writeable mappings and more than one
* readonly mapping, verify the colors don't clash
* and mark the page as writeable.
*/
- if (pg->mdpage.uro_mappings + pg->mdpage.kro_mappings > 1
- && (pg->mdpage.pvh_attrs & PVF_WRITE) == 0) {
+ if (ro_mappings > 1
+ && (pg->mdpage.pvh_attrs & PVF_WRITE) == 0
+ && arm_cache_prefer_mask) {
arm_cache_prefer_mask is tested against 0 already at function entry.
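So the check could simply be:

	if (ro_mappings > 1
	    && (pg->mdpage.pvh_attrs & PVF_WRITE) == 0) {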
tst_mask = pg->mdpage.pvh_attrs & arm_cache_prefer_mask;
- for (pv = pg->mdpage.pvh_list;
- pv && !bad_alias;
- pv = pv->pv_next) {
+ SLIST_FOREACH(pv, &pg->mdpage.pvh_list, pv_link) {
/* if there's a bad alias, stop checking. */
- if (tst_mask != (pv->pv_va & arm_cache_prefer_mask))
+ if (((tst_mask ^ pv->pv_va) & arm_cache_prefer_mask) == 0) {
bad_alias = true;
+ break;
+ }
This condition is incorrect; it should be negated.
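I.e., to match what the removed tst_mask != ... line tested:

	if (((tst_mask ^ pv->pv_va) &
	    arm_cache_prefer_mask) != 0) {
		bad_alias = true;
		break;
	}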
[...]
+ } else if (rw_mappings == 0
&& (pg->mdpage.pvh_attrs & PVF_KMPAGE) == 0) {
KASSERT((pg->mdpage.pvh_attrs & PVF_WRITE) == 0);
+
/*
- * If all the mappings are read-only, don't do anything.
+ * If the page has dirty cache lines, clean it.
*/
- PMAPCOUNT(vac_color_blind);
- KASSERT((pg->mdpage.urw_mappings + pg->mdpage.krw_mappings == 0) == !(pg->mdpage.pvh_attrs & PVF_WRITE));
+ if (pg->mdpage.pvh_attrs & PVF_DIRTY)
+ pmap_flush_page(pg, false);
So now we're setting up an r/o mapping with a non-matching color, and a
previous r/w mapping has left a dirty cache line. You do only a
write-back but don't invalidate the line. After this, let's assume the
following scenario:
1. All r/o mappings removed, except one. Cache remains on.
2. r/w mapping added with matching color. Cache remains on, data is
   modified.
3. r/w mapping removed. Cache remains on, only one r/o mapping.
4. r/o mapping added with the same color as the original r/w mapping had,
   for which the write-back was made above. Cache is still on, since
   there are only r/o mappings.
At this moment the r/o mapping from 4. will see stale data.
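A sketch of what I'd expect instead, assuming the second argument of
pmap_flush_page() can request invalidation in addition to the write-back
(I haven't verified what it means in your version):

	/*
	 * Also invalidate the written-back lines, so a later r/o
	 * mapping with the old r/w color cannot hit stale data.
	 * If the flag doesn't mean that, an explicit cache
	 * invalidation would be needed here instead.
	 */
	if (pg->mdpage.pvh_attrs & PVF_DIRTY)
		pmap_flush_page(pg, true);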
[...]
@@ -2196,6 +2266,19 @@ pmap_clearbit(struct vm_page *pg, u_int maskbits)
pmap_syncicache_page(pg);
PMAPCOUNT(exec_synced_clearbit);
}
+ /*
+ * If we are changing this to read-only, we need to call vac_me_harder
+ * so we can change all the read-only pages to cacheable. We treat
+ * this as a page deletion.
+ */
+ if (need_vac_me_harder) {
+ if (pg->mdpage.pvh_attrs & PVF_NC)
+ pmap_vac_me_harder(pg, NULL, 0);
Isn't the pv_list protected by the pm lock? It's released at this point.
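If the lock really is needed, I'd expect something like this around the
call (placeholder lock names, just to show the ordering I mean):

	if (need_vac_me_harder) {
		page_pv_lock(pg);	/* placeholder: whatever guards pvh_list */
		if (pg->mdpage.pvh_attrs & PVF_NC)
			pmap_vac_me_harder(pg, NULL, 0);
		page_pv_unlock(pg);	/* placeholder */
	}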
[...]
@@ -3673,7 +3790,7 @@ pmap_fault_fixup(pmap_t pm, vaddr_t va, vm_prot_t ftype, int user)
printf("pmap_fault_fixup: mod emul. pm %p, va 0x%08lx, pa
0x%08lx\n",
pm, va, VM_PAGE_TO_PHYS(pg)));
- pg->mdpage.pvh_attrs |= PVF_REF | PVF_MOD;
+ pg->mdpage.pvh_attrs |= PVF_REF | PVF_MOD | PVF_DIRTY;
This should only be the case if the mapping is cached. Setting
PVF_DIRTY asynchronously doesn't make sense anyway, since we use it to
set up the cacheability for mappings correctly.
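E.g. something along these lines (a sketch; I'm using PVF_NC as the
"mapping is uncached" indicator, as elsewhere in this file):

	pg->mdpage.pvh_attrs |= PVF_REF | PVF_MOD;
	/* Only a cached mapping can leave dirty lines behind. */
	if ((pg->mdpage.pvh_attrs & PVF_NC) == 0)
		pg->mdpage.pvh_attrs |= PVF_DIRTY;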
--Imre