Bug 160431 - [busdma] [patch] Disable interrupts during busdma cache sync operations.
Summary: [busdma] [patch] Disable interrupts during busdma cache sync operations.
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: arm
Version: Unspecified
Hardware: Any Any
Importance: Normal Affects Only Me
Assignee: freebsd-arm (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-09-03 18:20 UTC by Ian Lepore
Modified: 2012-12-31 21:10 UTC (History)
0 users

See Also:


Attachments
diff.tmp (916 bytes, text/plain)
2011-09-03 18:20 UTC, Ian Lepore
no flags Details
arm_busdma_machdep_c.diff (2.79 KB, patch)
2011-10-16 20:57 UTC, marktinguely
no flags Details | Diff
arm_busdma.diff (2.52 KB, patch)
2012-04-21 17:01 UTC, Ian Lepore
no flags Details | Diff
arm_busdma.diff (2.79 KB, patch)
2012-04-21 17:22 UTC, Ian Lepore
no flags Details | Diff

Description Ian Lepore 2011-09-03 18:20:12 UTC
Data can be corrupted when an interrupt occurs while bus_dmamap_sync_buf()
is handling a buffer that partially overlaps a cache line.  One scenario,
seen in the real world, was a small I/O buffer allocated in the same cache
line as an instance of a struct intr_handler.  The dma sync code copied the
non-DMA data (the intr_handler struct) to a temporary buffer prior to the
cache sync, then an interrupt occurred that resulted in setting the ih_need
flag in the struct.  When control returned to the dma sync code, it finished
by copying the saved partial cache line from the temporary buffer back to
the intr_handler struct, restoring the ih_need flag to zero and leaving a
threaded interrupt handler not running as needed.

Similar sequences can be imagined that would lead to corruption of either
the DMA'd data or non-DMA data sharing the same cache line, depending on the
timing of the interrupt, and I can't quite convince myself that the problem
only occurs in this partial-cacheline-overlap scenario.  For example, what
happens if execution is in the middle of a cpu_dcache_wbinv_range() operation
and an interrupt leads to a context switch wherein the pmap code decides to
call cpu_dcache_inv_range()?  So to be conservatively safe, this patch
disables interrupts for the entire duration of bus_dmamap_sync_buf(), not
just when partial cache lines are being handled.

Fix: Problem was discovered in an 8.2 environment, but this diff is to -current.
How-To-Repeat: It would be very difficult to set up a repeatable test of this condition.  We
were lucky (!) enough to have it happen repeatably enough to diagnose.
Comment 1 marktinguely 2011-09-03 20:05:32 UTC
On 9/3/2011 12:19 PM, Ian Lepore wrote:
>> Number:         160431
>> Category:       arm
>> Synopsis:       [patch] Disable interrupts during busdma cache sync operations.
>> Confidential:   no
>> Severity:       critical
>> Priority:       high
>> Responsible:    freebsd-arm
>> State:          open
>> Quarter:
>> Keywords:
>> Date-Required:
>> Class:          sw-bug
>> Submitter-Id:   current-users
>> Arrival-Date:   Sat Sep 03 17:20:12 UTC 2011
>> Closed-Date:
>> Last-Modified:
>> Originator:     Ian Lepore<freebsd@damnhippie.dyndns.org>
>> Release:        FreeBSD 8.2-RC3 arm
>> Organization:
> none
>> Environment:
> FreeBSD dvb 8.2-RC3 FreeBSD 8.2-RC3 #49: Tue Feb 15 22:52:14 UTC 2011     root@revolution.hippie.lan:/usr/obj/arm/usr/src/sys/DVB  arm
>
> [description, how-to-repeat, and fix quoted verbatim above]
>
> --- diff.tmp begins here ---
> --- busdma_machdep.c.orig	2010-03-11 14:16:54.000000000 -0700
> +++ busdma_machdep.c	2011-09-03 10:15:16.000000000 -0600
> @@ -1091,6 +1091,14 @@ static void
>   bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
>   {
>   	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
> +	uint32_t intr;
> +
> +	/* Interrupts MUST be disabled when handling partial cacheline flush
> +	 * and most likely should be disabled for all flushes.  (I know for
> +	 * certain interrupts can cause failures on partial flushes, and suspect
> +	 * problems could also happen in other scenarios.)
> +	 */
> +	intr = intr_disable();
>
>   	if ((op&  BUS_DMASYNC_PREWRITE)&&  !(op&  BUS_DMASYNC_PREREAD)) {
>   		cpu_dcache_wb_range((vm_offset_t)buf, len);
> @@ -1129,6 +1137,8 @@ bus_dmamap_sync_buf(void *buf, int len,
>   			    arm_dcache_align - (((vm_offset_t)(buf) + len)&
>   			arm_dcache_align_mask));
>   	}
> +
> +	intr_restore(intr);
>   }
>
>   static void
> --- diff.tmp ends here ---
>

Which processor are you using (for my curiosity)?

If this is easily reproducible, would you please put the interrupt 
disable/restore just around the  BUS_DMASYNC_POSTREAD option? (for my 
curiosity again).

Thank-you

--Mark.
Comment 2 Ian Lepore 2011-09-03 20:43:38 UTC
On Sat, 2011-09-03 at 14:05 -0500, Mark Tinguely wrote:
> On 9/3/2011 12:19 PM, Ian Lepore wrote:
> > [bug report]
> 
> [questions quoted above]

It's an Atmel at91rm9200.  It's been weeks since we were actively
working this problem (I'm just way behind on submitting fixes back to
the community), so it would be pretty hard at this point to get back to
where the problem is easily reproducible.  But when we were still
investigating it, I remember instrumenting the code to conclusively
prove it was the handling of partially-overlapping cache lines within
the POSTREAD block that was leading to the ih_need flag of the nearby
intr_handler struct getting reset to zero.  

We actually discovered all this on code that was mostly 6.2 with some
6.4 stuff merged in.  We're now almost up and running on 8.2 on our
embedded arm products, so the whole 6.2 thing is feeling like a bad
nightmare that won't quite fade from memory. :)

Putting the intr_disable() at the next-outer layer of code was just an
abundance of caution.  But given that the pmap code and the busdma code
can both lead to writeback and/or invalidate activity, and that neither
piece of code knows what the other is doing, I've had a growing queasy
feeling about this stuff for a while.  For example, the pmap code can
decide to do a writeback that could overwrite DMA-in-progress data,
especially in the cases of partial overlap of a DMA operation with a
cache line.  But I didn't want to start raising any alarms until I've
learned more about the code in its current form, and I'm still working
on learning that.

-- Ian
Comment 3 marktinguely 2011-10-16 20:57:50 UTC
Ian and I have been exchanging emails this week about this problem.  We
both have examples where turning off interrupts will not be enough, but
I think this is a good start.

If we cache-align the allocation sizes that are less than a PAGE_SIZE in
bus_dmamem_alloc(), then it may help avoid this routine.  Attached is
crude concept code for the alignment.

--Mark.
Comment 4 Ian Lepore 2012-04-21 17:01:06 UTC
Here is the latest and most-tested patch for this problem.  

Mark Tinguely had suggested that the scope of having interrupts disabled
be narrowed to just the time spent doing a partial cacheline flush, as
opposed to disabling them for the entire bus_dmamap_sync_buf() function
as my original patch did.  I made that change and we've been shipping
products using this new patch since September 2011, but I apparently
neglected to submit the updated patch (which also fixes some line
wrapping and other style(9) issues).

-- Ian
Comment 5 Ian Lepore 2012-04-21 17:22:47 UTC
Arrgh!  One more time, with the actual correct patch attached (and this
time validated correctly by building kernel rather than world, doh!).

Comment 6 dfilter service 2012-04-22 01:58:20 UTC
Author: marius
Date: Sun Apr 22 00:58:04 2012
New Revision: 234561
URL: http://svn.freebsd.org/changeset/base/234561

Log:
  Interrupts must be disabled while handling a partial cache line flush,
  as otherwise the interrupt handling code may modify data in the non-DMA
  part of the cache line while we have it stashed away in the temporary
  stack buffer, then we end up restoring a stale value.
  
  PR:		160431
  Submitted by:	Ian Lepore
  MFC after:	1 week

Modified:
  head/sys/arm/arm/busdma_machdep.c

Modified: head/sys/arm/arm/busdma_machdep.c
==============================================================================
--- head/sys/arm/arm/busdma_machdep.c	Sun Apr 22 00:43:32 2012	(r234560)
+++ head/sys/arm/arm/busdma_machdep.c	Sun Apr 22 00:58:04 2012	(r234561)
@@ -1091,14 +1091,16 @@ static void
 bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
 {
 	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
+	register_t s;
+	int partial; 
 
 	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
 		cpu_dcache_wb_range((vm_offset_t)buf, len);
 		cpu_l2cache_wb_range((vm_offset_t)buf, len);
 	}
+	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
 	if (op & BUS_DMASYNC_PREREAD) {
-		if (!(op & BUS_DMASYNC_PREWRITE) &&
-		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
+		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
 			cpu_dcache_inv_range((vm_offset_t)buf, len);
 			cpu_l2cache_inv_range((vm_offset_t)buf, len);
 		} else {
@@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
 		}
 	}
 	if (op & BUS_DMASYNC_POSTREAD) {
-		if ((vm_offset_t)buf & arm_dcache_align_mask) {
-			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
-			    arm_dcache_align_mask),
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		}
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
-			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			s = intr_disable();
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask),
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy(_tmp_clend, 
+				    (void *)((vm_offset_t)buf + len),
+				    arm_dcache_align - (((vm_offset_t)(buf) +
+				    len) & arm_dcache_align_mask));
 		}
 		cpu_dcache_inv_range((vm_offset_t)buf, len);
 		cpu_l2cache_inv_range((vm_offset_t)buf, len);
-
-		if ((vm_offset_t)buf & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf &
-			    ~arm_dcache_align_mask), _tmp_cl, 
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask), _tmp_cl, 
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf + len), 
+				    _tmp_clend, arm_dcache_align - 
+				    (((vm_offset_t)(buf) + len) &
+				    arm_dcache_align_mask));
+			intr_restore(s);
+		}
 	}
 }
 
_______________________________________________
svn-src-all@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
Comment 7 dfilter service 2012-05-26 10:14:33 UTC
Author: marius
Date: Sat May 26 09:13:24 2012
New Revision: 236085
URL: http://svn.freebsd.org/changeset/base/236085

Log:
  MFC: r234561
  
  Interrupts must be disabled while handling a partial cache line flush,
  as otherwise the interrupt handling code may modify data in the non-DMA
  part of the cache line while we have it stashed away in the temporary
  stack buffer, then we end up restoring a stale value.
  
  PR:		160431
  Submitted by:	Ian Lepore

Modified:
  stable/9/sys/arm/arm/busdma_machdep.c
Directory Properties:
  stable/9/sys/   (props changed)
  stable/9/sys/amd64/include/xen/   (props changed)
  stable/9/sys/boot/   (props changed)
  stable/9/sys/boot/i386/efi/   (props changed)
  stable/9/sys/boot/ia64/efi/   (props changed)
  stable/9/sys/boot/ia64/ski/   (props changed)
  stable/9/sys/boot/powerpc/boot1.chrp/   (props changed)
  stable/9/sys/boot/powerpc/ofw/   (props changed)
  stable/9/sys/cddl/contrib/opensolaris/   (props changed)
  stable/9/sys/conf/   (props changed)
  stable/9/sys/contrib/dev/acpica/   (props changed)
  stable/9/sys/contrib/octeon-sdk/   (props changed)
  stable/9/sys/contrib/pf/   (props changed)
  stable/9/sys/contrib/x86emu/   (props changed)
  stable/9/sys/dev/   (props changed)
  stable/9/sys/dev/e1000/   (props changed)
  stable/9/sys/dev/ixgbe/   (props changed)
  stable/9/sys/fs/   (props changed)
  stable/9/sys/fs/ntfs/   (props changed)
  stable/9/sys/modules/   (props changed)

Modified: stable/9/sys/arm/arm/busdma_machdep.c
==============================================================================
--- stable/9/sys/arm/arm/busdma_machdep.c	Sat May 26 09:11:45 2012	(r236084)
+++ stable/9/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:24 2012	(r236085)
@@ -1091,14 +1091,16 @@ static void
 bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
 {
 	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
+	register_t s;
+	int partial; 
 
 	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
 		cpu_dcache_wb_range((vm_offset_t)buf, len);
 		cpu_l2cache_wb_range((vm_offset_t)buf, len);
 	}
+	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
 	if (op & BUS_DMASYNC_PREREAD) {
-		if (!(op & BUS_DMASYNC_PREWRITE) &&
-		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
+		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
 			cpu_dcache_inv_range((vm_offset_t)buf, len);
 			cpu_l2cache_inv_range((vm_offset_t)buf, len);
 		} else {
@@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
 		}
 	}
 	if (op & BUS_DMASYNC_POSTREAD) {
-		if ((vm_offset_t)buf & arm_dcache_align_mask) {
-			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
-			    arm_dcache_align_mask),
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		}
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
-			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			s = intr_disable();
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask),
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy(_tmp_clend, 
+				    (void *)((vm_offset_t)buf + len),
+				    arm_dcache_align - (((vm_offset_t)(buf) +
+				    len) & arm_dcache_align_mask));
 		}
 		cpu_dcache_inv_range((vm_offset_t)buf, len);
 		cpu_l2cache_inv_range((vm_offset_t)buf, len);
-
-		if ((vm_offset_t)buf & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf &
-			    ~arm_dcache_align_mask), _tmp_cl, 
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask), _tmp_cl, 
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf + len), 
+				    _tmp_clend, arm_dcache_align - 
+				    (((vm_offset_t)(buf) + len) &
+				    arm_dcache_align_mask));
+			intr_restore(s);
+		}
 	}
 }
 
Comment 8 dfilter service 2012-05-26 10:14:38 UTC
Author: marius
Date: Sat May 26 09:13:38 2012
New Revision: 236086
URL: http://svn.freebsd.org/changeset/base/236086

Log:
  MFC: r234561
  
  Interrupts must be disabled while handling a partial cache line flush,
  as otherwise the interrupt handling code may modify data in the non-DMA
  part of the cache line while we have it stashed away in the temporary
  stack buffer, then we end up restoring a stale value.
  
  PR:		160431
  Submitted by:	Ian Lepore

Modified:
  stable/8/sys/arm/arm/busdma_machdep.c
Directory Properties:
  stable/8/sys/   (props changed)
  stable/8/sys/amd64/include/xen/   (props changed)
  stable/8/sys/boot/   (props changed)
  stable/8/sys/cddl/contrib/opensolaris/   (props changed)
  stable/8/sys/contrib/dev/acpica/   (props changed)
  stable/8/sys/contrib/pf/   (props changed)
  stable/8/sys/dev/e1000/   (props changed)

Modified: stable/8/sys/arm/arm/busdma_machdep.c
==============================================================================
--- stable/8/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:24 2012	(r236085)
+++ stable/8/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:38 2012	(r236086)
@@ -1091,14 +1091,16 @@ static void
 bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
 {
 	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
+	register_t s;
+	int partial; 
 
 	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
 		cpu_dcache_wb_range((vm_offset_t)buf, len);
 		cpu_l2cache_wb_range((vm_offset_t)buf, len);
 	}
+	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
 	if (op & BUS_DMASYNC_PREREAD) {
-		if (!(op & BUS_DMASYNC_PREWRITE) &&
-		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
+		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
 			cpu_dcache_inv_range((vm_offset_t)buf, len);
 			cpu_l2cache_inv_range((vm_offset_t)buf, len);
 		} else {
@@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
 		}
 	}
 	if (op & BUS_DMASYNC_POSTREAD) {
-		if ((vm_offset_t)buf & arm_dcache_align_mask) {
-			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
-			    arm_dcache_align_mask),
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		}
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
-			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			s = intr_disable();
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask),
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy(_tmp_clend, 
+				    (void *)((vm_offset_t)buf + len),
+				    arm_dcache_align - (((vm_offset_t)(buf) +
+				    len) & arm_dcache_align_mask));
 		}
 		cpu_dcache_inv_range((vm_offset_t)buf, len);
 		cpu_l2cache_inv_range((vm_offset_t)buf, len);
-
-		if ((vm_offset_t)buf & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf &
-			    ~arm_dcache_align_mask), _tmp_cl, 
-			    (vm_offset_t)buf & arm_dcache_align_mask);
-		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
-			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
-			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
-			   arm_dcache_align_mask));
+		if (partial) {
+			if ((vm_offset_t)buf & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf &
+				    ~arm_dcache_align_mask), _tmp_cl, 
+				    (vm_offset_t)buf & arm_dcache_align_mask);
+			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
+				memcpy((void *)((vm_offset_t)buf + len), 
+				    _tmp_clend, arm_dcache_align - 
+				    (((vm_offset_t)(buf) + len) &
+				    arm_dcache_align_mask));
+			intr_restore(s);
+		}
 	}
 }
 
Comment 9 Marius Strobl 2012-05-26 10:22:51 UTC
State Changed
From-To: open->closed

Close
Comment 10 dfilter service 2012-12-31 21:00:46 UTC
Author: gonzo
Date: Mon Dec 31 21:00:38 2012
New Revision: 244912
URL: http://svnweb.freebsd.org/changeset/base/244912

Log:
  Merge r234561 from busdma_machdep.c to ARMv6 version of busdma:
  
  Interrupts must be disabled while handling a partial cache line flush,
  as otherwise the interrupt handling code may modify data in the non-DMA
  part of the cache line while we have it stashed away in the temporary
  stack buffer, then we end up restoring a stale value.
  
  PR:             160431
  Submitted by:   Ian Lepore

Modified:
  head/sys/arm/arm/busdma_machdep-v6.c

Modified: head/sys/arm/arm/busdma_machdep-v6.c
==============================================================================
--- head/sys/arm/arm/busdma_machdep-v6.c	Mon Dec 31 16:52:52 2012	(r244911)
+++ head/sys/arm/arm/busdma_machdep-v6.c	Mon Dec 31 21:00:38 2012	(r244912)
@@ -1347,35 +1347,49 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
 			while (sl != NULL) {
 					/* write back the unaligned portions */
 				vm_paddr_t physaddr;
+				register_t s = 0;
+
 				buf = sl->vaddr;
 				len = sl->datacount;
 				physaddr = sl->busaddr;
 				bbuf = buf & ~arm_dcache_align_mask;
 				ebuf = buf + len;
 				physaddr = physaddr & ~arm_dcache_align_mask;
-				unalign = buf & arm_dcache_align_mask;
-				if (unalign) {
-					memcpy(_tmp_cl, (void *)bbuf, unalign);
-					len += unalign; /* inv entire cache line */
-				}
-				unalign = ebuf & arm_dcache_align_mask;
-				if (unalign) {
-					unalign = arm_dcache_align - unalign;
-					memcpy(_tmp_clend, (void *)ebuf, unalign);
-					len += unalign; /* inv entire cache line */
+
+
+				if ((buf & arm_dcache_align_mask) ||
+				    (ebuf & arm_dcache_align_mask)) {
+					s = intr_disable();
+					unalign = buf & arm_dcache_align_mask;
+					if (unalign) {
+						memcpy(_tmp_cl, (void *)bbuf, unalign);
+						len += unalign; /* inv entire cache line */
+					}
+
+					unalign = ebuf & arm_dcache_align_mask;
+					if (unalign) {
+						unalign = arm_dcache_align - unalign;
+						memcpy(_tmp_clend, (void *)ebuf, unalign);
+						len += unalign; /* inv entire cache line */
+					}
 				}
-					/* inv are cache length aligned */
+
+				/* inv are cache length aligned */
 				cpu_dcache_inv_range(bbuf, len);
 				l2cache_inv_range(bbuf, physaddr, len);
 
-				unalign = (vm_offset_t)buf & arm_dcache_align_mask;
-				if (unalign) {
-					memcpy((void *)bbuf, _tmp_cl, unalign);
-				}
-				unalign = ebuf & arm_dcache_align_mask;
-				if (unalign) {
-					unalign = arm_dcache_align - unalign;
-					memcpy((void *)ebuf, _tmp_clend, unalign);
+				if ((buf & arm_dcache_align_mask) ||
+				    (ebuf & arm_dcache_align_mask)) {
+					unalign = (vm_offset_t)buf & arm_dcache_align_mask;
+					if (unalign)
+						memcpy((void *)bbuf, _tmp_cl, unalign);
+
+					unalign = ebuf & arm_dcache_align_mask;
+					if (unalign)
+						memcpy((void *)ebuf, _tmp_clend,
+						    arm_dcache_align - unalign);
+
+					intr_restore(s);
 				}
 				sl = STAILQ_NEXT(sl, slinks);
 			}