From: Hugh Dickins

This patch is the odd-one-out of the sequence.  The one before adjusted
copy_pte_range from a for loop to a do while loop, and it was therefore
simplest to check for lockbreak before copying a pte: that opens up the
possibility that it keeps getting preempted without making progress, under
some loads.  Some loads such as startup: 2*HT*P4 with preemption cannot
even reach multiuser login.

Suspect need_lockbreak is broken, can get into a state where it remains
forever true.  Investigate that later: for now, and for all time, it makes
sense to aim for a little progress before breaking out; and we can manage
more pte_nones than copies.

Signed-off-by: Hugh Dickins
Signed-off-by: Andrew Morton
---

 25-akpm/mm/memory.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff -puN mm/memory.c~ptwalk-copy_pte_range-hang mm/memory.c
--- 25/mm/memory.c~ptwalk-copy_pte_range-hang	2005-03-09 16:34:10.000000000 -0800
+++ 25-akpm/mm/memory.c	2005-03-09 16:34:10.000000000 -0800
@@ -328,6 +328,7 @@ static int copy_pte_range(struct mm_stru
 {
 	pte_t *src_pte, *dst_pte;
 	unsigned long vm_flags = vma->vm_flags;
+	int progress;
 
 again:
 	dst_pte = pte_alloc_map(dst_mm, dst_pmd, addr);
@@ -335,19 +336,23 @@ again:
 		return -ENOMEM;
 	src_pte = pte_offset_map_nested(src_pmd, addr);
 
+	progress = 0;
 	spin_lock(&src_mm->page_table_lock);
 	do {
 		/*
 		 * We are holding two locks at this point - either of them
 		 * could generate latencies in another task on another CPU.
 		 */
-		if (need_resched() ||
+		if (progress >= 32 && (need_resched() ||
 		    need_lockbreak(&src_mm->page_table_lock) ||
-		    need_lockbreak(&dst_mm->page_table_lock))
+		    need_lockbreak(&dst_mm->page_table_lock)))
 			break;
-		if (pte_none(*src_pte))
+		if (pte_none(*src_pte)) {
+			progress++;
 			continue;
+		}
 		copy_one_pte(dst_mm, src_mm, dst_pte, src_pte, vm_flags, addr);
+		progress += 8;
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
 	spin_unlock(&src_mm->page_table_lock);
 
_
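
For readers following the logic rather than the diff, below is a minimal
user-space sketch of the same progress-weighted lockbreak idea.  It is an
illustration only, under invented names: copy_slots, lockbreak_requested,
MIN_PROGRESS and the pthread mutex stand in for copy_pte_range,
need_lockbreak()/need_resched() and page_table_lock, and are not kernel
interfaces.

/*
 * Illustrative user-space sketch only -- none of these names exist in
 * the kernel.  A worker copies slots under a mutex, but will not honour
 * a lockbreak request until it has accumulated some minimum progress.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NSLOTS		1024
#define MIN_PROGRESS	32	/* aim for this much work before breaking out */

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int lockbreak_requested;	/* set by a contending thread */
static int src[NSLOTS], dst[NSLOTS];

/*
 * Copy slots [start, NSLOTS) under table_lock; return the index reached,
 * so the caller can drop the lock and resume from there.  Assumes
 * start < NSLOTS.
 */
static size_t copy_slots(size_t start)
{
	size_t i = start;
	int progress = 0;

	pthread_mutex_lock(&table_lock);
	do {
		/* Only consider breaking out once some progress is made. */
		if (progress >= MIN_PROGRESS &&
		    atomic_load(&lockbreak_requested))
			break;
		if (src[i] == 0) {	/* cheap case: weigh it as 1 */
			progress++;
			continue;
		}
		dst[i] = src[i];	/* expensive case: weigh it as 8 */
		progress += 8;
	} while (++i < NSLOTS);
	pthread_mutex_unlock(&table_lock);
	return i;
}

int main(void)
{
	size_t next = 0;

	/* Retry until done: the equivalent of the "goto again" path. */
	while (next < NSLOTS)
		next = copy_slots(next);
	return 0;
}

As in the patch, an empty slot counts as 1 unit of progress and a real copy
as 8, so each pass is guaranteed to do some work before it honours a break
request; and because the caller simply resumes from where it stopped, the
copy still completes even if the break flag gets stuck permanently true,
which is the failure this patch guards against in copy_pte_range.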