multifd: bugfix for incorrect migration data with QPL compression

When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page during the
iterative memory copy, the page is not updated to a zero page again on
the target side, resulting in inconsistent memory data between the
source and target.

The root cause is that the target side does not record normal pages in
the receivedmap.

The solution is to call ramblock_recv_bitmap_set_offset() on the
target side to record the normal pages.
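
For context, below is a minimal sketch of the receivedmap-dependent
zero-page handling on the target side, modeled on QEMU's
multifd_recv_zero_page_process(); the exact code may differ between
versions. A zero page is only re-zeroed when the same offset was
previously received as a normal page, which is why the QPL receive
path must mark normal pages in the receivedmap:

    /*
     * Sketch only (assumption: modeled on multifd_recv_zero_page_process();
     * details may vary by QEMU version). Needs <string.h> and the
     * migration/multifd headers when built in-tree.
     */
    static void recv_zero_page_process_sketch(MultiFDRecvParams *p)
    {
        for (int i = 0; i < p->zero_num; i++) {
            void *page = p->host + p->zero[i];

            /*
             * memset the page to zero only if it was previously received
             * as a normal page; otherwise just mark it as received. If
             * the QPL path never sets the receivedmap bit for normal
             * pages, this test fails and stale data is left on the target.
             */
            if (ramblock_recv_bitmap_test_byte_offset(p->block, p->zero[i])) {
                memset(page, 0, multifd_ram_page_size());
            } else {
                ramblock_recv_bitmap_set_offset(p->block, p->zero[i]);
            }
        }
    }

This is why the fix below is a single bitmap update in the QPL receive
path: once normal pages are recorded, the existing zero-page logic
handles the normal-to-zero transition correctly.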

Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-3-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 2588a5f99b0c3493b4690e3ff01ed36f80e830cc)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -679,6 +679,7 @@ static int multifd_qpl_recv(MultiFDRecvParams *p, Error **errp)
         qpl->zlen[i] = be32_to_cpu(qpl->zlen[i]);
         assert(qpl->zlen[i] <= multifd_ram_page_size());
         zbuf_len += qpl->zlen[i];
+        ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
     }
 
     /* read compressed pages */