Virtual and Physical MMU Compatibility
The virtual MMU that the hypervisor exposes is generally compatible with the physical MMU in an x64 processor. The following guest-observable differences exist between the virtual and physical MMUs:
- The CR3.PWT and CR3.PCD bits might not be supported in some hypervisor implementations. On such implementations, any attempt by the guest to set these bits through a MOV to CR3 instruction or a task-gate switch will be ignored by the hypervisor, and attempts to set them programmatically through the HvSetVpRegisters or HvSwitchVirtualAddressSpace hypercall functions might result in an error (see the first sketch after this list).
- The PWT and PCD bits within a leaf page table entry (for example, a PTE for 4-KB pages or a PDE for large pages) specify the cacheability of the page being mapped, and the PWT and PCD bits within non-leaf page table entries indicate the cacheability of the next page table in the hierarchy (non-leaf entries carry no PAT bit). Some hypervisor implementations might not support these bits. On such implementations, all page table accesses performed by the hypervisor use write-back cache attributes; this affects, in particular, the accessed and dirty bits that the hypervisor writes to page table entries. If the guest sets the PWT or PCD bits within non-leaf page table entries, an "unsupported feature" message might be generated when a virtual processor accesses a page that is mapped by that page table (see the second sketch after this list).
- The CR0.CD (cache disable) bit might not be supported in some hypervisor implementations. On such implementations, the CR0.CD bit must be set to 0: any attempt by the guest to set it through a MOV to CR0 instruction will generate an "unsupported feature" error message, and attempts to set it programmatically through HvSetVpRegisters will result in an error (see the third sketch after this list).
- The page attribute table (PAT) MSR might be treated as a per-partition register: if any virtual processor within the partition modifies the PAT MSR, the change becomes visible to all virtual processors in the partition (see the fourth sketch after this list).
- For reasons of security and isolation, the INVD instruction will be virtualized to perform like a WBINVD instruction.
- Some hypervisor implementations might use internal write protection of guest page tables to flush MMU mappings from internal data structures (for example, shadow page tables). This write protection is architecturally invisible to the guest because the hypervisor handles writes to the protected tables transparently. However, writes to the underlying SPA pages by other partitions or by devices (that is, through DMA) might not trigger the appropriate TLB flush.
- Internally, the hypervisor might use shadow page tables that translate GVAs to SPAs. In such implementations, these shadow page tables appear to software as large TLBs. However, differences might be observable. For example, shadow page tables can be shared between two virtual processors, whereas traditional TLBs are per-processor structures that are independent of one another. This sharing might be visible because a page access by one virtual processor can fill a shadow page table entry that is subsequently used by another virtual processor.
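
The first sketch below illustrates the CR3.PWT/CR3.PCD point. It is a minimal C sketch rather than hypervisor code: the bit positions (PWT is bit 3 of CR3, PCD is bit 4) come from the x64 architecture, not from this document, and sanitize_cr3 is a hypothetical helper name.

```c
#include <stdint.h>
#include <stdio.h>

/* Architectural bit positions in CR3 (from the x64 manuals, not this doc). */
#define CR3_PWT (1ull << 3) /* page-level write-through */
#define CR3_PCD (1ull << 4) /* page-level cache disable */

/* Hypothetical helper: a guest that keeps PWT/PCD clear behaves the same
   whether or not the hypervisor honors the bits, since a MOV to CR3 that
   sets them may simply be ignored. */
static uint64_t sanitize_cr3(uint64_t cr3)
{
    return cr3 & ~(CR3_PWT | CR3_PCD);
}

int main(void)
{
    uint64_t cr3 = 0x1000ull | CR3_PWT | CR3_PCD; /* hypothetical value */
    printf("raw CR3:       %#llx\n", (unsigned long long)cr3);
    printf("sanitized CR3: %#llx\n", (unsigned long long)sanitize_cr3(cr3));
    return 0;
}
```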
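
The second sketch illustrates the page-table cache-attribute bits. The bit positions are again architectural (in a 4-KB leaf PTE, PWT is bit 3, PCD is bit 4, and PAT is bit 7; in a large-page PDE the PAT bit moves to bit 12), and make_wb_pte is a hypothetical helper. Leaving all three bits clear selects PAT entry 0, which is write-back under the architectural PAT reset value, and so avoids the unsupported cases described above.

```c
#include <stdint.h>
#include <stdio.h>

/* Architectural bit positions for a 4-KB leaf PTE. */
#define PTE_PRESENT (1ull << 0)
#define PTE_WRITE   (1ull << 1)
#define PTE_PWT     (1ull << 3) /* write-through */
#define PTE_PCD     (1ull << 4) /* cache disable */
#define PTE_PAT4K   (1ull << 7) /* PAT bit; bit 12 in a large-page PDE */

/* Hypothetical helper: build a leaf PTE with all cache-attribute bits
   clear, selecting PAT entry 0 (write-back with the reset PAT value). */
static uint64_t make_wb_pte(uint64_t pfn)
{
    return (pfn << 12) | PTE_PRESENT | PTE_WRITE;
}

int main(void)
{
    uint64_t pte = make_wb_pte(0x1234);
    printf("PTE: %#llx (PWT=%d PCD=%d PAT=%d)\n",
           (unsigned long long)pte,
           !!(pte & PTE_PWT), !!(pte & PTE_PCD), !!(pte & PTE_PAT4K));
    return 0;
}
```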
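
The third sketch shows the CR0.CD point: on implementations that do not support the bit, a guest can simply keep it clear. CR0.CD's position (bit 30) is architectural, sanitize_cr0 is a hypothetical helper, and the CR0 image below is an invented value for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Architectural bit position of CR0.CD (cache disable). */
#define CR0_CD (1ull << 30)

/* Hypothetical helper: on hypervisors that do not support CR0.CD, the
   bit must stay 0, so guest code keeps it clear rather than toggling it. */
static uint64_t sanitize_cr0(uint64_t cr0)
{
    return cr0 & ~CR0_CD;
}

int main(void)
{
    uint64_t cr0 = 0x80050033ull | CR0_CD; /* hypothetical CR0 image */
    printf("CR0 with CD kept clear: %#llx\n",
           (unsigned long long)sanitize_cr0(cr0));
    return 0;
}
```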
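
The last sketch decodes a PAT MSR value into its eight memory-type fields, which makes the per-partition behavior concrete: whatever value one virtual processor writes is the value every other virtual processor in the partition reads back. The MSR index (0x277) and field encodings are architectural facts, not taken from this document; the decoding is pure arithmetic, so it runs in user mode, whereas real RDMSR/WRMSR access requires ring 0.

```c
#include <stdint.h>
#include <stdio.h>

/* IA32_PAT (MSR 0x277) holds eight fields, one per PAT entry; the low
   3 bits of each byte encode the memory type. */
static const char *pat_type_name(uint8_t t)
{
    switch (t) {
    case 0x0: return "UC  (uncacheable)";
    case 0x1: return "WC  (write combining)";
    case 0x4: return "WT  (write through)";
    case 0x5: return "WP  (write protected)";
    case 0x6: return "WB  (write back)";
    case 0x7: return "UC- (uncacheable, overridable by MTRRs)";
    default:  return "reserved";
    }
}

int main(void)
{
    /* Architectural reset value of IA32_PAT. */
    uint64_t pat = 0x0007040600070406ull;
    for (int i = 0; i < 8; i++)
        printf("PAT%d = %s\n", i,
               pat_type_name((uint8_t)((pat >> (i * 8)) & 0x7)));
    return 0;
}
```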