VM live recovery using vMotion delay migration

Last post 06-18-2018, 5:54 AM by aldog24. 1 reply.
  • VM live recovery using vMotion delay migration
    Posted: 06-13-2018, 12:55 PM

    I have only one question. When I choose Live Recovery (vMotion), I need to select a datastore different from my final datastore. This is OK when I select a delayed migration of 1-12 hours. Is this correct: when I select a migration delay of 1-12 hours, every change made in the VM will be saved on the datastore I selected, and then, after the time is over, the snapshot and my changes will be copied to the final datastore?

    What happens when I select a migration delay of 0 hours? Will the VM be mounted from the snapshot and started, and then, after power-on, be migrated to the final datastore?

    So this would be the fastest restore possible?

    From the documentation:

      1. To use the Live Recovery option, select Restore Virtual Machine using Live Recovery (vMotion).
      2. You can select the following options for Live Recovery:
         • Redirect Writes to Datastore - Select a datastore for any changes made to the virtual machine during the recovery process. This must be a different datastore than the destination datastores used by the VM or its disks.
         • Delay migration - Delay the migration of the VM to the destination location for the specified time (0-12 hours). You can still use the VM when delaying the migration.
      3. Select Power ON Virtual Machine During restore.
  • Re: VM live recovery using vMotion delay migration
    Posted: 06-18-2018, 5:54 AM

    Hello GregorD,

     

    The full recovery process can be found here:

    http://documentation.commvault.com/commvault/v11/article?p=62257.htm

    1. When this option is selected for a restore, the restore operation can use the MediaAgent that was used to perform the backup.
    2. Rather than reading the backup, the restore process exposes the backup to the destination ESX server as a network file system (NFS) export.
    3. The NFS export is mounted to the destination ESX server as an NFS datastore.
    4. When the NFS datastore is visible to the ESX server, the restore process retrieves the .vmx and catalog files for the VM.

      The .vmx file is modified to indicate that writes can be made to the VMDK files on the NFS datastore (or the VM can be modified to redirect writes to an alternate datastore).

    5. When the VM files are available to the NFS datastore, the VM is registered and can be powered on.
    6. Any reads for the virtual machine disks are handled by the File Recovery Enabler for Linux, which restores the requested data to the NFS cache and presents it to the ESX server.
    7. After the initial reads needed to make the VM usable, a storage vMotion is initiated to migrate the virtual machine to the destination datastore specified for the restore.
    8. When the migration is complete, the ESX server unexports the backup and unmounts the datastore (if there are no other paths exported to the ESX server). When the cleanup is done, the restore job is marked as complete.
    In summary, the VM is exported from your MediaAgent using 3DNFS and attached as a "datastore" to your ESX host. The VM is registered and can be powered on for use. After your set interval, it will be storage vMotioned from the exported datastore to your production datastore.
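    If it helps to visualise the vSphere side of those steps, here is a minimal pyVmomi sketch of the equivalent operations: mounting an NFS export as a datastore, registering and powering on the VM, then relocating it with storage vMotion. This is only an illustration, not Commvault's implementation; all host names, credentials, the export path and the datastore/VM names below are placeholder assumptions, and in a real Live Recovery the export and timing are driven entirely by the Commvault job.

        # Sketch of the vSphere-side operations only; everything Commvault does
        # (the 3DNFS export, the delay timer, cleanup) is out of scope here.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        def find_obj(content, vimtype, name):
            """Locate a managed object (host, datastore, VM) by name."""
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vimtype], True)
            try:
                return next((o for o in view.view if o.name == name), None)
            finally:
                view.Destroy()

        si = SmartConnect(host='vcenter.example.com',           # placeholder
                          user='administrator@vsphere.local', pwd='***',
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        host = find_obj(content, vim.HostSystem, 'esx01.example.com')

        # Steps 2-3: mount the MediaAgent's NFS export as a datastore on ESX.
        nas_spec = vim.host.NasVolume.Specification(
            remoteHost='mediaagent.example.com',  # MA exposing the backup
            remotePath='/exports/vm-restore',     # placeholder export path
            localPath='3dnfs-restore-ds',         # datastore name on ESX
            accessMode='readWrite', type='NFS')
        host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)

        # Step 5: register the restored .vmx from the NFS datastore, power on.
        dc = content.rootFolder.childEntity[0]    # first datacenter, for brevity
        WaitForTask(dc.vmFolder.RegisterVM_Task(
            path='[3dnfs-restore-ds] myvm/myvm.vmx', asTemplate=False,
            pool=host.parent.resourcePool, host=host))
        vm = find_obj(content, vim.VirtualMachine, 'myvm')
        WaitForTask(vm.PowerOnVM_Task())

        # Steps 7-8: after the delay interval, storage vMotion the VM to the
        # production datastore specified for the restore.
        prod_ds = find_obj(content, vim.Datastore, 'prod-datastore')
        WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=prod_ds)))

        Disconnect(si)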
    It is, however, worth noting that because the VM is in fact running from your backup disk library, and from the 3DNFS cache on the MA, there will be a considerable performance hit to the machine until the vMotion has completed. I would fully expect the vMotion process to take roughly as long to complete as a full VM restore, since for the most part it is again limited by the read speed of your disk library and the write speed of your datastore (much the same as a restore).
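    As a back-of-envelope illustration (the figures below are assumptions, not measurements): a 500 GB VM moving at an effective 150 MB/s, the lower of the disk library read and datastore write speeds, would take roughly an hour to migrate:

        # Rough estimate of storage vMotion duration; both inputs are assumptions.
        vm_size_gb = 500          # size of the recovered VM (placeholder)
        throughput_mb_s = 150     # min(library read, datastore write) (placeholder)

        seconds = vm_size_gb * 1024 / throughput_mb_s
        print(f"~{seconds / 3600:.1f} hours")  # -> "~0.9 hours"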
    Whilst the machine may be usable during this time, it will by no means be comparable to the production machine. It would be worth testing this out to better understand the limitations within the environment.
    Please also note the recommendation on the documentation page listed above:

    Note: For faster recovery times, the 3dfs cache should be hosted on a solid state drive (SSD) using flash memory storage.

     

    Kind regards

     

    Allister
