Tape drive GB/hr?

Last post 01-13-2012, 7:09 AM by JayST. 16 replies.
  • Tape drive GB/hr?
    Posted: 08-26-2010, 12:23 PM

    In our environment we are able to stream ~250 GB/hr to our (3) LTO4 tape drives (FC connected, hardware compression enabled, no encryption).


    What numbers are other people getting?

  • Re: Tape drive GB/hr?
    Posted: 08-30-2010, 9:49 PM

    Hi Bolsen, good question, and one that many people wonder about.

    I have a number of customers using Simpana Version 8, and the variation in speeds is astounding. I have opened many support calls to investigate speed-related issues, and not all of them have been resolved 100%.

    The general rule of thumb for a single LTO4 tape drive is about 225-275 GB/hr; these speeds are normally achieved through multiplexed job runs, especially when deduplication is being used. If your backups have a combined throughput of 250 GB/hr, you may want to check a few configuration settings.
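    A quick way to sanity-check figures like these is to convert a drive's sustained speed in MB/s to GB/hr. A minimal sketch (the 70 MB/s figure is an assumed real-world sustained rate for illustration, not a number from this thread):

```python
def gb_per_hr(mb_per_s: float) -> float:
    """Convert a sustained drive speed in MB/s to GB/hr (binary GB)."""
    return mb_per_s * 3600 / 1024

# LTO4 native maximum is 120 MB/s:
print(round(gb_per_hr(120)))  # ~422 GB/hr theoretical ceiling
# An assumed sustained rate of ~70 MB/s lands inside the
# 225-275 GB/hr rule-of-thumb band quoted above:
print(round(gb_per_hr(70)))   # ~246 GB/hr
```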

    Some of the rule of thumb checks would be:

    If you are using deduplication and software compression during your backups, you should disable hardware compression, as it can add additional latency in getting data to the media. The gains made by hardware compression over already software-compressed data are minuscule; what we find is that the small amount of space gained by enabling both technologies does not offset the time lost by leaving hardware compression enabled.

    Defrag ... yes, this old chestnut. If you are using Version 8 with deduplication then you absolutely need to enable defrag on the volumes, especially after a data aging run clears out a whole heap of pruned data. We have seen 25% increases in speed in customer environments that defrag their magnetic libraries on a regular basis. This will depend greatly on whether you have ADO-P or ADO-S configured in the deduplication database, of course.

    Number of streams and multiplexing factors. This is more of an advanced configuration tweak, and I would suggest talking with CommVault Support about the use of these numbers in your environment. A good stream/multiplex factor can increase your tape write speed; too big, and you can halve your write speed to tape.

    Something else to keep in mind is the number of drives per Media Agent. According to the Support training manual, you should allow 16 bits of the controller per tape drive. This means that if you are running a 64-bit Fibre controller, you should not exceed 4 tape drives on that single controller.
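    Taken at face value, that rule is simple integer division. A tiny sketch (the 16-bits-per-drive figure is just this post's rule of thumb, not a vendor specification):

```python
def max_drives(controller_bits: int, bits_per_drive: int = 16) -> int:
    """Rule-of-thumb maximum number of tape drives per controller."""
    return controller_bits // bits_per_drive

print(max_drives(64))  # 4 drives on a 64-bit Fibre controller, as above
```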

    In the CommVault <install Directory>\Base you should be able to find a utility, TapeTool.exe. This utility will assist in confirming the achievable read/write performance of the tape drives in your environment.


  • Re: Tape drive GB/hr?
    Posted: 09-03-2010, 5:39 PM

    I've been investigating this and the Multiplexing factor is hard to determine. 

    3 = very low throughput

    5 = low throughput

    5+ = complaints about CPU swapping / performance problems.


    I'm not sure how to set this because it seems like I need a higher value.

  • Re: Tape drive GB/hr?
    Posted: 09-03-2010, 6:12 PM
    Do you have a large number of network-based backups running to this policy? If so, it sounds like the shoeshine effect, where the buffers empty out and fill again. In that case you can go higher on the multiplex value to try to feed the hungry drives. 5 is a recommendation, but you can go higher; go too high on the value, though, and it can also have a negative impact on performance. I am leery of recommending a specific value, but I have seen a mux value as high as 25. That was with a large number of slower network-based backups running to a drive pool of fast LTO4 drives on a rather beefy MediaAgent. There is also the belief that restores take a performance hit as the multiplex value rises. Long story short, you will need to experiment a bit. Hope this helps.
    Mark Spencer
    CommVault, Business Critical Support
  • Re: Tape drive GB/hr?
    Posted: 09-08-2010, 3:11 PM

    If these are HP drives you can use HP Library and Tape Tools (LT&T) to find the maximum throughput of your tape drives. That gives you a throughput number to shoot for.

    Through a very long process, and to make a long story short, I was able to achieve a write-to-tape speed of 1.1 TB/h (3 LTO4 drives with no compression). That works out to about 366 GB/h per drive. The theoretical maximum of an LTO4 drive with no compression is 400 GB/h. As you add more drives you add more overhead, and your per-drive throughput slows down. I can run this job to a single LTO4 tape and get 380 GB/h; as I add more tapes, the per-tape speed slows down, but the total speed still keeps increasing. Right around 4 tapes was where the law of diminishing returns took effect.
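    The per-drive falloff described above is easy to see by tabulating aggregate throughput against drive count. A sketch: only the 1-drive and 3-drive figures come from this post; the 4-drive number is an assumed illustration of diminishing returns:

```python
# drives -> aggregate GB/hr; the 1- and 3-drive figures are from the post
# above, the 4-drive figure is assumed for illustration.
runs = {1: 380, 3: 1100, 4: 1350}

for drives, total in sorted(runs.items()):
    print(f"{drives} drive(s): {total} GB/hr aggregate, "
          f"{total / drives:.0f} GB/hr per drive")
```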

    The biggest benefit was getting rid of my Windows Media Agent and switching to a Linux Media Agent. That gave me much more control over my file systems, cache policies, and I/O policies.

    The job that gets this speed is a direct-to-tape job from my Media Agent to the tape library attached to it. The file system is two 12-disk RAID 5 arrays, striped at the LVM level (I couldn't do RAID 50 in the controller).

    This setup took quite a bit of engineering and tinkering with the configuration, and most of it was on the source side: getting a filesystem fast enough to feed the tape drives. YMMV.


    The point is high throughput speeds are achievable with lots of testing and good design.


    As an aside, I run my aux copies to an LTO3 tape in the same library and get speeds of 265 GB/h. The maximum throughput of an LTO3 drive is 288 GB/h.

  • Re: Tape drive GB/hr?
    Posted: 09-08-2010, 7:51 PM


    HP tape tools give us great speeds for our drives, and a straight backup to tape is quite good, but our aux copies from our deduped storage policy are terrible :(

  • Re: Tape drive GB/hr?
    Posted: 09-08-2010, 8:14 PM

    The trick with LT&T is to run the test with no compression. This will give you the native speed of the tape drive, which should be around 115 MB/s for an LTO4 drive (the max is 120 MB/s).
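    For scale, at that native speed a full cartridge doesn't take long to write. A sketch: the 800 GB figure is the standard LTO4 native capacity, not a number from this thread:

```python
def hours_to_fill(tape_gb: float, mb_per_s: float) -> float:
    """Hours to write a full tape at a sustained speed (decimal GB/MB)."""
    return tape_gb * 1000 / mb_per_s / 3600

# An 800 GB native LTO4 cartridge at the ~115 MB/s quoted above:
print(f"{hours_to_fill(800, 115):.1f} h")  # ~1.9 h
```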


    You should get similar speeds from your aux copy, because it is just copying compressed data. If you're not getting those speeds, then the problem is on the source side: either your file system is slow or you need to tweak settings in CommVault.


    CommVault has a DiskRead tool that can help you benchmark your source filesystem.

  • Re: Tape drive GB/hr?
    Posted: 09-09-2010, 6:49 PM

    Yeah, I've got the CV boys onto it.

    Here's something crazy: they couldn't figure out what the issue was for the last 9 months, so last week we got to the point of installing Symantec Backup Exec onto the MA to prove that the hardware is fine. Surprise, surprise: it tore through the backups faster than we have ever seen, so it's back to the CV boys to figure this out.

    It's maddening that I had to install a competitor's product on our MA to show that our systems are fine and that it's a CV issue!

  • Re: Tape drive GB/hr?
    Posted: 09-09-2010, 7:08 PM

    Hi sssstew,

    You mentioned in an earlier post that the data being aux copied is from a dedupe policy. That may impact performance, as we have to "rehydrate" the data to copy.

    Can you tell me what your performance stats were from performing the same test with BE? I'm wondering if this is an apples-to-apples comparison.

  • Re: Tape drive GB/hr?
    Posted: 09-10-2010, 11:16 AM


    Hi sssstew,

    You mentioned in an earlier post that the data being aux copied is from a dedupe policy. That may impact performance, as we have to "rehydrate" the data to copy.

    Can you tell me what your performance stats were from performing the same test with BE? I'm wondering if this is an apples-to-apples comparison.

    When data "rehydrates", what's the bottleneck on performance? 

  • Re: Tape drive GB/hr?
    Posted: 09-10-2010, 11:19 AM

    When you change the multiplexing factor, while an aux copy is running, do you have to kill the job and restart it for the changes to take effect?  Or does it happen in real-time?

  • Re: Tape drive GB/hr?
    Posted: 09-10-2010, 2:09 PM


    "When data "rehydrates", what's the bottleneck on performance?"

    Major impacts are these three areas

    • Disk fragmentation
    • Amount of dedupe data
    • Disk performance (seek times/read speeds)

    Rehydrating the data relies heavily on disk I/O and can certainly impact aux copy times if your read speeds are slow.

    With respect to aux copy and performance tunables I can share the following to provide some additional direction (Please forgive formatting)



    An auxiliary copy operation allows you to create secondary copies of data associated with data protection operations, independent of the original copy.


    Configure the "auxiliary copy fallen behind" alert to be notified when the amount of data to be copied for the associated storage policy exceeds a threshold, and/or when the number of days that jobs for the associated storage policy have gone uncopied exceeds a threshold. This is a GUI-accessible parameter, set in the Storage Policy Properties (Advanced) window.

    Optimizing and Troubleshooting Auxiliary Copy Operations


    1. If the source to be copied resides on magnetic (disk) storage, increasing the Chunk Size will generally increase the performance.
    2. If the source to be copied resides on magnetic (disk) storage, the disk may be I/O constrained.   Using the DiskRead utility on the Resource Pack CD, configured with the Non Backup API selection, check the disk performance for bottlenecking.  If the resulting reads are the same as the Auxiliary Copy performance, the disk source is the performance bottleneck.
    3. Prior to v8, if the disk I/O is constrained, check for disk fragmentation. A fragmented disk is characterized by slow read operations. If the disk is fragmented, defragment it. To prevent disk fragmentation going forward, add the following DWORD registry value on each Media Agent attached to the Magnetic Library:

    HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(machinename)\MediaAgent\nMagneticChunkFileIncrSize

    Set the value of this key to 128
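    For reference, the same value could be applied with a .reg fragment like the sketch below, built only from the path and value quoted above (hex 0x80 = decimal 128). The ControlSet001(machinename) segment is environment-specific, so verify the exact key name on your Media Agent and back up the registry before importing anything:

```reg
Windows Registry Editor Version 5.00

; Sketch only: substitute your instance's actual ControlSet001(machinename) key name.
[HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(machinename)\MediaAgent]
"nMagneticChunkFileIncrSize"=dword:00000080
```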

    For v8 and above, this option is set in the GUI. This setting applies to all Windows-based Media Agents. (A screen capture accompanied the original post but was not preserved.)

    Unix Media Agents do not support this configuration setting.
    4. If the source to be copied resides on tape, complete a Drive Validation operation and note the performance statistics for read and write throughput from the tape device. When the drive validation is performed from the CommCell Console, the system performs all the necessary operations: mounting the media, writing to the media, rewinding, seeking, and reading back from the media. If reads from the tape drives are the same as the Auxiliary Copy performance, check the following:
        1. Whether software and hardware compression are both enabled for the jobs to be copied. Hardware compression, as performed by the tape drive, takes longer to read data that is already software compressed.
        2. Review the AuxCopy logs to determine whether the same media is being mounted multiple times in order to complete the required read operation(s).
        3. Check whether the source data is based on multiplexed writes. If the Auxiliary Copy type is selective, the delay may be due to reading and discarding data from other archive files in the multiplex that are not required to complete the copy operation. Also check whether the same source chunk(s) are being read multiple times; if so, the Auxiliary Copy operation is trying to copy the data in a chunk before all the backups which wrote to that chunk have finished.


    5. Compare the number of MediaAgents used as the source and the number used as the destination for the Auxiliary Copy operations. If the ratio of readers to writers is not aligned, re-order jobs to load-balance operations more evenly.
    6. Determine whether the copy operation is LAN-free or LAN-based. If LAN-based copy operations are constrained, check for bottlenecks in the network configuration (NIC duplexing) and consider upgrading the bandwidth of the LAN connection between readers and writers (e.g. 100Base to 1000Base, or 1000Base to 10000Base).
    7. Verify the number of streams used for the Auxiliary Copy operation and confirm that data movement operations are completing on all configured streams. If some of them are not transferring data, copy operations will be compromised. Check whether any errors are reported on those streams. If the copy operation is not utilizing all destination streams even when there is data to be copied from the source streams, complete the following troubleshooting tasks:

    a. Confirm that all resources necessary to complete the copy operation are available for the secondary target (tape drives, spare media, etc.).

    b. Enable Resource Manager logging and check whether AuxcopyMgr.log shows any resource allocation issues for the streams that are not being copied.

    c. Use the getAuxCopySourceMediaSeq tool from the Resource Pack CD on the CommServe to determine what data is to be copied in the specified streams. This tool outputs the list of source media from which the specified copies are to be made.

    d. Note that if the Combined Streams option is selected, a number of source streams will be mapped and copied to the given number of destination streams.

    e. On the CommServe, create the following DWORD registry key under the CommServe section of the registry:

    HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(MachineName)\Commserve\AUXCOPY_SKIP_RESERVING_SEC

    Set the value to 5, then suspend and resume the Auxiliary Copy operation.

    8. Verify whether the Auxiliary Copy source or destination is waiting on resources during the copy operation. Make any necessary adjustments to eliminate resource contention on the affected host(s).
    9. Enable the following DWORD registry key on the CommServe:

    HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(MachineName)\CommServe\AUXCOPY_REPORT_PROGRESS_MB

    Set the value to 4096. Changing this parameter reduces the number of progress updates sent to the CommServe by the Media Agent performing the copy, reducing the overhead of those operations.


    10. Increase the Auxiliary Copy logging verbosity to include performance information by enabling the following DWORD registry keys on the reader and writer Media Agents:

    HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(MachineName)\MediaAgent\nAuxCopyCountersLogInterval

    Set the value to 900.

    HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Platform Information\ControlSet001(MachineName)\MediaAgent\nDSBackupCountersLogInterval

    Set the value to 900.

    With these registry keys defined, the log file will have more information on the average time taken for read and write operations to complete, allowing for more efficient troubleshooting.

    11. For Auxiliary Copy operations based on non-deduplicated configurations (v8 and above), it is possible to increase performance with the following DWORD registry key on the reader and writer Media Agents. The key takes effect when the "Unbuffer I/O" option is set on the MountPaths and deduplication is not in use; it determines the read-ahead depth for unbuffered I/O operations. Set it to a higher value to increase the read-ahead, keeping in mind that high values increase the memory usage of the process. Higher values are recommended only for Auxiliary Copies showing slow read performance in unbuffered I/O mode.

    <Instance Root>\MediaAgent dwMaxAsyncIoRequests


    Auxiliary Copy Stream Randomization

    When a storage policy is configured to use more than one data stream, stream randomization may be enabled through the GUI from the Storage Policy Properties (General) window.  When this parameter is enabled, streams are randomly chosen to complete data copy operations, which evenly distributes the data across all the streams, thereby increasing the rate at which data is copied during auxiliary copy operations.

  • Re: Tape drive GB/hr?
    Posted: 09-10-2010, 2:13 PM


    When you change the multiplexing factor, while an aux copy is running, do you have to kill the job and restart it for the changes to take effect?  Or does it happen in real-time?

    You will need to kill --> start a new aux copy job for the multiplexing to apply.

  • Re: Tape drive GB/hr?
    Posted: 09-13-2010, 10:12 PM


    Hi sssstew,

    You mentioned in an earlier post that the data being aux copied is from a dedupe policy. That may impact on performance as we have to "rehydrate" the data to copy.

    Can you tell me what your performance stats were from performing the same test with BE? I'm wondering if this is an apples-to-apples comparison.

    Thanks Stephen. I think CV support have tried all options for our case; our disk and hardware performance has been validated as good, and when doing an aux copy from the dedupe SP it isn't hammering the disks.

    CV aux copy from dedupe: about 60 GB per hour.
    BE backup to tape: about 360-400 GB per hour.

    There are a couple of variables, but look up the ticket on our account for the full history and talk to Richard Fleifel in CV support for loads of info.

  • Re: Tape drive GB/hr?
    Posted: 01-03-2012, 5:01 AM

    I'm having some performance troubles during aux copy from a deduped disk copy.

    In my case I found out that the mount paths were very heavily fragmented: 90%+ fragmentation.

    So I started to defragment, and I hope to see improvement.

    How do I minimize fragmentation on the mount paths? I already upped the allocation block size from 128 to 256, but I don't know how much this will help.

  • Re: Tape drive GB/hr?
    Posted: 01-09-2012, 5:19 PM

    Recently I have been focusing on improving tape write performance (IBM LTO4 drives). The best peak performance we get is around 23 MB/s when doing aux copies of our databases. The source is deduplicated and our SAN does data tiering. We have found that deduplication plus data tiering means the majority of data falls onto tier 3 storage (7k SAS drives), and this doesn't translate well into read speed. We have similar speed issues restoring data to a server whose source is deduplicated. It seems reading deduplicated data is rather I/O intensive.

    23 MB/s is livable for us, but this represents our best read speeds, and even slower storage has a hard time reading data any faster.

    I have been following this thread for performance improvements, but nothing has been too significant. We don't have fragmented mount paths since we don't have multiple paths per maglib. The dedupe look-ahead reader does help if you are multiplexing; however, I found it can also be worse for performance if you're not multiplexing the copy.

    I'm sure part of the issue is performance reading from a deduplicated source, and we encrypt our aux copies, so I'm not sure how much overhead that adds. We use Blowfish, which apparently is one of the faster methods.

  • Re: Tape drive GB/hr?
    Posted: 01-13-2012, 7:09 AM


    I have been following this thread for performance improvements, but nothing has been too significant. We don't have fragmented mount paths since we don't have multiple paths per maglib. The dedupe look-ahead reader does help if you are multiplexing; however, I found it can also be worse for performance if you're not multiplexing the copy.


    Hi Rob,

    What do you mean by "fragmented mount paths"? What does Windows NTFS fragmentation analysis tell you about your mount path? I doubt you have no fragmentation on it at all...

    I recently scheduled defragmentation on my mount paths and saw it double the aux copy performance to tape.

The content of the forums, threads and posts reflects the thoughts and opinions of each author, and does not represent the thoughts, opinions, plans or strategies of Commvault Systems, Inc. ("Commvault") and Commvault undertakes no obligation to update, correct or modify any statements made in this forum. Any and all third party links, statements, comments, or feedback posted to, or otherwise provided by this forum, thread or post are not affiliated with, nor endorsed by, Commvault.