I'm still struggling to get good throughput to tape for large Synthetic Full backups, despite seeing fantastic results from verify operations on the same backups.
I have the look-ahead reader, nNumPipelineBuffers, and dwMaxAsyncIoRequests keys set; I have unbuffered I/O enabled on the mount paths; the mount paths aren't particularly fragmented; and I have plenty of spindles, all configured per the building blocks paper.
As I mentioned above, if I run a verify on the jobs I see fantastic, consistent speeds; it's only when doing an aux copy that the speeds float all over the place.
I noticed that when doing aux copies from the dedupe primary, my tapes never hold more than their native capacity, so LTO4 media is always marked full at around 790GB, and LTO5 at around 1400-1500GB.
Something I just spotted in another thread was a suggestion to disable hardware compression on the data path to the tape library for the aux copy, if software compression is in use.
Has anyone tried this?
Also, I'm a little confused about whether I'm actually using software compression or not.
If I look at a running job it says "OFF"; if I look at a subclient it says "Use Storage Policy Settings"; and if I look at my Global Dedupe policy, which all my Storage Policies dedupe against, it says Software Compression is "On" - which is it?!
Oh, just to add: I'm using a 256KB block size and 16GB chunk size on the aux copies.
The MA is Windows Server 2008 R2 and everything is CommVault 9.0 SP5a.