I have a question on the 'size of stored data' statistic one sees when viewing tape library media, and what it actually means.
I have a backup subclient that I set up to write directly to tape: a storage policy where the primary copy is on tape, no disk library involved. The SC has 1.7TB of content, and after the backup completes, when I look at the tape cartridge (assuming it used a new/empty cartridge), it shows the 'size of stored data' as 1.7TB. That all makes sense.
I run the exact same backup subclient against a different storage policy, which has a disk library (in our case, on a HyperScale) as the primary copy and is set up with a secondary copy to tape. The SC backup to the disk library shows it backed up 1.7TB of application data (though only a comparatively small amount was actually 'written' due to dedupe). Next I manually run an auxcopy to populate the tape copy. After the auxcopy completes (again, assuming it selected a new/empty tape cartridge), the 'size of stored data' for the aux tape media shows ~470GB. How can that be? I would think that for aux-to-tape, the source data would have to be completely re-constituted/re-hydrated. In other words, the aux job should have sent 1.7TB of non-deduped, non-compressed data to be written to the tape drive. Am I wrong here? What am I not understanding, or is the 'size of stored data' metric not what I think it should be?
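For what it's worth, here's my back-of-the-envelope check on the numbers. This is just my guess (not anything from the docs): if the metric actually counts the compressed bytes landing on the cartridge rather than the rehydrated application data, the sizes imply roughly a 3.6:1 reduction, which would be in the ballpark of drive-level hardware compression on compressible data.

```python
# Hypothetical sanity check -- assumes (my guess only) that
# 'size of stored data' reports post-compression bytes written to tape.
app_size_tb = 1.7        # application data reported by the backup job
tape_stored_tb = 0.470   # 'size of stored data' shown on the aux tape media

ratio = app_size_tb / tape_stored_tb
print(f"implied reduction ratio: {ratio:.1f}:1")  # -> implied reduction ratio: 3.6:1
```

If that guess is right, the aux job really did send ~1.7TB to the drive, and only the on-cartridge count differs between my two scenarios.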
One reason I'm asking: in scenario #1, the backup-to-tape takes ~2hrs to complete, whereas in scenario #2 the aux-to-tape took about half that time. Something seems odd here, and it's probably just something I don't fully understand. I do know the tape data is indeed recoverable, as I've verified that before in testing.
Thx in advance for any replies