Data streams, writers and multiplexing

Last post 03-15-2011, 8:05 PM by Bill Oyler. 9 replies.
  • Data streams, writers and multiplexing
    Posted: 06-28-2010, 8:16 PM

    Good morning,

    I've been reading up and still don't really understand how exactly one determines the number of data streams, writers and multiplexing. Our auxiliary copy and backup to MagLib are really, really slow, and I was wondering if there are ways of optimising them.

    We have an HP MSL4048 with 2x 3Gbps SAS connected directly to the CommServe.

    Throughput for backup to maglib: average of 25GB/hr

    Throughput for aux copy: average of 17GB/hr



  • Re: Data streams, writers and multiplexing
    Posted: 06-28-2010, 9:20 PM

    Hello Jason,

    Considering the hardware you have I am wondering why your backup speed is so slow.

    Here is a good place to start.

    • Select one server (large file server preferred)
    • On that server create a new backup set and create a subclient with one drive as content
    • Perform a backup to magnetic and see what your throughput speeds average.
    • Next, pinpoint where your bottleneck is, as it can only be one of a few things: disk read, network or disk write.

    Once you have found where your core issue is you can expand from there.

    Judging from what you're telling us here in the thread, I suspect your magnetic library is not accepting writes fast enough; performing this test will help pinpoint what is slowing you down.

  • Re: Data streams, writers and multiplexing
    Posted: 06-28-2010, 10:00 PM

    Hi Stephen,

    Thanks for the reply, will definitely give that a go.


    Could you also help explain how to determine data streams, writers, readers and multiplexing?

    Been reading this



    Does anyone have a better explanation than those two links?

  • Re: Data streams, writers and multiplexing
    Posted: 06-28-2010, 10:27 PM

    Hi Jason,

    OK, let me give this a go. (Note: this is a very basic summary.)


    Writers = the number of active writes you want on your destination magnetic volume. So if you have a NAS with multiple write heads, you can raise the number of writers to the magnetic volume to match that number (recommended).

    Readers = the number of reads we can perform from a backup client. If your client can handle multiple read requests, you can raise the reader count to increase backup throughput.

    Example: a magnetic volume has (5) write spindles. You can process (5) single-stream backups at one time, or (1) backup job that has (5) readers.

    Multiplexing is typically used with tape media. When you enable multiplexing, you allow the tape to handle more than one backup job at a time.

    This lets you increase tape performance and often complete your backups faster than single-stream. The downside of multiplexing comes when you need to restore.

    Say you have a multiplex factor of (4) and backup jobs 1, 2, 3 and 4 were written to tape. In order to restore job 1 you must read through jobs 2, 3 and 4 as well, increasing your restore window.
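    As a rough illustration of that restore penalty (a hypothetical sketch, not CommVault's actual on-tape layout), multiplexing interleaves chunks from several jobs on the tape, so a sequential restore of one job still has to scan past the other jobs' chunks:

    ```python
    from itertools import chain

    def interleave(jobs, chunks_per_job):
        """Lay out chunks on tape round-robin, roughly as multiplexing does."""
        return list(chain.from_iterable(
            [(job, c) for job in jobs] for c in range(chunks_per_job)))

    # Multiplex factor 4: jobs 1-4 share the tape.
    tape = interleave(jobs=[1, 2, 3, 4], chunks_per_job=3)

    # A sequential restore of job 1 reads every chunk on the tape,
    # even though only a quarter of them belong to job 1.
    wanted = [blk for blk in tape if blk[0] == 1]
    print(f"chunks scanned: {len(tape)}, chunks belonging to job 1: {len(wanted)}")
    ```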


    When using these tunables it is always best to alter them slowly to find your "sweet spot" on the performance curve.

    Let me know if this helps.

  • Re: Data streams, writers and multiplexing
    Posted: 06-29-2010, 12:11 AM
    Hi Jason,

    We have added performance counters in the logs, which help in narrowing down the performance bottleneck.

    The time value is in seconds.

    Below is part of a logging snippet from clbackup.log for the Windows File System agent.

    1) ID=File Open time, Bytes Read = 0, Total time = 0.011359, Average = 0.011359, Samples = 1

    2) ID=Disk Read time, Bytes Read = 20, Total time = 0.013465, Average = 0.001417, Samples = 1

    3) ID=Pipeline Allocation time, Bytes Read = 0, Total time = 0.000010, Average = 0.000010, Samples = 1

    4) ID=Pipeline Write time, Bytes Read = 1160, Total time = 0.000004, Average = 282.587336, Samples = 1

    ID = File Open time and Disk Read time (disk group) indicate the time taken for opening the file and reading it from disk.

    ID = Pipeline Allocation time and Pipeline Write time (network and media group) indicate the time taken to write the data to the media plus the time taken to transfer it over the network.

    The highest "Total time" indicates that the majority of the time is spent in that operation. Generally this falls in either the disk group or the network and media group.

    If the time taken is high for the disk group, the disk is the bottleneck; this can also be verified with our tools such as DiskRead. If multiple reads on the disk are enabled, try disabling them, as not all hardware supports multiple reads, and enabling them without hardware support degrades performance. On the other hand, if you have RAID disks or disks that allow multiple simultaneous reads, allowing multiple reads can improve backup performance.

    If the time taken is high for the network and media group, the slowness could be caused by either the network or the media. For troubleshooting this, our tools testport, DiskRead and tapetest could be useful.

    If there is a network between the client and the MA, check whether the network is slow; testport will help you identify that.

    To test whether the media write is causing the slowness, use tapetest if the media is tape, or use the DiskRead tool in write mode if the media is magnetic.
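    To compare these counters quickly, a small script can parse the "Total time" values and rank them. (The line format below is assumed from the snippet above; a real clbackup.log may differ.)

    ```python
    import re

    # Sample counter lines in the format shown above (assumed format).
    log_lines = [
        "ID=File Open time, Bytes Read = 0, Total time = 0.011359, Average = 0.011359, Samples = 1",
        "ID=Disk Read time, Bytes Read = 20, Total time = 0.013465, Average = 0.001417, Samples = 1",
        "ID=Pipeline Allocation time, Bytes Read = 0, Total time = 0.000010, Average = 0.000010, Samples = 1",
        "ID=Pipeline Write time, Bytes Read = 1160, Total time = 0.000004, Average = 282.587336, Samples = 1",
    ]

    pattern = re.compile(r"ID=(?P<id>[^,]+),.*Total time = (?P<total>[\d.]+)")

    counters = []
    for line in log_lines:
        m = pattern.search(line)
        if m:
            counters.append((m.group("id"), float(m.group("total"))))

    # Sort descending by total time: the top entry is the likely bottleneck group.
    counters.sort(key=lambda c: c[1], reverse=True)
    for name, total in counters:
        print(f"{name}: {total:.6f} s")
    ```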

    Hope this helps narrow down the bottleneck.

  • Re: Data streams, writers and multiplexing
    Posted: 07-06-2010, 6:48 AM

    Hi Jason,

    With particular regard to "combine to (n) streams" and "multiplex factor", I too found this slightly confusing, and didn't understand it correctly until I did a large amount of testing to see how the settings actually work. This is what the testing showed (and if you read the Books Online documentation, it seems to indicate the same, though it could probably be written more clearly).

    Simply put, the settings appear to mean:

    Combine the streams to N streams, and THEN multiplex Y of these onto the same media, for a net result of Y streams to each of N/Y media/tape heads.

    For example, we have a storage policy which uses 100 streams for the primary copy. We then auxiliary copy this to a tape library with four tape heads. I found the best performance for our particular setup by configuring it as follows:

    Combine to 12 streams, and THEN multiplex 3 of these to each media, for a net result of 3 streams to 4 media/tape heads.

    This results in 4 tape drives being used, and 3 streams writing to each tape. If you made it "Combine to 16 streams" with a multiplex factor of 4, you'd be using 4 tapes, and writing 4 streams to each.
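    Under this interpretation the net layout is simple arithmetic: N combined streams divided by a multiplex factor of Y gives N/Y drives in use, with Y streams each. A tiny sketch (a hypothetical helper that just restates the arithmetic above):

    ```python
    def aux_copy_layout(combine_to, multiplex_factor):
        """Drives used and streams per drive, per the interpretation above:
        combine to N streams, then multiplex Y of them onto each medium."""
        if combine_to % multiplex_factor != 0:
            raise ValueError("combine_to should be a multiple of the multiplex factor")
        drives_used = combine_to // multiplex_factor
        return drives_used, multiplex_factor

    print(aux_copy_layout(12, 3))  # 4 drives, 3 streams each -> (4, 3)
    print(aux_copy_layout(16, 4))  # 4 drives, 4 streams each -> (4, 4)
    ```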

    What performs best depends on your hardware...

  • Re: Data streams, writers and multiplexing
    Posted: 07-13-2010, 9:58 AM

    Hey Stephen, Rahul and Daniel,


    Thanks for the replies, got distracted with work. Appreciate the replies (haven't had a good read yet but will now).

    Thanks for the examples as well; they will definitely help paint a clearer picture of how CommVault actually works with these technologies.

  • Re: Data streams, writers and multiplexing
    Posted: 07-13-2010, 10:08 AM

    Hi Rahul,

    You mention these tools for checking disk and tape speed. How do I get my hands on them?

  • Re: Data streams, writers and multiplexing
    Posted: 07-13-2010, 11:23 AM

    Hi Jason,

    Diskread is part of the resource pack. Tapetool is also part of the resource pack, or may already be in the base folder on the MAs in question.


    For 8.0

  • Re: Data streams, writers and multiplexing
    Posted: 03-15-2011, 8:05 PM



    I'm not sure your explanation of "combine streams" is correct, based on my reading of Books Online. Can someone from CommVault confirm? Here is an example:


    - Storage Policy with 12 streams

    - Primary copy goes to MagLib (using all streams, no multiplexing)

    - Secondary copy goes to tape library which has 2 tape drives

    - The aux copy operation is configured to "allow maximum number of streams"


    Example with combine streams but no multiplexing: If the storage policy is configured with 12 streams and you combine the streams down to 2 for the aux copy, the first 2 streams will be copied simultaneously to tape #1 and tape #2.  When done, the next 2 streams will be copied simultaneously to tape #1 and tape #2, and so on until all 12 streams have been copied.


    Example with combine streams and multiplexing: If the storage policy is configured with 12 streams and you combine the streams down to 2 with a multiplexing factor of 3 for the aux copy, the first 3 streams will be copied to tape #1 and the next 3 streams will be copied to tape #2, all occurring simultaneously (6 simultaneous streams copying).  When done, the next 6 streams will be copied simultaneously (3 to tape #1 and 3 to tape #2).
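    The reading described in those two examples, which the post itself leaves unconfirmed, can be sketched as a scheduling function (a hypothetical helper, not CommVault's implementation): "combine to N" sets N tape drives copying in parallel, and each drive carries multiplex_factor interleaved streams per round.

    ```python
    def copy_schedule(total_streams, combine_to, multiplex_factor=1):
        """Sketch of the aux copy rounds under the reading described above:
        'combine to N' = N tape drives in parallel, each carrying
        multiplex_factor interleaved streams per round."""
        per_round = combine_to * multiplex_factor
        streams = list(range(1, total_streams + 1))
        rounds = []
        for start in range(0, total_streams, per_round):
            batch = streams[start:start + per_round]
            # Assign multiplex_factor consecutive streams to each drive.
            rounds.append({
                drive + 1: batch[drive * multiplex_factor:(drive + 1) * multiplex_factor]
                for drive in range(combine_to)
            })
        return rounds

    # 12 streams, combine to 2, no multiplexing: 6 rounds of 2 parallel copies.
    print(len(copy_schedule(12, 2)))   # 6
    # 12 streams, combine to 2, multiplex 3: 2 rounds of 6 simultaneous streams.
    print(copy_schedule(12, 2, 3)[0])  # {1: [1, 2, 3], 2: [4, 5, 6]}
    ```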


    Books Online example (see the graphics):


    Can someone confirm that the above examples are correct?


