You can check which files were backed up in the collect* files in the JobResults folder.
You can read the collect files using gxtail.
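As a rough stand-in for gxtail (not the actual tool), here is a short Python sketch that lists collect* files under a JobResults directory and returns their last few lines. The directory layout and file names are assumptions for illustration:

```python
import glob
import os


def tail_collect_files(job_results_dir, n=5):
    """Return the last n lines of each collect* file found in the
    given JobResults directory (layout here is an assumption)."""
    tails = {}
    for path in sorted(glob.glob(os.path.join(job_results_dir, "collect*"))):
        with open(path, "r", errors="replace") as f:
            tails[os.path.basename(path)] = [ln.rstrip("\n") for ln in f][-n:]
    return tails
```

In practice you would point this at the subclient's JobResults folder, or simply read the files with gxtail directly.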
- Optimized scan
This is fine. You can browse the granular data through the MediaAgent using live recovery, unless it is a Windows VM for which metadata collection failed.
You can add a registry key under the proxy server's Additional Settings so that Commvault ignores the error and marks the job as completed.
Were the Commvault licenses released before you attempted the delete operation?
If it is a filesystem backup, the default uses the local Windows account.
4) One more possible issue is the client certificate: check whether an old certificate exists. Remove it and renew the certificate only if communication is otherwise working fine.
1) Kill any active job for the client. Check Task Manager for multiple iFind.exe processes when no jobs are running for the client; this is caused by a hung EvMgrC service.
Kill the hung iFind.exe processes and restart the services; the job should then run fine.
2) Communication issue between the CommServe and the client: make sure the CommServe can ...
Interesting. Can you mount one of the snaps manually and send me the cvd.log and cvma.log from the MediaAgent?
A quick way to see which client is eating up so much of your disk library space: go to the mount path and browse the jobs. It shows you all valid jobs on the mount path. Sort by the data written size, and you can easily identify the clients occupying the most space.
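The aggregate-and-sort step above can be sketched generically. The job list below is illustrative data, not anything read from Commvault:

```python
from collections import defaultdict


def usage_by_client(jobs):
    """Aggregate 'data written' per client and sort largest first.

    jobs: iterable of (client_name, bytes_written) pairs, e.g. as noted
    down while browsing the jobs on a mount path.
    """
    totals = defaultdict(int)
    for client, written in jobs:
        totals[client] += written
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The top entries of the returned list are the clients occupying the most library space.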
Make sure you are using deduplication, either on the Commvault side or on the media end. If ...
There is one simple way.
You can create a smart client computer group and get the clients that have any agent installed on them.
You can apply custom rules. I am trying to get a SQL query; I will post it once I succeed.
Yes, you can switch to a different ESX mount, or if it has never worked, you may have to check communication between the ESX host and the Dell storage.
Run one check: do a manual mount when you list snaps from a job. It will tell you clearly whether the ESX host can mount the datastore.
Also make sure that when you are mounting the same snap a second ...
Sorry for the delay. You can separate structured data from unstructured data. Again, this means creating deduplication policies dedicated to structured data and a separate deduplication policy for unstructured data.
You can always share the disk libraries that will store the backed-up data.
This alarm can be caused by application-consistent backups. I would suggest you change to crash-consistent backups in the VM subclient configuration.
If your VM has high I/O while a backup is running, Commvault application-consistent backups will quiesce the guest OS, which might cause your disconnections from applications ...
You need to change a setting under Control Panel > Media Management > Data Aging:
"Days to keep the failed/killed backup job and other job histories". The default value is 90 days.
There is no way to delete some of the contents from a completed job. You either have to delete the whole job or wait until its retention expires.
You can filter that directory from backups from now on so that it is not backed up any further.
First check whether a dummy instance with the same name as the source instance has been created and discovered in the Commvault GUI (look under Oracle iDA > Instances in the Commvault GUI).
If you think it is a Java issue, make sure you have a recent Java version installed on the computer from which you are launching the Commvault console.
Java 8 update ...
You can always check the CVD log (or enable debug on it) for any Commvault service-related errors.
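As a generic stand-in for eyeballing CVD.log, here is a small Python sketch that flags lines matching common error keywords. The keyword list and file name are assumptions for illustration, not anything defined by Commvault:

```python
import re


def scan_log(path, keywords=("error", "failed", "exception")):
    """Return (line_number, line) pairs whose text matches any of the
    given keywords, case-insensitively."""
    rx = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)
    hits = []
    with open(path, "r", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if rx.search(line):
                hits.append((lineno, line.rstrip("\n")))
    return hits
```

You would point this at the CVD.log on the affected machine and review the flagged lines around the time of the failure.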