We tried using a Synology box and had corruption issues with incremental backups, and thus Full VM Restores of those backups. We have, what I consider, a mid-range model (RP2818RP+, 8x 4TB WD Gold HDDs in RAID 10, 10Gb Intel ethernet NIC, dual PSUs). We were connecting to the repo with iSCSI and formatted with ReFS. We have since started using an old Dell server with a battery-backed hardware RAID controller and have had zero issues.

The issue we had was incrementals would get randomly corrupted. We run the backup health check daily, so we catch this issue quickly. When we performed an Active Full the issue went away, as expected. However, after a number of days of incrementals, one or two VMs would be corrupted again. A weekly Synthetic Full would not be corrupt, which is quite odd to me considering it is "made from" the incrementals marked as corrupt, but maybe Synthetic Fulls "heal" the data? I have no clue. We confirmed the Synthetic Fulls were valid by restoring the entire VM, which was not possible using a corrupt incremental. Of note, when we formatted the Synology repo with NTFS we did not experience corruption.

Since we are now using a Dell server as the repo, we are now running a Backup Copy job to the Synology box (iSCSI, ReFS) and have yet to experience corruption (we run the backup health check daily here too). Not sure why, but maybe copy jobs have less I/O to possibly get corrupted? I too wish Synology units were more supported/reliable with Veeam. I have never had a corruption issue with Synology prior, so it is hard for me to put all the blame on them. Maybe I just never noticed the corruption prior? Maybe Veeam checks for corruption better?

That backup guy wrote: ↑ 11:19 pm
My questions are:
1. When does this appear? Is it only during a failed restore, or does/would a health check catch this?
2. Does any particular configuration (teamed NICs, iSCSI vs NFS, etc.) cause this to happen? The letter only mentioned NFS.

1. It has always been the absolute top reason for failed recoveries we've been seeing in our customer support.
2. Health check aka "storage-level corruption guard" will ALWAYS catch this issue, but keep in mind it only checks the latest (most recent) restore point.
3. By now we've seen issues with all protocols (SMB/NFS/iSCSI) and with many different NAS vendors serving the consumer/SOHO market.

I personally always suspected that these reliability issues are due to NAS vendors making questionable optimizations in their software in order to score higher in performance tests, which is super important for their market. But to be crystal clear, it's my own suspicion and nothing more. It could be due to honest bugs on their end, or due to the limitations of the software RAID technology they are using, or really anything along these lines. We can never know the root cause because the NAS is a black box for us: we don't have its source code nor means to troubleshoot its hardware. All we can do is issue commands against it and observe results, which are really scary at times.

We run a new (April 2021) Synology RS3621xs+ with 2 x 10GB ethernet. I was also getting Backup Copy jobs corrupting from a remote location:
- 2 x 10GB Ethernet bonded as one interface in the Synology
- Repository presented to Veeam B&R as an NFS share (NFS 4.1 enabled)
- Backup Copy jobs copied daily to Synology
- Repository presented to Veeam B&R as an SMB share
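The Synthetic Full puzzle above ("made from" corrupt incrementals yet valid) is easier to reason about with a toy model. The sketch below is only an illustration of how synthetic fulls work in general, not Veeam's actual implementation, and all names and block layouts are made up: a synthetic full is written as a new file that takes, for each block, the newest version found in the chain. One hedged consequence, assuming this model, is that a damaged block in an older increment does not propagate if a later increment rewrote that block.

```python
def build_synthetic_full(full, incrementals):
    """Toy model of a synthetic full: for every block, take the newest
    version found in the chain (full backup, then incrementals in order)."""
    blocks = dict(full)                # block_id -> contents
    for inc in incrementals:           # oldest to newest
        blocks.update(inc)             # a newer version of a block wins
    return blocks

# Block 1 is damaged in the first incremental, but a later incremental
# rewrites it, so the synthetic full never includes the damaged version.
full = {0: b"A0", 1: b"B0"}
incs = [{1: b"XX-corrupt"}, {1: b"B2"}]
print(build_synthetic_full(full, incs))  # {0: b'A0', 1: b'B2'}
```

This would also explain why restoring from the synthetic full worked while restoring from a corrupt incremental did not: the restore only ever reads the block versions the synthetic full selected.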
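The "storage-level corruption guard" mentioned in the reply can be pictured as checksum verification: store a digest per block at backup time, then have the health check re-read each block and compare. This is a minimal sketch of that general technique under assumed details (per-block SHA-256, invented function names), not Veeam's on-disk format.

```python
import hashlib

def record_checksums(blocks):
    # At backup time: remember a SHA-256 digest for every block written.
    return {bid: hashlib.sha256(data).hexdigest() for bid, data in blocks.items()}

def health_check(blocks, checksums):
    # Later: re-read every block from the repository and flag any block
    # whose current digest no longer matches the stored one.
    return [bid for bid, data in sorted(blocks.items())
            if hashlib.sha256(data).hexdigest() != checksums[bid]]

blocks = {0: b"alpha", 1: b"bravo"}
sums = record_checksums(blocks)
blocks[1] = b"br@vo"                 # simulate silent corruption on the NAS
print(health_check(blocks, sums))    # [1]
```

Note the caveat from the reply: if such a check runs only against the latest restore point, a bad block sitting in an older increment can go unnoticed until a restore needs it.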