10GbE Core Aggregation Switch Technology for Small-to-Medium Business Storage: the XS+ with Intel SSDs

This post discusses how a small-to-medium business (SMB) can benefit from affordable 10GbE core aggregation switch technology for its growing storage network.

Problem: How to Maximize Bandwidth for Businesses with Growing Storage Needs with 10GbE Technology
A common concern for SMBs is full bandwidth utilization on a NAS. Most NAS units today feature 1GbE, which is a very respectable amount of available performance for small-to-medium-sized businesses. However, as an SMB grows with more users and becomes more dependent on centralized storage for sharing data or content creation, 1GbE on the storage back-end can easily become saturated. It becomes a bottleneck for meeting the needs of the business, whether for video editing, surveillance, or creative multimedia work. One way to resolve this is to use Link Aggregation across many 1GbE ports; a simpler solution is to use 10GbE technology. This article will discuss and demonstrate how 10GbE can alleviate the network performance concerns of a growing SMB.

The problem is that if 20x1GbE computers concurrently use a single 1GbE network interface on the storage back-end, each computer effectively gets only 50Mbps of bandwidth, which results in low performance at every workstation. The way to resolve this is to increase back-end storage network performance, but how to do so affordably? The answer involves using the 2x10GbE connections on the Synology XS+ Series together with a new class of switch, currently classified as a “Core Aggregation Switch”, which supports both 1GbE and 10GbE connections. The core aggregation switch is an affordable infrastructure upgrade, as its 1GbE ports serve the existing infrastructure while the RackStation attaches to the 10GbE ports for high-performance access. Effectively, the switch funnels 20x1GbE computers into 2x10GbE connections, meaning each computer can now run at its full 1GbE potential.
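The arithmetic above can be sketched in a couple of lines of shell integer math:

```shell
# Back-of-the-envelope per-client bandwidth, before and after the upgrade.
clients=20
echo "Per-client share of 1x1GbE:  $(( 1000 / clients )) Mbps"    # 50 Mbps
echo "Per-client share of 2x10GbE: $(( 20000 / clients )) Mbps"   # 1000 Mbps, a full 1GbE
```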
To test this theory, I built an environment with a very simple test procedure to prove the concept that 20 average 1GbE computers can take advantage of 2x10GbE links.

Test Hardware
I decided on the RS3413xs+, as I’m not using SAS drives in this experiment. (If I were to choose one of our units for a SAS solution, the RS10613xs+ provides that ability.) I’m also using Intel 520 Series SSDs for their high performance and multi-tasking access to the storage array; this eliminates any disk-performance bottleneck concerns from traditional hard drives and the latency inherent in their mechanical nature. I’m also using Intel 10GbE server adapters to let the Synology XS+ Series RackStation maximize access speeds, and I’ve LAGged the two 10GbE ports together to create 20GbE of aggregate bandwidth. The core aggregation switch I’m using is the affordable GS752TXS, which features 48x1GbE ports plus 2x10GbE ports for network use and 2x10GbE ports for stacking. I found this switch for under $1,500 USD, and it is the first switch I found that supports both 1GbE and 10GbE ports.

Test Procedure
The test procedure is quite simple, yet proves effective for what I intend to test, and can easily be replicated in any environment. I’m using 20 various computers, for which I wrote a script to either read or write a single large file (10GiB in my test) to or from the RackStation. I manually instruct the computers to all read at once or all write at once. During the benchmark, each computer records its own throughput. I did not use SSD Read Cache for this experiment. After the performance testing completed, I collected all of the log files and summed the performance data to determine the aggregate results.
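The per-client script can be sketched roughly as follows; the share mount point, file name, and log format here are my assumptions for illustration, not details from the original test:

```shell
#!/bin/sh
# Hedged sketch of the per-client write benchmark. The destination path and
# the "<rate> MiB/s" log format are hypothetical, not the original script's.
bench_write() {
    dest=$1        # file on the mapped SMB share
    size_mib=$2    # 10240 (10GiB) in the actual test
    start=$(date +%s)
    dd if=/dev/zero of="$dest" bs=1M count="$size_mib" conv=fsync 2>/dev/null
    end=$(date +%s)
    elapsed=$(( end - start )); [ "$elapsed" -eq 0 ] && elapsed=1
    echo "$(( size_mib / elapsed )) MiB/s"
}

# Example: bench_write /mnt/rackstation/bench/test.bin 10240 >> client.log
```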


Aggregate Performance Results

  • Intel 520 Series SSD, 240GB, SSDSC2CW240A3
    • Write: 1281.21 MB/Sec
    • Read: 1354.81 MB/Sec
  • Western Digital WD2003FYYS, 2TB, Enterprise-Class
    • Write: 832.99 MB/Sec
    • Read: 388.15 MB/Sec

Intel 520 Series SSD Results Analysis
Given that the client computers work at 60-80% of 1GbE, it’s expected that the full 20GbE of bandwidth won’t be fully utilized. The varying performance per computer can be attributed to a few bottlenecks, such as virtual memory usage, file system fragmentation, security scans, or other background software running on the client computers. The performance result above is the average of three trials. 1354.81 / 2500 works out to 54.2% effective bandwidth utilization of the 2x10GbE links. Using 54.2% as the average, I would need at least 37 computers to attempt to saturate the 2x10GbE links. This test demonstrates how twenty computers can effectively utilize 10GbE technology, and how a small business can take advantage of it at a practical cost.
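The utilization figures above can be reproduced with a short awk calculation; the 2500 MB/s figure assumes roughly 1250 MB/s of theoretical payload per 10GbE link:

```shell
# Sketch of the effective-utilization math from this test's measured numbers.
awk 'BEGIN {
    link = 2500.0       # 2x10GbE LAG, theoretical aggregate MB/s
    read = 1354.81      # measured aggregate read throughput, MB/s
    util = read / link
    printf "utilization=%.1f%%\n", util * 100
    n = 20 / util       # clients needed to saturate at the same per-client rate
    printf "clients_needed=%d\n", (n > int(n)) ? int(n) + 1 : int(n)
}'
```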

SSD vs HDD Analysis
I was also curious to see how HDDs would perform, for more affordable deployments where a business needs capacity instead of performance. The odd behavior where write performance is faster than read performance is explained by the write cache. Also, given that 20 computers are reading concurrently, the aggregate of sequential reads effectively becomes random-read behavior, so this test represents a worst-case scenario for a small business. Everyday read performance of the XS+ Series with HDDs should be much higher.

For businesses that demand capacity, mechanical drives are still a viable option, and an HDD deployment can even skip the 10GbE card on the RackStation: the add-on Intel 4x1GbE server card brings the unit to a total of 8x1GbE links, assuming the existing or replacement switch has eight free 1GbE ports for Link Aggregation. For a pure 1GbE solution, the investment in a 48x1GbE switch may be a good choice for the network. This allows increased aggregate bandwidth for the XS+ and lets the existing network infrastructure grow as more workstations are added.

Affordable and Beneficial Solution for a Growing Small Business Storage
All of the above hardware, whether aimed at pure performance or at capacity, can be acquired for around $10,000 USD, a small cost that is a great investment for a small business. When weighed against the productivity issues of twenty-plus users sharing a NAS for video editing, content creation, or a surveillance deployment, $10,000 USD in hardware acquisition costs is of little consequence. This small investment in the back-end storage allows a small business to continue to grow, as the individual workstations can now operate at their full 1GbE potential; the business is no longer limited to a single 1GbE link. The XS+ Series with Intel SSDs and Intel 10GbE gives the storage back-end plenty of performance to meet the needs of a growing small business, allowing it to scale out with more workstations and increase its productivity at a very affordable cost.

Hardware and Storage Parameters

  • SSD Version
    • NAS: Synology RS3413xs+, $4,999.99
    • Drives: 10x Intel 520 Series SSD 240GB, 10x $299.99
    • NIC: Intel X520-DA2, $432.99
    • SFP+ Cables: 2x Cisco SFP-H10GB-CU3M, 2x $144.99
    • Estimated Usable Capacity in RAID-6: 1.79TiB
    • Write Performance: 1281.21 MB/Sec
    • Read Performance: 1354.81 MB/Sec
  • HDD Version
    • NAS: Synology RS3413xs+, $4,999.99
    • Drives: 10x Western Digital WD2003FYYS 2TB (Enterprise-Class), 10x $229.99
    • NIC: Intel X520-DA2, $432.99
    • SFP+ Cables: 2x Cisco SFP-H10GB-CU3M, 2x $144.99
    • Estimated Usable Capacity in RAID-6: 14.55TiB
    • Write Performance: 832.99 MB/Sec
    • Read Performance: 388.15 MB/Sec

Environment Setup

  • Synology RackStation RS3413xs+
    • DSM 4.1-2850
    • 10x Intel 520 Series SSD, 240GB, SSDSC2CW240A3, RAID-6
    • 1x Intel X520-DA2 10GbE NIC, SFP+
    • 2x Cisco SFP-H10GB-CU3M SFP+ Cables
    • LAN 5, 6 (10GbE Ports) are in LAG Mode
  • Core Aggregation Switch: GS752TXS
    • Boot Version: B5.2.0.1
    • Software Version:
    • Port 49, 50 are LAG, LACP
    • Flow Control: Enabled
    • STP Mode: Disable
    • Link Trap: Disable
  • 20x various desktops and laptops
    • CPU: Wolfdale, Nehalem, Westmere, Sandy Bridge, or Ivy Bridge; dual or quad cores.
    • RAM: Various between 4-8GB, DDR2 or DDR3
    • OS: Windows 7, Windows 8 or Mac OS 10.8
    • HDD: Varies between 5400rpm, 7200rpm, or SSD; either LFF or SFF
    • NIC: 1x1GbE


  • Build a Synology RackStation XS+ Series with Intel SSDs and Intel 10GbE Server Network Card
  • LAG 10GbE Ports together on the RackStation and Switch
    • LAG, LACP, and Flow Control Enabled for the Switch Settings
  • Use a large data file, such as a 9GB ISO Image
    • In this test, I used 20x10GiB Files, where the 10GiB files are assigned to a specific folder per computer. I used ‘dd’ to create the 10GiB File
  • Map a SMB Share
  • Copy Procedure
    • Windows: use ‘robocopy’ to copy the test file from the computer to the RackStation and determine the performance result
    • Mac: use a combination of ‘date’ and ‘cp’ to run the copy, noting the elapsed time and computing the throughput arithmetically
  • Add all of the computer’s performance to determine aggregate performance capability
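The final summation step above can be sketched as follows; it assumes each client left a log whose last line is "<rate> MiB/s", which is a hypothetical format rather than the one used in the original test:

```shell
#!/bin/sh
# Sum the last recorded rate from every client log into one aggregate figure.
total=0
for f in bench-*.log; do
    rate=$(tail -1 "$f" | awk '{print $1}')
    total=$(( total + rate ))
done
echo "aggregate: ${total} MiB/s"
```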

Further Reading
1. Synology Wiki – Using Link Aggregation on the Synology DiskStation