Official Blog
The machine that keeps millions of systems reliably up-to-date
Yi-Lin Zhuo
February 6, 2026

Authored by Yi-Lin Zhuo, Senior Manager at Synology leading the Quality Assurance teams responsible for Synology’s product releases.

Today, well over ten million Synology systems are actively in use around the world, each holding an average of tens, if not hundreds, of terabytes of data. Taken together, that puts the global Synology fleet on par with many public clouds. Behind every one of those systems is someone who depends on it: a business protecting years of financial records, a hospital archiving patient data, a creative studio safeguarding irreplaceable project files.

When we release an update, we carry an enormous responsibility to get it right. We're asking every one of those customers to trust that the update won't disrupt what they've built. That trust is hard to earn, and it takes only one mistake to lose it, putting long-term security at risk.

In the trenches

At Synology, QA isn’t just a gate at the end of development. Instead, it’s woven into the process from the very beginning of our software development pipeline. Our team works alongside product managers and developers to understand new features as they’re being designed, not after they’re handed off.

This means test methodology evolves in tandem with the feature itself. By the time a developer commits code, we’ve already mapped out how we’ll validate it, from the behaviors we need to check to the edge cases worth probing and the regressions we should watch for. Synology maintains a 1:4 ratio of QA engineers to developers, which gives us the bandwidth to be deeply involved.

That early collaboration also means we catch design-level issues before they become code-level problems. When QA has a seat at the table, fewer surprises make it to production.

Picture of the team discussing in front of a whiteboard.

Testing methodologies

Once development reaches a testable state, every release moves through three distinct phases, each expanding in scope.

Feature testing focuses narrowly on what's new or changed. If we've added a new capability to Hyper Backup or adjusted how a wizard guides users through a process, this phase verifies that the specific behavior works as intended. It's usually the easiest phase to verify and can run in parallel with development.

Integration testing examines how the new feature interacts with the broader package or operating system. We check for conflicts, performance regressions, and unintended side effects. A change in one subsystem can ripple outward in unexpected ways, and those effects are significantly harder to detect and troubleshoot. Most software updates run through multiple comprehensive integration testing checklists and procedures to ensure that the updates our customers receive are robust.

Comprehensive testing introduces the full complexity of real-world deployments by blending in hardware variables, upgrade paths from older DSM versions, and a wider range of configurations. This is where we simulate the diversity of environments our customers actually run, not just clean, idealized setups. Each comprehensive testing run is a multi-week, sometimes multi-month project that accompanies every one of our hardware releases. For enterprise platforms, we've worked with close partners to design high-stress environments that mimic the conditions these systems face in production.

QA team unloading multiple Synology systems (DUT) into a heated chamber.

Tackling near-infinite possibilities

The biggest challenge any QA team faces is variability. Serious bugs, like a completely broken function, are easy to catch. But an issue that might crop up 1% of the time on just a handful of hardware configurations when a non-default setting is used? Not so much.
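
To put that in perspective, here is a quick back-of-the-envelope sketch (just the underlying arithmetic, not one of our tools): if a bug reproduces with probability p on an affected configuration, the chance of seeing it at least once in n independent runs is 1 - (1 - p)^n.

    import math

    p = 0.01  # bug reproduces in roughly 1% of runs on an affected configuration

    # Chance of observing the failure at least once in n independent runs
    for n in (10, 50, 100, 300):
        print(f"{n:>4} runs -> {1 - (1 - p) ** n:.1%} chance of seeing the bug")

    # Runs needed before there is a 95% chance of hitting the failure once
    runs_needed = math.ceil(math.log(0.05) / math.log(1 - p))
    print(f"~{runs_needed} runs for 95% confidence, on a single configuration")

Close to 300 repetitions for a single configuration, before multiplying by models, settings, and upgrade paths. That is a level of repetition no human team can sustain by hand.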

This is where automated tests, custom tooling, and a development pipeline that adheres to stringent checkpoints act as a force multiplier for our QA engineers. Today, AI-assisted code review, dependency mapping, and even bug filing streamline the more predictable parts of day-to-day operations. That frees our engineers to focus on identifying other high-importance issues, performing UI/UX validation, and designing more tools and automated tests that leverage their intuition and years of experience, while the repetitive work chugs along automatically.

For 2026 and beyond, we're preparing to deploy an even more powerful automation system. The goal is a significant increase in concurrent test execution, which we expect will help us surface complex, hard-to-reproduce issues that slip through conventional methods simply because of the sheer number of variables involved. The new system combines the ever-increasing capabilities of AI with the comprehensive knowledge base and testing methodology we've built over two decades to test significantly more combinations of hardware, software, and settings.
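
As a simplified illustration of why combinations are the hard part (a generic pairwise-coverage sketch with made-up test dimensions, not the system described above), the exhaustive matrix of even a few variables grows quickly, while a much smaller suite can still exercise every pair of values:

    from itertools import combinations, product

    # Hypothetical test dimensions, for illustration only
    params = {
        "model":      ["DS925+", "RS2423+", "DS1823xs+"],
        "dsm_from":   ["7.1", "7.2", "7.2.1"],
        "raid":       ["SHR", "RAID 5", "RAID 6"],
        "encryption": ["on", "off"],
    }

    names = list(params)
    all_cases = list(product(*params.values()))   # every exhaustive combination
    pairs = {                                     # every pair of values to cover
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }

    def covered(case):
        assign = dict(zip(names, case))
        return {((a, assign[a]), (b, assign[b])) for a, b in combinations(names, 2)}

    # Greedily pick the test case that covers the most still-uncovered pairs
    suite, uncovered = [], set(pairs)
    while uncovered:
        best = max(all_cases, key=lambda c: len(covered(c) & uncovered))
        suite.append(best)
        uncovered -= covered(best)

    print(f"{len(all_cases)} exhaustive combinations, {len(suite)} cases cover every pair")

Real coverage planning involves far more dimensions and constraints than this toy example, but the principle is the same: exercising every interaction between pairs of settings catches a large share of interaction bugs at a small fraction of the cost of exhaustive enumeration.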

Synology QA team members examining and loading a PAS series system into a row of rackmount systems in a datacenter.

Testing is one step

Even after rigorous testing, a smart release strategy is mandatory. Major releases go through extended internal alpha periods, followed by public betas that invite real-world feedback. When we move to general availability, we use staged global rollouts, releasing to a subset of systems first and monitoring for issues before expanding further.
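
Conceptually, a staged rollout comes down to two mechanisms: deterministically bucketing each system so the same devices stay in or out of a given wave, and refusing to widen the wave while telemetry looks unhealthy. The sketch below is purely illustrative, with hypothetical device IDs, release names, and thresholds, and is not our release infrastructure:

    import hashlib

    def in_rollout(device_id: str, release: str, percent: float) -> bool:
        """Deterministically place a device in the first `percent` of the fleet."""
        digest = hashlib.sha256(f"{release}:{device_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF    # stable value in [0, 1]
        return bucket < percent / 100.0

    def next_stage(current_percent, stages, error_rate, max_error_rate):
        """Advance to the next rollout stage only while telemetry stays healthy."""
        if error_rate > max_error_rate:
            return 0.0                               # halt and pull the release
        later = [s for s in stages if s > current_percent]
        return later[0] if later else current_percent

    # Example: 1% -> 5% -> 25% -> 100%, expand only while errors stay under 0.1%
    stages = [1, 5, 25, 100]
    print(in_rollout("serial-ABC123", "example-release", percent=1))
    print(next_stage(1, stages, error_rate=0.0004, max_error_rate=0.001))

The deterministic hash matters: a device that was in the 1% wave stays in the 5% wave, so an expanding rollout never flip-flops systems in and out of the new version.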

For customers running business-critical workloads, this deliberate pacing matters. An update that's been validated internally is good, but an update that's also been proven in the field is better. Of course, organizations are always encouraged to perform their own testing on non-production systems before applying updates.

QA is often misunderstood as just a final checkpoint, but it can be so much more. Our team members are engineers who participate during development, build tools, and draw on years of experience to contribute significantly to the products they're validating. That technical depth helps build a culture of software reliability and lends credibility when they push back on a release that just isn't ready.

Reliable software doesn’t happen by accident. It comes from countless test iterations, a willingness to delay when something isn’t right, and a team that treats every update as if their own data depends on it. Because for millions of Synology users, it does.