Can QNAP or Synology devices be used with Intella?

Let's walk through some details with ChatGPT and take a look at some of the potential issues we've seen with our customers, better alternatives, and issues with running Lucene specifically.
Although Synology is specifically referenced here, in Vound's experience the same considerations apply equally to QNAP.

Question:
"I am running a server which manages a database that does a lot of I/O operations (Intella). The DB is growing too large to be handled by local disks. I was thinking about attaching Synology NAS to it. Is that a good idea? Is that vendor a good fit for such usage?"

Answer:
Using a Synology NAS for a database with heavy I/O operations isn't ideal due to latency and throughput constraints. While Synology NAS devices are great for file storage, backups, and general-purpose NAS use, they aren't typically designed for high-performance database workloads.

Potential Issues

  1. Network Latency & Throughput

    • If you connect via NFS, SMB, or iSCSI, your database's disk access speed will be limited by network latency.
    • Even with 10GbE, it’s significantly slower than a local NVMe SSD.
  2. IOPS Performance

    • Most Synology NAS devices use RAID (SHR, RAID5, RAID6, etc.), which prioritizes redundancy over speed.
    • HDD-based NAS arrays have significantly lower IOPS compared to NVMe or SSD-based local storage.
    • Synology SSD caching can help, but it's still not comparable to local high-speed storage.
  3. Reliability Under Heavy DB Workloads

    • Many Synology NAS models use low-power CPUs (Intel Atom, Celeron, Ryzen V1000, etc.), which may struggle with concurrent database queries.
    • While higher-end models (like the Synology FS/XS series) offer better I/O, they are still not optimized for database workloads.

When is Synology NAS a Good Fit?

If your use case involves cold storage, archiving, or read-heavy workloads, Synology NAS can be useful. Some cases where it might work:

  • As a Backup Target: Storing database snapshots for disaster recovery.
  • For Read-Only DBs: Hosting secondary database replicas (e.g., read replicas).
  • For Less IOPS-Intensive DBs: If your database doesn’t require thousands of transactions per second, it might work with SSD-based iSCSI storage.

Better Alternatives

If your database is growing beyond local disk capacity, consider these options:

  1. Local NVMe SSD Expansion

    • If possible, add more local storage (NVMe RAID, PCIe SSDs).
    • Use ZFS with compression to maximize space efficiency.
  2. Direct-Attached Storage (DAS)

    • Instead of NAS, use a DAS (e.g., QNAP TL-D800S, Synology RX1217) over SAS/SATA or Thunderbolt.
  3. Dedicated SAN/NVMe over Fabric (NVMe-oF)

    • Enterprise-grade solutions like Dell EMC PowerStore, NetApp AFF, or Pure Storage provide low-latency shared storage.
  4. Object Storage with DB Sharding

    • If the database supports S3-compatible backends, consider hybrid object storage for large blobs.
  5. Cloud Expansion (If Feasible)

    • Offload older, less accessed tables to cloud-based storage solutions (AWS EFS, Azure Files, etc.).

Verdict

If you need high IOPS, low latency, and consistent database performance, Synology NAS is not an optimal choice. Instead, consider local SSDs, enterprise SAN storage, or a DAS to handle growing database workloads. However, if you're looking for a cheap backup target or read-only secondary storage, Synology could work.





Question:
"If I run a Synology NAS, will it have trouble working with a Lucene database and open file limits?"

Answer:

Yes, running a Lucene-based database (like Elasticsearch, Solr, or OpenSearch) on a Synology NAS could lead to performance issues, especially concerning open file limits and I/O performance.


Problems with Running Lucene on a Synology NAS

  1. Open File Limits
    • Lucene-based systems require thousands of open files due to numerous index segments.
    • Many Synology NAS models run a custom Linux-based OS (DSM), which might impose lower open file limits by default.
    • You'd likely need to increase ulimit -n for both the system defaults and the process running Lucene, and DSM does not always make such settings easy to change or persist.
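Before committing to a given NAS platform, the relevant limits can be checked from a shell. A minimal sketch, assuming standard Linux /proc paths (DSM may restrict shell access):

```shell
# Per-process open file limit for the current shell
ulimit -n

# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max

# Maximum number of memory-mapped areas per process (Lucene maps index files heavily)
cat /proc/sys/vm/max_map_count
```

If any of these values is low and cannot be raised persistently, the device is a poor fit for a Lucene workload.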




Question:
"What are the minimum requirements for a NAS to ensure no issues if I am running a Lucene database? I need to ensure that it does not have bottlenecks with file limits."

Answer:

If you're running a Lucene-based database (e.g., Elasticsearch, Solr, or OpenSearch) on a NAS, you need to ensure that the NAS can handle high IOPS, low-latency access, and high file descriptor limits. Below are the minimum requirements for a NAS to prevent bottlenecks:


1. Hardware Requirements

  • CPU - Minimum: x86-64 (Intel i3 / AMD Ryzen); Recommended: Intel Xeon, Ryzen Pro, or EPYC
  • RAM - Minimum: 16GB+ (Elasticsearch recommends giving 50% of RAM to the heap); Recommended: 32GB+ ECC RAM
  • Disk Type - Minimum: all-SSD (NVMe or SATA SSDs in RAID 10); Recommended: enterprise NVMe SSDs (U.2, PCIe) with RAID 10 or ZFS RAIDZ1/RAIDZ2
  • Disk Configuration - Minimum: RAID 10 or RAIDZ1/RAIDZ2 (for performance and redundancy); Recommended: RAID 10 with NVMe or SSDs over PCIe
  • Network - Minimum: 10GbE; Recommended: 25GbE+ (if multiple nodes share storage)
  • Storage Protocol - Minimum: iSCSI (preferred) or NFS with tuning; Recommended: NVMe-oF, RDMA, or Fibre Channel
  • SSD Caching - Minimum: enabled (for metadata and small-file operations); Recommended: dedicated SSDs for write cache
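As a quick arithmetic check on the RAM guidance above (half of RAM for the JVM heap; the 31GB cap keeps compressed object pointers enabled and is standard Elasticsearch advice, added here as an assumption, not stated in the table):

```shell
# Suggest a JVM heap size: half of system RAM, capped at 31 GB.
ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
heap_gb=$(( ram_kb / 1024 / 1024 / 2 ))
if [ "$heap_gb" -gt 31 ]; then heap_gb=31; fi
echo "Suggested JVM heap: ${heap_gb}g"
```

On a 32GB machine this suggests a 16g heap, leaving the other half of RAM for the OS page cache, which Lucene relies on for fast index reads.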

2. File Open Limits & Linux Customization

a. Increase Open File Descriptors

Most NAS devices (especially Synology and QNAP) ship with low default file limits, which can cause issues when running Lucene-based databases. Ensure you can configure:

  • ulimit -n (open files): Set to minimum 65536, ideally > 262144
  • vm.max_map_count (for memory-mapped files): Set to 262144 or more
  • fs.file-max: Set to at least 2097152
  • Make the settings persistent in /etc/security/limits.conf, /etc/systemd/system.conf, and /etc/sysctl.conf
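As a sketch, the limits above would persist as the following configuration fragments (values mirror the bullet points; apply with sysctl -p or a reboot):

```
# /etc/security/limits.conf - per-user open file limits
*   soft   nofile   262144
*   hard   nofile   262144

# /etc/sysctl.conf - kernel-level limits
vm.max_map_count = 262144
fs.file-max = 2097152

# /etc/systemd/system.conf - default limit for systemd-managed services
DefaultLimitNOFILE=262144
```

Note that services started by systemd ignore limits.conf, which is why the systemd default must be raised as well.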

b. Custom Linux Support

  • Use a NAS with full Linux support (Synology DSM has restrictions; QNAP with QTS/Hero is more flexible).
  • TrueNAS SCALE (Linux-based) or Unraid might be better options.
  • Ability to SSH and manually configure system parameters is critical.

3. Avoiding Bottlenecks

  1. Avoid HDD-based NAS – HDD IOPS (~150-250 per drive) are far too low for Lucene indexes, especially during segment merges.
  2. Use iSCSI Instead of NFS/SMB – iSCSI provides block-level storage (closer to local disk performance).
  3. Enable SSD Write Caching – Especially for high ingest rates in Elasticsearch/Solr.
  4. Ensure NAS Supports Jumbo Frames – If using 10GbE+, enable an MTU of 9000 to reduce per-packet overhead.
  5. Avoid Low-End Consumer NAS Devices – Devices like Synology DS series (e.g., DS920+) are not ideal for Lucene.
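The jumbo-frame point above can be verified from the host side. A minimal sketch, where eth0 and the NAS address are example placeholders for your actual storage NIC and target:

```shell
# Read the current MTU of the storage-facing NIC (eth0 is an example name)
cat /sys/class/net/eth0/mtu 2>/dev/null || echo "interface not found"

# Raise it to 9000 (requires root; the switch ports and the NAS must use the
# same MTU end to end, or large frames will be silently dropped):
#   ip link set dev eth0 mtu 9000

# Confirm jumbo frames pass unfragmented (8972 = 9000 minus 28 header bytes):
#   ping -M do -s 8972 <nas-ip>
```

If the ping fails with "message too long", some hop on the path is still at the default 1500 MTU.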

4. Best NAS Choices for Lucene Workloads

  • TrueNAS SCALE (ZFS-based, Enterprise-Grade)
  • QNAP TS-h973AX (with ZFS & 10GbE)
  • Synology FS Series (All-Flash, e.g., FS3410, FS6400)
  • DIY NAS with Linux (Proxmox + Ceph, or Unraid)
  • Enterprise SAN/NAS (Dell PowerStore, NetApp AFF, Pure Storage)

Final Verdict

To avoid bottlenecks, the minimum NAS configuration is all-SSD storage, 10GbE networking, RAID 10, and support for tuning Linux file descriptor limits. A high-end QNAP, TrueNAS SCALE, or custom-built NAS will serve better than a Synology or entry-level QNAP device.