WordPress Performance on Digital Ocean: Full Stack with Apache, NGINX & Redis


Setting up a WordPress site on a fresh VPS is easy — any developer can spin up a droplet and install a theme. But running a real WordPress site is a different challenge. I’m talking about a professional-grade setup with 27 plugins, Redis for object caching, and a dual web server configuration using both Apache and NGINX. This isn’t a clean demo install — it’s a realistic, heavy site that mirrors what many agencies, developers, and growing businesses deploy in production.

In this article, I take that kind of real-world WordPress stack and benchmark it across four different DigitalOcean droplets — each with different CPU architectures, RAM sizes, and regions. It’s the follow-up to my earlier Digital Ocean Droplet Performance Benchmark, where I tested raw CPU, RAM, disk, and network performance. But this time, the focus is squarely on application-level performance: how fast WordPress responds, how well caching behaves, and how stack choices like Redis and PHP 8.2 affect performance under real traffic conditions.

Missed the First One? Want to See What’s Next?

  • Part 1: Digital Ocean Droplet Performance Benchmark
    • Compared CPU, memory, disk, and network performance across Intel and AMD droplets in different regions.
  • Part 3: WordPress Performance with Apache, NGINX, Redis & PHP-FPM
    • Pushed the same WordPress stack further using PHP-FPM with a new extreme-load test for concurrency handling.

    Let’s first revisit the droplet configurations:

| Hostname | Region | CPU Type | RAM  | Disk      | OS          |
|----------|--------|----------|------|-----------|-------------|
| srv1     | NYC3   | AMD      | 1 GB | 25 GB SSD | AlmaLinux 9 |
| srv2     | SGP1   | AMD      | 1 GB | 25 GB SSD | AlmaLinux 9 |
| srv100   | NYC1   | AMD      | 2 GB | 50 GB SSD | AlmaLinux 9 |
| srv1000  | NYC3   | Intel    | 1 GB | 25 GB SSD | AlmaLinux 9 |

    Test Setup:

    Each droplet was configured with a full WordPress stack designed to mirror a real production site — not a minimal install, but a fully featured setup. Here’s what was running on each server:

    • Apache 2.4.62 with NGINX as a reverse proxy
    • PHP 8.2.0 with Opcache enabled for faster execution
    • MariaDB 10.5.27 as the database engine
    • Redis configured for object caching
    • A full WordPress site featuring:
      • 40 pages
      • 27 plugins (25 active during testing)
      • A fully loaded theme with custom blocks and custom post types
Figure: Environment summary from WordPress Site Health
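The article doesn’t show how the dual web server configuration is wired; a common pattern is NGINX terminating client connections on ports 80/443 and proxying to Apache on an internal port. A minimal sketch, assuming Apache listens on 127.0.0.1:8080 — the server name, port, and file path are illustrative placeholders, not taken from the article:

```shell
# Hypothetical NGINX -> Apache reverse-proxy vhost. Written as a heredoc so
# the whole setup step is a single shell snippet.
cat > /etc/nginx/conf.d/wordpress-proxy.conf <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache backend (assumed port)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
nginx -t && systemctl reload nginx   # validate config before reloading
```

The `X-Forwarded-*` headers matter here: without them, WordPress behind the proxy sees every request as coming from 127.0.0.1.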

    Benchmark Results

    To understand how each droplet handles real-world WordPress load, I ran three rounds of controlled performance tests. Each test simulated traffic to a production-grade site with 27 plugins, Redis caching, and no full-page cache. Results highlight not just peak performance, but also stability, consistency, and how well each server handles stress over time.

    Test 1: Initial WordPress Performance

This test simulated real-world traffic on a WordPress site running PHP-CGI. I began with moderate concurrency — just 10 users and 80 requests — after discovering that pushing these droplets harder would quickly crash services or make the VPS completely unresponsive. In fact, I had to repeat the test up to five times to find the sweet spot that wouldn’t overwhelm the system while still revealing performance characteristics.
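The article doesn’t name the load-testing tool; ApacheBench (`ab`, from the httpd-tools package on AlmaLinux) is one common way to run exactly this 10-concurrency, 80-request pattern. A sketch — the target URL is a placeholder:

```shell
#!/usr/bin/env bash
# Hypothetical load test matching the article's parameters. example.com is a
# placeholder; install ab with: dnf install httpd-tools
TARGET="https://example.com/"
if command -v ab >/dev/null 2>&1; then
  ab -n 80 -c 10 -k -s 30 "$TARGET"   # 80 requests, 10 concurrent, keep-alive
fi

# The "Req/sec" column is just total requests over total wall time; e.g.
# srv1 finished 80 requests in 223.2 s:
awk 'BEGIN { printf "%.2f req/sec\n", 80 / 223.2 }'   # prints 0.36 req/sec
```

That arithmetic matches the table below: 80 / 223.2 ≈ 0.36, srv1’s reported Req/sec.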

| Hostname | Region | CPU   | RAM  | Req/sec | Median Resp Time (ms) | Total Time (s) | Max Resp Time (ms) |
|----------|--------|-------|------|---------|-----------------------|----------------|--------------------|
| srv1     | NYC3   | AMD   | 1 GB | 0.36    | 28,093                | 223.2          | 39,413             |
| srv2     | SGP1   | AMD   | 1 GB | 0.26    | 35,644                | 312.9          | 60,429             |
| srv100   | NYC1   | AMD   | 2 GB | 0.66    | 14,825                | 121.5          | 18,115             |
| srv1000  | NYC3   | Intel | 1 GB | 0.29    | 32,983                | 273.1          | 44,667             |

• srv100 (2GB RAM, AMD) was clearly the fastest — it completed the run nearly twice as quickly as the next-best droplet (121.5 s vs 223.2 s).
    • srv1 (1GB, AMD) did OK, but it started to slow down under even light load. Its median response time was nearly 2x slower than srv100, likely due to limited memory and the inefficiency of PHP-CGI.
    • srv1000 (1GB, Intel) was slower than the AMD-based droplets in every metric. Intel’s legacy droplet performance just doesn’t hold up.
    • srv2 (1GB, AMD, Singapore region) had the worst performance, with extremely long response times and the longest test duration. The likely causes are:
      • Higher SSL handshake or connect time (700ms+)
      • Slower network to the test machine
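The 700 ms+ handshake figure can be checked directly with curl’s per-phase timing variables, which split one request into DNS, TCP connect, TLS, and time-to-first-byte. The URL is a placeholder:

```shell
# Diagnostic one-liner: where does a single request's time go? Useful for
# separating connect/TLS cost (srv2's suspected problem) from server-side
# processing (the ttfb - tls gap).
curl -o /dev/null -s -w \
  'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  https://example.com/
```

Run from the same test machine, a large `tls` value relative to `ttfb` would confirm the handshake theory rather than slow PHP.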

    Test 2: Stability Check Under Repeat Load

    This round re-ran the same test on the same stack to check for consistency and stability under stress. Once again, concurrency was kept at 10 and requests at 80, as anything more would destabilize the server — in some cases forcing a reboot.

| Hostname | Region | CPU   | RAM  | Req/sec | Median Resp Time (ms) | Total Time (s) | Max Resp Time (ms) |
|----------|--------|-------|------|---------|-----------------------|----------------|--------------------|
| srv1     | NYC3   | AMD   | 1 GB | 0.35    | 27,810                | 229.90         | 44,039             |
| srv2     | SGP1   | AMD   | 1 GB | 0.25    | 39,410                | 326.24         | 49,217             |
| srv100   | NYC1   | AMD   | 2 GB | 0.70    | 14,230                | 114.93         | 15,266             |
| srv1000  | NYC3   | Intel | 1 GB | 0.31    | 31,026                | 256.69         | 42,377             |

    • srv100 remained the fastest and most stable, with even lower total time — showing consistent efficiency under pressure.
• srv1 showed stable behavior but a slight creep in response times, likely from cumulative load or CPU contention with neighboring tenants.
    • srv1000 improved very slightly in max latency, but AMD droplets still easily outpaced it.
    • srv2 continued to underperform. Network latency and CPU limitations in this region seem to be persistent bottlenecks.

    Test 3: Sustained Stress and Concurrency Limits

    In this third round, I continued using the same 10-concurrency, 80-request model to simulate consistent pressure. Anything more still caused service timeouts or PHP crashes, so I stayed within known-safe limits. This final run aimed to confirm patterns and uncover any performance degradation over time.

| Hostname | Region | CPU   | RAM  | Req/sec | Median Resp Time (ms) | Total Time (s) | Max Resp Time (ms) |
|----------|--------|-------|------|---------|-----------------------|----------------|--------------------|
| srv1     | NYC3   | AMD   | 1 GB | 0.29    | 36,271                | 272.38         | 51,648             |
| srv2     | SGP1   | AMD   | 1 GB | 0.25    | 38,925                | 320.07         | 61,171             |
| srv100   | NYC1   | AMD   | 2 GB | 0.69    | 14,429                | 115.59         | 16,673             |
| srv1000  | NYC3   | Intel | 1 GB | 0.29    | 33,587                | 272.73         | 43,609             |

    • srv100 remained the only droplet that comfortably handled repeated load, showing excellent consistency over all three tests.
    • srv1 performance degraded slightly in this round, indicating it may be close to its resource limit even with moderate traffic.
    • srv1000 was slightly more stable than expected but still underwhelming.
    • srv2 now clocked the worst response times yet — hitting a 61-second max latency.

    Monitoring Graphs Summary

    To complement the raw performance data, I captured resource usage metrics during each test run. These graphs provide a behind-the-scenes look at how each server handled CPU load, memory usage, and disk I/O under stress. In some cases, they helped explain slowdowns that weren’t obvious from response times alone — like CPU saturation, inefficient caching, or regional latency. Together, they offer a complete picture of how infrastructure and application behavior intersect.
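The article doesn’t say how these metrics were captured; one simple, tool-agnostic way to collect comparable CPU, memory, and disk figures during a run is to sample vmstat and iostat in the background (iostat comes from the sysstat package). Log file names below are illustrative:

```shell
# Sample system metrics every 5 s for 5 minutes while a load test runs;
# the two samplers run in parallel and the script waits for both.
vmstat 5 60    > "vmstat-$(hostname).log" &    # CPU, memory, swap, run queue
iostat -dx 5 60 > "iostat-$(hostname).log" &   # per-device disk I/O and utilization
wait   # block until both samplers finish
```

The `%st` (steal) column in vmstat is worth watching on shared droplets — high steal time explains slowdowns that CPU usage alone won’t.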

    1. srv1: (1 GB AMD, NYC3)

Figure: Resource usage on srv1 (1 GB, AMD) during the test runs. CPU saturation visible; disk I/O limited.
    • CPU usage hits 100% during peak periods — a clear sign that PHP-CGI can’t scale on small instances.
    • Disk I/O remained moderate; bottleneck is likely PHP or memory, not storage.

    2. srv2: (1 GB AMD, SGP1)

Figure: srv2 (1 GB, AMD, Singapore) shows inconsistent CPU utilization and lower throughput.
    • Network latency and inconsistent CPU usage suggest external or regional factors are impacting performance.
    • Disk activity is low, indicating delays are not from storage but processing/network.

    3. srv100: (2 GB AMD, NYC1)

Figure: srv100 (2 GB, AMD) showed smooth CPU and I/O behavior — the most balanced and efficient.
    • CPU remained under 70%, disk I/O was consistent, and request handling was predictable.
    • Excellent resource balance — ideal candidate for production workloads.

    4. srv1000: (1 GB Intel, NYC3)

Figure: srv1000 (1 GB, Intel) used more disk I/O than the others and showed inconsistent CPU spikes.
    • Higher disk usage might indicate inefficient caching or database behavior.
    • Intel-based droplets don’t scale well for WordPress workloads under concurrency.

    Additional Application-Level Metrics

To go beyond server-side concurrency and latency, I also captured WordPress-level performance using Query Monitor and real-world frontend timing using Pingdom Tools (San Francisco, USA test server). These metrics reveal how PHP execution, memory usage, and network latency impact the actual page load experience.

    • srv100 (2GB, AMD) had the fastest total load time (2.92s) and one of the lowest wait times (1832 ms). It’s the most balanced performer both server-side and client-side.
    • srv1 and srv2 were close in terms of backend performance (around 3.5–3.8s), but srv2 had the highest frontend wait time (2631 ms) — likely due to geographical distance or slower initial connection.
    • srv1000 (1GB, Intel) was the worst in both backend and frontend metrics — with the highest execution time (4.83s) and elevated wait time (2256 ms). This aligns with earlier benchmarks showing CPU and memory inefficiencies.
    • All droplets used similar memory (~32MB) and executed ~52–57 queries, indicating that performance differences stem mostly from infrastructure and latency, not app configuration.
Figure: Execution time measured by Query Monitor vs. initial wait time reported by Pingdom (San Francisco test location).
    • srv100 had both the lowest execution time (2.92s) and a solid wait time (1832ms) — making it the most responsive overall.
    • srv2 suffered from the highest frontend wait (2631ms), highlighting regional latency.
    • srv1000 had the slowest backend execution (4.83s), further proving that Intel-based droplets lag in WordPress-heavy environments.

    Conclusion

    After three rounds of real-world WordPress benchmarking, deep stack monitoring, and client-side testing, the verdict is clear: infrastructure choices matter more than the spec sheet suggests.

    srv100 (2GB AMD, NYC1) was consistently the top performer in every test. It not only handled concurrency better but maintained stable performance across all metrics — backend, frontend, and system-level resource usage. It’s the kind of VPS I’d confidently run production workloads on.

    On the other hand, srv1 and srv1000, both 1GB droplets (AMD and Intel respectively), showed that 1GB RAM is a tight squeeze for plugin-heavy WordPress stacks — especially when running under PHP-CGI. Performance dipped under repeated load, and the system often hovered near failure. Enabling PHP-FPM and caching in a future test will likely bring major improvements.

    srv2 (AMD, Singapore) consistently had the worst performance despite similar specs. Regional latency and SSL handshake delays played a role, showing that geography and network routing can be just as important as RAM or CPU when it comes to global performance.

    These tests reinforced that benchmarking isn’t optional. No matter how similar your droplets appear on paper, real workloads reveal real differences.

    Need Help?

    If you’re building more than a blog — whether it’s a custom API, e-commerce site, or enterprise SaaS platform — and you care about speed, reliability, and scalability, I can help.

    I offer services in:

    • Performance tuning for WordPress, Laravel, Node.js, and Django
    • Infrastructure setup and automation for LEMP/LAMP stacks
    • Penetration testing and server hardening for security-critical applications
    • Mobile and web development (React, Flutter, Vue, Next.js)
    • Ongoing sysadmin and DevOps support for scaling or recovering servers

Get in touch to build something that runs fast and doesn’t fall over. And if you’d like to try Digital Ocean with $200 in free credit, sign up using my referral link here.

    Digital Ocean Droplet Benchmark: Intel vs AMD, NYC1 vs NYC3 vs SGP1


    When it comes to cloud hosting, it’s easy to get lost in spec sheets and marketing claims.
    I wanted real answers: How do different Digital Ocean droplets actually perform today?

    In this detailed benchmark analysis conducted in May 2025, I evaluated four different Digital Ocean droplets, comparing key factors such as CPU architecture (Intel vs. AMD), regional performance (New York City data centers NYC1 and NYC3, and the Singapore region SGP1), RAM allocation (1GB vs. 2GB), disk performance, and network capabilities.

If you’re deploying a WordPress site, API, or app, this deep dive into CPU, disk, RAM, and network performance will help you choose the right droplet configuration. It should also help developers, system administrators, and businesses make informed decisions when provisioning Digital Ocean droplets.

    Let’s start with the droplet configurations:

| Hostname | Region | CPU Type | RAM  | Disk      | OS          |
|----------|--------|----------|------|-----------|-------------|
| srv1     | NYC3   | AMD      | 1 GB | 25 GB SSD | AlmaLinux 9 |
| srv2     | SGP1   | AMD      | 1 GB | 25 GB SSD | AlmaLinux 9 |
| srv100   | NYC1   | AMD      | 2 GB | 50 GB SSD | AlmaLinux 9 |
| srv1000  | NYC3   | Intel    | 1 GB | 25 GB SSD | AlmaLinux 9 |

    I spun up four droplets across three regions (NYC1, NYC3, and SGP1) with similar RAM, CPU, and storage specs, and put them through real-world tests. This article focuses on raw infrastructure performance. In the upcoming article, I’ll install a full WordPress stack (Apache, NGINX, MariaDB, Redis) on these droplets and benchmark it under real-world conditions — including a 40-page site with 27 plugins.

    Benchmark Results:

    1. CPU: Intel vs AMD:

First, let’s measure CPU performance with a prime-number calculation test. This test simulates compute-intensive workloads and helps highlight raw CPU efficiency and responsiveness.
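The article doesn’t name the benchmark tool, but sysbench’s CPU test computes primes and reports exactly these two columns (events/sec and average latency). A sketch — the prime ceiling and duration are assumptions based on sysbench defaults:

```shell
# Single-threaded prime-number benchmark. 20000 is sysbench's default
# --cpu-max-prime; --time=10 runs for 10 seconds and reports events/sec
# plus min/avg/max latency per event.
sysbench cpu --cpu-max-prime=20000 --threads=1 --time=10 run
```

Keeping `--threads=1` makes single-vCPU droplets directly comparable; a multi-threaded run would mostly measure scheduler behavior on these 1-vCPU plans.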

| Hostname | CPU              | Events/sec | Avg Latency (ms) |
|----------|------------------|------------|------------------|
| srv1     | DO-Premium-AMD   | 584.33     | 1.71             |
| srv2     | DO-Premium-AMD   | 1363.99    | 0.73             |
| srv100   | DO-Premium-AMD   | 1241.27    | 0.80             |
| srv1000  | DO-Regular-Intel | 781.69     | 1.28             |

    • The best-performing AMD instance (srv2) is ~75% faster than the Intel droplet (srv1000).
    • srv1 is underperforming despite having the same AMD Premium label — likely due to CPU scheduling or shared tenancy issues.

    Next, let’s try the zip compression/decompression benchmark (MIPS). These results give insight into real-world workloads like packaging, archiving, and data processing.
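The compression tool isn’t named either; 7-Zip’s built-in benchmark is the usual source of per-direction MIPS figures like these (the package is p7zip in EPEL for AlmaLinux). A sketch, assuming the original runs were single-threaded:

```shell
# In-memory LZMA compression/decompression benchmark; reports a separate
# MIPS rating for compressing and decompressing. -mmt1 pins it to one
# thread (an assumption about the original methodology, for a fair
# single-vCPU comparison).
7z b -mmt1
```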

| Hostname | CPU              | Compression MIPS | Decompression MIPS |
|----------|------------------|------------------|--------------------|
| srv1     | DO-Premium-AMD   | 3144             | 3079               |
| srv2     | DO-Premium-AMD   | 2970             | 2920               |
| srv100   | DO-Premium-AMD   | 3107             | 2989               |
| srv1000  | DO-Regular-Intel | 2421             | 2373               |

    • In compression-heavy workloads, AMD droplets outperform Intel by ~28–30%, with srv1 surprisingly doing well despite poor prime calculation results.
    • This discrepancy suggests that srv1 may throttle under CPU-bound load, but not under disk/CPU-mixed load, pointing to a dynamic frequency scaling issue.

    2. Disk Speed:

I then assessed disk throughput using sequential cached and buffered read tests. This reflects the base disk performance and cache effectiveness of each droplet’s storage layer.
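`hdparm` produces exactly this pair of numbers: `-T` measures cached reads (effectively memory/cache bandwidth) and `-t` measures buffered reads from the device itself. The device name is an assumption — DigitalOcean droplets typically expose the root disk as /dev/vda:

```shell
# Cached (-T) and buffered (-t) sequential read test. Single runs are
# noisy, so run it a few times and average; requires root.
hdparm -Tt /dev/vda
```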

| Hostname | CPU              | Cached Reads (MB/s) | Buffered Reads (MB/s) |
|----------|------------------|---------------------|-----------------------|
| srv1     | DO-Premium-AMD   | 8,314.48            | 985.46                |
| srv2     | DO-Premium-AMD   | 13,920.28           | 1,503.94              |
| srv100   | DO-Premium-AMD   | 14,948.69           | 1,628.73              |
| srv1000  | DO-Regular-Intel | 8,681.90            | 1,531.92              |

    • AMD droplets (srv100, srv2) show ~70% better cached read speeds than the Intel droplet.
    • But srv1 lags behind in buffered performance, possibly throttled I/O.

    Next, I benchmarked random 4K read performance to simulate real-world disk usage. This test captures storage responsiveness in database-heavy or I/O-intensive applications.
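Random 4K reads are the classic fio workload, and fio reports the IOPS, bandwidth, and latency columns shown below. A sketch — file size, runtime, and queue depth are assumptions, not the article’s stated parameters:

```shell
# 4K random-read test with direct I/O (bypasses the page cache, so it
# measures the block device rather than RAM) using the libaio engine.
fio --name=rand4k --rw=randread --bs=4k --size=1g \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=30 --time_based --group_reporting
```

`--direct=1` is the important flag: without it, a 1 GB test file on a droplet with free RAM would mostly benchmark the page cache.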

| Hostname | CPU              | IOPS (avg) | Bandwidth (avg) | Latency Avg (µs) |
|----------|------------------|------------|-----------------|------------------|
| srv1     | DO-Premium-AMD   | 25.7k      | 105 MB/s        | 153.85           |
| srv2     | DO-Premium-AMD   | 21.4k      | 87.5 MB/s       | 185.77           |
| srv100   | DO-Premium-AMD   | 20.9k      | 85.5 MB/s       | 189.82           |
| srv1000  | DO-Regular-Intel | 18.2k      | 74.4 MB/s       | 218.51           |

    • srv1 flips expectations: It outperforms all others in random read bandwidth and IOPS, despite lagging in CPU and buffered reads.
    • Latency is best on srv1, which suggests its I/O stack or block device assignment was optimized, even if CPU performance was weaker.

    3. Network Speed:

    I also ran internet speed tests to assess network performance, covering both download and upload throughput. This reflects how well each droplet can handle external data transfer, important for APIs, streaming, and content delivery.
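Which speed-test client was used isn’t stated; speedtest-cli, the open-source Python client for Speedtest.net, is a common way to get ping/download/upload numbers from inside a droplet:

```shell
# Prints just the three figures in the table below: ping, download, upload.
# Install with: pip install speedtest-cli
speedtest-cli --simple
```

Note that results depend on which Speedtest server is auto-selected — srv2’s 1.9 ms ping below almost certainly reflects a test server inside Singapore, not transpacific latency.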

| Hostname | Region | Ping (ms) | Download (Mbps) | Upload (Mbps) |
|----------|--------|-----------|-----------------|---------------|
| srv1     | NYC3   | 11.68     | 2849.09         | 1151.82       |
| srv2     | SGP1   | 1.90      | 3734.51         | 1894.33       |
| srv100   | NYC1   | 11.73     | 3094.17         | 1595.19       |
| srv1000  | NYC3   | 10.52     | 2775.67         | 1298.54       |

    • srv2 (SGP1) showed the best performance overall, with the lowest latency (1.9 ms) and the highest download and upload speeds, ideal for Asia-based services.
    • srv100 (NYC1) outperformed both NYC3 droplets in download (+~9%) and upload (+~20%) compared to srv1.
    • All droplets delivered strong bandwidth, with download speeds over 2.7 Gbps and upload speeds over 1.1 Gbps, confirming Digital Ocean’s high-capacity networking.

    4. RAM Performance:

    I ran memory throughput tests to compare RAM access performance across all droplets. The number of memory operations completed in 10 seconds provides a measure of raw memory speed.
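sysbench’s memory test matches this description — it counts memory operations completed within a time limit and reports both totals shown in the table. A sketch, with block size an assumption based on sysbench defaults and the total size set high enough that the 10-second limit ends the run:

```shell
# Sequential memory-write throughput for 10 s; sysbench reports total
# events (operations) and events/sec, matching the table's two columns.
# --memory-total-size is set far above what 10 s can touch so that
# --time=10, not the size cap, terminates the run.
sysbench memory --memory-block-size=1K --memory-total-size=100T --time=10 run
```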

| Hostname | Region | RAM  | Total Events | Avg Throughput (Events/sec) |
|----------|--------|------|--------------|-----------------------------|
| srv1     | NYC3   | 1 GB | 46,062,974   | 4.61 million                |
| srv2     | SGP1   | 1 GB | 38,410,199   | 3.84 million                |
| srv100   | NYC1   | 2 GB | 40,821,895   | 4.08 million                |
| srv1000  | NYC3   | 1 GB | 29,825,243   | 2.98 million                |

    • srv1 (NYC3, AMD, 1 GB) had the fastest RAM throughput, outperforming all other droplets by a margin of ~13% over srv100, ~20% over srv2, and ~54% over srv1000.
• srv1000 (Intel) lagged significantly behind, about 35% slower than srv1, reaffirming its underperformance in memory and CPU-intensive tasks.
    • srv100 (NYC1, 2 GB RAM) showed solid performance, but its extra RAM did not result in proportionally higher throughput, suggesting memory speed or latency matters more than RAM size for this test.

    5. Visual Summary:

    The chart below consolidates all performance metrics into a single visual, allowing quick comparison across RAM throughput, CPU events/sec, disk IOPS, and network speeds.

Figure: Normalized performance comparison of four DigitalOcean droplets across RAM, CPU, disk IOPS, and network throughput.
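A radar chart like this requires putting metrics with different units on a common scale; min-max normalization to [0, 1] is the usual choice (the chart’s exact method isn’t stated). A quick sketch using the RAM throughput column, in millions of events/sec:

```shell
# Min-max normalize one metric across the four droplets: the best server
# maps to 1.00 and the worst to 0.00. Input order: srv1 srv2 srv100 srv1000.
printf '%s\n' 4.61 3.84 4.08 2.98 |
awk 'NR==1 { min = max = $1 }
     { v[NR] = $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
     END { for (i = 1; i <= NR; i++) printf "%.2f\n", (v[i]-min)/(max-min) }'
# prints 1.00, 0.53, 0.67, 0.00
```

Repeating this per metric is what lets RAM events/sec, disk IOPS, and Mbps share one set of radar axes.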

    srv1 demonstrates the strongest memory and disk performance, making it well-suited for workloads requiring fast local operations. srv2, hosted in Singapore, clearly dominates in both download and upload throughput — a great fit for network-heavy applications, especially in Asia-Pacific regions. Meanwhile, srv100 performs consistently across all categories, showing no major weaknesses. In contrast, srv1000 consistently underperforms, with noticeably weaker CPU and RAM throughput, confirming the performance gap between Digital Ocean’s legacy Intel-based droplets and their newer AMD Premium offerings.

    The grouped comparison below breaks down each metric into actual values, making it easy to see how each droplet stacks up in raw performance.

Figure: Side-by-side VPS benchmark results showing raw values for RAM speed, CPU throughput, disk IOPS, and network performance.

    srv1 leads in RAM throughput and disk IOPS, reinforcing its strength in local operations. srv2 delivers the highest network performance by a clear margin, making it ideal for data-heavy traffic or edge caching scenarios.

    srv100, the only droplet with 2GB RAM, maintains solid performance across all categories — a strong general-purpose option. Meanwhile, srv1000, based on older Intel architecture, trails in nearly every metric, particularly CPU and RAM, underscoring the performance gap between Digital Ocean’s older and newer droplet generations.

If you’re specifically interested in how the full LEMP/LAMP stack performs under load — including PHP, Apache, NGINX, and database queries — check out my in-depth WordPress Stack Performance Comparison.

    Conclusion

    Whether you’re running a small business site or scaling a high-traffic application, choosing the right droplet makes a measurable difference. AMD droplets, especially in newer regions like NYC3 and SGP1, offer clear performance advantages. But remember: no two droplets are identical. Always benchmark after provisioning, and choose based on your app’s real-world bottlenecks, not just the spec sheet.

    After benchmarking four Digital Ocean droplets across key performance dimensions — CPU, memory, disk I/O, and network — several clear trends emerged:

    • AMD droplets outperformed legacy Intel-based plans across CPU, memory, and disk benchmarks — often by 30–70%.
    • srv1 and srv100 delivered the best overall balance — with srv1 leading in RAM and disk IOPS, and srv100 offering strong network and CPU results, especially considering it has 2GB of RAM.
    • srv1000 (Intel, NYC3) underperformed in nearly every category, confirming that older “Regular” Digital Ocean plans no longer compete with the Premium lineup.
    • Network performance was strong across all regions, with consistently high throughput and low latency, especially in Asia-Pacific locations.
    • Extra RAM (e.g., 2GB vs 1GB) didn’t translate to faster memory access, but it does offer more headroom for caching, multitasking, and database workloads.
    • Hardware variance exists, even within the same droplet type, so testing after provisioning remains essential for ensuring predictable performance.

    If you’re deploying compute-intensive, memory-heavy, or disk-bound applications, go for AMD Premium droplets in newer regions like NYC3 or SGP1. For legacy workloads, avoid older Intel-based setups.

    Continue the Series

    • Part 2: WordPress Performance with Apache, NGINX, Redis & PHP-CGI
      • Real-world WordPress benchmark with full LAMP stack and Redis caching — focusing on plugin-heavy, dynamic workloads.
    • Part 3: WordPress Performance with Apache, NGINX, Redis & PHP-FPM
      • Upgrade to PHP-FPM and introduce high-concurrency stress testing — including a 1,000-request benchmark.

Need Help?

    If you’re deploying a performance-critical application — whether it’s WordPress, Laravel, a Node.js app, or a custom API — or need help with server hardening, caching, scaling, or building a production-grade LEMP stack, I offer consulting and full-stack infrastructure setup for modern Linux environments. Let’s talk.

    If you’d like to try Digital Ocean with $200 in free credit, sign up with my referral link here.