
F5 and Scality Expand Partnership: 7 Powerful Wins for Secure, High-Performance Data Infrastructure for AI Workloads
On February 18, 2026, F5 and Scality announced an expanded partnership aimed at helping organizations scale AI, analytics, and other data-hungry workloads with stronger security and higher performance. The joint approach integrates the F5 Application Delivery and Security Platform (ADSP) with Scality's S3-compatible scale-out object storage, creating a more resilient way to move, protect, and access large volumes of data across on-premises, cloud-native, and hybrid environments.
This news matters because modern AI work is often limited by the "pipes" that deliver data, not only by GPUs and compute. Training, fine-tuning, retrieval-augmented generation (RAG), and large-scale analytics all depend on fast, predictable access to datasets. At the same time, organizations must meet strict requirements for uptime, governance, and cyber resilience. F5 and Scality are positioning their combined solution as a practical architecture to reduce bottlenecks, simplify operations, and strengthen protection around S3-based workflows.
Original announcement source: GlobeNewswire press release.
What the Expanded Partnership Delivers
At the center of the announcement is a validated, integrated approach that places F5 BIG-IP in front of Scality RING object storage, creating a "front door" for S3 traffic. This front door is designed to provide:
- Smarter traffic management to route and load-balance S3 requests across nodes and even across sites
- High availability through health checks and fast failover behavior, reducing downtime during disruptions
- Stronger security via built-in controls such as WAF, DDoS protection, and policy-based access controls
- Performance optimization through TLS offload and cryptography acceleration, helping avoid storage-node overload
In short: Scality focuses on durable, scalable object storage, while F5 focuses on delivering and securing traffic to that storage, especially under high concurrency and strict reliability requirements.
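The division of labor above can be sketched as a toy model: a "front door" that round-robins S3 requests across storage nodes and skips any node a health check has marked down. This is illustrative only; names like `S3FrontDoor` and `ring-node-1` are invented for the sketch, and a real deployment would express this as BIG-IP virtual servers, pools, and monitors rather than application code.

```python
import itertools

class S3FrontDoor:
    """Toy model of a load balancer sitting in front of object-storage nodes."""

    def __init__(self, nodes):
        self.health = {node: True for node in nodes}
        self._ring = itertools.cycle(nodes)

    def mark_down(self, node):
        # A failed health check removes the node from rotation.
        self.health[node] = False

    def mark_up(self, node):
        self.health[node] = True

    def route(self):
        # Round-robin across healthy nodes only; scanning len(nodes)
        # consecutive cycle entries always covers every node once.
        for _ in range(len(self.health)):
            node = next(self._ring)
            if self.health[node]:
                return node
        raise RuntimeError("no healthy storage nodes")

front = S3FrontDoor(["ring-node-1", "ring-node-2", "ring-node-3"])
front.mark_down("ring-node-2")
picks = [front.route() for _ in range(4)]
print(picks)  # ring-node-2 never appears while it is marked down
```

Clients never see any of this: they keep talking to the same endpoint while the rotation quietly changes.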
Why AI Workloads Put So Much Pressure on Data Infrastructure
AI systems are data systems. Even if an organization buys powerful compute, it can still struggle if its data layer can't feed the workload efficiently. Here are common pressure points in enterprise AI environments:
1) Massive data movement (and repeated reads)
Training and fine-tuning can involve huge datasets that are read many times. RAG systems can also drive frequent lookups, retrieval, and indexing operations. If storage endpoints are slow or inconsistent, model training time increases and user-facing inference may feel laggy.
2) High concurrency and unpredictable peaks
AI pipelines often run many parallel jobs. During a big experiment or a production release, hundreds of threads or workers may hit object storage at the same time. A storage system might be scalable, but if traffic isn't distributed well, a few nodes can get "hot" and become bottlenecks.
3) Hybrid and multi-site complexity
Many organizations keep some datasets on-premises for sovereignty, cost, or latency reasons, while also using public cloud services. Hybrid operations can be tricky: latency changes, outages happen, and teams need consistent security policies everywhere.
4) Security and compliance requirements
AI data often includes sensitive information: customer records, medical data, financial transactions, or proprietary research. Enterprises need encryption, access controls, and protections against common web threats and DDoS, without slowing everything down.
F5 and Scality are directly addressing these realities with a combined architecture focused on S3 access that is secure, balanced, and resilient at scale.
The Core Architecture: F5 BIG-IP in Front of Scality RING
The validated design described by Scality explains the structure clearly: F5 BIG-IP operates as a full proxy Application Delivery Controller (ADC) in front of a Scality RING storage cluster. Clients connect to a single S3 endpoint (exposed as a virtual IP), and BIG-IP manages traffic distribution and security before requests reach the storage nodes.
Traffic flow (easy mental model)
- Clients send S3 requests to one stable endpoint (VIP / DNS name).
- BIG-IP terminates TLS, applies policies, and routes requests.
- Scality RING nodes receive balanced traffic, with unhealthy nodes removed from service automatically.
This design "decouples" what clients see (one consistent endpoint) from the changing reality behind the scenes (nodes added, nodes removed, a site experiencing issues). It's a big deal for operational simplicity.
7 Powerful Wins: What Organizations Can Gain
Win #1: A single stable S3 endpoint that scales
Instead of exposing many storage node addresses, the solution promotes one stable entry point for S3. BIG-IP can distribute traffic transparently across RING nodes, which can simplify application configuration and reduce "endpoint sprawl."
Win #2: Better availability through health-aware load balancing
The solution sheet highlights health checks and pool-based traffic steering to reduce the impact of node failures and maintain responsiveness. If a node becomes unhealthy, traffic is redirected away automatically, helping keep services online during disruptions.
Win #3: Stronger security posture without redesigning apps
Security controls can be centralized at BIG-IP, including SSL/TLS termination and optional application-layer protections such as WAF and access policy features (depending on licensing/modules). This can help protect S3 endpoints and APIs from common threats before requests reach the storage layer.
Win #4: More performance headroom via TLS offload and traffic optimization
TLS encryption is essential, but it can be costly. By terminating TLS at BIG-IP and optimizing connection handling, the architecture can reduce load on backend storage services and support more predictable performance under peak S3 workloads.
Win #5: Predictable performance under high concurrency
In Scality's validated testing, BIG-IP load balancing was assessed at high concurrency (up to hundreds of parallel threads), with throughput and latency reported as consistent and stable during steady-state operation. That kind of predictability matters when AI pipelines are running at scale and deadlines are tight.
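The value of even distribution under concurrency can be illustrated with a small stdlib simulation: 32 workers issue 400 requests through a shared round-robin picker, and each of four hypothetical nodes ends up serving an identical share. The node names and counts are invented for the demo.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle
from threading import Lock

# Toy model: many parallel workers, one shared round-robin routing decision.
nodes = ["ring-node-1", "ring-node-2", "ring-node-3", "ring-node-4"]
ring = cycle(nodes)
lock = Lock()

def s3_read(request_id):
    with lock:            # serialize only the routing decision, not the I/O
        node = next(ring)
    return node           # a real worker would now issue the S3 GET to `node`

with ThreadPoolExecutor(max_workers=32) as executor:
    served_by = list(executor.map(s3_read, range(400)))

print(Counter(served_by))  # each node serves exactly 100 of the 400 requests
```

Without that balanced distribution, the same 400 requests could pile onto a few "hot" nodes and stall the whole pipeline.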
Win #6: Multi-site resilience and smarter traffic steering
For multi-site deployments, global traffic steering can direct clients to the closest or best-performing site based on availability and latency. This supports resilient access to distributed RING environments, useful for disaster recovery, geo-distributed analytics, and global AI teams.
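The GSLB decision described here can be sketched in a few lines: among the sites that are currently healthy, prefer the one with the lowest measured latency. The site names and latency figures below are hypothetical.

```python
# Toy GSLB decision: lowest-latency site among those that are healthy.
sites = {
    "dc-paris": {"healthy": True,  "latency_ms": 12},
    "dc-nyc":   {"healthy": True,  "latency_ms": 85},
    "dc-tokyo": {"healthy": False, "latency_ms": 4},   # down for maintenance
}

def pick_site(sites):
    healthy = {name: s for name, s in sites.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no site available")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(pick_site(sites))  # dc-paris: tokyo is excluded, and paris beats nyc on latency
```

Real GSLB engines weigh more signals (capacity, persistence, topology), but the core availability-then-performance ordering is the same.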
Win #7: Operational simplicity for platform teams
The validated design emphasizes a "separation of concerns": storage teams focus on storage, while traffic/security teams manage the access layer. Adding new RING nodes can be done without forcing clients to change configuration, which reduces day-to-day risk and makes scaling smoother.
What's Inside the "Validated Integration"
Both the press release and Scality's validated design point to a tested, repeatable architecture. Validation matters because it reduces guesswork and gives teams confidence that a design behaves correctly under stress and failure scenarios.
Key tested areas mentioned
- TLS termination and connection handling at the access layer
- Load distribution across nodes and stable behavior under concurrent workloads
- Failure handling, where health checks detect issues and remove failed nodes without requiring endpoint changes
- Multi-site scenarios, including global server load balancing (GSLB) concepts
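The failure-handling behavior in that list starts with an active health probe. As a minimal stdlib sketch, the check below only tests TCP reachability of a node's port; real BIG-IP monitors are richer (for example, HTTP probes against the S3 API), and the local listener here merely stands in for a RING node.

```python
import socket

def tcp_health_check(host, port, timeout=1.0):
    """Minimal active monitor: can we open a TCP connection to the node?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for a storage node's S3 port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # bind to any free port
listener.listen()
port = listener.getsockname()[1]
print(tcp_health_check("127.0.0.1", port))   # True while the "node" is up
listener.close()
print(tcp_health_check("127.0.0.1", port))   # False once it is gone
```

A monitor loop that runs this check periodically and pulls failing nodes out of the pool gives you the "removed without requiring endpoint changes" behavior described above.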
The press release also emphasizes built-in security services such as WAF and DDoS protection, plus policy-driven access controls, alongside TLS offload and cryptography acceleration for performance.
Use Cases the Partners Highlight
F5 and Scality describe the solution as supporting a wide range of enterprise needs, not just one AI workflow. The press release specifically calls out several common use cases:
- AI/ML training and inference
- Analytics and data-intensive applications
- Multi-site data protection and disaster recovery
- Hybrid and multicloud storage architectures
- Secure long-term data retention
The shared theme is "data at scale," where S3 access must remain fast, stable, and secure even as the environment grows more distributed.
How This Helps Control Cost and Complexity
AI infrastructure gets expensive quickly, especially when teams try to solve every issue by adding more compute. But if the real bottleneck is data delivery, spending on extra GPUs may not fix the underlying problem.
Reducing operational overhead
When you expose one stable endpoint and centralize traffic policies, you reduce the number of moving parts that application teams must understand. That can mean fewer outages caused by misconfiguration and fewer emergency changes.
Right-sizing performance vs. protection
The partnership emphasizes a balance: keep data access fast while still applying strong security controls. Centralized TLS termination and security enforcement can reduce repeated configuration work across each storage node and help make policy consistent.
A smoother path to hybrid scale
Hybrid and multicloud setups can easily become messy. A consistent access layer helps unify operations across data center and cloud environments, supporting a more predictable scaling path.
What Decision-Makers Should Ask Before Adopting This Approach
If you're a CIO, CISO, platform leader, or data engineering manager evaluating the joint solution, here are practical questions to guide the conversation:
- Where are our biggest AI delays happening? (Data ingest, retrieval, replication, security approvals, or network latency?)
- Do we have one clear "front door" for S3? Or do teams point apps directly at storage nodes?
- How do we handle failures today? Is failover automated, tested, and observable?
- Are security policies consistent across sites? Or do they drift over time?
- Can we scale without reconfiguring clients? The more manual the changes, the higher the risk.
The partnership message strongly suggests that the combined solution is designed for environments where the answers to these questions reveal pain: complexity, downtime risk, and performance unpredictability.
FAQ: Common Questions About the F5-Scality AI Data Infrastructure Announcement
1) What exactly did F5 and Scality announce?
They announced an expanded partnership to help organizations securely scale AI, analytics, and data-intensive workloads by integrating F5's Application Delivery and Security Platform with Scality's S3-compatible object storage.
2) What products are involved in the joint solution?
The press release highlights integrating F5 BIG-IP with Scality RING object storage to create a secure, high-performance S3 environment.
3) Why is S3 such a big focus for AI workloads?
S3 is the de facto standard API for object storage, which is where large AI datasets commonly live. The announcement positions S3 access as foundational for AI and analytics that require scalable, durable storage with predictable performance.
4) How does the architecture improve availability?
BIG-IP can load-balance across storage nodes, run health checks, and remove unhealthy nodes from service automatically. This helps reduce single points of failure and supports continuity during disruptions.
5) What security capabilities are mentioned?
The press release points to security services such as web application firewall (WAF), DDoS protection, and policy-driven access controls, along with TLS offload and cryptography optimization.
6) Who is this solution most useful for?
It's aimed at enterprises and service providers running large-scale or multi-site object storage environments, especially those supporting AI training, inference, analytics, disaster recovery, and hybrid/multicloud architectures.
7) What does âvalidated designâ mean here?
It means the integration was tested as a repeatable reference architecture (including performance and failure handling) with specific versions of Scality RING and F5 BIG-IP described in Scalityâs validated design write-up.
Conclusion: A Practical Blueprint for AI-Ready, Secure S3 at Scale
F5 and Scality are betting on a simple idea: AI success depends on dependable data delivery. By combining Scality's durable, scalable S3-compatible storage with F5's traffic management and security controls, the partnership aims to remove common pain points: bottlenecks, downtime risk, inconsistent security policies, and operational complexity.
For organizations building AI pipelines across hybrid and multicloud environments, the announcement suggests a clear path: create a stable, secure, high-performance S3 access layer that can scale without constant rework. As data volumes grow and AI workloads become more distributed, architectures that prioritize resilience and simplicity can be the difference between "AI experiments" and real AI outcomes in production.
#F5 #Scality #AIInfrastructure #S3ObjectStorage