Qumulo allows Cinesite to offload the processing that comes with heavy rendering workloads, such as the Active Directory lookups that the company’s existing bulk storage estate, built on Dell EMC Isilon, could not keep up with.
In the deployment, Cinesite can burst work from its on-site Qumulo nodes to the AWS cloud during busy periods.
Cinesite has a 500-strong visual effects and animation team working from offices in London, Montreal, Berlin, Munich and Vancouver, as well as from home.
Historically, all its rendering work was handled by on-site infrastructure with some bursting to AWS Montreal. Two years ago, the company migrated from industry-specific Pixit storage to Isilon scale-out NAS, of which it now has 2PB on-site.
“It worked well up to a point,” said Spencer Kuziw, Cinesite’s lead systems administrator. “We have multiple sites and a relatively complex Active Directory forest.
“By January and February this year, we had issues with Active Directory lookups with Isilon. We had nodes freezing depending on what movie asset files were accessed and the amount of lookups. Everything was on NFS and we had WAN-related delays.”
Cinesite’s COO, Graham Peddie, had used Qumulo in a previous role, and the company decided to offload home directories and some applications to a 170TB four-node Qumulo C-72T cluster on-site and a six-node cluster in the AWS US East (N. Virginia) region.
Qumulo is part of a new wave of scale-out NAS and distributed storage products that seek to address the growing need to store unstructured data, often in the cloud as well as the customer datacentre.
Cinesite has 500 render nodes on-site. “If those are working OK to keep up, we won’t burst to AWS,” said Kuziw. “But generally, in the last couple of weeks of a project, our needs can expand to 2,000 render nodes, so that’s when we’ll burst to AWS.”
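Cinesite has not published its scheduling logic, but the sizing decision Kuziw describes amounts to a simple overflow calculation. A minimal sketch, using the node counts from the article; the function itself is hypothetical:

```python
ON_SITE_RENDER_NODES = 500   # Cinesite's on-site render farm
PEAK_RENDER_NODES = 2000     # observed peak in a project's final weeks

def cloud_nodes_needed(demand: int) -> int:
    """Return how many AWS render nodes to burst to, if any.

    On-site capacity is used first; only the overflow is sent to AWS,
    capped at the observed peak of 2,000 total render nodes.
    """
    demand = min(demand, PEAK_RENDER_NODES)
    return max(0, demand - ON_SITE_RENDER_NODES)
```

So a quiet week (`cloud_nodes_needed(300)`) stays entirely on-site, while a final-crunch demand of 2,000 nodes bursts 1,500 of them to AWS.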
At such times, Qumulo nodes are also spun up in the AWS cloud to handle access. “We use AWS CloudFormation to duplicate Qumulo nodes when we need them,” said Kuziw. “It’s really just a matter of seeding data.”
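The article does not show the template Cinesite uses, but duplicating nodes with CloudFormation typically means launching a stack from a template with the desired node count as a parameter. A minimal sketch, in which the stack name, template URL and parameter names are all hypothetical:

```python
def build_stack_request(node_count: int) -> dict:
    """Assemble CreateStack arguments for a burst Qumulo cluster.

    The TemplateURL and parameter names are illustrative; a real
    deployment would point at Qumulo's published template.
    """
    return {
        "StackName": f"qumulo-burst-{node_count}-nodes",
        "TemplateURL": "https://example.com/qumulo-cluster.template",  # hypothetical
        "Parameters": [
            {"ParameterKey": "NodeCount", "ParameterValue": str(node_count)},
        ],
        "Capabilities": ["CAPABILITY_IAM"],  # the stack creates IAM roles
    }

# Launching for real requires AWS credentials and the boto3 SDK, e.g.:
# boto3.client("cloudformation", region_name="us-east-1") \
#      .create_stack(**build_stack_request(6))
```

Once the stack is up, “seeding data” is the remaining step: replicating the working set from the on-site cluster so cloud render nodes can read it locally.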
The on-site cluster is connected to AWS via a 40Gbps link, and each on-site node houses twelve 6TB HDDs and four 480GB flash drives.
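For context, the raw capacity behind the quoted 170TB usable figure works out as follows. Drive counts are from the article; the usable-to-raw ratio depends on Qumulo’s erasure coding and is not stated, and the role of the flash drives (assumed here to be a cache/metadata tier) is likewise an assumption:

```python
NODES = 4
HDDS_PER_NODE, HDD_TB = 12, 6
SSDS_PER_NODE, SSD_TB = 4, 0.48   # 480GB flash, assumed cache/metadata tier

raw_hdd_tb = NODES * HDDS_PER_NODE * HDD_TB   # total raw disk capacity
raw_ssd_tb = NODES * SSDS_PER_NODE * SSD_TB   # total raw flash capacity
usable_tb = 170                               # cluster capacity quoted in the article

# 288TB raw HDD against 170TB usable implies roughly 59% efficiency
# after protection overhead (the exact scheme is not disclosed).
efficiency = usable_tb / raw_hdd_tb
```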
The biggest benefit of moving to Qumulo has been the ability to burst over to the AWS cloud when required, because a bottleneck would always arise during the latter, busy phases of a movie project.
“We would expect a daily 3pm/4pm freeze on production systems,” said Kuziw. “When we moved to Qumulo, that went away. It’s not an insane amount of data, but parallel tasks, such as directory lookups, no longer pile up.”
Peddie said: “From a business point of view, it allows us to complete movies with a budget of $40m to $60m. Using Qumulo in the AWS cloud gives the flexibility to adjust to new demands and to only pay for it when we use it.”