01 September 2025 | Interaction | By editor@rbnpress.com
Robotics Business News sat down with Sunny Smith, Co-Founder and CTO of Massed Compute, to discuss the company’s groundbreaking partnership with Cisco and Digital Alpha, backed by a $300M investment. In this wide-ranging interview, Smith explains how Massed Compute is reshaping AI infrastructure with a secure, scalable, and enterprise-ready GPU-as-a-Service platform. He outlines how the collaboration will accelerate the deployment of next-generation NVIDIA GPUs, enable sovereign AI clouds, and bring enterprise-grade AI capabilities closer to customers across three continents—driving digital transformation at a global scale.
How does this partnership align with your organizations’ overall strategies in AI infrastructure and digital transformation?
At Massed Compute, our core mission is to accelerate the adoption of AI by making enterprise-grade GPU infrastructure secure, scalable, affordable, and easy to consume. We focus on removing the traditional barriers to AI development—lengthy procurement cycles, capital-intensive hardware purchases, and operational complexity—so that enterprises, research institutions, and AI-native startups can deploy their workloads at speed and scale. This partnership with Digital Alpha and Cisco is a direct embodiment of that mission.
From a strategic standpoint, the collaboration addresses two key pillars of our roadmap:
Global Infrastructure Scale: The partnership enables us to deploy next-generation NVIDIA GPU clusters into additional Tier III facilities across multiple continents. This supports our strategy of being a truly global AI cloud, bringing low-latency compute closer to where customers operate and ensuring resilience and 99.99% uptime through geographic diversity.
Enterprise-Grade Integration: Working closely with Cisco aligns us with proven, secure, and high-performance infrastructure technology. Cisco UCS servers form the backbone of many enterprise data centers today, and by integrating them into our platform we can meet customers’ performance expectations while satisfying their security, compliance, and manageability requirements.
For Digital Alpha, the investment fits into their focus on scaling the digital infrastructure platforms that underpin digital transformation globally. AI is no longer a niche workload—it’s becoming embedded into every vertical, from healthcare to manufacturing to finance. By partnering with Massed Compute, Digital Alpha is backing an AI infrastructure provider purpose-built for the demands of these workloads, ensuring their portfolio aligns with high-growth, high-impact areas of digital transformation.
For Cisco, this partnership extends their AI infrastructure reach into the GPU-as-a-Service market. Cisco’s strategy has long been to deliver end-to-end solutions—from networking to compute to security—that power the digital enterprise. By integrating Cisco UCS into Massed Compute’s platform, they can help customers transition from traditional infrastructure to AI-native environments without abandoning the enterprise features and operational models they rely on.
What factors influenced Digital Alpha’s decision to invest up to $300 million in Massed Compute, and how will this investment accelerate AI infrastructure development?
Several factors contributed to Digital Alpha’s decision:
Market Timing: AI adoption is accelerating at an unprecedented pace, driven by breakthroughs in generative AI, natural language processing, and machine learning. The infrastructure requirements for these workloads—particularly large language models and complex training jobs—are unique in their scale and performance demands. Massed Compute has already demonstrated the ability to stand up high-density GPU clusters quickly and make them available to customers globally.
Proven Model: Massed Compute’s GPUaaS model combines the elasticity and OPEX-friendly economics of the public cloud with the performance, control, and security typically associated with private infrastructure. This hybrid value proposition resonates strongly with both AI-native companies and traditional enterprises.
Experienced Team: Our leadership and engineering teams bring deep expertise in data center operations, AI infrastructure, and enterprise service delivery. We are recognized subject-matter experts in NVIDIA GPUs and GPU operational excellence—supporting clients across the entire OSI seven-layer stack. At a time when the global shortage of GPU expertise is slowing many organizations’ deployments, Massed Compute fills that critical gap. Digital Alpha places a premium on execution capability, and our track record in deploying infrastructure across three continents was a decisive factor.
Alignment with Ecosystem Partners: Massed Compute’s existing collaborations with Cisco, PacketFabric, and Cloudian align closely with Digital Alpha’s investment philosophy of fostering synergies within a strategic partner ecosystem.
The investment of up to $300 million provides us with the capital to accelerate our deployment roadmap. It enables bulk procurement of next-generation GPUs, rapid onboarding of additional Tier III facilities, and the scaling of our automation and orchestration software to handle larger fleets of GPU nodes. This means shorter lead times for customer deployments, more capacity available on-demand, and the ability to offer specialized GPU configurations for niche workloads.
Cisco UCS brings a proven, modular compute platform that’s already well-established in enterprise environments. By integrating UCS servers into our AI cloud platform, we gain several advantages:
Performance Optimization: UCS servers are engineered for high I/O throughput, dense memory configurations, and integration with the latest NVIDIA GPUs. This ensures that AI workloads—whether training, inference, or fine-tuning—are not bottlenecked by system architecture.
Operational Consistency: Many enterprise customers already operate UCS in their own data centers. By using the same server architecture in our cloud, we make it easier for them to migrate or extend workloads into Massed Compute without retraining their teams or retooling their operational processes.
Security and Manageability: UCS integrates with Cisco Intersight for unified management, automation, and policy enforcement. This allows us—and by extension, our customers—to apply consistent security and compliance controls across environments.
Scalability and Reliability: UCS’s modular design means we can rapidly expand capacity within a given data center footprint, accommodating surges in demand without the delays associated with bespoke hardware integration, all while maintaining 99.99% uptime.
The result is an AI platform that’s not only powerful but also familiar and manageable for enterprise IT teams—bridging the gap between traditional data center operations and the emerging demands of AI workloads.
Our near-term expansion is focused on deploying the most advanced GPUs available for AI workloads, including:
NVIDIA H100 and H200 (Hopper architecture): Currently among the most sought-after GPUs for AI training, offering massive performance improvements for transformer models, large-scale NLP, and generative AI.
NVIDIA RTX 6000 Ada and RTX 6000 Blackwell Server Edition: Well-suited for AI inference, high-end visualization, and workloads that benefit from large frame buffers and high FP32 performance.
NVIDIA Blackwell and Blackwell Ultra GPUs: These next-generation GPUs promise significant efficiency gains, higher throughput for large-scale AI models, and advanced features for multi-instance GPU (MIG) workloads.
By offering a portfolio that spans from ultra-high-performance training GPUs to optimized inference GPUs, we can match customers to the most cost-effective configuration for their workload. This flexibility is critical for balancing performance, availability, and budget.
Our platform supports two primary consumption models:
Pay-As-You-Go: Customers can spin up GPU instances as needed, paying only for the compute time they consume. This is ideal for development, testing, or burst workloads that don’t require reserved capacity.
Contracted/Reserved Capacity: For production workloads or organizations that need guaranteed access to specific GPU types, we offer reserved capacity contracts. These provide predictable costs and SLA-backed performance guarantees.
Through the partnership, we can provision both models at greater scale and in more regions. Integration with Cisco’s management tools allows enterprise customers to manage these cloud resources alongside their on-prem infrastructure, while Digital Alpha’s investment ensures we have the capacity available when and where it’s needed.
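The trade-off between the two consumption models comes down to utilization: below a break-even usage level, pay-as-you-go is cheaper; above it, reserved capacity wins. A minimal sketch of that comparison, using purely hypothetical hourly rates (not Massed Compute's actual pricing):

```python
# Hypothetical break-even comparison between pay-as-you-go and reserved
# GPU capacity. All rates below are illustrative placeholders, not
# Massed Compute's actual prices.

ON_DEMAND_RATE = 3.00   # $/GPU-hour, hypothetical on-demand rate
RESERVED_RATE = 2.00    # $/GPU-hour, hypothetical committed rate
COMMITTED_HOURS = 730   # hours in one month of reserved capacity

def monthly_cost(hours_used: float) -> tuple[float, float]:
    """Return (on_demand_cost, reserved_cost) for a month's usage."""
    on_demand = hours_used * ON_DEMAND_RATE
    # Reserved capacity is billed for the full commitment, used or not.
    reserved = COMMITTED_HOURS * RESERVED_RATE
    return on_demand, reserved

def break_even_hours() -> float:
    """Usage level above which reserved capacity becomes cheaper."""
    return COMMITTED_HOURS * RESERVED_RATE / ON_DEMAND_RATE

# A bursty dev workload (e.g. 100 h/month) stays well under break-even
# and favors pay-as-you-go; a steady production workload near full
# utilization favors a reserved contract.
```

The same logic generalizes to mixed fleets: reserve capacity for the steady baseline and burst onto on-demand instances for peaks.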
Expanding our footprint across North America, Europe, and Asia-Pacific brings multiple benefits:
Reduced Latency: By placing GPU clusters closer to customers, we minimize network latency, which can have a significant impact on training and inference times.
Data Sovereignty: Localized infrastructure enables customers to keep data within specific geographic boundaries to meet compliance requirements.
Resilience and Redundancy: A distributed infrastructure reduces the risk of service disruptions due to localized outages or capacity constraints.
Market Access: Many organizations have multi-regional teams or customer bases; a global infrastructure allows them to deliver AI-powered services consistently across markets.
This expansion is backed by our engineering expertise and automation stack, ensuring customers receive enterprise-grade performance, security, and 99.99% uptime no matter where they operate.
Sovereign AI Clouds are becoming a priority for governments and regulated industries that need AI capabilities without compromising data control. This partnership enables us to:
Deploy dedicated GPU infrastructure within specific national or regional boundaries.
Apply strict access controls, encryption, and compliance auditing aligned with local regulations such as GDPR, HIPAA, or industry-specific standards.
Leverage Cisco’s security architecture to ensure that sovereign cloud environments are isolated, monitored, and compliant from the hardware layer up.
Digital Alpha’s investment gives us the flexibility to stand up these sovereign environments quickly, while our engineering team’s SME expertise across the full OSI stack ensures that these environments are not only compliant but also operationally robust—filling a critical gap in global GPU expertise.
We see several promising avenues:
Edge AI Deployments: Combining our GPUaaS model with Cisco’s edge computing solutions to bring AI capabilities closer to data sources in manufacturing, healthcare, and smart cities.
Integrated Networking Solutions: Working with PacketFabric to embed high-performance, on-demand networking into our provisioning process, enabling truly end-to-end AI pipelines.
AI-Optimized Storage Expansion: Extending our collaboration with Cloudian to provide storage architectures tuned for AI data workflows, including hybrid object storage and caching strategies.
Vertical-Specific AI Platforms: Building tailored infrastructure and software stacks for industries such as finance, healthcare, and media, leveraging Cisco’s and Digital Alpha’s ecosystems.
Ultimately, the partnership provides Massed Compute with the capital, technology, and market reach to move faster and deliver AI infrastructure that meets the needs of the world’s most demanding workloads—while enabling customers to accelerate their AI projects and realize their vision in record time. With Cisco’s reach and our SME expertise, this collaboration is democratizing AI for millions, and soon billions, of people globally.