Every crisis brings with it an impetus for change, but the COVID-19 crisis has brought a unique development: IT is no longer fighting with the business for attention. Digital transformation is being accelerated.
While digital transformation and modernization have been at the forefront of the innovation agenda everywhere, progress has been iterative and slow. By springing into action and delivering distributed infrastructure models to keep work going, IT leaders finally have the attention of CEOs. What used to be an annual exercise of developing business cases is now an urgent call to action.
In this article, we give you an overview of what the IT leaders of today are incorporating into their infrastructure strategy models.
On-premise, also known as private cloud
The traditional setup for IT infrastructure has been on-premise: all hardware and software (servers, desktops, networking, applications and data) located, hosted, and processed at the company's own facilities. This model has suited many businesses in the past because it gave them full control over IT assets and activity. When that dedicated infrastructure is pooled and delivered as a service to internal users, it becomes what we now know as private cloud.
Having all your assets under one roof means that access and provisioning can be kept secure. Your policies and protocols define a perimeter, and access technologies such as VPNs and RFID badges admit only authorized users. On-premise infrastructure also means that IT services can respond quickly, which suits environments that need on-site IT helpdesks to support employees and makes the infrastructure more reliable, a necessity in mission-critical scenarios.
However, this model comes with its inherent costs. Buying and maintaining equipment involves upfront capital expenditures. Requiring highly skilled IT resources to keep the infrastructure running smoothly translates to overhead. Moreover, CAPEX ownership also means responsibility for real estate, utilities and ongoing procurement of products and services to manage fixed assets. Without the buy-in of business leadership, the on-prem model can be prone to chronic issues arising from lack of proper investment.
While many industries are adapting to newer models to keep up with scalability and flexibility requirements, the on-premise model still suits many use cases. Regulated industries such as banking and healthcare follow strict governance policies. When privacy and security standards are scrutinized and implemented on an industry level, compliance requirements directly influence IT infrastructure decisions.
Public and Hybrid Cloud
At the other end of the infrastructure spectrum is the public cloud, along with the hybrid model for those who prefer a combination of public and private resources.
For businesses that need to prioritize scalability and flexibility over legacy procedures, the cloud landscape offers a plethora of solutions. Every imaginable IT service provider is adopting cloud delivery models into their offerings, whether it's in storage, networking, software development, testing, automation, machine learning, artificial intelligence, service delivery or support.
The biggest benefit of moving IT infrastructure to the public cloud is the shift from CAPEX to OPEX. Keeping infrastructure off-premise eliminates the largest cost centers (real estate, utilities, skilled staff), and 'as-a-service' consumption models give businesses greater cost control. The larger cloud providers also offer mature security controls and availability SLAs approaching zero downtime, and plenty of smaller providers and migration specialists can help you adopt newer models and scale.
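As a rough illustration of the CAPEX-to-OPEX shift, the sketch below compares a hypothetical on-premise build-out with a pay-as-you-go subscription over a multi-year horizon. Every figure (hardware cost, staff overhead, subscription rate, refresh cycle) is invented for illustration; real numbers vary widely by workload and provider.

```python
# Toy CAPEX vs. OPEX comparison. All figures here are hypothetical;
# the point is the shape of the spend, not the actual numbers.

def on_prem_cost(years, hardware_capex=500_000, annual_opex=120_000,
                 refresh_interval=4):
    """Upfront capital spend, a hardware refresh every few years,
    plus ongoing staff/utility overhead."""
    refreshes = (years - 1) // refresh_interval
    return hardware_capex * (1 + refreshes) + annual_opex * years

def cloud_cost(years, monthly_subscription=18_000):
    """Pure OPEX: a recurring subscription with no upfront capital."""
    return monthly_subscription * 12 * years

for years in (1, 3, 5):
    print(f"year {years}: on-prem ${on_prem_cost(years):,} "
          f"vs. cloud ${cloud_cost(years):,}")
```

Under these made-up numbers, on-premise spend is front-loaded while cloud spend scales linearly with consumption, which is exactly the cost-control trade-off described above.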
Figure 1: Reference Architecture Model for Hybrid Cloud Management Solution Capabilities | Source: Forrester Report "Vendor Landscape: Hybrid Cloud Management"
Public and hybrid cloud models will undoubtedly help realize those corporate digital transformation goals. However, ramping up infrastructure investments needs to be done cautiously, and cloud adoption is no different. Left unchecked, businesses can encounter problems such as privacy breaches or runaway egress charges.
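Egress charges deserve a concrete example. Public clouds typically meter data transferred out per gigabyte; the rate below ($0.09/GB) is an assumed figure for illustration only, as actual pricing varies by provider, region and volume tier.

```python
# Hypothetical egress-cost estimate. The per-GB rate is an assumption;
# check your provider's current pricing sheet for real figures.
EGRESS_USD_PER_GB = 0.09

def monthly_egress_cost(tb_out_per_month):
    """Cost of moving data out of the cloud, in USD per month."""
    return tb_out_per_month * 1024 * EGRESS_USD_PER_GB

# Serving 50 TB/month back out of the cloud:
print(f"${monthly_egress_cost(50):,.2f} per month")
```

A workload that quietly grows its outbound traffic can turn this into a five-figure monthly line item, which is why egress belongs in any cloud business case.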
Colocation

A comfortable middle ground for businesses that prefer the control of on-prem infrastructure, but struggle with its prohibitive costs, is the colocation model.
A colocation (colo) facility offers space for rent, specifically for data center equipment. It is a third-party environment where you can lease a unit of space to house your servers and storage, along with the services associated with running them, such as utilities, networking and security. Also known as multi-tenant data centers (MTDCs), colos offer a variety of sizes and scopes for that space: you can lease a room or several rooms, or smaller spaces such as a cage, a rack or a cabinet.
The equipment itself remains owned by the customer. So, if you are an IT organization leasing space at a colo facility, you are responsible for the actual assets you place there. This means you retain control over hardware and software configurations, effectively giving you a similar level of control as you would have on-premise. The colo facility, in turn, assumes responsibility for the real estate costs and licenses, the power and cooling to run the equipment, physical access control, and networking and telecom consumption.
Today’s colo providers also offer managed services such as disaster recovery and business continuity planning, data center planning and cybersecurity services.
Colo gives you some relief from CAPEX, with the major capital-intensive component of finding, building and managing the real estate taken off your hands. And the leasing model with on-demand services puts your infrastructure costs under OPEX.
A 2020 CIO Survey by Credit Suisse revealed that 76% of CIOs either will be deploying into colocation/wholesale data centers or are still considering their deployment strategy (43%), highlighting the room for growth that MTDCs have. Furthermore, more than 50% of CIOs expect to shut down enterprise-owned data centers going forward, another indicator of growth for the MTDC industry.
Edge Computing

With IoT technologies finally bearing fruit in many regions around the globe, a newer infrastructure strategy model that is seeing a lot of traction is edge computing.
The edge concept is that data processing workloads can be shifted closer to location-specific data sources – edges – so that bandwidth is managed more efficiently, and end results are delivered more quickly. Cisco defines edge computing as the architectural principle of moving services to locations where they can:
Yield lower latency to the end device to benefit application performance and improve the quality of experience (QoE).
Implement edge offloading for greater network efficiency.
Perform computations that augment the capabilities of devices and reduce transport costs.
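Cisco's latency point can be made concrete with a back-of-the-envelope estimate. The sketch below assumes light travels through fiber at roughly 200 km/ms (about two-thirds of c) and charges a fixed, assumed processing delay per network hop; both parameters are illustrative, not measured.

```python
# Rough round-trip latency model: propagation delay both ways plus
# a fixed cost per network hop. All parameters are assumptions.

def round_trip_ms(distance_km, hops, per_hop_ms=0.5,
                  fiber_km_per_ms=200):
    """Estimate round-trip time in milliseconds."""
    propagation = 2 * distance_km / fiber_km_per_ms  # there and back
    return propagation + hops * per_hop_ms

central_cloud = round_trip_ms(distance_km=2000, hops=8)  # distant region
edge_site     = round_trip_ms(distance_km=50,   hops=2)  # nearby edge

print(f"central: {central_cloud:.1f} ms, edge: {edge_site:.1f} ms")
```

Even in this crude model, moving the workload from a distant cloud region to a nearby edge site cuts round-trip latency by an order of magnitude, which is the quality-of-experience benefit Cisco's first point describes.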
Edge computing applies to use cases that need to process large volumes of data in real time. For example, low latency is particularly important to content publishers and content delivery networks (CDNs), which can distribute their data collection, transmission and processing workloads to give consumers a smoother on-screen experience. Government applications and public services are more critical than ever, and edge infrastructure models can greatly improve delivery of those services to the masses. IoT and smart technology offerings will become more accurate, and even public cloud providers can scale their infrastructure by adding more edge endpoints to their ecosystems.
Figure 2: Emergence of the infrastructure edge | Source: Cisco Public White Paper "Establishing the Edge"
What is the right infrastructure model for you?
A Gartner study predicts that by 2025, 85% of infrastructure strategies will integrate on-premise, colocation, cloud and edge delivery options, compared with 20% in 2020.
The key considerations for infrastructure strategy revolve around CAPEX and OPEX, but modernizing IT needs to go beyond cost efficiency. Today’s industries demand agility, responsiveness, reliability and scalability. And with growth comes careful consideration of security, privacy and compliance.
Every business needs to evaluate their goals and priorities, and leverage infrastructure as a competitive advantage. A good way to approach this is to link business risks, applications, standards and architecture to create a “Fit-for-Purpose” infrastructure model.
As specialized infrastructure offerings have grown into an industry of their own, the role of IT has moved beyond provisioning, support and cost management. The right infrastructure model will help companies launch products and services, scale operations closer to the markets that need them, and collaborate and consolidate on a whole new level.
Now is the time for IT to add lasting value to business.
We stand ready to resell your retired equipment, leveraging our strong reputation and robust network in the secondary market.
Team Dataknox is committed to delighting our valued customers. Whether it's a complex, global multi-site decommissioning or a cloud transformation that keeps DevOps up at night, we exist to solve our clients' problems, no matter how technically or logistically complex.