Tech Expert Discusses How to Maximize Asset Velocity by Sharing Flash

Tech expert and Sales Director for Africa at Western Digital, Ghassan Azzi, elaborates to Legit.ng on the benefits and challenges of sharing high-impact drives.

We live in a data-driven world, and the sheer volume of data created by consumers, businesses, and machines is exploding. It’s hard to keep up.  

Global research shows that the abundance of data generated, copied, and stored will continue to grow. IDC projects the volume to reach more than 166 zettabytes by 2025, less than two years away. This will tremendously affect data storage.

[Photo: Ghassan Azzi, Sales Director for Africa at Western Digital. Source: Facebook]

Storing a vast and growing amount of data presents two significant challenges.

The first involves where and how to store it. For example, on-prem or in the cloud, what storage tier – hot, warm, or cold data tier – and what type of infrastructure topology? These are complex questions with many different considerations. And there’s no simple answer. One thing is clear, however. In the cloud or large enterprise data centres, storage architects must find solutions that can scale to accommodate petabytes of data while still providing the proper performance and service level agreement that meets business demands. 


The other big challenge is budgets. Storage architects are constantly pressured to find cost-effective solutions for shrinking or flat IT budgets. The more data an organization has, the more storage it needs. With the need for greater storage capacity comes increased storage costs. Today’s storage architects must perform a balancing act, one that involves the need for high-performance storage amidst dwindling budgets to get the best return on their investment. 

Introducing Asset Velocity and Why it Matters 

Data centre architects are increasingly deploying Flash to accelerate their workloads and deliver high-performance and low-latency storage. Many data centres are also adopting non-volatile memory express (NVMe™) technology for parts of their architecture to expand the performance and latency benefits even further. 

Laser-focused on optimising and controlling storage spend, they must efficiently manage, scale, and utilise these flash assets to get the biggest bang for their buck. This drives a growing trend to disaggregate and share NVMe flash over an Ethernet fabric for improved asset velocity. 


In data storage management schemes, achieving asset velocity involves obtaining the highest performance, ensuring maximum availability as measured in uptime, and extracting the storage value that results in the best utilisation and efficiency.  

In turn, high utilisation reduces costs and improves overall return on investment (ROI).

Asset velocity is the ability to use a storage device to its fullest potential to generate value and revenue while controlling costs. Most organisations are not fully using their flash assets and therefore are not realising the highest possible utilisation. In other words, inefficiencies in the architecture are impeding asset velocity.
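To make the utilisation argument concrete, consider the effective cost of each terabyte actually in use: the lower a flash device's utilisation, the more every consumed terabyte really costs. A minimal sketch, with all prices and utilisation figures invented for illustration:

```python
# Hypothetical illustration: effective cost per used terabyte at two
# utilisation levels. All figures are invented for this example.

def effective_cost_per_tb(device_cost, capacity_tb, utilisation):
    """Device cost attributed to each terabyte actually in use."""
    used_tb = capacity_tb * utilisation
    return device_cost / used_tb

# A 15.36 TB NVMe SSD at a hypothetical $2,000 price point.
cost, capacity = 2000.0, 15.36

# 40% used: flash trapped inside a single node (stranded capacity).
stranded = effective_cost_per_tb(cost, capacity, 0.40)
# 85% used: flash disaggregated and shared across workloads.
shared = effective_cost_per_tb(cost, capacity, 0.85)

print(f"40% utilised: ${stranded:.2f} per used TB")
print(f"85% utilised: ${shared:.2f} per used TB")
```

The same drive more than doubles in effective cost when it sits mostly idle, which is the inefficiency the article calls impeded asset velocity.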

Let's review hyperconverged infrastructure (HCI), a scale-out environment, as an example. One of HCI's main selling points is that it is relatively easy to scale by adding full nodes that bundle the server hardware and software with computing, storage, and networking in one platform.

These nodes consume power, cooling, and networking resources to deliver application services. When the application duty cycle or workloads are predictable and stable, HCI architecture can effectively scale. 


Design engineers will analyse these workloads and provision sufficient resources (compute, storage, and network) in the node to meet the application's demands for the node's service life, typically 5 to 7 years. When a workload or application needs additional resources, you add more nodes, regardless of which specific resource is actually required.

However, one of the pain points of HCI architecture is that when applications and workloads in the node are unpredictable, and demand bursts occur, the resources contained in these nodes might be stressed. 

More significant concerns arise when these nodes have additional resources installed that go underutilised. Assets that sit idle still consume power and cooling, creating inefficiency, and these resources become stranded, or trapped, unavailable to other applications or workloads.
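The stranding effect described above can be sketched numerically: when capacity only arrives in fixed node-sized increments, the gap between what is provisioned and what workloads actually demand becomes trapped capacity. A toy model, with node size and demand figures invented for illustration:

```python
import math

# Toy model: HCI scales in whole nodes, so provisioned storage
# overshoots demand, and the overshoot is stranded.
# All figures are invented for this example.

NODE_STORAGE_TB = 30.0  # flash bundled with each hypothetical node

def provision(demand_tb):
    """Nodes needed when storage can only be added in full nodes."""
    nodes = math.ceil(demand_tb / NODE_STORAGE_TB)
    provisioned_tb = nodes * NODE_STORAGE_TB
    stranded_tb = provisioned_tb - demand_tb
    return nodes, provisioned_tb, stranded_tb

for demand in (35.0, 65.0, 95.0):
    nodes, prov, stranded = provision(demand)
    print(f"demand {demand:5.1f} TB -> {nodes} nodes, "
          f"{prov:5.1f} TB provisioned, {stranded:4.1f} TB stranded")
```

A workload needing 35 TB forces a second full node and leaves 25 TB of flash stranded; disaggregating that flash is what would let other workloads consume it.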

Another way to architect IT infrastructure is to deploy composable disaggregated infrastructure (CDI) rather than HCI, especially for high-value assets such as flash-based SSD storage.


Benefits of disaggregating and sharing flash 

As previously stated, scaling out using HCI has a purpose. It can be essential to keep up with growing data demands where the application requirements are fixed, and the workloads are predictable and stable. 

However, in HCI or other scale-out environments, resource management can become inefficient from a utilisation and asset velocity perspective when these applications have bursty demand, or when the resources provided exceed the applications' demand.

Alternatively, designers are scaling up by deploying CDI, which takes the flash assets out of the server nodes and makes them shareable across multiple applications and workloads using NVMe over Fabrics (NVMe-oF™).

NVMe is a protocol that uses the Peripheral Component Interconnect Express (PCIe®) bus to access flash storage.

NVMe allows a more efficient way to use flash media when connected over PCIe, and this standard is becoming prevalent over SAS/SATA connections in data centre applications. In HCI architecture, SSDs are installed directly on the PCIe bus, and the storage is available only to the applications and workloads contained within that server.


When scaling up using NVMe-oF technology, designers can now extend the PCIe bus out of the physical hardware node and deploy flash assets to applications and workloads on demand using high-speed Ethernet connections and running RoCE (RDMA over Converged Ethernet) or TCP (Transmission Control Protocol) protocols. 

Deploying storage using NVMe-oF delivers several inherent benefits that are realised immediately.

With its many benefits, NVMe-oF is being adopted today and will continue to be a growing trend in the future of storage architectures. It allows designers to create a high-performance storage environment with latencies that rival direct-attached storage (DAS) and enables flash devices to be shared, which creates very high utilisation. We aptly call this Asset Velocity.  

Source: Legit.ng

Authors:
Pascal Oparada avatar

Pascal Oparada (Business editor) For over a decade, Pascal Oparada has reported on tech, energy, stocks, investment, and the economy. He has worked in many media organizations such as Daily Independent, TheNiche newspaper, and the Nigerian Xpress. He is a 2018 PwC Media Excellence Award winner. Email: pascal.oparada@corp.legit.ng