65: GreyBeards talk new FlashSystem storage with Eric Herzog, CMO and VP WW Channels IBM Storage

Sponsored by:

In this episode, we talk with Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage, about the FlashSystem 9100 storage series. This is the 2nd time we have had Eric on the show (see our Violin podcast) and the 2nd time we have had a guest from IBM on our show (see our CryptoCurrency talk). However, it’s the first time we have had IBM as a sponsor for a podcast.

Eric’s a 32-year storage industry veteran who’s worked for many major storage companies, including Seagate, EMC and IBM, plus 7 startups over his career. He’s been predominantly in marketing but was CFO at one company.

New IBM FlashSystem 9100

IBM is introducing the new FlashSystem 9100 storage series, using new NVMe FlashCore Modules (FCMs) that have been redesigned to fit a small form factor (SFF, 2.5″) drive slot; the 2U appliance package also supports standard NVMe SFF SSDs. The new storage has dual active-active RAID controllers running the latest generation of IBM Spectrum Virtualize software, which runs on over 100K storage systems in the field today.

FlashSystem 9100 supports up to 24 NVMe FCMs or SSDs, which can be intermixed. The FCMs offer up to 19.2TB of usable flash each and have onboard hardware compression and encryption.

With FCM media, the FlashSystem 9100 can sustain 2.5M IOPS at 100µsec response times with 34GB/sec of data throughput. Spectrum Virtualize is a clustered storage system, so one could cluster together up to 4 FlashSystem 9100s into a single storage system and support 10M IOPS and 136GB/sec of throughput.

Spectrum Virtualize just introduced block data deduplication within a data reduction pool. With thin provisioning, data deduplication, pattern matching, SCSI UNMAP support, and data compression, the FlashSystem 9100 can offer up to a 5:1 ratio of effective capacity to usable flash capacity. That means with 24 19.2TB FCMs, a single FlashSystem 9100 offers over 2PB of effective capacity.
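
To put some arithmetic behind those “up to” numbers, here’s a quick back-of-the-envelope sketch using IBM’s figures (actual data reduction depends entirely on your data):

```python
# Back-of-the-envelope FlashSystem 9100 math, using IBM's "up to" figures.
# Actual data reduction varies with the data being stored.

FCM_USABLE_TB = 19.2      # max usable TB per FlashCore Module
SLOTS = 24                # NVMe drive slots per 2U appliance
DATA_REDUCTION = 5        # "up to" 5:1 effective:usable ratio

usable_tb = FCM_USABLE_TB * SLOTS                  # 460.8 TB usable flash
effective_pb = usable_tb * DATA_REDUCTION / 1000   # -> ~2.3 PB ("over 2PB")
print(f"Effective capacity: ~{effective_pb:.1f} PB")

# Clustering scales roughly linearly, per IBM's numbers:
iops, gbs, nodes = 2.5e6, 34, 4
print(f"{nodes}-way cluster: {iops * nodes / 1e6:.0f}M IOPS, {gbs * nodes} GB/sec")
```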

In addition to the appliance’s 24 NVMe FCMs or NVMe SSDs, FlashSystem 9100 storage can also attach up to 20 SAS SSD drive shelves for additional capacity. Moreover, Spectrum Virtualize offers storage virtualization, so customers can attach external storage arrays behind a FlashSystem 9100 solution.

With FlashSystem 9100, IBM has bundled additional Spectrum software, including:

  • Spectrum Virtualize for Public Cloud – which allows customers to migrate data and workloads from on premises to the cloud and back again. Today this only works for IBM Cloud, but plans are to support other public clouds soon.
  • Spectrum Copy Data Management – which offers a simple way to create and manage copies of data while enabling controlled self-service for test/dev and other users to use snapshots for secondary use cases.
  • Spectrum Protect Plus – which provides data backup and recovery for FlashSystem 9100 storage, tailor made for smaller, virtualized data centers.
  • Spectrum Connect – which allows Docker and Kubernetes container apps to access persistent storage on FlashSystem 9100.

To learn more about the IBM FlashSystem 9100, join the virtual launch experience July 24, 2018 here.

The podcast runs ~43 minutes. Eric has always been knowledgeable on the enterprise storage market – past, present and future. He had a lot to talk about on the FlashSystem 9100 and seems to have mellowed lately. His grey mustache is forcing the GreyBeards to consider a name change – GreyHairsOnStorage, anyone? Listen to the podcast to learn more.

Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage

Eric’s responsibilities include worldwide product marketing and management for IBM’s award-winning family of storage solutions, software defined storage, integrated infrastructure, and software defined computing, as well as responsibility for global storage channels.

Herzog has over 32 years of product management, marketing, business development, alliances, sales, and channels experience in the storage software, storage systems, and storage solutions markets, in both Fortune 500 and start-up storage companies.

Prior to joining IBM, Herzog was Chief Marketing Officer and Senior Vice President of Alliances for all-flash storage provider Violin Memory. Herzog was also Senior Vice President of Product Management and Product Marketing for EMC’s Enterprise & Mid-range Systems Division, where he held global responsibility for product management, product marketing, evangelism, solutions marketing, communications, and technical marketing with a P&L of over $10B. Before joining EMC, he was vice president of marketing and sales at Tarmin Technologies. Herzog has also held vice president of business line management and vice president of marketing positions at IBM’s Storage Technology Division, where he had P&L responsibility for the over-$300M OEM RAID and storage subsystems business, and at Maxtor (acquired by Seagate).

Herzog has held vice president positions in marketing, sales, operations, and acting-CFO roles at Asempra (acquired by BakBone Software), ArioData Networks (acquired by Xyratex), Topio (acquired by Network Appliance), Zambeel, and Streamlogic.

Herzog holds a B.A. degree in history from the University of California, Davis, where he graduated cum laude, studied towards an M.A. degree in Chinese history, and was a member of the Phi Alpha Theta honor society.

64: GreyBeards discuss cloud data protection with Chris Wahl, Chief Technologist, Rubrik

Sponsored by:

In this episode we talk with Chris Wahl, Chief Technologist, Rubrik. This is our second time having Chris on our show. The last time was about three years ago (see our Chris on agentless backup podcast). Talking with Chris again was great and there’s been plenty of news since we last spoke with him.

Rubrik now has three products: the Rubrik Cloud Data Protection suite (onprem, virtual, or in the [AWS & Azure] cloud); Rubrik Datos IO (a recent acquisition) for NoSQL databases, with semantic dedupe; and Rubrik Polaris GPS, a SaaS monitoring/trending/management solution for your data protection environment. Polaris GPS watches data protection trends for you, to ensure all your data protection SLAs are being met. But we didn’t spend much time on Polaris.

Datos IO was designed from the start to back up new databases based on NoSQL technologies and provides a semantic-based deduplication capability that’s unique in the industry. We talked with Datos IO before their acquisition by Rubrik (see our podcast with Tarun on 3rd generation data protection).

Cloud Data Protection

As for their Cloud Data Protection suite, one major differentiator is that all their functionality is available via RESTful APIs. Their GUI is built entirely on those APIs. This means any customer could use the same APIs to integrate Rubrik data protection with any application/workload on the planet.
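
To give a flavor of what API-first looks like in practice, here’s a minimal sketch of protecting a VM the way the GUI would, by assigning it an SLA. The endpoint paths, field names and token handling below are our own illustration, not Rubrik’s documented API:

```python
# Hypothetical sketch of driving an API-first backup product. All endpoint
# paths and field names below are illustrative, not Rubrik's documented API.
import requests

BASE = "https://rubrik.example.com/api/v1"      # hypothetical cluster address
HDRS = {"Authorization": "Bearer <api-token>"}  # placeholder credentials

# Look up the VM to protect (hypothetical endpoint and response shape).
vms = requests.get(f"{BASE}/vmware/vm", headers=HDRS,
                   params={"name": "prod-db-01"}).json()
vm_id = vms["data"][0]["id"]

# Assign an SLA domain; from here the product handles scheduling,
# retention, and archival to object storage on its own.
requests.patch(f"{BASE}/vmware/vm/{vm_id}", headers=HDRS,
               json={"configuredSlaDomainId": "gold-sla"})
```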

Chris mentioned that Rubrik has 40+ specific application/system integrations that provide “strictly consistent” data protection. We assume this means application-consistent backups and recovery, but it goes beyond mere applications.

With the Cloud Data Protection solution, data resides on the appliance for only a short (customer-specifiable) period and is then migrated off to cloud or onprem object storage. The object storage could be any onprem S3-compatible storage, or storage in the AWS or Azure cloud. It’s completely automatic. The data migrated to object storage is self-defining, meaning that metadata and data are all available in one spot and can be restored anywhere there’s a Rubrik Cloud Data Protection suite operating.
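
The “self-defining” part is worth a quick illustration: because the metadata travels with the data in the same bucket, anything that can read the bucket can rebuild a catalog and restore. Here’s a toy sketch of the idea (our own format, not Rubrik’s actual on-object layout):

```python
# Toy illustration of "self-defining" archived data: a manifest is stored
# alongside the data object, so no external catalog is needed to restore.
# This is our own sketch of the concept, not Rubrik's actual object layout.
import hashlib
import json

def archive(store: dict, key: str, data: bytes, source: str) -> None:
    """Write data plus a sidecar manifest into an S3-like key/value store."""
    store[f"{key}/data"] = data
    store[f"{key}/manifest.json"] = json.dumps({
        "source": source,                            # where it came from
        "sha256": hashlib.sha256(data).hexdigest(),  # integrity check
        "bytes": len(data),
    }).encode()

bucket = {}  # stand-in for any S3-compatible object store
archive(bucket, "backups/prod-db-01/2018-06-30", b"<vm snapshot bytes>",
        source="vm:prod-db-01")
```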

The Cloud Data Protection appliance also supports onboard search and analytics to search backup/recovery metadata/catalogs. As such, there’s no need to purchase other tools to uncover which backup files exist. Their solution also uses data deduplication to reduce the data stored.

Data stored is also encrypted with customer keys, and HTTPS is used to transfer data. So, data is secured at rest, secured in flight, and deduped. Cloud Data Protection also offers data mobility. That is, it can move your VMs and data from onprem to the cloud, use Rubrik in the cloud to rehydrate the data, and translate your VMs to run in AWS or Azure; it also works in reverse, translating AWS/Azure compute instances into VMs.

Rubrik’s major differentiator is simplicity. Traditionally, customers had been conditioned to think data protection took hours to maintain, fix and keep running. But with Rubrik Cloud Data Protection, a customer just points it at an application and selects an SLA, and Rubrik takes over from there.

The secret behind Rubrik’s simplicity is Cerebro. Cerebro is where they have put all the smarts to understand a data center’s infrastructure, applications/VMs, protected data, and requested SLAs, and just make it all work.

The podcast runs ~27 minutes. Chris was great to talk with again and given how long it’s been since we last talked, he had much to discuss. Rubrik seems like an easy solution to adopt and if their growth is any indicator, customers agree. Listen to the podcast to learn more.

Chris Wahl, Chief Technologist, Rubrik

Chris Wahl, author of the award-winning Wahl Network blog and host of the Datanauts Podcast, focuses on creating content that revolves around virtualization, automation, and infrastructure, and on evangelizing products and services that benefit the technology community.

In addition to co-authoring “Networking for VMware Administrators” for VMware Press, he has published hundreds of articles and was voted the “Favorite Independent Blogger” by vSphere-Land three years in a row (2013 – 2015). Chris also travels globally to speak at industry events, provide subject matter expertise, and offer perspectives to startups and investors as a technical adviser.

63: GreyBeards talk with NetApp A-Team members John Woodall & Paul Stringfellow

Sponsored by NetApp:

In this episode, we talk with NetApp A-Team members John Woodall (@John_Woodall), VP Engineering, Integrated Archive Systems, and Paul Stringfellow (@techstringy), Technical Director, Data Management Consultancy Gardner Systems Plc.

Both John and Paul have been NetApp partners for quite a while (John since the beginning of NetApp). John and Paul work directly with infrastructure customers, solving real-world customer data problems.

NetApp A-Team is a select, small (only 25 total) group of individuals that are brought together periodically and briefed by NetApp Execs and Product managers. A-Team membership is for life (as long as they continue to work in IT and not for a competitor). The briefings span a number of topics but are typically about what NetApp plans to do in the near term. The A-Team is there to provide a customer perspective to NetApp management and product teams.

Oftentimes, big companies can lose sight of customer problems, and having a separate channel that’s engaged directly with customers can bring these issues to light. By having the A-Team, NetApp gets feedback on customer problems and concerns from partners that directly engage with them.

Both Howard and I were very impressed that when John and Paul introduced themselves, they mentioned DATA rather than storage. This signals a different perspective, from pure infrastructure to a more customer-centric view.

Following that theme, Howard asked how customers were seeing the NetApp Data Fabric. This led to a long discussion of just what the NetApp Data Fabric represents to customers in today’s multi-cloud world. NetApp’s Data Fabric provides choice on where customers can run their work, liberating work that previously may have been stuck in the cloud or on prem.

Ray asked how NetApp is embracing the cloud, what with Cloud Data Volumes (see our earlier NetApp-sponsored podcast), NPS, Cloud ONTAP, and other cloud solutions NetApp has lit up in various public clouds. John mentioned that the public preview of Cloud Data Volumes should open up by the end of the year, and at that time anyone can use it.

I was at a dinner with NetApp, 3-5 years ago, when the cloud looked like a steamroller that was going to grind infrastructure providers into dust. A NetApp executive I was talking with said they were doing everything they could at the time to figure out how to offer value with cloud providers rather than compete with them. Either you embrace change or you’re buried by it.

At the end of the podcast, Howard turned the discussion to NetApp HCI. Paul said that at first HCI was just shrunk infrastructure, but now it’s more about the software stack on top of HCI. The stack enables simpler deployment and configuration flexibility. From a NetApp HCI perspective, flexibility in being able to separately add more compute or storage is a strong differentiator.

The podcast runs ~30 minutes. Both John and Paul were very knowledgeable about current IT trends. I think we could have easily talked with them for another hour or so and not exhausted the conversation. Listen to the podcast to learn more.

Paul Stringfellow, Technical Director, Data Management Consultancy Gardner Systems, Plc

An experienced technology professional, Paul Stringfellow is the Technical Director at Data Management Consultancy Gardner Systems Plc. He works with businesses of all types to assist with the development of technology strategies, and, increasingly, to help them manage, secure, and gain benefit from their data assets.

Paul is a NetApp A-Team member and is very involved in the tech community. Paul often presents at conferences and user group events. He also produces a wide range of business-focused technology content on his blog techstringy.com and the Tech Interviews Podcast (podcast.techstringy.com), and he writes regularly for a number of industry technology sites. You can find Paul on twitter at @techstringy.

John Woodall, VP Engineering, Integrated Archive Systems 

John Woodall is Vice President of Engineering at Integrated Archive Systems, Inc. (IAS). John has more than 28 years of experience in technology, with a background focused on enterprise and infrastructure architecture, systems engineering and technology management. In these roles, John has developed a long string of successes designing and implementing complex systems in demanding, mission-critical, large-scale enterprise environments.

John is a NetApp A-Team member and has managed the complete range of IT disciplines. John brings that experience and perspective to his role at IAS. At IAS, his focus is on mapping the company’s strategic direction, evaluating emerging technologies, trends and practices, and managing the technology portfolio, with the express goal of producing excellent customer experiences and business outcomes. Prior to joining IAS, John held architecture and management roles at Symantec, Solectron (now part of Flextronics), Madge Networks and Elsevier MDL. You can find John at @John_Woodall on twitter and on Skype: TechWood.

62: GreyBeards talk NVMeoF storage with VR Satish, Founder & CTO Pavilion Data Systems

In this episode, we continue on our NVMeoF track by talking with VR Satish (@satish_vr), Founder and CTO of Pavilion Data Systems (@PavilionData). Howard had talked with Pavilion Data over the last year or so and I just had a briefing with them over the past week.

Pavilion Data is taking a different tack to NVMeoF, innovating in software and hardware design but using merchant silicon for their NVMeoF-accelerated array solution. They offer Ethernet-based NVMeoF block storage.

VR is a storage “lifer“, having worked at Veritas on their Volume Manager and other products for a long time. Moreover, Pavilion Data has a number of execs from Pure Storage (including their CEO, Gurpreet Singh) and other storage technology companies, and is located in San Jose, CA.

VR says there were 5 overriding principles for Pavilion Data as they were considering a new storage architecture:

  1. The IT industry is moving to rack scale compute and hence, there is a need for rack scale storage.
  2. Great merchant silicon was coming online, so there was less need to design their own silicon/ASICs/FPGAs.
  3. Rack scale storage needs to provide “local” (within the rack) resiliency/high availability and let modern applications manage “global” (outside the rack) resiliency/HA.
  4. Rack scale storage needs to support advanced data management services.
  5. Rack scale storage has to be easy to deploy and run.

Pavilion Data’s key insight was that, in order to meet all those principles and deal with high-performance NVMe flash and up-and-coming SCM SSDs, storage had to be redesigned to look more like a network switch.

Controller cards?

One can see this new networking approach in their bottom-of-rack, 4U storage appliance. The appliance has up to 20 controller cards creating a high-compute/high-bandwidth cluster, attached via an internal PCIe switch to a backend storage complex of up to 72 U.2 NVMe SSDs.

The SSDs fit into an interposer that plugs into their PCIe switch and maps single (or dual) ported SSDs to the appliance’s PCIe bus. Each controller card supports an Intel Xeon D microprocessor and 2 100GbE ports, for up to 40 100GbE ports per appliance. The controller cards are configured in an active-active, auto-failover mode for high availability. They don’t use memory caching or have any NVRAM.

On their website, Pavilion Data shows 117 µsec response times and 114 GB/sec of throughput for IO performance.

Data management for NVMeoF storage

Pavilion Data storage supports widely striped RAID6 data protection (16+2), thin provisioning, space-efficient read-only (redirect-on-write) snapshots, and space-efficient read-write clones. With RAID6, it takes more than 2 drive failures to lose data.
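
For those keeping score at home, here’s the quick math on what a 16+2 stripe costs and tolerates:

```python
# Quick math on a wide-stripe RAID6 (16+2) layout.
data_strips, parity_strips = 16, 2
stripe_width = data_strips + parity_strips   # 18 drives per stripe

usable = data_strips / stripe_width
print(f"Usable capacity: {usable:.1%} of raw")   # ~88.9%, vs 50% for mirroring

# RAID6 survives any 2 concurrent drive failures in a stripe; only a 3rd
# failure (before rebuild completes) loses data.
```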

Like traditional storage, volumes (NVMe namespaces) are assigned to RAID groups. The backend layout appears to be a log-structured file. VR mentioned that they don’t do garbage collection, and with no NVRAM and no memory caching, there’s a bit of secret sauce here.

Pavilion Data storage offers two NVMeoF/Ethernet protocols:

  • A standard, off-the-shelf NVMeoF/RoCE interface that makes use of the v1.x Linux kernel NVMeoF/RoCE drivers and requires special NIC/switch hardware.
  • A new NVMeoF/TCP interface that doesn’t need special networking hardware and, as such, offers NVMeoF over standard NICs/switches. I assume this requires host software to work.

In addition, Pavilion Data has developed their own Multi-path IO (MPIO) driver for NVMeoF high availability which they have contributed to the current Linux kernel project.

Their management software uses RESTful APIs (documented on their website). They also offer a CLI and GUI, both built on these APIs. Bottom-of-rack storage appliances are managed as separate storage units, so they don’t support clusters of more than one appliance. Then again, there are only a few clustered storage systems we know of that support 20 controllers for block storage today.

Market

VR mentioned that they are going after new applications like MongoDB, Cassandra, CouchBase, etc. These applications are designed around rack scaling and provide “global”, off-rack/cross-datacenter availability themselves. But VR also mentioned Oracle and other, more traditional applications. Pavilion Data storage is sold on a capacity ($/GB) basis.

The system comes in a minimum configuration of 5 controller cards and 18 NVMe SSDs, and can be extended in groups of 5 controllers/18 NVMe SSDs up to the full 20 controller cards and 72 NVMe SSDs.

The podcast runs ~42 minutes. VR was very knowledgeable about the storage industry, NVMeoF storage protocols, NVMe SSDs and advanced data management capabilities. We had a good talk with VR on what Pavilion Data does and how well it works. Listen to the podcast to learn more.

VR Satish, Founder and CTO, Pavilion Data Systems

VR Satish is the Chief Technology Officer at Pavilion Data Systems and brings more than 20 years of experience in enterprise storage software products.

Prior to joining Pavilion Data, he was an Entrepreneur-in-Residence at Artiman Ventures. Satish was an early employee of Veritas and later served as the Vice President and the Chief Technology Officer for the Information & Availability Group at Symantec Corporation prior to joining Artiman.

His current areas of interest include distributed computing, information-centric storage architectures and virtualization.

Satish holds multiple patents in storage management, and earned his Master’s degree in computer science from the University of Florida.

61: GreyBeards talk composable storage infrastructure with Taufik Ma, CEO, Attala Systems

In this episode, we talk with Taufik Ma, CEO, Attala Systems (@AttalaSystems). Howard had met Taufik at last year’s Flash Memory Summit (FMS17) and was intrigued by their architecture, which he thought was a harbinger of future trends in storage. The fact that Attala Systems was innovating with new, proprietary hardware made for an interesting discussion in its own right, from my perspective.

Taufik’s worked at startups and major hardware vendors in his past life and seems to have always been at the intersection of breakthrough solutions using hardware technology.

Attala Systems is based out of San Jose, CA. Taufik has a class-A team of executives, engineers and advisors making history again, this time in storage with JBoFs and NVMeoF.

Ray’s written about JBoF (just a bunch of flash) before (see his Facebook moving to JBoF post). This is essentially a hardware box filled with lots of flash storage and drive interfaces that directly connects to servers. Attala Systems storage is JBoF on steroids.

Composable Storage Infrastructure™

Essentially, their composable storage infrastructure JBoF connects with NVMeoF (NVMe over Fabrics) using Ethernet to provide direct host access to NVMe SSDs. They have implemented special-purpose, proprietary hardware in the form of an FPGA, used in a proprietary host network adapter (HNA), to support their NVMeoF storage.

Their HNA comes in a host-side and a storage-side version, both utilizing Attala Systems’ proprietary FPGA(s). With Attala HNAs, they have implemented their own NVMeoF over UDP stack in hardware. It supports multi-path IO and highly available dual- or single-ported NVMe SSDs in a storage shelf. They use standard RDMA-capable 25/50/100GbE Ethernet (read Mellanox) switches to connect hosts to storage JBoFs.

They also support RDMA over Converged Ethernet (RoCE) NICs for additional host access. However, I believe this requires host software (their NVMeoF over UDP stack) to connect to their storage.

From the host, Attala Systems storage on HNAs looks like directly attached NVMe SSDs, only hot-pluggable and physically located across an Ethernet network. In fact, Taufik mentioned that they already support VMware vSphere servers accessing Attala Systems composable storage infrastructure.

Okay, on to the good stuff. Taufik said they measured their overhead and can perform an IO with only an additional 5 µsec over native NVMe SSD latencies. Current NVMe SSDs operate with response times of 90 to 100 µsec, so with Attala Systems Composable Storage Infrastructure you should see 95 to 105 µsec response times from a JBoF(s) full of NVMe SSDs! And Taufik said that with Intel Optane SSDs’ 10 µsec response times, they see ~16 µsec (the extra µsec seems to be network switch delay)!!

Managing composable storage infrastructure

They also use a management “entity” (running on a server or as a VM) to manage their JBoF storage and configure NVMe Namespaces (like a SCSI LUN/volume). Hosts use NVMe Namespaces to access and split up the JBoF NVMe storage space. That is, multiple Attala Systems Namespaces can be configured over a single NVMe SSD, each one corresponding to a single (virtual-to-real) host NVMe SSD.

The management entity has a GUI, but it just uses their RESTful APIs. They also support QoS, on an IOPS- or bandwidth-limiting basis per Namespace, to manage noisy neighbors.

Attala Systems architected their management system to support scale-out storage. This means they could support many JBoFs in a rack, and possibly multiple racks of JBoFs connected to swarms of servers. And nothing was said that would limit the number of Attala storage system JBoFs attached to a single server or under a single (dual for HA) management entity. I thought the software might have a problem with this (e.g., 256 NVMe (Namespace) SSDs PCIe-connected to the same server), but Taufik said this isn’t a problem for a modern OS.

Taufik mentioned that, with their RESTful APIs, namespaces can be quickly created and torn down, on the fly. They envision their composable storage infrastructure as a great complement to cloud compute and container execution environments.
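
As an illustration of what “composable” could look like through such an API, here’s a hypothetical sketch of carving out and tearing down a namespace with a QoS cap. All endpoints and field names are our own invention, not Attala’s actual API:

```python
# Hypothetical sketch of composing storage via a RESTful management entity.
# Every endpoint and field name here is our invention, not Attala's real API.
import requests

MGMT = "https://attala-mgmt.example.com/api"  # hypothetical management entity

# Carve a 1TB namespace out of the JBoF pool, with a QoS cap so a noisy
# neighbor can't starve other namespaces sharing the same NVMe SSDs.
ns = requests.post(f"{MGMT}/namespaces", json={
    "name": "container-scratch-01",
    "size_gb": 1024,
    "qos": {"max_iops": 100_000, "max_bw_mbps": 1000},
    "host": "server-42",   # host sees this as a local, hot-plugged NVMe SSD
}).json()

# ... and tear it down just as quickly when the workload goes away.
requests.delete(f"{MGMT}/namespaces/{ns['id']}")
```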

For storage hardware, they use storage shelves from OEM vendors. One recent configuration from Supermicro has 32 hot-pluggable, dual-ported NVMe slots in a 1U chassis, which at today’s ~16TB capacities is ~1/2PB of raw flash. Taufik mentioned 32TB NVMe SSDs are being worked on as we speak. Imagine that: 1PB of flash NVMe SSD storage in 1U!!

The podcast runs ~47 minutes. Taufik took a while to get warmed up, but once he got going, my jaw dropped. Listen to the podcast to learn more.

Taufik Ma, CEO, Attala Systems

Taufik is a tech-savvy business executive with a track record of commercializing disruptive data center technologies. After a short stint as an engineer at Intel after college, Taufik jumped to the business side, where he led a team to define Intel’s crown jewels – CPUs & chipsets – during the ascendancy of the x86 server platform.

He honed his business skills as Co-GM of Intel’s Server System BU before leaving for a storage/networking startup. The acquisition of this startup put him on the executive team of Emulex where, as SVP of product management, he grew their networking business from scratch to deliver the industry’s first million units of 10Gb Ethernet product.

These accomplishments draw from his ability to engage and acquire customers at all stages of product maturity, including partners when necessary.