Is storage expensive – part 1

When Exchange 2007 first came out with the LCR/CCR concept, allowing a low-cost direct attached storage (DAS) option for Exchange storage, I started exploring the same for production. Storage cost used to be very high on the Exchange 2003 setup, which ran on an FC SAN with 15K RPM FC disks. It was difficult to push this new concept at the time; none of my storage vendors was willing to support the idea. When MSIT released the reference architecture of their internal Exchange deployment on DAS, it opened the door for me to go ahead with DAS as the storage for one Exchange location. Apart from cost, the use of 10K SFF disks in DAS also helped reduce the carbon footprint, and disk-failure issues in the storage dropped significantly. The success story continued, and after two years all of our 150K+ mailboxes were on JBOD with NL SAS disks under Exchange 2010. This started a new era of using JBOD for critical enterprise workloads.

In parallel, the momentum of virtualization was picking up. I started experimenting with the first version of Hyper-V on Windows Server 2008 as an alternative virtualization solution. It was with Windows Server 2008 R2 and its CSV feature that I actually started using Hyper-V for production workloads, mostly IT infrastructure. However, unlike Exchange, there was not much scope for reducing storage cost for Hyper-V: storage still had to be shared, with a large number of disks to meet both the IOPS and capacity requirements. Storage Spaces in Windows Server 2012 was good, but it did not help much since a large number of disks were still required to meet the IOPS requirement. In a few places where the budget was constrained, Hyper-V Replica instead allowed the use of JBOD by keeping a copy of the data on another node to protect against failure. Windows Server 2012 R2 came with tiering support for Storage Spaces, and this is where I started looking at it for production with the objective of lowering storage cost. Features like Data Deduplication and CSV were already proven in our environment and readily supported with Storage Spaces. With PowerShell support, there was nothing much left to ask for on the management side. The first requirement was to deploy the enterprise log infrastructure, which includes SCOM ACS for Windows servers in addition to a syslog server. This post is all about achieving 150TB+ of storage plus compute for around 30 VMs with a budget of around 100K USD, all inclusive, for hardware.

The budget constraint didn’t allow me to follow the standard two-cluster architecture, with one cluster for Storage Spaces (SOFS) and one for Hyper-V. Hence I went for a consolidated architecture: a single two-node cluster serving both, with the understanding that the SOFS role and Hyper-V are not mixed in this setup.
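
For orientation, here is a minimal PowerShell sketch of how such a two-node combined cluster could be stood up. The node names (NODE1, NODE2), cluster name, and static address are hypothetical placeholders rather than the values used in this deployment, and the feature list assumes both Hyper-V and the file-server bits are wanted on each node.

```powershell
# Hypothetical node names; install the roles needed on both nodes
Invoke-Command -ComputerName NODE1, NODE2 -ScriptBlock {
    Install-WindowsFeature Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart
}

# Validate the nodes and shared SAS storage before forming the cluster
Test-Cluster -Node NODE1, NODE2

# Create the two-node cluster; -NoStorage so the shared disks can be pooled
# with Storage Spaces afterwards instead of being added as classic cluster disks
New-Cluster -Name HVSOFS01 -Node NODE1, NODE2 -StaticAddress 10.0.0.50 -NoStorage
```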

My rough take on Storage Spaces for planning the deployment: a traditional storage array with a storage controller groups disks into either a disk group or a RAID group and lets you carve LUNs out of that group. A disk pool is the other option, combining a large number of disks into a single pool, thereby virtualizing both the storage and its management. Storage Spaces uses the disk-pool approach and allows both SSDs and HDDs to be added to the pool so that they can later be used for storage tiering. Data is split into chunks (the interleave) and striped across multiple disks to meet the throughput and IOPS requirements. In Storage Spaces, the number of disks the data is split across is called the column count, and it is defined per virtual disk created from a storage pool. When using tiering, however, the column count must be one that is feasible on both the SSD and the HDD tier. The column count needs to be high for performance reasons, but SSDs, being expensive, are generally the smaller population in a storage pool. Hence the maximum column count possible is half the number of SSDs when mirror is the resiliency option. Storage Spaces also provides redundancy against enclosure failure by mirroring data onto disks from different enclosures; hence two enclosures are used in this mirror-resiliency deployment to achieve high availability against an enclosure-level failure.
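
As a rough PowerShell sketch of these concepts on Windows Server 2012 R2: the pool, tier, and virtual disk names, the tier sizes, the write-cache size, and the column count of 6 below are illustrative assumptions (6 follows from 12 SSDs per pool with a two-way mirror), not the exact values of this deployment.

```powershell
# Locate the clustered Storage Spaces subsystem (friendly name pattern varies by OS version)
$ss = Get-StorageSubSystem -FriendlyName "Clustered*"

# Pool every disk that is eligible for pooling (SSDs and HDDs together)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $disks

# Define the SSD and HDD tiers inside the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Tiered, two-way mirrored virtual disk. The column count is limited by the
# smaller (SSD) tier: 12 SSDs / 2 mirror copies = 6 columns. IsEnclosureAware
# places the mirror copies on disks from different enclosures.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 500GB, 20TB `
    -ResiliencySettingName Mirror -NumberOfColumns 6 `
    -WriteCacheSize 5GB -IsEnclosureAware $true
```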

With a clustered storage pool in the plan, the maximum limits to consider as per the Storage Spaces FAQ are: a maximum of 4 clustered pools, 80 disks per pool, and 64 storage spaces per pool.
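
With the hardware described below (2 x 12 SSDs plus 2 x 48 HDDs, i.e. 120 physical disks in total), the 80-disk-per-pool limit alone already forces at least two pools. A small, hedged sketch of how the poolable disk inventory can be sanity-checked against that limit:

```powershell
# Inventory the disks that are eligible for pooling, grouped by media type
$poolable = Get-PhysicalDisk -CanPool $true
$poolable | Group-Object MediaType | Select-Object Name, Count

# 80 disks per clustered pool is the documented maximum
$maxDisksPerPool = 80
$minPools = [math]::Ceiling($poolable.Count / $maxDisksPerPool)
Write-Output "Poolable disks: $($poolable.Count); minimum pools required: $minPools"
```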

Brief overview of the hardware setup.

The overall hardware includes two dual-processor servers with 256 GB RAM each, plus two storage enclosures, each with 12 x 200 GB write-optimized SSDs and 48 x 4 TB NL SAS disks. 10G SR-IOV capable NICs provide VM communication with the corporate network. Private 10G connectivity covers any requirement for SOFS access from VMs (i.e. clients outside the Hyper-V cluster), as well as future expansion of the infrastructure to additional Hyper-V servers consuming SOFS from this cluster. Each server has two dual-port SAS HBAs so that, even after a SAS HBA failure, the server still has connectivity to both enclosures (a quick PowerShell check for this cabling follows the diagram below). Server-to-storage-enclosure connectivity reference diagram:


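As a sanity check on the dual-HBA cabling mentioned above, something along these lines can confirm that each node sees both enclosures and that MPIO is claiming the redundant SAS paths. This is a hedged sketch; enclosure friendly names and slot counts vary per vendor, and Enable-MSDSMAutomaticClaim assumes the Multipath-IO feature is installed.

```powershell
# Let MPIO claim SAS-attached devices (requires the Multipath-IO feature)
Enable-MSDSMAutomaticClaim -BusType SAS

# Each node should report both JBOD enclosures as healthy
Get-StorageEnclosure | Select-Object FriendlyName, NumberOfSlots, HealthStatus

# All 120 physical disks should be visible once (not duplicated per path)
# and healthy when the dual-HBA paths are handled correctly by MPIO
Get-PhysicalDisk | Sort-Object FriendlyName |
    Select-Object FriendlyName, EnclosureNumber, MediaType, HealthStatus
```
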
Blog 2 in this series will cover the deployment process.
