Thin provisioning

In computing, thin provisioning involves using virtualization technology to give the appearance of having more physical resources than are actually available. If a system always has enough resources to simultaneously support all of the virtualized resources, then it is not thin provisioned. The term thin provisioning is applied to the disk layer in this article, but it could refer to an allocation scheme for any resource. For example, real memory in a computer is typically thin-provisioned to running tasks, with some form of address translation technology performing the virtualization. Each task acts as if it has real memory allocated; the sum of the virtual memory allocated to tasks typically exceeds the total real memory.
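
The memory example can be observed directly on a demand-paged operating system. The following is a minimal, Linux-specific sketch (it assumes /proc/self/status and its VmSize and VmRSS fields are available): it reserves a large anonymous mapping without touching it, so the virtual size grows immediately while resident memory grows only when a page is actually written.

```python
# Linux-only sketch: reserve a large anonymous mapping without touching it,
# then compare the process's virtual size (VmSize) with its resident size
# (VmRSS). Physical pages are assigned only when a page is first written.
import mmap

def vm_stats():
    """Return (VmSize, VmRSS) in kB from /proc/self/status (Linux only)."""
    stats = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                stats[key] = int(value.strip().split()[0])
    return stats["VmSize"], stats["VmRSS"]

before = vm_stats()
region = mmap.mmap(-1, 1 << 30)        # 1 GiB anonymous mapping, untouched
after_map = vm_stats()
region[0:4096] = b"\x00" * 4096        # touch one page; only now is it backed
after_touch = vm_stats()

print("before map:  VmSize=%d kB  VmRSS=%d kB" % before)
print("after map:   VmSize=%d kB  VmRSS=%d kB" % after_map)
print("after touch: VmSize=%d kB  VmRSS=%d kB" % after_touch)
```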

The efficiency of thin or thick/fat provisioning is a function of the use case, not of the technology. Thick provisioning is typically more efficient when the amount of resource used very closely approximates to the amount of resource allocated. Thin provisioning offers more efficiency where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used.

Just-in-time allocation differs from thin provisioning. Most file systems back files just-in-time but are not thin provisioned. Over-allocation also differs from thin provisioning; resources can be over-allocated or oversubscribed without virtualization technology, for example by overselling seats on a flight without allocating actual seats at the time of sale, so that no passenger holds a claim to a specific seat number.

Thin provisioning is a mechanism that applies to large-scale centralized computer disk-storage systems, SANs, and storage virtualization systems. Thin provisioning allows space to be easily allocated to servers, on a just-enough and just-in-time basis. Thin provisioning is called "sparse volumes" in some contexts.

Overview

Thin provisioning, in a shared-storage environment, provides a method for optimizing utilization of available storage. It relies on on-demand allocation of blocks of data rather than the traditional method of allocating all the blocks in advance. This approach eliminates almost all whitespace, which helps avoid the poor utilization rates, often as low as 10%, that occur in the traditional storage allocation method, where large pools of storage capacity are allocated to individual servers but remain unused (not written to). This traditional model is often called "fat" or "thick" provisioning.
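
The contrast can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the class names ThinPool and ThinVolume are invented for the example. A thick-provisioned volume would reserve all of its blocks from the pool at creation time, whereas the thin volume below draws a physical block from the shared pool only the first time each logical block is written.

```python
# Illustrative sketch: a shared pool hands out physical blocks to a volume
# the first time each logical block is written, instead of reserving them
# all when the volume is created.

class ThinPool:
    def __init__(self, physical_blocks):
        self.physical_blocks = physical_blocks   # blocks actually installed
        self.used_blocks = 0                     # blocks backed by real storage

    def allocate_block(self):
        if self.used_blocks >= self.physical_blocks:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.used_blocks += 1

class ThinVolume:
    """Advertises `logical_blocks` to the server but consumes pool space lazily."""
    def __init__(self, pool, logical_blocks):
        self.pool = pool
        self.logical_blocks = logical_blocks     # size the server sees
        self.mapped = set()                      # logical blocks already backed

    def write(self, block_no):
        if block_no >= self.logical_blocks:
            raise IndexError("write past end of volume")
        if block_no not in self.mapped:          # first write: allocate on demand
            self.pool.allocate_block()
            self.mapped.add(block_no)

pool = ThinPool(physical_blocks=1_000)
volumes = [ThinVolume(pool, logical_blocks=10_000) for _ in range(4)]  # 40,000 advertised
volumes[0].write(0)
volumes[1].write(42)
print(pool.used_blocks, "of", pool.physical_blocks, "physical blocks in use")
```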

With thin provisioning, storage capacity utilization efficiency can be automatically driven up towards 100% with very little administrative overhead. Organizations can purchase less storage capacity up front, defer storage capacity upgrades in line with actual business usage, and save the operating costs (electricity and floorspace) associated with keeping unused disk capacity spinning.

Thin-provisioning technology on a storage virtualization platform was first introduced by VMware as part of its VMware Workstation and VMware ESX products in early 2001.[1] Previous systems generally required large amounts of storage to be physically pre-allocated because of the complexity and impact of growing volume (LUN) space. Thin provisioning enables over-allocation or over-subscription.

Over-allocation

Over-allocation or over-subscription is a mechanism that allows a server to view more storage capacity than has been physically reserved on the storage array itself. This allows flexibility in the growth of storage volumes, without having to predict accurately how much a volume will grow: instead, physical blocks are allocated incrementally as data is written. Physical storage capacity on the array is dedicated only when data is actually written by the application, not when the storage volume is initially allocated. The servers, and by extension the applications that reside on them, view a full-size volume from the storage, but the storage itself allocates the blocks of data only when they are written.
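
The same "full-size view, allocate on write" behaviour can be demonstrated with sparse files, assuming a Unix-like system and a file system that supports them (such as ext4 or XFS); the path below is purely illustrative. The file's apparent size (st_size) is set to 1 GiB without writing any data, while the blocks actually allocated (st_blocks) stay near zero until a write occurs.

```python
# Sketch for Unix file systems with sparse-file support (e.g. ext4, XFS):
# create a file whose apparent size is 1 GiB but which occupies almost no
# disk blocks until data is actually written into it.
import os

path = "/tmp/thin_demo.img"          # illustrative path
with open(path, "wb") as f:
    f.truncate(1 << 30)              # set apparent size to 1 GiB; no data written

st = os.stat(path)
print("apparent size:", st.st_size, "bytes")
print("allocated:", st.st_blocks * 512, "bytes")   # st_blocks counts 512-byte units

with open(path, "r+b") as f:
    f.seek(512 * 1024 * 1024)
    f.write(b"x" * 4096)             # write 4 KiB in the middle of the file

st = os.stat(path)
print("allocated after one write:", st.st_blocks * 512, "bytes")
os.remove(path)
```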

As a practical consideration, a storage manager needs to monitor actual storage used, adding storage capacity (disks, tapes, solid-state drives, etc.) as necessary to satisfy the write requests of the servers and the applications residing on them.
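
The kind of check involved can be sketched as follows. The figures would in practice come from the array's own management interface, and the 80% threshold and function name are arbitrary examples: the point is that the alert is driven by physical usage against installed capacity, not by how much space has been promised to servers.

```python
# Illustrative monitoring check for an over-subscribed pool. Input figures
# and the 80% threshold are example values, not tied to any real product.

def check_pool(physical_capacity_gb, physical_used_gb, provisioned_gb,
               threshold=0.80):
    usage = physical_used_gb / physical_capacity_gb          # real consumption
    oversubscription = provisioned_gb / physical_capacity_gb # promised vs installed
    print(f"physical usage: {usage:.0%}  oversubscription: {oversubscription:.1f}x")
    if usage >= threshold:
        print("WARNING: add physical capacity before the pool fills")

check_pool(physical_capacity_gb=100, physical_used_gb=85, provisioned_gb=400)
```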

The over-allocation concept was first introduced when StorageTek (STK) announced its Iceberg product in 1991 (released in 1994).[2][3]

Banking analogy

There is an analogy between thin provisioning in computers and the keeping of cash reserve ratios in banks. Just as all the processes running on a computer whose memory is thinly provisioned cannot simultaneously use the sum total of their memory allotments, because that much real memory does not exist in the machine at one time, a bank cannot pay out all of its deposits at once: if every depositor simultaneously closed their account by withdrawing cash, a bank run would ensue, since the combined total of deposits usually exceeds the cash the bank keeps on hand.

References

  1. Mike Laverick. "Thin provisioning myth-busters: The benefits of thin virtual disks". "Since the days of VMware ESX 3, many IT folks have been wary of thin virtual disks..."
  2. "Iceberg finally thaws out". Computerworld. May 2, 1994.
  3. Jon William Toigo. "Thin Is In -- Or Is It?". "It was first offered by StorageTek, prior to its acquisition by Sun Microsystems, in its Iceberg (mainframe) and Shared Virtual Array (SVA) (open systems) arrays."