.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Storage Overview
----------------
CloudStack defines two types of storage: primary and secondary. Primary
storage can be accessed by either iSCSI or NFS. Additionally, direct
attached storage may be used for primary storage. Secondary storage is
always accessed using NFS.
There is no ephemeral storage in CloudStack. All volumes on all nodes
are persistent.
Primary Storage
---------------
This section gives technical details about CloudStack
primary storage. For more information about the concepts behind primary storage,
see :ref:`about-primary-storage`. For information about how to install and configure
primary storage through the CloudStack UI, see the Installation Guide.
Best Practices for Primary Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- The speed of primary storage will impact guest performance. If
possible, choose smaller, higher RPM drives or SSDs for primary
storage.
- There are two ways CloudStack can leverage primary storage:
Static: This is CloudStack's traditional way of handling storage. In
this model, a preallocated amount of storage (ex. a volume from a
SAN) is given to CloudStack. CloudStack then permits many of its
volumes to be created on this storage (can be root and/or data
disks). If using this technique, ensure that nothing is stored on the
storage. Adding the storage to CloudStack will destroy any existing
data.
Dynamic: This is a newer way for CloudStack to manage storage. In
this model, a storage system (rather than a preallocated amount of
storage) is given to CloudStack. CloudStack, working in concert with
a storage plug-in, dynamically creates volumes on the storage system
and each volume on the storage system maps to a single CloudStack
volume. This is highly useful for features such as storage Quality of
Service. Currently this feature is supported for data disks (Disk
Offerings).
Runtime Behavior of Primary Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Root volumes are created automatically when an Instance is
created. Root volumes are deleted when the Instance is destroyed. Data volumes
can be created and dynamically attached to Instances. Data volumes are not
deleted when Instances are destroyed.
Administrators should monitor the capacity of primary storage devices
and add additional primary storage as needed. See :ref:`add-primary-storage`.
Administrators add primary storage to the system by creating a
CloudStack storage pool. Each storage pool is associated with a cluster
or a zone.
With regard to data disks, when a User executes a Disk Offering to
create a data disk, the information is initially written to the
CloudStack database only. Upon the first request to attach the data disk
to an Instance, CloudStack determines what storage to place the volume
on, and space is taken from that storage (either from pre-allocated
storage or from a storage system such as a SAN, depending on how the
primary storage was added to CloudStack).
Hypervisor Support for Primary Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following table shows storage options and parameters for different
hypervisors.
.. cssclass:: table-striped table-bordered table-hover

============================================== ================ ==================== =========================== ============================
Storage media \\ hypervisor                    VMware vSphere   Citrix XenServer     KVM                         Hyper-V
============================================== ================ ==================== =========================== ============================
**Format for Disks, Templates, and Snapshots** VMDK             VHD                  QCOW2                       VHD
                                                                                                                 Snapshots are not supported.
**iSCSI support**                              VMFS             Clustered LVM        Yes, via Shared Mountpoint  No
**Fiber Channel support**                      VMFS             Yes, via Existing SR Yes, via Shared Mountpoint  No
**NFS support**                                Yes              Yes                  Yes                         No
**Local storage support**                      Yes              Yes                  Yes                         Yes
**Storage over-provisioning**                  NFS and iSCSI    NFS                  NFS                         No
**SMB/CIFS**                                   No               No                   No                          Yes
**Ceph/RBD**                                   No               No                   Yes                         No
**PowerFlex/ScaleIO**                          No               No                   Yes                         No
============================================== ================ ==================== =========================== ============================

XenServer uses a clustered LVM system to store Instance images on iSCSI and
Fiber Channel volumes and does not support over-provisioning in the
hypervisor. The storage server itself, however, can support
thin-provisioning. As a result, CloudStack can still support storage
over-provisioning by running on thin-provisioned storage volumes.
KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file
system path local to each server in a given cluster. The path must be
the same across all Hosts in the cluster, for example /mnt/primary1.
This shared mountpoint is assumed to be a clustered filesystem such as
OCFS2. In this case CloudStack does not attempt to mount or unmount
the storage as is done with NFS. CloudStack requires that the
administrator ensure that the storage is available.
VMware vSphere supports the NFS, VMFS5, VMFS6, vSAN, vVols, and DatastoreCluster storage types.
For the DatastoreCluster storage type, any changes to the datastore cluster
at vCenter can be synchronised with CloudStack, such as the addition of a new
child datastore to the DatastoreCluster or the removal of an existing child datastore
from the DatastoreCluster. Synchronisation of a DatastoreCluster happens during
host connect or storage pool maintenance operations, or by calling the API
syncStoragePool.
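For example, a DatastoreCluster pool can be re-synchronised on demand with
CloudMonkey (``cmk``); this is a minimal sketch in which the pool UUID is a
placeholder and the assumption is that ``syncStoragePool`` takes the storage
pool's UUID as ``id``:

.. code:: bash

   cmk syncStoragePool id=<storage-pool-uuid>
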
With NFS storage, CloudStack manages the overprovisioning. In this case
the global configuration parameter storage.overprovisioning.factor
controls the degree of overprovisioning. This is independent of
hypervisor type.
Local storage is an option for primary storage for vSphere, XenServer,
and KVM. When the local disk option is enabled, a local disk storage
pool is automatically created on each host. To use local storage for the
System Virtual Machines (such as the Virtual Router), set
system.vm.use.local.storage to true in global configuration.
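As a minimal sketch, the setting can be changed with CloudMonkey via the
``updateConfiguration`` API:

.. code:: bash

   cmk updateConfiguration name=system.vm.use.local.storage value=true
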
CloudStack supports multiple primary storage pools in a Cluster. For
example, you could provision 2 NFS servers in primary storage. Or you
could provision 1 iSCSI LUN initially and then add a second iSCSI LUN
when the first approaches capacity.
Using Multiple Local Storages (KVM only)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since CloudStack 4.17.0.0, multiple local storage pools are supported on KVM hosts. They are configured by editing the agent.properties file.
Since CloudStack 4.19.0.0, it is also possible to add a local storage pool via the UI/API.
It is advised to use only one of these methods, not both.
Manually adding Local Storage Pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to use multiple local storage pools, you need to
#. Enable Local Storage For User VMs in the zone details (Edit the Zone, tick the "Enable local storage for user VMs")
#. Create local directories on KVM hosts
#. Edit /etc/cloudstack/agent/agent.properties
- Add extra directories to "local.storage.path".
- Add UUID of directories to "local.storage.uuid" (UUID can be generated by `uuidgen`).
"local.storage.uuid" must be present in the agent.properties file and should not be deleted.
.. parsed-literal::
local.storage.uuid=a43943c1-1759-4073-9db1-bc0ea19203aa,f5b1220b-4446-42dc-a872-cffd281f9f8c
local.storage.path=/var/lib/libvirt/images,/var/lib/libvirt/images2
#. Restart cloudstack-agent service
- Storage pools will be automatically created in libvirt by the CloudStack agent
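For example, on a systemd-based KVM host (the libvirt pool names will be the
UUIDs configured above):

.. code:: bash

   systemctl restart cloudstack-agent
   # the agent registers the new paths with libvirt; verify with:
   virsh pool-list --all
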
Adding a Local Storage Pool via UI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When using UI, ensure that the scope of the storage is set to "Host", and
ensure that the protocol is set to "Filesystem".
|adding-local-pool-via-ui.png|
Changing the Scope of the Primary Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Scope of a Primary Storage can be changed from Zone-wide to Cluster-wide
and vice versa when the Primary Storage is in Disabled state.
An action button is displayed in UI for each Primary Storage in Disabled state.
|change-storage-pool-scope-via-ui.png|
Scope change from Cluster to Zone will connect the Primary Storage to all Hosts
of the zone running the same hypervisor as set on the storage pool.
|change-storage-pool-scope-to-zone.png|
Scope change from Zone to Cluster will disconnect the Primary Storage from all
Hosts that were previously connected to the Primary Storage and are not a part
of the specified Cluster. If there are running VMs on such hosts that use this
Storage Pool, the storage cannot be disconnected from those hosts and the scope
change operation will error out.
The user VMs need to be stopped or migrated, and system VMs need to be destroyed,
while the Primary Storage is disabled, before attempting the operation again.
The listAffectedVmsForStorageScopeChange API can be used to get the list of all such VMs.
This might be a long-running operation depending on how many hosts in the zone
need to be connected to or disconnected from the storage pool.
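A rough sketch of this workflow with CloudMonkey is shown below; the parameter
names used here (``storageid``, ``clusterid``, ``id``, ``scope``) are assumptions
and should be checked against the API reference for your CloudStack version:

.. code:: bash

   # list VMs that would block the scope change
   cmk listAffectedVmsForStorageScopeChange storageid=<pool-uuid> clusterid=<cluster-uuid>
   # change the scope of the disabled pool from zone to cluster
   cmk changeStoragePoolScope id=<pool-uuid> scope=CLUSTER clusterid=<cluster-uuid>
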
This feature is tested and supported for the following hypervisor and storage
combinations:
- KVM with NFS
- KVM with CEPH/RBD
- VMware with NFS
It is possible to use this functionality with other configurations but some
manual intervention might be needed by the Administrator to make it work.
Storage Tags
~~~~~~~~~~~~
Storage may be "tagged". A tag is a text string attribute associated
with primary storage, a Disk Offering, or a Service Offering. Tags allow
administrators to provide additional information about the storage, for
example, that it is "SSD" or that it is "slow". Tags are not interpreted by
CloudStack. They are matched against tags placed on service and disk
offerings. CloudStack requires all tags on service and disk offerings to
exist on the primary storage before it allocates root or data disks on
the primary storage. Service and disk offering tags are used to identify
the requirements of the storage that those offerings have. For example,
a high-end service offering may require "fast" for its root disk
volume.
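For illustration, a disk offering that can only be allocated on primary storage
tagged ``SSD`` might be created as follows (the offering name and size are
arbitrary):

.. code:: bash

   cmk createDiskOffering name=ssd-data displaytext="SSD data disk" disksize=50 tags=SSD
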
The interaction between tags, allocation, and volume copying across
clusters and pods can be complex. To simplify the situation, use the
same set of tags on the primary storage for all clusters in a pod. Even
if different devices are used to present those tags, the set of exposed
tags can be the same.
Maintenance Mode for Primary Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Primary storage may be placed into maintenance mode. This is useful, for
example, to replace faulty RAM in a storage device. Maintenance mode for
a storage device will first stop any new guests from being provisioned
on the storage device. Then it will stop all guests that have any volume
on that storage device. When all such guests are stopped the storage
device is in maintenance mode and may be shut down. When the storage
device is online again you may cancel maintenance mode for the device.
The CloudStack will bring the device back online and attempt to start
all guests that were running at the time of the entry into maintenance
mode.
.. note::
HA-Enabled Instances will also be stopped when the primary storage is put into maintenance mode.
It is recommended to migrate any business-critical Instances to alternate primary storage before initiating maintenance.
Browsing files on a primary storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Files can be listed at a path on a primary storage using the `listStoragePoolObjects`
API command, or via the UI under the "Browser" tab for a primary storage. Depending
on the hypervisor, files and directories on a primary storage will get
associated with CloudStack resources such as snapshots, volumes,
templates, and ISOs.
.. image:: /_static/images/primary-storage-file-browser.png
:align: center
:alt: File browser for primary storage
.. note::
   Files or folders that are not associated with a CloudStack resource may still be in use by CloudStack.
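A minimal sketch with CloudMonkey follows; the assumption is that
``listStoragePoolObjects`` accepts the pool UUID as ``id`` and an optional ``path``:

.. code:: bash

   cmk listStoragePoolObjects id=<pool-uuid> path=/
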
Setting NFS Mount Options on the Storage Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NFS mount options can be added while creating an NFS storage pool for
KVM hosts. When the storage pool is mounted on the KVM hypervisor host,
these options will be used. Options currently tested and supported are
`vers` and `nconnect`.
Although it depends on the NFS server, commonly supported `vers` values
are `3` for NFSv3 and the minor versions `4.0`, `4.1` and `4.2` for NFSv4.
`nconnect` values can range from 1 to 16.
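For example, specifying the mount options ``vers=4.1,nconnect=4`` for a pool
results in the pool being mounted on the KVM host roughly as follows (the server
address, export path and mount point are illustrative):

.. code:: bash

   # equivalent of the mount performed by the host; verify with: mount | grep nfs
   mount -t nfs -o vers=4.1,nconnect=4 10.0.0.5:/export/primary /mnt/<pool-uuid>
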
Administrator can give the NFS mount options while adding a Primary Storage
from the Create Zone Wizard as well as the Add Primary Storage form.
|nfs-mount-options-create-zone-wizard.png|
|nfs-mount-options-add-primary-storage.png|
NFS mount options can be changed on a pre-existing Storage Pool in maintenance
mode using the Edit Primary Storage form. Running VMs using volumes in the
Storage Pool will either be stopped, or their volumes will be migrated to other
available pools, upon enabling maintenance mode.
The Storage Pool will be unmounted and mounted again on the KVM hosts using the
new options upon cancelling maintenance mode.
|nfs-mount-options-edit-primary-storage.png|
Mount failing due to an incorrect mount option
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If an invalid mount option is specified, Add Storage Pool will fail with the error
``An incorrect mount option was specified``. When updating a storage pool, the same
error is thrown when cancelling maintenance. The Administrator should set the correct
mount option and then cancel maintenance mode again.
Version Requirements
^^^^^^^^^^^^^^^^^^^^
This feature requires libvirt version 5.1.0 or above on the KVM hosts.
The `nconnect` mount option is available in Linux distributions with kernel 5.3 or higher.
A note on the `nconnect` option
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This option defines the count of TCP connections that the client makes
to the NFS server. The `nconnect` setting is applied only during the
first mount process to the particular NFS server for a given NFS version.
If the same client executes the mount command again to the same NFS server using
the same version, it will get the same `nconnect` value as the first mount.
All mount points to the same server for a given NFS version share the same number
of TCP connections. To change the `nconnect` setting, all such mount points
need to be unmounted and then mounted again with the new `nconnect` value.
So, from CloudStack’s perspective also, the first storage pool created from an
NFS server will set the `nconnect` setting on the hypervisor host corresponding
to the server. Specifying a different `nconnect` mount option while creating a
new storage pool from the same server will not change the `nconnect` setting on the host.
Similarly, if there is only one pre-existing storage pool from a given NFS server
mounted on the host, modifying the `nconnect` mount option via CloudStack will
change the `nconnect` setting on that host. If more than one storage pool from
the same server is mounted on a host, changing the `nconnect` mount option on one
of the storage pools via CloudStack will not have any effect. In that case, to
change the `nconnect` setting on the host, modify the `nconnect` mount option on
all such storage pools and then restart the host.
Secondary Storage
-----------------
This section gives concepts and technical details about CloudStack
secondary storage. For information about how to install and configure
secondary storage through the CloudStack UI, see :ref:`add-secondary-storage`.
Browsing files on a secondary storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Files can be listed at a path on a secondary storage using the `listImageStoreObjects`
API command, or via the UI under the "Browser" tab for a secondary storage. Depending
on the hypervisor, files and directories on a secondary storage will get
associated with CloudStack resources such as snapshots, volumes,
templates, and ISOs.
.. image:: /_static/images/secondary-storage-file-browser.png
:align: center
:alt: File browser for secondary storage
.. note::
   Files or folders that are not associated with a CloudStack resource may still be in use by CloudStack.
Migration of data between secondary storages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One may choose to completely migrate the data, or to migrate data such that the stores
are balanced, by choosing the appropriate Migration Policy. In order to distribute
the migration load, additional SSVMs are spawned if a file transfer takes
more than a defined threshold. The following are the global settings one may
want to review before proceeding with the migration task:
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Configuration Parameters | Description |
+==================================+========================================================================================================================================================================+
| image.store.imbalance.threshold | The storage imbalance threshold that is compared with the standard deviation percentage for a storage utilization metric. The value is a percentage in decimal format. |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| secstorage.max.migrate.sessions | The max number of concurrent copy command execution sessions that an SSVM can handle |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| max.ssvm.count | Number of additional SSVMs to handle migration of data objects concurrently |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| max.data.migration.wait.time | Maximum wait time for a data migration task before spawning a new SSVM |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
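A rough sketch of triggering such a migration with CloudMonkey is shown below;
the command and parameter names (``migrateSecondaryStorageData``, ``srcpool``,
``destpools``, ``migrationtype``) are assumptions and should be verified against
the API reference for your version:

.. code:: bash

   # migrate data away from one image store, balancing it across the destinations
   cmk migrateSecondaryStorageData srcpool=<source-store-uuid> destpools=<dest-store-uuid> migrationtype=Balance
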
Selective migration of templates and snapshots across secondary storages is also
possible using the `migrateResourceToAnotherSecondaryStorage` command, or via the UI
under the "Browser" tab for a secondary storage.
Read only
~~~~~~~~~
Secondary storages can also be set to read-only in order to cordon them off
from being used for storing any further Templates, Volumes and Snapshots.
.. code:: bash
cmk updateImageStore id=4440f406-b9b6-46f1-93a4-378a75cf15de readonly=true
Direct resources to a specific secondary storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, ACS allocates ISOs, volumes, snapshots, and templates to the secondary
storage in the zone with the most free space. In order to direct these resources to
a specific secondary storage, the user can use dynamic secondary storage selectors
with heuristic rules. This functionality uses user-defined JavaScript rules to
direct these resources to a specific secondary storage. When creating the heuristic
rule, the script will have access to some preset variables with information about
the secondary storages in the zone, about the resource the rule will be applied
upon, and about the account that triggered the allocation. These variables are
presented in the table below:
+-----------------------------------+-----------------------------------+
| Resource                          | Variables                         |
+===================================+===================================+
| Secondary Storage                 | ``id``                            |
|                                   +-----------------------------------+
|                                   | ``name``                          |
|                                   +-----------------------------------+
|                                   | ``usedDiskSize``                  |
|                                   +-----------------------------------+
|                                   | ``totalDiskSize``                 |
|                                   +-----------------------------------+
|                                   | ``protocol``                      |
+-----------------------------------+-----------------------------------+
| Snapshot                          | ``size``                          |
|                                   +-----------------------------------+
|                                   | ``hypervisorType``                |
|                                   +-----------------------------------+
|                                   | ``name``                          |
+-----------------------------------+-----------------------------------+
| ISO/Template                      | ``format``                        |
|                                   +-----------------------------------+
|                                   | ``hypervisorType``                |
|                                   +-----------------------------------+
|                                   | ``templateType``                  |
|                                   +-----------------------------------+
|                                   | ``name``                          |
+-----------------------------------+-----------------------------------+
| Volume                            | ``size``                          |
|                                   +-----------------------------------+
|                                   | ``format``                        |
+-----------------------------------+-----------------------------------+
| Account                           | ``id``                            |
|                                   +-----------------------------------+
|                                   | ``name``                          |
|                                   +-----------------------------------+
|                                   | ``domain.id``                     |
|                                   +-----------------------------------+
|                                   | ``domain.name``                   |
+-----------------------------------+-----------------------------------+
To utilize this functionality, the user needs to create a selector using the API
``createSecondaryStorageSelector``. Each selector specifies the type of resource
the heuristic rule will be verified against upon allocation (e.g. ISO, snapshot,
template or volume), and the zone the heuristic will be applied on. It is noteworthy
that there can be only one heuristic rule for a given resource type within a zone.
Another thing to consider is that the heuristic rule should return the ID of a
valid secondary storage. Below, some examples of heuristic rules are presented for
different scenarios:
1. Allocate a resource type to a specific secondary storage.
.. code:: javascript
function findStorageWithSpecificId(pool) {
return pool.id === '7432f961-c602-4e8e-8580-2496ffbbc45d';
}
secondaryStorages.filter(findStorageWithSpecificId)[0].id
2. Dedicate storage pools for a type of template format.
.. code:: javascript
function directToDedicatedQCOW2Pool(pool) {
return pool.id === '7432f961-c602-4e8e-8580-2496ffbbc45d';
}
function directToDedicatedVHDPool(pool) {
return pool.id === '1ea0109a-299d-4e37-8460-3e9823f9f25c';
}
if (template.format === 'QCOW2') {
secondaryStorages.filter(directToDedicatedQCOW2Pool)[0].id
} else if (template.format === 'VHD') {
secondaryStorages.filter(directToDedicatedVHDPool)[0].id
}
3. Direct snapshot of volumes with the KVM hypervisor to a specific secondary storage.
.. code:: javascript
if (snapshot.hypervisorType === 'KVM') {
'7432f961-c602-4e8e-8580-2496ffbbc45d';
}
4. Direct resources to a specific domain:
.. code:: javascript
if (account.domain.id == '52d83793-26de-11ec-8dcf-5254005dcdac') {
'1ea0109a-299d-4e37-8460-3e9823f9f25c'
} else if (account.domain.id == 'c1186146-5ceb-4901-94a1-dd1d24bd849d') {
'7432f961-c602-4e8e-8580-2496ffbbc45d'
}
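As a sketch, a selector carrying one of the rules above could be registered with
CloudMonkey; ``rule.js`` is a placeholder file containing the JavaScript rule, and
the parameter names (``zoneid``, ``type``, ``heuristicrule``) are assumptions to be
verified against the API reference:

.. code:: bash

   cmk createSecondaryStorageSelector name=kvm-snapshots description="Direct KVM snapshots" \
       zoneid=<zone-uuid> type=SNAPSHOT heuristicrule="$(cat rule.js)"
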
Working With Volumes
--------------------
A volume provides storage to a Guest Instance. The volume can provide for a
root disk or an additional data disk. CloudStack supports additional
volumes for Guest Instances.
Volumes are created for a specific hypervisor type. A volume that has
been attached to a guest using one hypervisor type (e.g. XenServer) may
not be attached to a guest that is using another hypervisor type, for
example vSphere or KVM. This is because the different hypervisors use
different disk image formats.
CloudStack defines a volume as a unit of storage available to a Guest
Instance. Volumes are either root disks or data disks. The root disk has "/"
in the file system and is usually the boot device. Data disks provide
for additional storage, for example: "/opt" or "D:". Every Guest Instance has
a root disk, and Instances can also optionally have a data disk. End Users can
mount multiple data disks to Guest Instances. Users choose data disks from the
disk offerings created by administrators. The User can create a Template
from a volume as well; this is the standard procedure for private
Template creation. Volumes are hypervisor-specific: a volume from one
hypervisor type may not be used on a guest of another hypervisor type.
.. note::
   CloudStack supports attaching up to:

   - 13 data disks on XenServer hypervisor versions 6.0 and above,
     and all versions of VMware.
   - 64 data disks on Hyper-V.
   - 6 data disks on other hypervisor types.
Creating a New Volume
~~~~~~~~~~~~~~~~~~~~~
You can add more data disk volumes to a Guest Instance at any time, up to the
limits of your storage capacity. Both CloudStack administrators and
Users can add volumes to Instances. When you create a new volume, it
is stored as an entity in CloudStack, but the actual storage resources
are not allocated on the physical storage device until you attach the
volume. This optimization allows CloudStack to provision the volume
nearest to the guest that will use it when the first attachment is made.
When creating a new volume from an existing ROOT Volume Snapshot,
it is required to explicitly define a Disk Offering (the UI will offer only Disk
Offerings whose disk size is equal to or bigger than the size of the Snapshot).
|volume-from-snap.png|
When creating a new volume from an existing DATA Volume Snapshot, the disk offering
associated with the Snapshots (inherited from the original volume) is assigned
to the new volume.
Using Local Storage for Data Volumes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can create data volumes on local storage (supported with XenServer,
KVM, and VMware). The data volume is placed on the same host as the
Instance to which it is attached. These local data volumes
can be attached to Instances, detached, re-attached, and deleted
just as with any other type of data volume.
Local storage is ideal for scenarios where persistence of data volumes
and HA is not required. Some of the benefits include reduced disk I/O
latency and cost reduction from using inexpensive local disks.
In order for local volumes to be used, the feature must be enabled for
the zone.
You can create a data disk offering for local storage. When a User
creates a new Instance, they can select this disk offering in order to cause
the data disk volume to be placed in local storage.
You cannot migrate an Instance that has a volume in local storage to a
different host, nor migrate the volume itself away to a different host.
If you want to put a host into maintenance mode, you must first stop any
Instances with local data volumes on that host.
To Create a New Volume
^^^^^^^^^^^^^^^^^^^^^^
#. Log in to the CloudStack UI as a User or admin.
#. In the left navigation bar, click Storage.
#. In Select View, choose Volumes.
#. To create a new volume, click Add Volume, provide the following
details, and click OK.
- Name. Give the volume a unique name so you can find it later.
- Availability Zone. Where do you want the storage to reside? This
should be close to the Instance that will use the volume.
- Disk Offering. Choose the characteristics of the storage.
The new volume appears in the list of volumes with the state
“Allocated.” The volume data is stored in CloudStack, but the volume
is not yet ready for use.
#. To start using the volume, continue to Attaching a Volume.
Uploading an Existing Volume to an Instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Existing data can be made accessible to an Instance. This is
called uploading a volume to the Instance. For example, this is useful to
upload data from a local file system and attach it to an Instance. Root
administrators, domain administrators, and end Users can all upload
existing volumes to Instances.
The upload is performed using HTTP. The uploaded volume is placed in the
zone's secondary storage.
You cannot upload a volume if the preconfigured volume limit has already
been reached. The default limit for the cloud is set in the global
configuration parameter max.account.volumes, but administrators can also
set per-domain limits that are different from the global default. See
Setting Usage Limits.
To upload a volume:
#. (Optional) Create an MD5 hash (checksum) of the disk image file that
you are going to upload. After uploading the data disk, CloudStack
will use this value to verify that no data corruption has occurred.
#. Log in to the CloudStack UI as an administrator or User
#. In the left navigation bar, click Storage.
#. Click Upload Volume.
#. Provide the following:
- Name and Description. Any desired name and a brief description
that can be shown in the UI.
- Availability Zone. Choose the zone where you want to store the
volume. Instances running on hosts in this zone can attach the volume.
- Format. Choose one of the following to indicate the disk image
format of the volume.
.. cssclass:: table-striped table-bordered table-hover
========== =================
Hypervisor Disk Image Format
========== =================
XenServer VHD
VMware OVA
KVM QCOW2
========== =================
- URL. The secure HTTP or HTTPS URL that CloudStack can use to
access your disk. The type of file at the URL must match the value
chosen in Format. For example, if Format is VHD, the URL might
look like the following:
``http://yourFileServerIP/userdata/myDataDisk.vhd``
- MD5 checksum. (Optional) Use the hash that you created in step 1.
#. Wait until the status of the volume shows that the upload is
complete. Click Instances - Volumes, find the name you specified in
step 5, and make sure the status is Uploaded.
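The same workflow can be scripted; a minimal sketch with ``md5sum`` and
CloudMonkey follows, assuming ``uploadVolume`` accepts ``name``, ``zoneid``,
``format``, ``url`` and ``checksum`` (the file name and URL are placeholders):

.. code:: bash

   # step 1: create the checksum of the disk image
   md5sum myDataDisk.vhd
   # upload the volume from an HTTP location reachable by the zone's SSVM
   cmk uploadVolume name=myDataDisk zoneid=<zone-uuid> format=VHD \
       url=http://yourFileServerIP/userdata/myDataDisk.vhd checksum=<md5-from-step-1>
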
Attaching a Volume
~~~~~~~~~~~~~~~~~~
You can attach a volume to a Guest Instance to provide extra disk storage.
Attach a volume when you first create a new volume, when you are moving
an existing volume from one Instance to another, or after you have migrated a
volume from one storage pool to another.
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation, click Storage.
#. In Select View, choose Volumes.
#. Click the volume name in the Volumes list, then click the Attach Disk
button |AttachDiskButton.png|
#. In the Instance popup, choose the Instance to which you want to attach the
volume. You will only see Instances to which you are allowed to
attach volumes; for example, a user will see only Instances created
by that user, but the administrator will have more choices.
#. When the volume has been attached, you should be able to see it by
clicking Instances, the Instance name, and View Volumes.
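Equivalently, a volume can be attached from the command line; a minimal sketch
with CloudMonkey (UUIDs are placeholders):

.. code:: bash

   cmk attachVolume id=<volume-uuid> virtualmachineid=<instance-uuid>
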
Detaching and Moving Volumes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
This procedure is different from moving volumes from one storage pool
to another as described in `“Instance Storage Migration”
<#vm-storage-migration>`_.
A volume can be detached from a Guest Instance and attached to another guest.
Both CloudStack administrators and users can detach volumes from Instances and
move them to other Instances.
If the two Instances are in different clusters, and the volume is large, it
may take several minutes for the volume to be moved to the new Instance.
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation bar, click Storage, and choose Volumes in
Select View. Alternatively, if you know which Instance the volume is
attached to, you can click Instances, click the Instance name, and click
View Volumes.
#. Click the name of the volume you want to detach, then click the
Detach Disk button. |DetachDiskButton.png|
#. To move the volume to another Instance, follow the steps in
`“Attaching a Volume” <#attaching-a-volume>`_.
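The detach-and-move workflow can also be performed via the API; a brief sketch
with CloudMonkey:

.. code:: bash

   # detach the volume from its current Instance
   cmk detachVolume id=<volume-uuid>
   # attach it to the target Instance
   cmk attachVolume id=<volume-uuid> virtualmachineid=<target-instance-uuid>
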
Instance Storage Migration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Supported in XenServer, KVM, and VMware.
.. note::
This procedure is different from moving disk volumes from one Instance to
another as described in `“Detaching and Moving Volumes”
<#detaching-and-moving-volumes>`_.
You can migrate an Instance’s root disk volume or any additional
data disk volume from one storage pool to another in the same zone.
You can use the storage migration feature to achieve some commonly
desired administration goals, such as balancing the load on storage
pools and increasing the reliability of Instances by moving them
away from any storage pool that is experiencing issues.
On XenServer and VMware, live migration of Instance storage is enabled through
CloudStack support for XenMotion and vMotion. Live storage migration
allows Instances to be moved from one host to another, where the Instances are not
located on storage shared between the two hosts. It provides the option
to live migrate an Instance’s disks along with the Instance itself. It is possible to
migrate an Instance from one XenServer resource pool / VMware cluster to
another, or to migrate an Instance whose disks are on local storage, or even to
migrate an Instance’s disks from one storage repository to another, all while
the Instance is running.
.. note::
Because of a limitation in VMware, live migration of storage for an
Instance is allowed only if the source and target storage pool are
accessible to the source host; that is, the host where the Instance is
running when the live migration operation is requested.
Migrating a Data Volume to a New Storage Pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are two situations when you might want to migrate a disk:
- Move the disk to new storage, but leave it attached to the same
running Instance.
- Detach the disk from its current Instance, move it to new storage, and
attach it to a new Instance.
Migrating Storage For a Running Instance
''''''''''''''''''''''''''''''''''''''''
(Supported on XenServer and VMware)
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation bar, click Instances, click the Instance name, and
click View Volumes.
#. Click the volume you want to migrate.
#. Detach the disk from the Instance. See `“Detaching and
Moving Volumes” <#detaching-and-moving-volumes>`_ but skip the “reattach”
step at the end. You will do that after migrating to new storage.
#. Click the Migrate Volume button |Migrateinstance.png| and choose the
destination from the dropdown list.
#. Watch for the volume status to change to Migrating, then back to
Ready.
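The same operation is available through the ``migrateVolume`` API; a minimal
sketch with CloudMonkey, where ``livemigrate=true`` is assumed to request
migration of an attached volume without stopping the Instance on hypervisors
that support it:

.. code:: bash

   cmk migrateVolume volumeid=<volume-uuid> storageid=<destination-pool-uuid> livemigrate=true
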
Migrating Storage and Attaching to a Different Instance
'''''''''''''''''''''''''''''''''''''''''''''''''''''''
#. Log in to the CloudStack UI as a user or admin.
#. Detach the disk from the Instance. See `“Detaching and
Moving Volumes” <#detaching-and-moving-volumes>`_ but skip the “reattach”
step at the end. You will do that after migrating to new storage.
#. Click the Migrate Volume button |Migrateinstance.png| and choose the
destination from the dropdown list.
#. Watch for the volume status to change to Migrating, then back to
Ready. You can find the volume by clicking Storage in the left
navigation bar. Make sure that Volumes is displayed at the top of the
window, in the Select View dropdown.
#. Attach the volume to any desired Instance running in the same cluster as
the new storage server. See `“Attaching a
Volume” <#attaching-a-volume>`_
Migrating an Instance Root Volume to a New Storage Pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(XenServer, VMware) You can live migrate an Instance's root disk from one
storage pool to another, without stopping the Instance first.
(KVM) When migrating the root disk volume, the Instance must first be stopped,
and users can not access the Instance. After migration is complete, the Instance can
be restarted.
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation bar, click Instances, and click the Instance name.
#. (KVM only) Stop the Instance.
#. Click the Migrate button |Migrateinstance.png| and choose the
destination from the dropdown list.
.. note::
If the Instance's storage has to be migrated along with the Instance, this will
be noted in the host list. CloudStack will take care of the storage
migration for you.
#. Watch for the volume status to change to Migrating, then back to
Running (or Stopped, in the case of KVM). This can take some time.
#. (KVM only) Restart the Instance.
.. note::
In case of KVM and PowerFlex/ScaleIO storage, live migration of
Instance's root disk is allowed from one PowerFlex/ScaleIO storage pool
to another, without stopping the Instance.
Finding Primary Storage for Migration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you click on migrate volume, CloudStack lists the available primary
storage. CloudStack uses its storage pool allocators to identify the primary
storages that are available and returns a list that is suitable for the selected
volume's migration.
The list may also include primary storages that are marked as
'Not suitable'. A primary storage is considered not suitable if:
- Its storage tags do not match those of the volume.
- It does not have enough capacity.
- It has reached its disable threshold.
- It is disabled.
- There is a mismatch in the type of storage, such as shared vs. local.
Resizing Volumes
~~~~~~~~~~~~~~~~
CloudStack provides the ability to resize data disks; CloudStack
controls volume size by using disk offerings. This provides CloudStack
administrators with the flexibility to choose how much space they want
to make available to the end users. Volumes within the disk offerings
with the same storage tag can be resized. For example, if you only want
to offer 10, 50, and 100 GB offerings, the allowed resize should stay
within those limits. That implies that if you define a 10 GB, a 50 GB and a
100 GB disk offering, a user can upgrade from 10 GB to 50 GB, or 50 GB
to 100 GB. If you create a custom-sized disk offering, then you have the
option to resize the volume by specifying a new, larger size.
Additionally, using the resizeVolume API, a data volume can be moved
from a static disk offering to a custom disk offering with the size
specified. This functionality allows those who might be billing by
certain volume sizes or disk offerings to stick to that model, while
providing the flexibility to migrate to whatever custom size necessary.
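A brief sketch of the API usage with CloudMonkey; the new ``size`` is given in
GB, and ``shrinkok=true`` is only needed when reducing the size:

.. code:: bash

   # grow a data volume to 100 GB
   cmk resizeVolume id=<volume-uuid> size=100
   # move the volume to a custom disk offering with an explicit size
   cmk resizeVolume id=<volume-uuid> diskofferingid=<custom-offering-uuid> size=100
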
This feature is supported on KVM, XenServer, and VMware hosts. However,
shrinking volumes is not supported on VMware hosts.
Before you try to resize a volume, consider the following:
- The Instances associated with the volume are stopped.
- The data disks associated with the volume are removed.
- When a volume is shrunk, the disk associated with it is simply
truncated, and doing so would put its content at risk of data loss.
Therefore, resize any partitions or file systems before you shrink a
data disk so that all the data is moved off from that disk.
- In Apache CloudStack 4.20 and before, resizing a volume will fail if
the current storage pool does not have enough capacity for the new volume size.
Since Apache CloudStack 4.21, such a resize is possible if the zone setting
volume.resize.allowed.beyond.allocation is set to true and the new volume size
does not cross the resize threshold (pool.storage.allocated.resize.capacity.disablethreshold) of the storage pool.
These two zone settings are configurable by the ROOT admin.
To resize a volume:
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation bar, click Storage.
#. In Select View, choose Volumes.
#. Select the volume name in the Volumes list, then click the Resize
Volume button |resize-volume-icon.png|
#. In the Resize Volume pop-up, choose desired characteristics for the
storage.
|resize-volume.png|
#. Specify a custom size.
#. Click Shrink OK to confirm that you are reducing the size of a
volume.
This parameter protects against inadvertent shrinking of a disk,
which might lead to the risk of data loss. You must sign off that
you know what you are doing.
#. Check the box if you wish to automatically migrate the volume to another storage pool, if required.
#. Click OK.
Root Volume size defined via Service Offering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a Service Offering is created with a root disk size, then resizing the Root volume is possible only by changing the Instance's service offering.
Service offering Root resizing constraints:
#. Users cannot deploy Instances with a custom root disk size when using such offerings.
#. Users cannot resize the Instance root disk size when using such offerings.
#. The Root Volume of such Instances can only be resized when changing to another Service Offering with a Root disk size equal to or larger than the current one.
#. Users can change the Instance offering to a service offering with a Root size of 0GB (default) and then customize the volume size.
The following table shows possible combinations of Service offering supported resizing based on the offering Root disk size:
+---+----------------------------+---------------------------+-------------------------------+
| # | Service Offering Root size | new Service Offering Root | Does support offering resize? |
+---+----------------------------+---------------------------+-------------------------------+
| 1 | 0GB (default)              | Any                       | YES                           |
+---+----------------------------+---------------------------+-------------------------------+
| 2 | 5GB                        | 5GB                       | YES                           |
+---+----------------------------+---------------------------+-------------------------------+
| 3 | 5GB                        | 10GB                      | YES                           |
+---+----------------------------+---------------------------+-------------------------------+
| 4 | 10GB                       | 5GB                       | NO                            |
+---+----------------------------+---------------------------+-------------------------------+
| 5 | Any                        | 0GB                       | YES                           |
+---+----------------------------+---------------------------+-------------------------------+
.. note::
   Shrinking the Root disk is not supported via the service offering resizing workflow. All the combinations above assume a transition to a Root disk with a size equal to or bigger than the original.
   A Service Offering with a Root size of 0GB does not change the disk size to zero; it indicates that the offering does not enforce a Root disk size.
Change disk offering for volume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Volume operations such as migrate volume and resize volume both accept a new disk offering to replace the existing disk offering of the volume.
Instead of using these APIs directly, the operation can be performed in the UI using Change Offering in the details view for the volume.
Upon changing the disk offering, the volume will be resized and/or migrated to a suitable storage pool, if required, according to the new disk offering.
The zone-level setting "match.storage.pool.tags.with.disk.offering" gives flexibility or control when choosing the new disk offering.
If this setting is true, then the new disk offering must have the same storage tags as the existing disk offering of the volume.
To change the disk offering of a volume:
#. Log in to the CloudStack UI as a user or admin.
#. In the left navigation bar, click Storage.
#. In Select View, choose Volumes.
#. Select the volume name in the Volumes list, then click the Change Offering for Volume button
#. In the Change Offering For Volume pop-up, choose desired disk offering for the
volume.
|change-offering-for-volume.png|
#. If you select Custom Disk, specify a custom size.