by lunarg on September 9th 2019, at 16:54

On VCSA, the database is stored on a separate disk. It can happen that this disk runs out of room, causing vCenter to no longer function properly. One way to resolve this is by running a database cleanup as described in KB 2110031. However, if this is not possible, or you don't want to clear out the data, you can also resize the disk.

For this to work, you'll need root access and access to the bash shell, either on the console or through SSH.
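
If logging in as root drops you into the appliance shell (appliancesh) rather than bash, you can switch to bash first. On 6.5 and 6.7 this is typically done as follows:

shell.set --enabled true
shell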

Before resizing, identify the physical disk to be resized. For VCSA 6.5 and 6.7, this should normally be Disk 8 (device node in Linux = /dev/sdh), but your setup may vary, so it's best to double-check.

In VCSA 6.5 and 6.7, the database is located here:

  • Mount point = /storage/seat
  • Device node (LVM-node) = /dev/mapper/seat_vg-seat
  • LV-name = seat
  • VG-name = seat_vg
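
You can verify these values on your own appliance before making any changes. These are just standard checks, assuming the 6.5/6.7 layout listed above:

df -h /storage/seat
lvdisplay /dev/seat_vg/seat

The first command shows the current size and usage of the mount point; the second confirms the LV name and its volume group.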

By querying LVM, you can figure out on which "physical" disk the volume group seat_vg is located:

pvdisplay

In the output, look for the entry with VG Name = seat_vg. The device node listed under PV Name tells you the disk. The last letter of the device node indicates which disk it is: /dev/sda for the first disk, /dev/sdb for the second, and so on. This does not always correspond to the disk number in VMware itself, but in this case it does: our device node was /dev/sdh, which is Disk 8 in the list of disks in the VM's properties. Once the correct disk has been established, we can continue with the resize process.
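
If you prefer a shorter output, you can filter for the volume group directly. This is merely a convenience and shows the same information as pvdisplay above:

pvs -o pv_name,vg_name | grep seat_vg

The line containing seat_vg immediately shows the backing device node.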

Next up, resize the disk in VMware (log directly onto the ESXi host running VCSA and edit the VM's disk properties). The disk will then be resized, but the OS will still use the old volume size, because the services are still running and the volume is still mounted.

If you're worried about making mistakes, now is a good time to create a snapshot of the appliance. Note that you couldn't do this earlier: resizing the disk is only possible when no snapshots are present.

First, stop all services and unmount the volume:

service-control --stop --all
umount /storage/seat

If the umount doesn't show any output, then the volume was successfully unmounted, and you can proceed with the resize.
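
If you want to be absolutely sure the volume is no longer mounted, you can check explicitly; the grep should return nothing:

mount | grep /storage/seat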

Next, do a test run to see whether the physical volume can be resized:

pvresize -v -t /dev/sdh

Because LVM is used and the physical volume was created on the entire disk rather than on a partition, the new size will be picked up almost immediately. If pvresize complains that nothing was resized, run the following to check whether the correct disk was actually grown (note: replace the device node if yours is not sdh):

fdisk -l | grep sdh

It should show you the new size of the disk.
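
On a disk grown to 30 GB, the output will look roughly like this (the exact byte and sector counts depend on your fdisk version and disk size):

Disk /dev/sdh: 30 GiB, 32212254720 bytes, 62914560 sectors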

If pvresize reports that it can resize the physical volume, remove the -t and re-run the command:

pvresize -v /dev/sdh

Note that by not specifying a new size, the physical volume is resized to the maximum size of the disk.
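
At this point you can check how much free space the volume group has gained. This is just a sanity check; the free extent count shown here is the number you will need for the volume resize further down:

vgdisplay seat_vg

Look for the "Free PE / Size" line in the output.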

Once that has succeeded, we are still left with resizing the logical volume itself. Perform another test resize to see how far we can extend. For a volume resize, we do need to specify the new size (or the number of extents to add):

lvresize -L 30G -r -v -t /dev/seat_vg/seat

In this case, we had resized our disk to 30 GB and now test whether we can grow the volume to 30 GB. More than likely, you will get an error here stating that there are not enough free extents available. This is to be expected: LVM overhead reduces the usable space to a little less than the full disk size. Fortunately, the error message tells you how many free extents are actually available, which allows us to resize the volume to exactly that number of extents:

For example, if the command said you only have 2560 extents left (which is the case when resizing from 10 GB to 30 GB), type the following to resize to the maximum size:

lvresize -l +2560 -r -v /dev/seat_vg/seat

Note that we now use -l (lowercase) because we are specifying extents rather than a size. Also note the plus sign, which means: add this many extents. Leaving out the plus sign would require you to specify the absolute number of extents. We also specified the -r parameter, telling lvresize to resize the file system as well (so we don't have to do it ourselves after resizing the volume).
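
As an aside: if you'd rather skip the extent arithmetic altogether, lvresize also accepts a percentage of the free space in the volume group. The following is an alternative to the command above, not an additional step:

lvresize -l +100%FREE -r -v /dev/seat_vg/seat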

Once the resize process is completed, remount the volume:

mount /dev/seat_vg/seat

Check with df -h, and you should see /storage/seat with its new size and much more free space.
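
The relevant line of the df output should look something like this (sizes are illustrative for a 30 GB disk):

/dev/mapper/seat_vg-seat   30G  4.2G   24G  15% /storage/seat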

Finally, start all services again. You don't need to reboot the appliance.

service-control --start --all

After all services have been started, test whether you can log in again. In the Appliance Management interface (VAMI), on the Summary page, everything should be "green" again.
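
If you'd rather verify from the shell first, the same service-control tool can also report service status (the output format differs slightly between 6.5 and 6.7):

service-control --status --all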

If everything is working, and you've created a snapshot, you can remove it now.
