Discussion:
xfs_growfs too quick
Woozy Song
2024-04-30 10:58:22 UTC
Trying to install Alma Linux with 2 hard drives, the Ass Hat installer
is a PITA for custom partitioning. I wanted home on the larger drive and
the rest on the other drive. But on logging in after the install, I found
home was only 500 GB of the 2 TB drive. It was xfs and, to my surprise, I
saw that xfs_growfs can run on a mounted filesystem. So I did Ctrl-Alt-F3
for a text terminal, then sudo xfs_growfs /dev/sda1
It only took a second, so apparently it only changed partition table limits.
Shouldn't it format the rest of the drive?
Paul
2024-04-30 12:43:14 UTC
Post by Woozy Song
Trying to install Alma Linux with 2 hard drives, the Ass Hat installer is a PITA for custom partitioning. I wanted home on the larger drive and the rest on the other drive. But on logging in after the install, I found home was only 500 GB of the 2 TB drive. It was xfs and, to my surprise, I saw that xfs_growfs can run on a mounted filesystem. So I did Ctrl-Alt-F3 for a text terminal, then sudo xfs_growfs /dev/sda1
It only took a second, so apparently it only changed partition table limits. Shouldn't it format the rest of the drive?
While the XFS partition was mounted...

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 40962047 40960000 19.5G 83 Linux <=== Using fdisk, I deleted the (XFS) partition.

Command (m for help): n <=== Using fdisk, I created a bigger partition definition.
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-104857599, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-104857599, default 104857599): 84857599

Created a new partition 1 of type 'Linux' and of size 40.5 GiB.
Partition #1 contains a xfs signature.

Do you want to remove the signature? [Y]es/[N]o: n <=== The signature should remain of course.

Command (m for help): p

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 84857599 84855552 40.5G 83 Linux <=== You can see my space is now bigger

Command (m for help): w
Command (m for help): q

*******

***@CASEMINT:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 20469760 175784 20293976 1% /media/bullwinkle/TESTXFS <=== original size (on top of 40.5G declaration)

***@CASEMINT:~$ sudo xfs_growfs -d /media/bullwinkle/TESTXFS <=== NOW, do the growfs
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=1280000 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=5120000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 5120000 to 10606944 <=== growfs, verified

***@CASEMINT:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 42417536 329140 42088396 1% /media/bullwinkle/TESTXFS <=== I can see I have extra space

*******

You should really make a backup before you do this.

And "gparted" is likely a safer way to do it.

I don't like the idea of doing this with live data on the partition.
My "fdisk sequence" is just a bad idea, OK? I only did it to make a point.
Doing it hot like that... is bad karma.
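
For reference, roughly the same resize can be done with parted's resizepart
instead of deleting and re-creating the partition in fdisk. This is only a
sketch, using the example device and mount point from above; parted will
still prompt for confirmation because the partition is mounted:

sudo parted /dev/sdb resizepart 1 100%         # grow partition 1 to the end of the disk
sudo xfs_growfs -d /media/bullwinkle/TESTXFS   # then grow the mounted XFS into the new space

The same backup caveat applies.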

Paul
J.O. Aho
2024-04-30 13:33:36 UTC
Post by Woozy Song
Trying to install Alma Linux with 2 hard drives, the Ass Hat installer
is a PITA for custom partitioning. I wanted home on the larger drive and
the rest on the other drive. But on logging in after the install, I found
home was only 500 GB of the 2 TB drive. It was xfs and, to my surprise, I
saw that xfs_growfs can run on a mounted filesystem. So I did Ctrl-Alt-F3
for a text terminal, then sudo xfs_growfs /dev/sda1
It only took a second, so apparently it only changed partition table limits.
Shouldn't it format the rest of the drive?
When adding space to a file system, no "format" is done on the new blocks;
they are just marked as empty internally, regardless of whatever data they
may have held before.

xfs_growfs will not change partition data, so if your 500GB /home was in
a partition that was itself just 500GB, then running xfs_growfs will not
do anything, as there is nothing to do. If the partition was larger than
the file system, then the command you ran would add the rest of the
partition to the file system.

To see how your partitions look, you can always run: sudo fdisk -l
/dev/sda
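
For example, assuming /home is the XFS filesystem sitting on /dev/sda1
(adjust the names to your layout), you can compare the partition size with
what the filesystem currently reports before growing anything:

sudo fdisk -l /dev/sda      # size of the partition(s)
df -h /home                 # size the mounted filesystem has now
sudo xfs_growfs -n /home    # -n only prints the XFS geometry, changes nothing

If the two sizes already match, growing is a no-op.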
--
//Aho
Carlos E.R.
2024-05-02 05:52:11 UTC
Post by J.O. Aho
Post by Woozy Song
Trying to install Alma Linux with 2 hard drives, the Ass Hat installer
is a PITA for custom partitioning. I wanted home on the larger drive and
the rest on the other drive. But on logging in after the install, I found
home was only 500 GB of the 2 TB drive. It was xfs and, to my surprise, I
saw that xfs_growfs can run on a mounted filesystem. So I did Ctrl-Alt-F3
for a text terminal, then sudo xfs_growfs /dev/sda1
It only took a second, so apparently it only changed partition table
limits. Shouldn't it format the rest of the drive?
When adding space to a file system, no "format" is done on the new blocks;
they are just marked as empty internally, regardless of whatever data they
may have held before.
xfs_growfs will not change partition data, so if your 500GB /home was in
a partition that was itself just 500GB, then running xfs_growfs will not
do anything, as there is nothing to do. If the partition was larger than
the file system, then the command you ran would add the rest of the
partition to the file system.
It has to reserve more space for the metadata structures, but most of
them are created dynamically when needed.

There are some disadvantages to a grown xfs filesystem compared to one
created at the large size from the start. I don't remember exactly which,
but some of the structures do not cope very well with being grown to a
much larger size. I read about it on the xfs mailing list.
Post by J.O. Aho
To see how your partitions look, you can always run: sudo fdisk -l
/dev/sda
--
Cheers, Carlos.
J.O. Aho
2024-05-02 09:37:43 UTC
Post by Carlos E.R.
Post by J.O. Aho
xfs_growfs will not change partition data, so if your 500GB /home was
in a partition that was itself just 500GB, then running xfs_growfs will
not do anything, as there is nothing to do. If the partition was larger
than the file system, then the command you ran would add the rest of
the partition to the file system.
There are some disadvantages to a grown xfs filesystem compared to one
created at the large size from the start. I don't remember exactly which,
but some of the structures do not cope very well with being grown to a
much larger size. I read about it on the xfs mailing list.
I doubt you will see this when growing from 500G to 2T; I could see issues
when you go from 1T to 1P. But don't trust what I say too much, as I'm
far from an expert on xfs and have hardly used it myself; I was more into
jfs in the old days, as xfs back then was a big RAM hog and was easily
corrupted by abrupt shutdowns.
--
//Aho
Carlos E.R.
2024-05-02 18:26:31 UTC
Post by J.O. Aho
Post by Carlos E.R.
Post by J.O. Aho
xfs_growfs will not change partition data, so if your 500GB /home was
in a partition that was itself just 500GB, then running xfs_growfs will
not do anything, as there is nothing to do. If the partition was
larger than the file system, then the command you ran would add the
rest of the partition to the file system.
There are some disadvantages to a grown xfs filesystem compared to one
created at the large size from the start. I don't remember exactly which,
but some of the structures do not cope very well with being grown to a
much larger size. I read about it on the xfs mailing list.
I doubt you will see this when growing from 500G to 2T; I could see issues
when you go from 1T to 1P. But don't trust what I say too much, as I'm
far from an expert on xfs and have hardly used it myself; I was more into
jfs in the old days, as xfs back then was a big RAM hog and was easily
corrupted by abrupt shutdowns.
I found a reference:

The problem was seen especially when the file system
was small initially and later grown to a larger size
using xfs_growfs. The log size does not grow when the
FS is grown. In these cases, we are stuck with the
same log size calculated for the smaller file system
size (which was 10MB, the earlier default value).


On a new filesystem, the new default log size is 64 MB.
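
For what it's worth, the current log size of a mounted filesystem can be
read off xfs_info; using the mount point from Paul's example above:

sudo xfs_info /media/bullwinkle/TESTXFS | grep '^log'
# e.g. "log =internal log  bsize=4096  blocks=2560" -> 2560 x 4096 = 10 MiB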

I don't know where the archive is for
X-Mailing-List: linux-***@vger.kernel.org
to point you to that thread.
--
Cheers, Carlos.
J.O. Aho
2024-05-03 05:39:28 UTC
Post by J.O. Aho
Post by Carlos E.R.
Post by J.O. Aho
xfs_growfs will not change partition data, so if your 500GB /home
was in a partition that was itself just 500GB, then running xfs_growfs
will not do anything, as there is nothing to do. If the partition was
larger than the file system, then the command you ran would add the
rest of the partition to the file system.
Post by Carlos E.R.
There are some disadvantages to a grown xfs filesystem compared to one
created at the large size from the start. I don't remember exactly which,
but some of the structures do not cope very well with being grown to a
much larger size. I read about it on the xfs mailing list.
I doubt you will see this when growing from 500G to 2T; I could see
issues when you go from 1T to 1P. But don't trust what I say too much,
as I'm far from an expert on xfs and have hardly used it myself; I was
more into jfs in the old days, as xfs back then was a big RAM hog and
was easily corrupted by abrupt shutdowns.
   The problem was seen especially when the file system
   was small initially and later grown to a larger size
   using xfs_growfs. The log size does not grow when the
   FS is grown. In these cases, we are stuck with the
   same log size calculated for the smaller file system
   size (which was 10MB, the earlier default value).
Here is the whole original post and the reply:
https://lore.kernel.org/linux-xfs/***@dread.disaster.area/

Not much there, but the commit mentioned:
https://kernel.googlesource.com/pub/scm/fs/xfs/xfsprogs-dev/+/cdfa467edd2d1863de067680b0a3ec4458e5ff4a
On a new filesystem, the new default log size is 64 MB.
Seems the issue is mainly on file systems smaller than 250G, and the
increase of the minimum size was done in 2022, so I guess the OP is fairly
safe from the speed degradation unless he installed an EOL version of Linux.

Noticed that xfs_admin has the -j option, but it applies only to quite
old xfs installs which use version 1 of the log; the option would
convert the log to version 2 and increase the buffer size, but this
wouldn't apply to the OP.
--
//Aho