The XFS file system

Reference: http://blog.chinaunix.net/uid-522675-id-4665059.html (XFS file system usage summary)

 

1.3 Commonly used XFS commands
xfs_admin: adjust various parameters of an XFS file system
xfs_copy: copy the contents of an XFS file system to one or more targets (in parallel)
xfs_db: debug or examine an XFS file system (e.g. check fragmentation)
xfs_check: check the integrity of an XFS file system
xfs_bmap: print the block mapping of a file
xfs_repair: attempt to repair a damaged XFS file system
xfs_fsr: defragment (file system reorganizer)
xfs_quota: manage disk quotas on an XFS file system
xfs_metadump: dump the metadata of an XFS file system to a file
xfs_mdrestore: restore metadata from a file onto an XFS file system
xfs_growfs: resize an XFS file system (grow only)
xfs_freeze: suspend (-f) and resume (-u) an XFS file system (see the example after this list)
xfs_logprint: print the log of an XFS file system
xfs_mkfile: create a pre-allocated file on an XFS file system
xfs_info: display detailed file system information
xfs_ncheck: generate pathnames from i-numbers for XFS
xfs_rtcp: XFS real-time copy command
xfs_io: debug the XFS I/O path
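
For example, xfs_freeze is often used to quiesce a file system before taking a storage snapshot (the mount point /data below is only an illustration):
# xfs_freeze -f /data     suspend all writes to the file system
  ... take the snapshot here ...
# xfs_freeze -u /data     resume normal operation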

2.2 Calculating block alignment
We want to use mysql on /dev/sda3, but how can we ensure that it is aligned with the RAID stripes?  It takes a small amount of math:

    Start with your RAID stripe size.  Let's use 64k, which is a common default.  In this case 64K = 2^16 = 65536 bytes.
    Get your sector size from fdisk.  In this case 512 bytes.
    Calculate how many sectors fit in a RAID stripe: 65536 / 512 = 128 sectors per stripe.
    Get the start boundary of our mysql partition from fdisk: 27344896.
    See if the start boundary of our mysql partition falls on a stripe boundary by dividing the start sector of the partition by the sectors per stripe: 27344896 / 128 = 213632.  This is a whole number, so we are good.  If it had a remainder, then our partition would not start on a RAID stripe boundary.  (This check is reproduced in the shell right after this list.)
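
The same check can be reproduced with shell arithmetic (values taken from the example above):
# echo $((65536 / 512))
128
# echo $((27344896 % 128))
0
A remainder of 0 means the partition starts on a RAID stripe boundary.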
    
Create the Filesystem

XFS requires a little massaging (or a lot).  For a standard server, it’s fairly simple.  We need to know two things:

    RAID stripe size
    Number of unique, utilized disks in the RAID.  This turns out to be the same as the size formulas I gave above:
        RAID 1+0: a set of mirrored drives, so the number here is num drives / 2.
        RAID 5: striped drives plus one full drive of parity, so the number here is num drives – 1.
In our case, it is RAID 1+0 with a 64k stripe across 8 drives.  Since each of those drives has a mirror, there are really 4 sets of unique drives striped over the top.  Using these numbers, we set the 'su' and 'sw' options of mkfs.xfs to those two values respectively (su = stripe unit = 64k, sw = stripe width = 4 data disks).
 
2.3 Formatting the file system
Putting the example above together, the command to run is: mkfs.xfs -d su=64k,sw=4 /dev/sda3
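If you prefer to give the geometry in 512-byte sectors instead, mkfs.xfs also accepts sunit/swidth; for this layout the equivalent should be sunit=128 (64k / 512) and swidth=512 (128 * 4 data disks):
# mkfs.xfs -d sunit=128,swidth=512 /dev/sda3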

3. Creating an XFS file system
3.1 Default method
#mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1 isize=256    agcount=18, agsize=1048576 blks
data     =                       bsize=4096   blocks=17921788, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=2187, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

3.2 Specifying the block size and internal log size

# mkfs.xfs -b size=1k -l size=10m /dev/sdc1
meta-data=/dev/sdc1 isize=256    agcount=18, agsize=4194304 blks
data     =                       bsize=1024   blocks=71687152, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal log           bsize=1024   blocks=10240, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
3.3 Using a logical volume as the external log device
# mkfs.xfs -l logdev=/dev/sdh,size=65536b /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=76433916 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=305735663, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =/dev/sdh               bsize=4096   blocks=65536, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
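
Note that a file system created with an external log also has to be mounted with the log device specified, for example:
# mount -o logdev=/dev/sdh /dev/sdc1 /mnt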

3.4 Directory block size

# mkfs.xfs -b size=2k -n size=4k /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=152867832 blks
         =                       sectsz=512   attr=2
data     =                       bsize=2048   blocks=611471327, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=2048   blocks=298569, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3.5 Growing the file system
Growing the file system leaves the existing files untouched; the added space simply becomes available as additional file storage.
XVM supports growing XFS file systems.
# xfs_growfs /mnt
meta-data=/mnt                   isize=256    agcount=30, agsize=262144 blks
data     =                       bsize=4096   blocks=7680000, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=1200, version=1
         =                       sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 7680000 to 17921788
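
xfs_growfs operates on the mount point of a mounted file system and by default grows the data section to the maximum the underlying device allows; a specific size in file system blocks can be requested with -D, for example:
# xfs_growfs -D 17921788 /mnt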

4. File system maintenance
4.1 Defragmentation
Inspect a file's extent layout: xfs_bmap -v file.tar.bz2
Check file system fragmentation: xfs_db -c frag -r /dev/sda1
Defragment: xfs_fsr /dev/sda1
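
xfs_fsr can also be pointed at a single file, or run with no arguments to reorganize all mounted XFS file systems; for example (the file name below is only a placeholder):
# xfs_fsr -v /dev/sda1
# xfs_fsr -v /some/large/file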


Be careful to distinguish between the mount point and the device:

With a mount point:
[root@my ~]# xfs_info /root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=3110656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=12442624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=6075, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

With the device name (the output below is lengthy):
[root@my ~]# xfs_logprint /dev/mapper/centos-root|more

[root@my ~]# xfs_bmap /var/log/messages
/var/log/messages:
        0: [0..119]: 6304..6423
        1: [120..127]: 6440..6447
        2: [128..135]: 6464..6471
[root@my ~]# xfs_bmap /var/log/secure
/var/log/secure:
        0: [0..7]: 6424..6431
        1: [8..15]: 6456..6463
        2: [16..23]: 6592..6599
[root@my ~]# xfs_bmap -v /var/log/messages
/var/log/messages:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL
   0: [0..119]:        6304..6423        0 (6304..6423)       120
   1: [120..127]:      6440..6447        0 (6440..6447)         8
   2: [128..135]:      6464..6471        0 (6464..6471)         8


[root@my ~]# xfs_db -c frag -r /dev/xvda1
actual 326, ideal 324, fragmentation factor 0.61%

[root@my ~]# xfs_db -c frag -r /dev/xvda2
xfs_db: /dev/xvda2 is not a valid XFS filesystem (unexpected SB magic number 0x00000000)
Use -F to force a read attempt.
This is because /dev/xvda2 is an LVM physical volume (PV) and does not itself contain a file system.
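
A quick way to confirm what a block device actually holds is blkid, which should report TYPE="LVM2_member" for a PV:
# blkid /dev/xvda2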

[root@my ~]# xfs_db -c frag -r /dev/mapper/centos-root
actual 20226, ideal 20092, fragmentation factor 0.66%
[root@my ~]# xfs_db -c frag -r /dev/centos/root
actual 20239, ideal 20103, fragmentation factor 0.67%
[root@my ~]# xfs_db -c frag -r /dev/dm-0
actual 20239, ideal 20103, fragmentation factor 0.67%

