Solaris zpool creation walkthrough

Ben Ko · 2013. 1. 16. 16:09

* Setting the prompt variable: /etc/profile

export PS1="[ \u@\h \w \\$ ] "

* sys-unconfig: resets the system configuration (http://blog.naver.com/tangamjaelt?Redirect=Log&logNo=39652807)

* showrev -p: shows installed patch information
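
A quick way to verify the prompt setting (an aside not in the original post; \u, \h and \w are bash prompt escapes, so this assumes bash is the login shell):

. /etc/profile        # re-read the profile in the current shell
echo "$PS1"           # prints the raw string: [ \u@\h \w \$ ]

With bash this renders exactly like the prompts in the transcripts below, e.g. [ root@new-file-02 / # ].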

 

[750G*3 raidz + 750G*1 hot spare configuration]

1. First, partition one of the disks.
format /dev/rdsk/c1t2d0
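
The format session itself is interactive and the post doesn't show it; roughly, the steps inside format are these (a sketch from memory; menu wording varies by release, and the fdisk step applies to x86 only):

format> fdisk          # accept the default full-disk Solaris partition (x86)
format> partition      # enter the slice menu
partition> 0           # edit slice 0: tag, flags, starting cylinder, size
partition> print       # review the slice table
partition> label       # write the new label to disk
partition> quit
format> quit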

2. Initialize the other disks. (fdisk -B creates a single Solaris partition spanning the entire disk, with no prompting.)

[ root@new-file-02 / # ] fdisk -B /dev/rdsk/c1t3d0
[ root@new-file-02 / # ] fdisk -B /dev/rdsk/c1t4d0
[ root@new-file-02 / # ] fdisk -B /dev/rdsk/c1t5d0

3. Copy the partitioned disk's VTOC to the other disks so their labels match.

[ root@new-file-02 / # ] prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk.  The full disk capacity is 1465031610 sectors.
fmthard: Partition 2 specified as 1465095870 sectors starting at 0
        does not fit. The full disk contains 1465031610 sectors.
fmthard:  New volume table of contents now in place.

[ root@new-file-02 / # ] prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t4d0s2
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk.  The full disk capacity is 1465031610 sectors.
fmthard: Partition 2 specified as 1465095870 sectors starting at 0
        does not fit. The full disk contains 1465031610 sectors.
fmthard:  New volume table of contents now in place.

[ root@new-file-02 / # ] prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t5d0s2
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk.  The full disk capacity is 1465031610 sectors.
fmthard: Partition 2 specified as 1465095870 sectors starting at 0
        does not fit. The full disk contains 1465031610 sectors.
fmthard:  New volume table of contents now in place.
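
The warnings above only mean that slice 2 (the backup slice covering the whole disk) in the source VTOC is 1465095870 sectors, slightly larger than the 1465031610 sectors the target disks actually have, which can happen even between nominally identical 750G drives; fmthard reports the mismatch but still writes the rest of the table, as the last line confirms. To double-check a copied label (a verification step not shown in the post):

prtvtoc /dev/rdsk/c1t3d0s2    # slice layout should now match c1t2d0s2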

4. Create the zpool (750G*3 raidz + 750G*1 hot spare; raidz1 spends one disk's worth of space on parity, so usable capacity is roughly 2 x 750G, matching the ~1.3T df reports below).

[ root@new-file-02 / # ] mkdir /FILE02

[ root@new-file-02 / # ] zpool create -f -m /FILE02 FILE02pool raidz c1t2d0 c1t3d0 c1t4d0 spare c1t5d0

[ root@new-file-02 / # ] zpool status
  pool: FILE02pool
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        FILE02pool  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
        spares
          c1t5d0    AVAIL  
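
For reference (not covered in the post): the spare does not take over by itself unless the pool's autoreplace property is enabled or the fault-management agent kicks it in; a manual failover of a raidz member to the spare would look roughly like this (c1t3d0 plays a hypothetical failed member):

zpool replace FILE02pool c1t3d0 c1t5d0   # swap the failed disk for the spare
zpool status FILE02pool                  # c1t5d0 should now show INUSE under spares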

5. Start a scrub. (zpool scrub kicks off a one-time integrity check, not a persistent setting; it completes on its own.)
[ root@new-file-02 / # ] zpool scrub FILE02pool
[ root@new-file-02 / # ] zpool scrub OSpool
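
To watch a running scrub or cancel one (standard zpool options, not shown in the post):

zpool status FILE02pool      # the scrub: line reports progress and estimated time left
zpool scrub -s FILE02pool    # -s stops an in-progress scrub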

6. Start the NFS server (network share setup).
[ root@new-file-02 / # ] zfs set sharenfs='anon=0,rw' FILE02pool
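
A quick client-side check (hypothetical client, mount point /mnt; the server name comes from the prompt above):

mount -F nfs new-file-02:/FILE02 /mnt    # from another Solaris box
mount -t nfs new-file-02:/FILE02 /mnt    # from a Linux client

Keep in mind that anon=0 maps root on the clients to uid 0 on the server, which is convenient on a closed LAN but risky anywhere wider.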

7. Verify the NFS configuration.
[ root@new-file-02 / # ] share
-@FILE02pool    /FILE02   anon=0,sec=sys,rw   ""

[ root@new-file-02 / # ] svcs nfs/server
STATE          STIME    FMRI
online         17:28:29 svc:/network/nfs/server:default

[ root@new-file-02 / # ] df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
OSpool/ROOT/snv_98     147G   5.9G   139G     5%    /
OSpool                 147G    36K   139G     1%    /OSpool
OSpool/export          147G    19K   139G     1%    /export
OSpool/export/home     147G    26K   139G     1%    /export/home
FILE02pool             1.3T    24K   1.3T     1%    /FILE02

8. To also trigger a scrub at reboot, register a startup script, then symlink it to /etc/rc.local for the convenience of those coming from Linux (^^;).
[ root@new-file-02 ~ # ] ls -al /etc/rc.local
lrwxrwxrwx   1 root     root          20 Oct 24 17:32 /etc/rc.local -> /etc/rc3.d/S90common
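
The post doesn't show the script body; a minimal /etc/rc3.d/S90common could look like this (a sketch, assuming only the two pools above):

#!/sbin/sh
# kick off background scrubs of both pools at boot
zpool scrub OSpool
zpool scrub FILE02pool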


[Replacing a failed disk]: offline the failed disk and detach it; the pool returns to ONLINE and keeps running on the remaining disk(s).

[ root@new-file-02 ~ # ] zpool status OSpool
  pool: OSpool
state: ONLINE
scrub: scrub completed after 0h5m with 0 errors on Fri Oct 24 17:43:34 2008
config:

        NAME          STATE     READ WRITE CKSUM
        OSpool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

[ root@new-file-02 ~ # ] zpool offline OSpool c1t1d0s0 ========> pretend the disk failed and take it offline

[ root@new-file-02 ~ # ] zpool status OSpool ========> the pool drops to DEGRADED but stays in service
  pool: OSpool
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
scrub: scrub completed after 0h5m with 0 errors on Fri Oct 24 17:43:34 2008
config:

        NAME          STATE     READ WRITE CKSUM
        OSpool        DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  OFFLINE      0     0     0

errors: No known data errors

[ root@new-file-02 ~ # ] zpool detach OSpool c1t1d0s0 ========> detach, then remove the disk
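
Before physically pulling the drive it is safer to also unconfigure it from the controller (an extra step not in the post; attachment-point names vary by system, so list them first):

cfgadm -al                               # find the attachment point for c1t1d0
cfgadm -c unconfigure c1::dsk/c1t1d0     # hypothetical attachment-point name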

[ root@new-file-02 ~ # ] zpool status OSpool
  pool: OSpool
state: ONLINE
scrub: scrub completed after 0h5m with 0 errors on Fri Oct 24 17:43:34 2008
config:

        NAME        STATE     READ WRITE CKSUM
        OSpool      ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

[ root@new-file-02 ~ # ] zpool attach -f OSpool c1t0d0s0 c1t1d0s0  ========> install the new disk and attach it

[ root@new-file-02 ~ # ] zpool status OSpool
  pool: OSpool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 4.29% done, 0h3m to go
config:

        NAME          STATE     READ WRITE CKSUM
        OSpool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
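
One caveat the post skips: OSpool is the root pool, so the newly attached mirror half also needs boot blocks before the machine could boot from it. Since fdisk is used above, this is an x86 box, where that is done with installgrub (the SPARC equivalent is shown for reference):

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# SPARC: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0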

* scrub is said to be ZFS's counterpart to fsck on Linux. It seems best to run it when a problem is suspected; it finishes on its own, and an in-progress scrub can be stopped with zpool scrub -s.
http://www.tech-recipes.com/rx/1405/zfs-how-to-fsck-or-check-filesystem-integrity-with-scrub/