NetApp Concepts (for network engineers)

This blog is written from a network engineer's point of view. I'm going to try to relate NetApp concepts to the tasks/commands you would usually run on a new network device to understand what it's doing.

Hardware

Like Cisco devices, the NetApp filer has a concept of slots and ports. Slot numbering starts from zero, and a slot can contain either a storage adapter (called an HBA) or a network adapter (a line card or NIC).
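The port names encode the slot number: Ethernet ports are named e&lt;slot&gt;&lt;letter&gt; (e0a, e1b), while storage HBA ports are just &lt;slot&gt;&lt;letter&gt; (0a, 0b). A minimal Python sketch of that parsing (the helper name is my own, not a NetApp tool):

```python
import re

def parse_port(name):
    """Split a 7-Mode port name into (kind, slot, port).

    Ethernet ports look like 'e0a' or 'e0M'; storage HBA
    ports look like '0a' or '0b'.
    """
    m = re.fullmatch(r'e(\d+)([A-Za-z])', name)
    if m:
        return ('ethernet', int(m.group(1)), m.group(2))
    m = re.fullmatch(r'(\d+)([a-z])', name)
    if m:
        return ('storage', int(m.group(1)), m.group(2))
    raise ValueError(f'unrecognised port name: {name}')

print(parse_port('e0a'))   # ('ethernet', 0, 'a')
print(parse_port('e1b'))   # ('ethernet', 1, 'b')
print(parse_port('0a'))    # ('storage', 0, 'a')
```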

The NetApp Simulator below has no storage adapters and one quad-port Ethernet line card:

netapp01*> sysconfig
        NetApp Release 8.2.1 7-Mode: Fri Mar 21 14:48:58 PDT 2014
        System ID: 4082368508 (netapp01)
        System Serial Number: 4082368508 (netapp01)
        System Storage Configuration: Multi-Path
        System ACP Connectivity: NA
        slot 0: System Board
                Model Name:         SIMBOX
                Processors:         2
                Memory Size:        1599 MB
                Memory Attributes:  None
        slot 0: 10/100/1000 Ethernet Controller V
                e0a MAC Address:    00:0c:29:56:d2:4a (auto-1000t-fd-up)
                e0b MAC Address:    00:0c:29:56:d2:54 (auto-1000t-fd-up)
                e0c MAC Address:    00:0c:29:56:d2:5e (auto-1000t-fd-up)
                e0d MAC Address:    00:0c:29:56:d2:68 (auto-1000t-fd-up)

While a real NetApp filer shows:

netapp-a> sysconfig
        NetApp Release 8.1.1 7-Mode: Mon Jul 30 12:49:46 PDT 2012
        System ID: xxxxxxxx (netapp-a); partner ID: yyyyyyy (netapp-b)
        System Serial Number: zzzzzzz (netapp-a)
        System Rev: D0
        System Storage Configuration: Multi-Path HA
        System ACP Connectivity: Full Connectivity
        slot 0: System Board
                Processors:         4
                Processor type:     Intel(R) Xeon(R) CPU           C3528  @ 1.73GHz
                Memory Size:        6144 MB
                Memory Attributes:  Hoisting
                                    Normal ECC
                Controller:         A
        Service Processor           Status: Online
        slot 0: Internal 10/100 Ethernet Controller
                e0M MAC Address:    00:a0:98:38:26:13 (auto-100tx-fd-cfg_down)
                e0P MAC Address:    00:a0:98:38:26:12 (auto-100tx-fd-up)
        slot 0: Quad Gigabit Ethernet Controller 82580
                e0a MAC Address:    00:a0:98:38:26:0e (auto-1000t-fd-up)
                e0b MAC Address:    00:a0:98:38:26:0f (auto-1000t-fd-up)
                e0c MAC Address:    00:a0:98:38:26:10 (auto-1000t-fd-up)
                e0d MAC Address:    00:a0:98:38:26:11 (auto-1000t-fd-up)
        slot 0: Interconnect HBA:   Mellanox IB MT25204
        slot 0: SAS Host Adapter 0a
                72 Disks:            120629.8GB
                1 shelf with IOM3, 1 shelf with IOM6, 1 shelf with IOM6E
        slot 0: SAS Host Adapter 0b
                72 Disks:            120629.8GB
                1 shelf with IOM3, 1 shelf with IOM6, 1 shelf with IOM6E
        slot 0: Intel ICH USB EHCI Adapter u0a (0xdf101000)
                boot0   Micron Technology Real SSD eUSB 2GB, class 0/0, rev 2.00/11.10, addr 2 1936MB 512B/sect (4DF0022700247875)
        slot 1: Dual 10 Gigabit Ethernet Controller IX1-SFP+
                e1a MAC Address:    00:a0:98:37:2f:78 (auto-10g_twinax-fd-up)
                e1b MAC Address:    00:a0:98:37:2f:79 (auto-10g_twinax-fd-up)

This filer has two slots: slot 0 and slot 1. Slot 0 holds the onboard devices, including the network adapters (ports numbered e0x, plus the management ports e0M/e0P) and two SAS storage adapters (ports numbered 0a and 0b), while slot 1 holds a dual-port 10 Gigabit Ethernet card (ports e1a and e1b).

Network Interfaces

The network interface config can be viewed using the standard ifconfig / netstat commands. However, the ifgrp command shows the etherchannel (interface group) configuration for the interfaces.

netapp-a> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
lvif1: 2 links, transmit 'IP Load balancing', Ifgrp Type 'multi_mode' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e1b: state up, since 07Nov2014 08:28:44 (19+02:08:46)
                mediatype: auto-10g_twinax-fd-up
                flags: enabled
<SNIP>
        e1a: state up, since 18Jan2013 16:33:49 (676+18:03:41)
                mediatype: auto-10g_twinax-fd-up
                flags: enabled
<SNIP>
lvif0: 4 links, transmit 'IP Load balancing', Ifgrp Type 'lacp' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e0d: state up, since 08Sep2014 11:06:16 (78+22:31:14)
<SNIP>

There are three types of etherchannel:

  • single-mode – only one interface in the interface group is active; the others are on standby
  • static multimode – all links are bundled manually (in Cisco terms, channel-group mode on)
  • dynamic multimode – links are bundled using LACP (in Cisco terms, channel-group mode active)
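In 7-Mode, an interface group of each type is created with the ifgrp create command. As a sketch (the group names and member ports are illustrative; check the ifgrp man page for the exact syntax on your release):

```
netapp-a> ifgrp create single svif0 e0a e0b
netapp-a> ifgrp create multi mvif0 -b ip e0a e0b
netapp-a> ifgrp create lacp lvif0 -b ip e0a e0b e0c e0d
```

The -b ip option selects IP-based load balancing, which matches the 'IP Load balancing' transmit policy shown in the ifgrp status output above.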

Storage Interfaces

The storage adapter details can be viewed as follows:

netapp-a> storage show adapter -a
Slot:            0a
Description:     SAS Host Adapter 0a (PMC-Sierra PM8001 rev. C)
Firmware Rev:    01.11.00.00
Base WWN:        5:00a098:0012b0e:70
State:           Enabled
In Use:          Yes
Redundant:       Yes
Phy State:       [0] Enabled, 6.0Gb/s (10)
                 [1] Enabled, 6.0Gb/s (10)
                 [2] Enabled, 6.0Gb/s (10)
                 [3] Enabled, 6.0Gb/s (10)

Slot:            0b
Description:     SAS Host Adapter 0b (PMC-Sierra PM8001 rev. C)
Firmware Rev:    01.11.00.00
Base WWN:        5:00a098:0012b0e:74
State:           Enabled
In Use:          Yes
Redundant:       Yes
Phy State:       [0] Enabled, 3.0Gb/s (9)
                 [1] Enabled, 3.0Gb/s (9)
                 [2] Enabled, 3.0Gb/s (9)
                 [3] Enabled, 3.0Gb/s (9)

The storage adapters connect to “shelves”, which contain the storage media. A “pure” SSD shelf contains SSDs only; a “mixed” shelf contains a combination of SSDs and HDDs. The connections are made in a daisy-chain fashion, which means you can see the same shelf on multiple ports. Each shelf has a unique serial number and is given a unique shelf ID (e.g. shelf1), which is set physically on the hardware. This ID is prefixed with the storage adapter port through which you can “see” the shelf, so 0a.shelf1 and 0b.shelf1 mean you can see the same shelf through both adapters.

netapp-a> storage show shelf 0a.shelf1
Shelf name:    0b.shelf1
Channel:       0b
Module:        A
Shelf id:      1
Shelf UID:     50:05:0c:c1:02:03:9c:6b
Shelf S/N:     SHJ00000000xxxx
Term switch:   N/A
Shelf state:   ONLINE
Module state:  OK


               Partial Path   Link    Invalid   Running     Loss    Phy       CRC     Phy
Disk    Port   Timeout        Rate     DWord    Disparity   Dword   Reset     Error   Change
Id     State   Value (ms)    (Gb/s)    Count    Count       Count   Problem   Count   Count
--------------------------------------------------------------------------------------------
[SQR0] OK             7        6.0        0           0       0        0         0       3
<SNIP>
[SIL3] DIS/UNUSD      7         NA        0           0       0        0         0       1

Shelf name:    0a.shelf1
Channel:       0a
Module:        B
Shelf id:      1
Shelf UID:     50:05:0c:c1:02:03:9c:6b
Shelf S/N:     SHJ00000000xxxx
Term switch:   N/A
Shelf state:   ONLINE
Module state:  OK


               Partial Path   Link    Invalid   Running     Loss    Phy       CRC     Phy
Disk    Port   Timeout        Rate     DWord    Disparity   Dword   Reset     Error   Change
Id     State   Value (ms)    (Gb/s)    Count    Count       Count   Problem   Count   Count
--------------------------------------------------------------------------------------------
[SQR0] OK             7        6.0        0           0       0        0         0       3
<SNIP>
[SIL3] DIS/UNUSD      7         NA        0           0       0        0         0       1
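The same shelf UID appearing under both 0a and 0b confirms the multi-path cabling. As an illustration, deduplicating shelf sightings by UID counts the physical shelves and shows which adapters can reach each one (the data below is made up to mirror the output format):

```python
from collections import defaultdict

# (adapter-qualified shelf name, shelf UID) pairs, as reported by
# 'storage show shelf'; these values are made up for illustration.
sightings = [
    ('0a.shelf1', '50:05:0c:c1:02:03:9c:6b'),
    ('0b.shelf1', '50:05:0c:c1:02:03:9c:6b'),
    ('0a.shelf2', '50:05:0c:c1:02:03:9c:7f'),
    ('0b.shelf2', '50:05:0c:c1:02:03:9c:7f'),
]

# Group the sightings by shelf UID: each key is one physical shelf,
# each value lists the paths through which it is visible.
paths = defaultdict(list)
for name, uid in sightings:
    paths[uid].append(name)

for uid, names in sorted(paths.items()):
    print(uid, '->', names)
```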

Storage Concepts

Physical disks (in shelves or onboard) are organized into aggregates, which provide pools of storage. In each aggregate, one or more flexible volumes can be created. Each volume has a default qtree (called qtree0). A qtree carves out a subset of a volume to which a quota can be applied to limit its size; as a special case, a qtree can be the entire volume. A qtree is flexible because you can change its size at any time. In addition to a quota, a qtree has a few other properties (mainly file security permissions).

A plex is a physical copy of a filesystem, i.e. of the disks holding the data. A volume normally consists of one plex (called plex0). A mirrored volume has two or more plexes, each with a complete copy of the data in the volume. Multiple plexes provide safety for your data: as long as you have one complete plex, you still have access to all of it. So, bottom line: unless you mirror an aggregate, plex0 is just a placeholder that reminds you of the ability to create a mirror if needed.

On a brand-new system you would first need to create an aggregate, and then a volume that lives on that aggregate. You can then export the volume over CIFS or NFS to make it available to end users. The default qtree0 and plex0 are created automatically.
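As a sketch, the 7-Mode commands for that initial setup look roughly like this (the names, disk count, and size are illustrative):

```
netapp01> aggr create aggr1 -t raid_dp 16
netapp01> vol create vol1 aggr1 500g
netapp01> qtree create /vol/vol1/projects
netapp01> exportfs -p sec=sys,rw /vol/vol1
```

The first command builds a RAID-DP aggregate from 16 spare disks, the second carves a 500 GB flexible volume out of it, the third adds an optional extra qtree, and the last advertises the volume over NFS.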

Here is what this looks like on the NetApp Simulator:

netapp01*> aggr status -v
           Aggr State           Status                Options
          aggr0 online          raid_dp, aggr         root, diskroot, nosnap=off, raidtype=raid_dp,
                                64-bit                raidsize=16, ignore_inconsistent=off,
                                                      snapmirrored=off, resyncsnaptime=60,
                                                      fs_size_fixed=off, lost_write_protect=on,
                                                      ha_policy=cfo, hybrid_enabled=off,
                                                      percent_snapshot_space=0%,
                                                      free_space_realloc=off

                Volumes: vol0

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums

netapp01*> vol status -v
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, diskroot, nosnap=off, nosnapdir=off,
                                64-bit                minra=off, no_atime_update=off, nvfail=off,
                                                      ignore_inconsistent=off, snapmirrored=off,
                                                      create_ucode=off, convert_ucode=off,
                                                      maxdirsize=16291, schedsnapname=ordinal,
                                                      fs_size_fixed=off, guarantee=volume,
                                                      svo_enable=off, svo_checksum=off,
                                                      svo_allow_rman=off, svo_reject_errors=off,
                                                      no_i2p=off, fractional_reserve=100, extent=off,
                                                      try_first=volume_grow, read_realloc=off,
                                                      snapshot_clone_dependency=off,
                                                      dlog_hole_reserve=off, nbu_archival_snap=off
                         Volume UUID: 19647a8b-5a5c-4bd7-b67e-37a78fb4108c
                Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums

        Snapshot autodelete settings for vol0:
                                        state=off
                                        commitment=try
                                        trigger=volume
                                        target_free_space=20%
                                        delete_order=oldest_first
                                        defer_delete=user_created
                                        prefix=(not specified)
                                        destroy_list=none
        Volume autosize settings:
                                mode=off
        Hybrid Cache:
                Eligibility=read-write
netapp01*> qtree status -v
Volume   Tree     Style Oplocks  Status
-------- -------- ----- -------- ---------
vol0              unix  enabled  normal

Finally, show which file systems have been “advertised” over NFS and can be mounted by clients:

netapp01> exportfs
/vol/vol0/home  -sec=sys,rw,nosuid
/vol/vol0       -sec=sys,rw,anon=0,nosuid
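From a Linux client these exports can then be mounted in the usual way (hostname and mount point are illustrative):

```
# on the client, as root
mount -t nfs netapp01:/vol/vol0/home /mnt/home
```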
