NetApp Concepts (for network engineers)

This blog is written from a network engineer's point of view. I'm going to try to relate the concepts to the tasks/commands you would usually run on a new network device to understand what it's doing.

Hardware

Like Cisco devices, the NetApp filer has a concept of slots and ports. The slots start their numbering from zero and can contain either storage adapters (called HBAs) or network adapters (called line cards or NICs).

The NetApp Simulator below has no storage adapters and one quad-port Ethernet line card:

netapp01*> sysconfig
        NetApp Release 8.2.1 7-Mode: Fri Mar 21 14:48:58 PDT 2014
        System ID: 4082368508 (netapp01)
        System Serial Number: 4082368508 (netapp01)
        System Storage Configuration: Multi-Path
        System ACP Connectivity: NA
        slot 0: System Board
                Model Name:         SIMBOX
                Processors:         2
                Memory Size:        1599 MB
                Memory Attributes:  None
        slot 0: 10/100/1000 Ethernet Controller V
                e0a MAC Address:    00:0c:29:56:d2:4a (auto-1000t-fd-up)
                e0b MAC Address:    00:0c:29:56:d2:54 (auto-1000t-fd-up)
                e0c MAC Address:    00:0c:29:56:d2:5e (auto-1000t-fd-up)
                e0d MAC Address:    00:0c:29:56:d2:68 (auto-1000t-fd-up)

While a real NetApp filer shows:

netapp-a> sysconfig
        NetApp Release 8.1.1 7-Mode: Mon Jul 30 12:49:46 PDT 2012
        System ID: xxxxxxxx (netapp-a); partner ID: yyyyyyy (netapp-b)
        System Serial Number: zzzzzzz (netapp-a)
        System Rev: D0
        System Storage Configuration: Multi-Path HA
        System ACP Connectivity: Full Connectivity
        slot 0: System Board
                Processors:         4
                Processor type:     Intel(R) Xeon(R) CPU           C3528  @ 1.73GHz
                Memory Size:        6144 MB
                Memory Attributes:  Hoisting
                                    Normal ECC
                Controller:         A
        Service Processor           Status: Online
        slot 0: Internal 10/100 Ethernet Controller
                e0M MAC Address:    00:a0:98:38:26:13 (auto-100tx-fd-cfg_down)
                e0P MAC Address:    00:a0:98:38:26:12 (auto-100tx-fd-up)
        slot 0: Quad Gigabit Ethernet Controller 82580
                e0a MAC Address:    00:a0:98:38:26:0e (auto-1000t-fd-up)
                e0b MAC Address:    00:a0:98:38:26:0f (auto-1000t-fd-up)
                e0c MAC Address:    00:a0:98:38:26:10 (auto-1000t-fd-up)
                e0d MAC Address:    00:a0:98:38:26:11 (auto-1000t-fd-up)
        slot 0: Interconnect HBA:   Mellanox IB MT25204
        slot 0: SAS Host Adapter 0a
                72 Disks:            120629.8GB
                1 shelf with IOM3, 1 shelf with IOM6, 1 shelf with IOM6E
        slot 0: SAS Host Adapter 0b
                72 Disks:            120629.8GB
                1 shelf with IOM3, 1 shelf with IOM6, 1 shelf with IOM6E
        slot 0: Intel ICH USB EHCI Adapter u0a (0xdf101000)
                boot0   Micron Technology Real SSD eUSB 2GB, class 0/0, rev 2.00/11.10, addr 2 1936MB 512B/sect (4DF0022700247875)
        slot 1: Dual 10 Gigabit Ethernet Controller IX1-SFP+
                e1a MAC Address:    00:a0:98:37:2f:78 (auto-10g_twinax-fd-up)
                e1b MAC Address:    00:a0:98:37:2f:79 (auto-10g_twinax-fd-up)

The filer has two populated slots: slot 0 and slot 1. Slot 0 has four main adapters: two network (ports numbered e0x) and two storage (ports numbered 0a and 0b).


Network Interfaces

The network interface config can be viewed using the standard ifconfig / netstat commands. However, the ifgrp command shows the EtherChannel (interface group) config for the interfaces:

netapp-a> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
lvif1: 2 links, transmit 'IP Load balancing', Ifgrp Type 'multi_mode' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e1b: state up, since 07Nov2014 08:28:44 (19+02:08:46)
                mediatype: auto-10g_twinax-fd-up
                flags: enabled
<SNIP>
        e1a: state up, since 18Jan2013 16:33:49 (676+18:03:41)
                mediatype: auto-10g_twinax-fd-up
                flags: enabled
<SNIP>
lvif0: 4 links, transmit 'IP Load balancing', Ifgrp Type 'lacp' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e0d: state up, since 08Sep2014 11:06:16 (78+22:31:14)
<SNIP>

There are three types of EtherChannel (interface group):

  • single-mode – only one interface in the interface group is active; the other interfaces are on standby
  • static multimode – all links are bundled manually (in Cisco'ese, it is basically channel-group mode on)
  • dynamic multimode – links are bundled using LACP
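
The 'IP Load balancing' policy shown in the ifgrp status output can be pictured as a hash over the source and destination IP addresses that deterministically picks one member link, so a given client/filer pair always uses the same port. A rough Python sketch of the idea (illustrative only — this is not NetApp's actual hash, and the IPs are invented):

```python
# Illustrative model of an ifgrp's per-flow IP load balancing.
# NOT NetApp's real algorithm; it only shows that each (src, dst)
# IP pair maps deterministically onto one member link.
import ipaddress

def pick_link(src_ip: str, dst_ip: str, links: list[str]) -> str:
    """Hash src/dst IPs onto one of the ifgrp's member ports."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return links[key % len(links)]

lvif1 = ["e1a", "e1b"]  # the 2-link multi_mode ifgrp above
flow_link = pick_link("10.0.0.5", "10.0.1.9", lvif1)
# The same flow always lands on the same link:
assert flow_link == pick_link("10.0.0.5", "10.0.1.9", lvif1)
```

The consequence for the network side is that a single flow never exceeds one member link's bandwidth; only many flows spread across the bundle.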


Storage Interfaces

The storage adapter details can be seen as follows:

netapp-a> storage show adapter -a
Slot:            0a
Description:     SAS Host Adapter 0a (PMC-Sierra PM8001 rev. C)
Firmware Rev:    01.11.00.00
Base WWN:        5:00a098:0012b0e:70
State:           Enabled
In Use:          Yes
Redundant:       Yes
Phy State:       [0] Enabled, 6.0Gb/s (10)
                 [1] Enabled, 6.0Gb/s (10)
                 [2] Enabled, 6.0Gb/s (10)
                 [3] Enabled, 6.0Gb/s (10)

Slot:            0b
Description:     SAS Host Adapter 0b (PMC-Sierra PM8001 rev. C)
Firmware Rev:    01.11.00.00
Base WWN:        5:00a098:0012b0e:74
State:           Enabled
In Use:          Yes
Redundant:       Yes
Phy State:       [0] Enabled, 3.0Gb/s (9)
                 [1] Enabled, 3.0Gb/s (9)
                 [2] Enabled, 3.0Gb/s (9)
                 [3] Enabled, 3.0Gb/s (9)

The storage adapters connect to “shelves”, which contain the storage media. A “pure” SSD shelf contains SSDs only; a “mixed” shelf contains a combination of SSDs and HDDs. The connections are daisy-chained, which means you can see the same shelf on multiple ports. Each shelf has a unique serial number and is given a unique shelf ID (like shelf1) which is set physically on the hardware. This ID is prefixed with the storage adapter port through which you can “see” the shelf, e.g. 0a.shelf1 and 0b.shelf1 mean you can see the same shelf through both adapters.
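
Because each daisy-chained shelf is visible through more than one adapter, you can sanity-check multipathing by grouping the adapter-qualified paths by shelf UID: two paths to one UID means the shelf is redundantly connected. A small Python sketch of that check (the path-to-UID mapping here is hypothetical, modelled on the storage show shelf output below):

```python
# Group shelf paths by UID to verify redundant (multi-path) connectivity.
# The paths/UID below are example data, not taken from a live system.
from collections import defaultdict

paths = {
    "0a.shelf1": "50:05:0c:c1:02:03:9c:6b",
    "0b.shelf1": "50:05:0c:c1:02:03:9c:6b",  # same shelf, seen via the other adapter
}

def paths_per_shelf(path_to_uid: dict) -> dict:
    """Return {shelf UID: [adapter paths that reach it]}."""
    by_uid = defaultdict(list)
    for path, uid in path_to_uid.items():
        by_uid[uid].append(path)
    return dict(by_uid)

# Shelves reachable through two or more adapters are redundantly cabled.
redundant = {uid: p for uid, p in paths_per_shelf(paths).items() if len(p) >= 2}
```

This is the same idea behind the "Multi-Path" / "Multi-Path HA" lines in the sysconfig output earlier.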

netapp-a> storage show shelf 0a.shelf1
Shelf name:    0b.shelf1
Channel:       0b
Module:        A
Shelf id:      1
Shelf UID:     50:05:0c:c1:02:03:9c:6b
Shelf S/N:     SHJ00000000xxxx
Term switch:   N/A
Shelf state:   ONLINE
Module state:  OK


               Partial Path   Link    Invalid   Running     Loss    Phy       CRC     Phy
Disk    Port   Timeout        Rate     DWord    Disparity   Dword   Reset     Error   Change
Id     State   Value (ms)    (Gb/s)    Count    Count       Count   Problem   Count   Count
--------------------------------------------------------------------------------------------
[SQR0] OK             7        6.0        0           0       0        0         0       3
<SNIP>
[SIL3] DIS/UNUSD      7         NA        0           0       0        0         0       1

Shelf name:    0a.shelf1
Channel:       0a
Module:        B
Shelf id:      1
Shelf UID:     50:05:0c:c1:02:03:9c:6b
Shelf S/N:     SHJ00000000xxxx
Term switch:   N/A
Shelf state:   ONLINE
Module state:  OK


               Partial Path   Link    Invalid   Running     Loss    Phy       CRC     Phy
Disk    Port   Timeout        Rate     DWord    Disparity   Dword   Reset     Error   Change
Id     State   Value (ms)    (Gb/s)    Count    Count       Count   Problem   Count   Count
--------------------------------------------------------------------------------------------
[SQR0] OK             7        6.0        0           0       0        0         0       3
<SNIP>
[SIL3] DIS/UNUSD      7         NA        0           0       0        0         0       1


Storage Concepts

Physical disks (in shelves or onboard) are organized into aggregates, which provide pools of storage. In each aggregate, one or more flexible volumes can be created. Each volume has a default qtree (called qtree0). A qtree carves out a subset of a volume to which a quota can be applied to limit its size; as a special case, a qtree can be the entire volume. A qtree is flexible because you can change its size at any time. In addition to a quota, a qtree has a few other properties (mainly file security permissions).
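
The aggregate → volume → qtree containment, with a quota capping a qtree, can be modelled roughly like this (names and sizes are invented for illustration; real limits come from the quota and vol commands):

```python
# Toy model of the aggregate -> volume -> qtree hierarchy.
# Purely conceptual; not how ONTAP tracks space internally.

class Qtree:
    def __init__(self, name, quota_mb=None):
        self.name = name
        self.quota_mb = quota_mb  # None = no quota (e.g. the default qtree0)
        self.used_mb = 0

    def write(self, mb):
        if self.quota_mb is not None and self.used_mb + mb > self.quota_mb:
            raise IOError(f"quota exceeded on qtree {self.name}")
        self.used_mb += mb

class Volume:
    def __init__(self, name):
        self.name = name
        self.qtrees = {"qtree0": Qtree("qtree0")}  # default qtree, no quota

class Aggregate:
    def __init__(self, name):
        self.name = name
        self.volumes = {}

aggr0 = Aggregate("aggr0")
aggr0.volumes["vol0"] = Volume("vol0")
aggr0.volumes["vol0"].qtrees["home"] = Qtree("home", quota_mb=100)
```

Writes inside the "home" qtree succeed until its 100 MB quota is hit, while qtree0 is only bounded by the volume itself.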

A plex is a physical copy of a filesystem, or of the disks holding the data. A volume normally consists of one plex (called plex0). A mirrored volume has two or more plexes, each with a complete copy of the data in the volume. Multiple plexes provide safety for your data: as long as you have one complete plex, you still have access to all your data. So, bottom line, unless you mirror an aggregate, plex0 is just a placeholder that reminds you of the ability to create a mirror if needed.
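
The availability rule — data stays accessible while at least one plex is complete — is simple enough to state as a one-liner (purely conceptual):

```python
# Conceptual availability rule for plexes; maps plex name -> "is complete".
def data_available(plexes: dict) -> bool:
    """A mirrored aggregate serves data while any one plex is complete."""
    return any(plexes.values())

assert data_available({"plex0": True})                  # unmirrored, healthy
assert data_available({"plex0": False, "plex1": True})  # one mirror lost, still up
```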

On a brand-new system you would first need to create an aggregate and then a volume that lives on that aggregate. You can then attach CIFS or NFS to this volume to make it available to end users. The default qtree0 and plex0 are created automatically.

Here is what this looks like on a NetApp Simulator:

netapp01*> aggr status -v
           Aggr State           Status                Options
          aggr0 online          raid_dp, aggr         root, diskroot, nosnap=off, raidtype=raid_dp,
                                64-bit                raidsize=16, ignore_inconsistent=off,
                                                      snapmirrored=off, resyncsnaptime=60,
                                                      fs_size_fixed=off, lost_write_protect=on,
                                                      ha_policy=cfo, hybrid_enabled=off,
                                                      percent_snapshot_space=0%,
                                                      free_space_realloc=off

                Volumes: vol0

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums

netapp01*> vol status -v
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, diskroot, nosnap=off, nosnapdir=off,
                                64-bit                minra=off, no_atime_update=off, nvfail=off,
                                                      ignore_inconsistent=off, snapmirrored=off,
                                                      create_ucode=off, convert_ucode=off,
                                                      maxdirsize=16291, schedsnapname=ordinal,
                                                      fs_size_fixed=off, guarantee=volume,
                                                      svo_enable=off, svo_checksum=off,
                                                      svo_allow_rman=off, svo_reject_errors=off,
                                                      no_i2p=off, fractional_reserve=100, extent=off,
                                                      try_first=volume_grow, read_realloc=off,
                                                      snapshot_clone_dependency=off,
                                                      dlog_hole_reserve=off, nbu_archival_snap=off
                         Volume UUID: 19647a8b-5a5c-4bd7-b67e-37a78fb4108c
                Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums

        Snapshot autodelete settings for vol0:
                                        state=off
                                        commitment=try
                                        trigger=volume
                                        target_free_space=20%
                                        delete_order=oldest_first
                                        defer_delete=user_created
                                        prefix=(not specified)
                                        destroy_list=none
        Volume autosize settings:
                                mode=off
        Hybrid Cache:
                Eligibility=read-write
netapp01*> qtree status -v
Volume   Tree     Style Oplocks  Status
-------- -------- ----- -------- ---------
vol0              unix  enabled  normal

Finally, show which file systems have been “advertised” by NFS and can be mounted by clients:

netapp01> exportfs
/vol/vol0/home  -sec=sys,rw,nosuid
/vol/vol0       -sec=sys,rw,anon=0,nosuid
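
Each exportfs line is just a path plus a comma-separated option list, so it is easy to post-process. A quick parser (assuming only this simple "path  -options" form, which is all the output above uses):

```python
# Parse "exportfs" output lines of the form "/vol/...  -opt1,opt2,...".
def parse_exports(text: str) -> dict:
    """Return {export path: [option, ...]}."""
    exports = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        path, opts = line.split(None, 1)
        exports[path] = opts.lstrip("-").split(",")
    return exports

output = """/vol/vol0/home  -sec=sys,rw,nosuid
/vol/vol0       -sec=sys,rw,anon=0,nosuid"""
exports = parse_exports(output)
writable = [p for p, opts in exports.items() if "rw" in opts]
```

Handy when you want to audit, say, which exports are writable or which allow root (anon=0) across many filers.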


Running the NetApp ONTAP 7-Mode 8.2.1 Simulator

The below is copied from a set of notes taken a while ago. I’ll update these as I go.

For VMware Workstation

  • Grab vsim_netapp-7m.tgz from the NetApp Support site and untar/unzip it
  • This will uncompress a bunch of VMDK files; most of these are the “individual” disks that will appear on the storage controller
  • Load the VMX file in VMware Workstation

For ESXi

  • Grab vsim_esx-7m.tgz from the NetApp Support site and untar/unzip it
  • Enable SSH on the ESXi server
  • Copy the tarball to datastore1
  • Uncompress the image (tar -xvzf)
  • Run vmkload_mod multiextent (https://communities.netapp.com/thread/24329)


Common Instructions

  • Boot the VM and press Ctrl-C during boot
  • Select option 4 to start with a fresh config
  • Provide a hostname; don’t use IPv6 or interface groups
  • Set up an IP address and default gateway on e0a; the remaining interfaces can be set up later
  • When requested, don’t provide an admin host (otherwise you will be restricted to using that machine to configure the NetApp)
  • Set up the root password when requested
  • Log in as root
  • Change the network interface settings to connect to the correct physical network (usually e0a is mapped to network adapter 1)

  • You should now be able to ping and SSH to the NetApp as root
  • Now install OnCommand System Manager 3.1 (I tried to use 3.1.1 but it refused to authenticate correctly; stick with the older version)
  • Add the NetApp to OnCommand System Manager using the root username/password

If OnCommand System Manager does not work (connection refused), it's probably because httpd wasn't enabled during the initial config. Fix as follows:

1) SSH to the NetApp and check the current settings:

netapp1> options httpd
httpd.access legacy
httpd.admin.access legacy
httpd.admin.enable off
httpd.admin.hostsequiv.enable off
httpd.admin.max_connections 512
httpd.admin.ssl.enable off
httpd.admin.top-page.authentication on
httpd.autoindex.enable off
httpd.bypass_traverse_checking off
httpd.enable off
httpd.ipv6.enable off
httpd.log.format common
httpd.method.trace.enable off
httpd.rootdir /vol/vol0/home/http
httpd.timeout 300
httpd.timewait.enable off
2) options httpd.admin.enable true (enable access)
3) options httpd.admin.ssl.enable true (enable secure access)
4) Verify the change:
netapp1> options httpd
httpd.access legacy
httpd.admin.access legacy
httpd.admin.enable on
httpd.admin.hostsequiv.enable off
httpd.admin.max_connections 512
httpd.admin.ssl.enable on
httpd.admin.top-page.authentication on
httpd.autoindex.enable off
httpd.bypass_traverse_checking off
httpd.enable off
httpd.ipv6.enable off
httpd.log.format common
httpd.method.trace.enable off
httpd.rootdir /vol/vol0/home/http
httpd.timeout 300
httpd.timewait.enable off

Now install the licenses from OnCommand System Manager: click Config -> System Tools -> Licences -> Add, then paste the codes. Note that the ESXi version uses different codes from the VMware Workstation version.

The Netapp is now ready to configure and use.

Read my NetApp Storage Concepts (for network engineers) for more information on how to use the NetApp.


Disk Structure in the Simulator

The 8.2.1 simulator starts off with:

  • 28 disks (2 shelves with 14 disks each)
netapp01*> storage  show disk
DISK                  SHELF BAY SERIAL           VENDOR   MODEL      REV
--------------------- --------- ---------------- -------- ---------- ----
v4.16                   ?    ?  08561200         NETAPP   VD-1000MB- 0042
v4.17                   ?    ?  08561201         NETAPP   VD-1000MB- 0042
v4.18                   ?    ?  08561202         NETAPP   VD-1000MB- 0042
v4.19                   ?    ?  08561203         NETAPP   VD-1000MB- 0042
v4.20                   ?    ?  08561204         NETAPP   VD-1000MB- 0042
v4.21                   ?    ?  08561205         NETAPP   VD-1000MB- 0042
v4.22                   ?    ?  08561206         NETAPP   VD-1000MB- 0042
  • pool 0 with 14 assigned disks (leaving 14 unowned disks)
  • aggr0, containing plex0, and rg0 (RAID group) with 3 disks in a RAID-DP configuration (1 data disk)
netapp01*> aggr status -v
           Aggr State           Status                Options
          aggr0 online          raid_dp, aggr         root, diskroot, nosnap=off, raidtype=raid_dp,
                                64-bit                raidsize=16, ignore_inconsistent=off,
                                                      snapmirrored=off, resyncsnaptime=60,
                                                      fs_size_fixed=off, lost_write_protect=on,
                                                      ha_policy=cfo, hybrid_enabled=off,
                                                      percent_snapshot_space=0%,
                                                      free_space_realloc=off

                Volumes: vol0

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums
  • vol0 in aggr0 – thick provisioned 871.916MB in size
netapp01*> vol size vol0
vol size: Flexible volume 'vol0' has size 871916k.
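
The "3 disks, 1 data disk" figure above follows from RAID-DP dedicating two parity disks (row and diagonal) per RAID group. The arithmetic, simplified (it ignores how ONTAP actually balances disks across groups — this sketch just fills groups up to raidsize):

```python
# Simplified RAID-DP capacity arithmetic: each RAID group of up to
# `raidsize` disks gives up 2 disks to parity (row + diagonal).
# Real ONTAP balances group sizes; this just fills groups greedily.
def raid_dp_data_disks(total_disks: int, raidsize: int = 16) -> int:
    """Count data disks across the RAID-DP groups of an aggregate."""
    data = 0
    while total_disks > 0:
        group = min(total_disks, raidsize)
        data += max(group - 2, 0)
        total_disks -= group
    return data

assert raid_dp_data_disks(3) == 1  # the simulator's 3-disk rg0 above
```

So the default raidsize=16 (visible in the aggr options above) yields at most 14 data disks per group.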


In OnCommand System Manager, click Storage -> Disks to see the same disk list.


Enable access to the OS

Enter advanced mode and unlock the diagnostic user. This will allow you to look at the operating system files/logs.

 ssh in as the root user
 priv set advanced
 useradmin diaguser unlock
 useradmin diaguser password   (enter a password and confirm)

Then launch the systemshell, log in as diag, and enter the password you just set:

systemshell


References

NetApp Cheat Sheet – lists the most basic CLI commands

ESXi Install guide

Add Shelves to the simulator