NetApp Transition Migration with PowerShell helpers

Intro

I often work with customers moving legacy NetApp 7-mode systems to current ONTAP. This is termed a transition in NetApp parlance, and NetApp provides extensive tools to evaluate, plan, and execute one. In the ideal situation, NetApp's 7-Mode Transition Tool (7mTT) is used to orchestrate all of the transition work. However, customer systems and requirements may not make that entirely possible. Customers may have old 32-bit aggregates and volumes, for example, and may not want to negotiate an additional maintenance window to upgrade an old 7-mode filer to a newer release just to gain the capability to convert those aggregates to 64-bit. They may also want to consolidate data on the ONTAP target systems in such a way that they can't utilize 7mTT and transition SnapMirror.

There are of course many ways to migrate data. NetApp provides XCP freely for very efficient host-based copies, but its primary use case today is NFS data. Robocopy on Windows with multithreading can also be reasonable, as can various third-party tools. NetApp filers additionally provide ndmpcopy to duplicate volumes using the NDMP protocol for transport. In my case, and in the examples below, I used ndmpcopy controlled via the NetApp PowerShell Toolkit (PSTK) to move data, and used PowerShell to perform various cutover tasks that 7mTT typically performs. I had volumes containing iSCSI LUNs as well as SMB shares to migrate.

Ndmpcopy has been around a long time, as has NDMP itself. It's not necessarily simple to use, nor is it the most efficient way to move data, and it can have trouble with user files that have problematic file names. Despite those caveats, it was suitable for my use case detailed here.

Setup and planning notes

I added additional IP aliases to the 7-mode filers and used those IPs as the sources for communicating with the filers and for data copy activities, both for items moved via 7mTT and via ndmpcopy. The reason is that the filer's original source IP would eventually move to the destination SVM when SMB shares were migrated, and I would still need to communicate with the old filer, and possibly still migrate iSCSI data, after moving that primary IP. I recommend doing this as part of a transition regardless. Also, even if you can't use transition SnapMirror at all, use 7mTT to assess the environment. I did use 7mTT for a substantial portion of the transition referenced in this post.
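As a sketch, adding such an alias on the 7-mode console looks like this (the interface name and address are illustrative, not the actual values from this environment):

  src2netapp1> ifconfig trunk-172 alias 10.27.0.225 netmask 255.255.0.0

Adding the same ifconfig line to /etc/rc keeps the alias in place across reboots.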

Setup connections and variables

The first thing you need to do is establish a credentials cache for all the filer sources and destinations. NOTE: You really only need to do this once to populate a credentials cache file. This file is local to your account on the host used to execute PowerShell; with it you don't have to concern yourself with typing passwords. In this case I'm using local accounts rather than Active Directory accounts to authenticate to the devices. If AD is configured such that AD accounts have API and management authorization on the filer, they will automatically log in without the need to first enter a password and populate the credentials cache. file: populate_creds.ps1


  import-module DataONTAP
  write-host "Get credentials for dst2netapp_cluster"
  Add-NcCredential -Controller 10.27.0.20 -Credential(Get-Credential)
  write-host "Get credentials for dst1netapp_cluster"
  Add-NcCredential -Controller 10.21.0.20 -Credential(Get-Credential)
  write-host "Get credentials for src2netapp1"
  Add-NaCredential -Controller 10.27.0.221 -Credential(Get-Credential)
  write-host "Get credentials for src2netapp2"
  Add-NaCredential -Controller 10.27.0.222 -Credential(Get-Credential)
  write-host "Get credentials for dst2node1"
  Add-NaCredential -Controller 10.27.0.23 -Credential(Get-Credential)
  write-host "Get credentials for dst2node2"
  Add-NaCredential -Controller 10.27.0.24 -Credential(Get-Credential)
  write-host "Get credentials for dst1node1"
  Add-NaCredential -Controller 10.21.0.23 -Credential(Get-Credential)
  write-host "Get credentials for dst1node2"
  Add-NaCredential -Controller 10.21.0.24 -Credential(Get-Credential)
  write-host "Get credentials for src1netapp1"
  Add-NaCredential -Controller 10.21.0.201 -Credential(Get-Credential)
  Get-NcCredential

Next, configure NDMP-specific credentials for the destination clusters and the 7-mode filers by populating variables for use in subsequent commands. I'm assuming the NDMP protocol is already enabled and don't detail those steps here. I used node-scoped NDMP on the clustered ONTAP systems, using the same NDMP credentials for both nodes. File: netapp_variables.ps1


  $dst2node1 = "10.27.0.23"
  $dst2node2 = "10.27.0.24"
  $dst1node1 = "10.21.0.23"
  $dst1node2 = "10.21.0.24"

  $ndmp_src1netapp1_pw = convertto-securestring "qBsGHJXpJFJGXDrL" -asplaintext -force
  $ndmp_src2netapp1_pw = convertto-securestring "06gPCHJXNXXG3vbc" -asplaintext -force
  $ndmp_src2netapp2_pw = convertto-securestring "JAqSoieHaZgPYGXl" -asplaintext -force
  $ndmp_dst2netapp_cluster_pw = convertto-securestring "aJHhASDlJ5PklFZe" -asplaintext -force
  $ndmp_dst1netapp_cluster_pw = convertto-securestring "iHp4CAPNlrXxAVz4" -asplaintext -force
  $src1netapp1_ndmpcred = new-object system.management.automation.PSCredential ("scott",$ndmp_src1netapp1_pw)
  $src2netapp1_ndmpcred = new-object system.management.automation.PSCredential ("scott",$ndmp_src2netapp1_pw)
  $src2netapp2_ndmpcred = new-object system.management.automation.PSCredential ("scott",$ndmp_src2netapp2_pw)
  $dst2netapp_cluster_ndmpcred = new-object system.management.automation.PSCredential ("backup",$ndmp_dst2netapp_cluster_pw)
  $dst1netapp_cluster_ndmpcred = new-object system.management.automation.PSCredential ("backup",$ndmp_dst1netapp_cluster_pw)

Now I can connect to filers and initialize device connection variables. file: netapp_initiate.ps1

  import-module DataONTAP

  $src2netapp1 = connect-nacontroller -name 10.27.0.221 -http
  $src2netapp2 = connect-nacontroller -name 10.27.0.222 -http
  $src1netapp1 = connect-nacontroller -name 10.21.0.201 -http
  $dst1netapp_cluster = Connect-NcController -name 10.21.0.20
  $dst2netapp_cluster = Connect-NcController -name 10.27.0.20

These files can be dot-sourced into the PowerShell session used to control subsequent operations.
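Dot-sourcing runs a script in the current session, so the variables and connections it defines remain available afterward:

  . ".\netapp_variables.ps1"
  . ".\netapp_initiate.ps1"

A normal invocation (without the leading dot) would run the scripts in a child scope and discard $src2netapp1, $dst2netapp_cluster, and the rest.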

Create volumes and SnapMirror destinations

Now I need to create the transition destination volumes and, if applicable, their eventual SnapMirror destinations as well. One item to consider here is when you should initialize SnapMirror relationships and start snapshot schedule policies. If NDMP incrementalforever works well, your SnapMirror updates and their base snapshots will be fine. If you have to redo a level 0 NDMP dump, you also have to consider the impact on the SnapMirror base snapshot and may have to re-baseline as well. In other words, the snapshots for SnapMirror and/or scheduled snapshot sizes could balloon temporarily on the destination cluster.

In this example, I get volume information from 7-mode and then apply it to the ONTAP destination. I could grab the volume size and supply it to the subsequent New-NcVol call rather than manually entering it into my $newsize variable. I set it manually because in some cases I do alter the size of the transition target volumes.

  PS C:\Users\Scott\Scripts> get-navol -controller $src2netapp1 foodata

  Name                      State       TotalSize  Used  Available Dedupe  FilesUsed FilesTotal Aggregate
  ----                      -----       ---------  ----  --------- ------  --------- ---------- ---------
  foodata            online         1.0 TB   56%   454.0 GB  True        192k        32M aggr1
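
If you'd rather derive the size programmatically, a sketch like the following should work; note that the TotalSize property name is taken from the column header shown above and is an assumption about this PSTK version:

  # Hypothetical: reuse the 7-mode volume's size instead of typing it
  $srcvol  = get-navol -controller $src2netapp1 foodata
  $newsize = $srcvol.TotalSize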


  PS C:\Users\Scott\Scripts> $newvol = "foodata"
  PS C:\Users\Scott\Scripts> $newsize = "1t"
  PS C:\Users\Scott\Scripts> New-NcVol -controller $dst2netapp_cluster -Name $newvol  -Aggregate dst2node1_data -VserverContext DST2_CIFS_SVM -SecurityStyle ntfs -spaceguarantee none -Size $newsize -JunctionPath /$newvol -ExportPolicy default -SnapshotReserve 2 -QosPolicyGroup default_qos_cifs -EfficiencyPolicy default

  Name                      State       TotalSize  Used  Available Dedupe Aggregate                 Vserver
  ----                      -----       ---------  ----  --------- ------ ---------                 -------
  foodata            online         1.0 TB    2%  1003.5 GB  True  dst2node1_data            DST2_CIFS_SVM

  PS C:\Users\Scott\Scripts> New-NcVol -controller $dst1netapp_cluster -Name $newvol  -Aggregate nonnetapp3_data -VserverContext DST1_CIFS_SVM -SecurityStyle ntfs -spaceguarantee none -Size $newsize -ExportPolicy default -SnapshotReserve 2 -QosPolicyGroup default_qos_cifs -JunctionPath $null -type dp

  Name                      State       TotalSize  Used  Available Dedupe Aggregate                 Vserver
  ----                      -----       ---------  ----  --------- ------ ---------                 -------
  foodata            restricted     1.0 TB                         nonnetapp3_data           DST1_CIFS_SVM


  PS C:\Users\Scott\Scripts> New-NcSnapMirror -Controller $dst1netapp_cluster -Source dst2netapp_cluster://DST2_CIFS_SVM/$newvol -Destination dst1netapp_cluster://DST1_CIFS_SVM/$newvol -policy MirrorAllSnapshots -Type vault -Schedule 8hour

  SourceLocation                                DestinationLocation                           Status       MirrorState
  --------------                                -------------------                           ------       -----------
  DST2_CIFS_SVM:foodata                    DST1_CIFS_SVM:foodata                    idle

  PS C:\Users\Scott\Scripts> Invoke-NcSnapMirrorInitialize -Controller $dst1netapp_cluster -source dst2netapp_cluster://DST2_CIFS_SVM/$newvol -Destination dst1netapp_cluster://DST1_CIFS_SVM/$newvol


  NcController      : 10.21.0.20
  ResultOperationId : 92fa8814-577e-11e7-9098-00a098b3de4f
  ErrorCode         :
  ErrorMessage      :
  JobId             :
  JobVserver        :
  Status            : succeeded

Both the source and DP SnapMirror volumes are now created, and the SnapMirror relationship is initialized, which takes only moments since the source volume has no content yet.
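To double-check the relationship afterward, you can query it from the destination cluster. This is a sketch reusing the same location strings as the initialize call; I haven't verified the -Destination parameter against every PSTK version:

  Get-NcSnapMirror -Controller $dst1netapp_cluster -Destination dst1netapp_cluster://DST1_CIFS_SVM/$newvol

It should report a MirrorState of snapmirrored once the initialize completes.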

Initial ndmpcopy

I used ndmpcopy with incrementalforever after the first level 0 dump to copy data. Ndmpcopy with explicit levels can only go up to level 2, so if you do an initial level 0 and two subsequent updates, you would have to do another level 0, which could be lengthy. Incrementalforever worked fine with some data sets, but I did find situations in which it errored out and I was forced to fall back to level 0s. These were smaller volumes at least, so the impact was not significant. Again, NDMP may not be the way to go and you may need to choose an alternate method to move the data.

file: do_ndmcopy_cifs_src2netapp1_group1.ps1

  . ".\netapp_initiate.ps1"

  $dstsvm = "DST2_CIFS_SVM"
  $level = 0
  $volumes = @("foodata", "bardata")
  $srccontroller = $src2netapp1.address.ipaddresstostring

  foreach ($volume in $volumes) {
      write-host "start-nandmpcopy -SrcController $srccontroller -srcpath /vol/$volume -dstcontroller $dst2node1 -dstpath /$dstsvm/$volume -level $level -srccredential $src2netapp1_ndmpcred -dstcredential $dst2netapp_cluster_ndmpcred -srcauthtype md5 -dstauthtype md5"
      start-nandmpcopy -SrcController $srccontroller -srcpath /vol/$volume -dstcontroller $dst2node1 -dstpath /$dstsvm/$volume -level $level -srccredential $src2netapp1_ndmpcred -dstcredential $dst2netapp_cluster_ndmpcred -srcauthtype md5 -dstauthtype md5
  }

The script calls netapp_initiate.ps1 and then launches level 0 ndmpcopy jobs. I grouped volumes into script files based on our intended cutover windows.

Launching the first job:

  PS C:\Users\Scott\Scripts> .\do_ndmpcopy_cifs_src2netapp1_group1.ps1
  start-nandmpcopy -SrcController 10.27.0.221 -srcpath /vol/foodata -dstcontroller 10.27.0.24 -dstpath /DST2_CIFS_SVM/foodata -level 0 -srccredential System.Management.Automation.PSCredential -dstcredential System.Management.Automation.PSCredential -srcauthtype md5 -dstauthtype md5
  WARNING: PowerShell session must remain open until the NDMP copy has completed or the operation will fail.

  Id   State          SrcPath                    DstPath                        BackupBytesProcessed    BackupBytesRemain
  --   -----          -------                    -------                        --------------------    -----------------
  9   RUNNING        /vol/foodata        /DST2_CIFS_SVM/foodata/                     0                    0
  PS C:\Users\Scott\Scripts> $foodata = get-nandmpcopy -id 9;$foodata.logmessages;$foodata

Using that last command string, I can repeatedly get a status update, as well as the log entries, for a particular job. Those logs are also stored on the filers, but it's handy to be able to grab them here.

Update ndmpcopy runs

I edit the do_ndmpcopy_cifs_src2netapp1_group1.ps1 file to use incrementalforever on subsequent runs like so:

  . ".\netapp_initiate.ps1"

  $dstsvm = "DST2_CIFS_SVM"
  $volumes = @("foodata", "bardata")
  $srccontroller = $src2netapp1.address.ipaddresstostring

  foreach ($volume in $volumes) {
      write-host "start-nandmpcopy -SrcController $srccontroller -srcpath /vol/$volume -dstcontroller $dst2node1 -dstpath /$dstsvm/$volume -incrementalforever -srccredential $src2netapp1_ndmpcred -dstcredential $dst2netapp_cluster_ndmpcred -srcauthtype md5 -dstauthtype md5"
      start-nandmpcopy -SrcController $srccontroller -srcpath /vol/$volume -dstcontroller $dst2node1 -dstpath /$dstsvm/$volume -incrementalforever -srccredential $src2netapp1_ndmpcred -dstcredential $dst2netapp_cluster_ndmpcred -srcauthtype md5 -dstauthtype md5
  }

I keep my various ndmpcopy group scripts in one directory, so a simple loop can run multiple updates:

  get-childitem .\do_ndmpcopy_cifs* | foreach-object { & $_.fullname }

I can check their status using the same technique shown above for an individual job ID, or just run Get-NaNdmpCopy to see all of them.
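For a quick view of just the active jobs, you can filter on the State value shown in the transcript above (the property names are assumed to match the column headers):

  get-nandmpcopy | where-object { $_.State -eq "RUNNING" } |
      format-table Id, State, SrcPath, DstPath, BackupBytesProcessed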

Cutting over iSCSI

The sections below detail the additional steps needed to cut over volumes containing iSCSI LUN data.

Prepare for cutover aka "apply configuration"

Just as with 7mTT, we want to create our igroups and mappings based on the 7-mode source filer. First we grab an igroup. Here I use Where-Object to match on a wildcard string, since I happen to know the igroup name will contain the relevant server name.


  PS C:\Users\Scott\Scripts> $myigroups = get-naigroup -controller $src2netapp1 | where-object {($_.Name -like '*custap*')}
  PS C:\Users\Scott\Scripts> $myigroups


  Name            : viaRPC.iqn.1991-05.com.microsoft:custappsql.customer.com
  Type            : windows
  Protocol        : iscsi
  PortSet         :
  ALUA            : False
  ThrottleBorrow  : False
  ThrottleReserve : 0
  Partner         :
  VSA             : False
  Initiators      : {iqn.1991-05.com.microsoft:custappsql.customer.com}

Once I've grabbed that I can create my igroup and add the initiator(s) to it.

  PS C:\Users\Scott\Scripts> New-NcIgroup -Controller $dst2netapp_cluster -Name $myigroups.Name -Protocol iscsi -Type $myigroups.Type -vserver DST2_ISCSI_SVM


  Name            : viaRPC.iqn.1991-05.com.microsoft:custappsql.customer.com
  Type            : windows
  Protocol        : iscsi
  Portset         :
  ALUA            : True
  ThrottleBorrow  : False
  ThrottleReserve : 0
  Partner         : True
  VSA             : False
  Initiators      :
  Vserver         : DST2_ISCSI_SVM

  PS C:\Users\Scott\Scripts> $myigroups.Initiators | foreach-object {
  >> Add-NcIgroupInitiator -controller $dst2netapp_cluster -name $myigroups.Name -vservercontext DST2_ISCSI_SVM -IQN $_.initiatorName }
  >>


  Name            : viaRPC.iqn.1991-05.com.microsoft:custappsql.customer.com
  Type            : windows
  Protocol        : iscsi
  Portset         :
  ALUA            : True
  ThrottleBorrow  : False
  ThrottleReserve : 0
  Partner         : True
  VSA             : False
  Initiators      : {iqn.1991-05.com.microsoft:custappsql.customer.com}
  Vserver         : DST2_ISCSI_SVM

Now I'm ready to map the igroups to LUNs. /NOTE: you should probably complete this step during the final cutover, when the server using the LUNs is shut down./ You can optionally map them prior to cutover, but this could get confusing fast. First I capture the mapped LUNs from 7-mode and then use that to perform the LUN mapping on clustered ONTAP.

  PS C:\Users\Scott\Scripts> $mymappedluns = $myigroups.initiators | foreach-object { Get-NaLunMapByInitiator -controller $src2netapp1 -initiator $_.initiatorName }
  PS C:\Users\Scott\Scripts> $mymappedluns

  InitiatorGroup                                                            LunId Path
  --------------                                                            ----- ----
  viaRPC.iqn.1991-05.com.microsoft:cus...                                       1 /vol/custapp_sql_backup/custapp_sql_...
  viaRPC.iqn.1991-05.com.microsoft:cus...                                       0 /vol/custapp_sql_db/custapp_sql_db

  PS C:\Users\Scott\Scripts> $mymappedluns | foreach-object { Add-NcLunMap -controller $dst2netapp_cluster -VserverContext DST2_ISCSI_SVM -Path $_.Path -id $_.lunid -InitiatorGroup $_.initiatorgroup }

  Path                                           Size   SizeUsed Protocol     Online Mapped  Thin  Vserver
  ----                                           ----   -------- --------     ------ ------  ----  -------
  /vol/custapp_sql_backup/custapp_sql_...    100.0 GB     4.4 GB windows_2008  True   True  False  DST2_ISCSI_SVM
  /vol/custapp_sql_db/custapp_sql_db          75.0 GB     7.5 GB windows_2008  True   True  False  DST2_ISCSI_SVM

Final cutover for iSCSI

Cutover for volumes containing iSCSI LUNs is similar to what 7mTT does. We shut down the machine using the LUNs and perform a final ndmpcopy run to get a good sync.

Once we map the LUNs (shown above), we need to clean up the 7-mode side prior to bringing the server back up. The steps below show unmapping the LUNs from the 7-mode system, offlining those LUNs, renaming the volumes containing them, and finally offlining the volumes on the 7-mode system.

  PS C:\Users\Scott\Scripts> $mymappedluns | foreach-object { Remove-NaLunMap -controller $src2netapp1 -Path $_.Path -InitiatorGroup $_.initiatorgroup }

  Path                                      TotalSize   SizeUsed Protocol     Online Mapped  Thin  Comment
  ----                                      ---------   -------- --------     ------ ------  ----  -------
  /vol/custapp_sql_backup/custapp_sql_backup     100.0 GB     4.5 GB windows_2008 True  False  False
  /vol/custapp_sql_db/custapp_sql_db              75.0 GB     7.6 GB windows_2008 True  False  False
  PS C:\Users\Scott\Scripts> $mymappedluns | foreach-object { Set-NaLun -controller $src2netapp1 -Path $_.Path -Offline }

  Path                                      TotalSize   SizeUsed Protocol     Online Mapped  Thin  Comment
  ----                                      ---------   -------- --------     ------ ------  ----  -------
  /vol/custapp_sql_backup/custapp_sql_backup     100.0 GB     4.5 GB windows_2008 False  False  False
  /vol/custapp_sql_db/custapp_sql_db              75.0 GB     7.6 GB windows_2008 False  False  False
  PS C:\Users\Scott\Scripts> $mymappedluns | foreach-object { $tempvol = $_.Path.Split("{/}"); $fooname = $tempvol[2]; Rename-NaVol -controller $src2netapp1 -Name $fooname -NewName "Off_$fooname" }

  Name                      State       TotalSize  Used  Available Dedupe  FilesUsed FilesTotal Aggregate
  ----                      -----       ---------  ----  --------- ------  --------- ---------- ---------
  Off_custapp_sql_backup      online       200.0 GB   51%    98.9 GB  True         103         9M aggr2
  Off_custapp_sql_db          online       150.0 GB   51%    73.9 GB  True         103         6M aggr2

  PS C:\Users\Scott\Scripts> $mymappedluns | foreach-object { $tempvol = $_.Path.Split("{/}"); $fooname = $tempvol[2]; Set-NaVol -controller $src2netapp1 -Name "Off_$fooname" -Offline }

  Name                      State       TotalSize  Used  Available Dedupe  FilesUsed FilesTotal Aggregate
  ----                      -----       ---------  ----  --------- ------  --------- ---------- ---------
  Off_custapp_sql_backup      offline             0    0%          0 False           0          0 aggr2
  Off_custapp_sql_db          offline             0    0%          0 False           0          0 aggr2

The final step I took was renaming the ndmpcopy script I used, to indicate that the work is complete and to make sure I don't accidentally start another copy. The cutover activities referenced here involved multiple volumes and windows rather than a single large cutover.
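Renaming the script is a one-liner; anything that no longer matches the do_ndmpcopy_cifs* wildcard used by the update loop above can't be launched accidentally:

  rename-item .\do_ndmpcopy_cifs_src2netapp1_group1.ps1 -NewName done_ndmpcopy_cifs_src2netapp1_group1.ps1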

CIFS cutover

CIFS (SMB) cutover is a bit different. Here I elected to use the CLI on 7-mode and clustered ONTAP to complete much of the work. Yes, I could have done these same steps using the PSTK to transform the shares and share ACLs, but it made more sense for me to complete this using the CLI.

The following things are true for this scenario:

  • The destination SVM is not taking on the active directory identity of the original 7-mode system

  • It is, however, eventually taking on the IP address of the source system

  • The DNS record for the 7-mode system therefore remains, but the AD computer account will be deleted by the AD admin during the cutover window. This is important for authentication/authorization.

  • The 7-mode system will continue to use the additional IP we added until it is decommissioned. It will be renamed as well. CIFS services will remain shut down.

  • On the 7-mode system, VLAN interfaces were created on top of LACP interface groups. The same approach is applied on clustered Data ONTAP to create interface "ports", and the final LIF with the layer 3 IP is applied to the SVM. /NOTE: You really need to understand the networking differences between legacy 7-mode and current clustered ONTAP./
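
For reference, a sketch of that port layering on the clustered ONTAP CLI; the ifgrp name matches the a0a-172 home port used in the LIF creation later, but the member ports and LACP settings are illustrative:

  network port ifgrp create -node dst2node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
  network port ifgrp add-port -node dst2node1 -ifgrp a0a -port e0a
  network port ifgrp add-port -node dst2node1 -ifgrp a0a -port e0b
  network port vlan create -node dst2node1 -vlan-name a0a-172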

Copying CIFS shares and CIFS share ACLs, aka "apply configuration"

While it should be possible to use a similar technique to copy CIFS shares and their ACLs from 7-mode to clustered ONTAP, I chose to use Cosonok's script. His approach is to grab the data from the 7-mode filers and translate it into clustered ONTAP CLI commands suitable for cut and paste. Yes, this is a bit more primitive perhaps, but it's tried and true. The script is also mature and covers a lot of details that I didn't have time to research, test, and debug. The tradeoff for me was making sure my cuts and pastes were accurate.

The script worked as expected, creating the shares and ACLs and providing some checks along the way to ensure they were created correctly.

Final cutover for CIFS

This is relatively similar to iSCSI in the sense that we shut down, perform a last copy, and bring things back up. However, in this scenario I control the final shutdown of CIFS services using the 7-mode CLI. Of course, I could do all of this with the appropriate PSTK cmdlets. Below, I first grab a view of what is currently connected and then terminate CIFS.

I grab the connection counts just to have a quick check after cutover. Once the source system's AD account is deleted, I expect to see many of these CIFS connections show up on the target side after a few minutes.

  ~/Downloads on  master ⌚ 19:01:42
  $ ssh -c 3des-cbc scott@src2netapp1 cifs sessions -t
    scott@src2netapp1's password:
    Using domain authentication. Domain type is Windows 2000.
    Root volume language is not set. Use vol lang.
    Number of WINS servers: 1
    Total CIFS sessions: 29
    CIFS open shares: 28
    CIFS open files: 14
    CIFS locks: 146
    CIFS credentials: 64
    IPv4 CIFS sessions: 29
    IPv6 CIFS sessions: 0
    Cumulative IPv4 CIFS sessions: 1436846
    Cumulative IPv6 CIFS sessions: 0
    CIFS sessions using security signatures: 0

    src2netapp1> cifs terminate
    Total number of connected CIFS users: 4
         Total number of open CIFS files: 3
    Warning: Terminating CIFS service while files are open may cause data loss!!
    Enter the number of minutes to wait before disconnecting [5]: 0

    CIFS local server is shutting down...

    CIFS local server has shut down...

Now I can complete the final ndmpcopy update. Following that, I need to clean up the 7-mode system by removing the old IP and updating the relevant startup files, /etc/rc and /etc/hosts. I used rdfile to grab the files, wrfile to write them out, and another rdfile to verify.

new /etc/rc

  hostname src1netapp1-old
  ifgrp create lacp trunk -b mac e0a e6a e0b e6c
  vlan create trunk 172 300
  ifconfig trunk-172 10.21.0.201 netmask 255.255.0.0 up
  ifconfig trunk-300 `hostname`-trunk-300 netmask 255.255.255.0 mtusize 1500 trusted -wins up
  route add default 10.21.0.1 1
  routed on
  options dns.domainname customer.com
  options dns.enable on
  options nis.enable off
  savecore

new /etc/hosts

  127.0.0.1       localhost
  10.21.0.200    src1netapp1-old       src1netapp1-trunk-172
  192.168.221.21  src1netapp1-trunk-300
  #192.168.227.23  src1netapp1-trunk-172
  10.21.0.186    mailhost
  10.27.0.21     src2netapp1
  10.27.0.22     src2netapp2

Those files are simply pasted in via wrfile once the final copy is complete. Then I run the following to make the live configuration match the new startup files:

  ifconfig trunk-172 -alias 10.21.0.201
  ifconfig trunk-172 10.21.0.201 netmask 255.255.0.0 up
  hostname src1netapp1-old

With all that done, I can place the original source IP on my clustered ONTAP SVM with:

  net int create -vserver DST2_CIFS_SVM -address 10.21.0.200 -netmask 255.255.255.0 -home-node dst2node1 -home-port a0a-172 -admin-status up

Once that LIF is created on clustered ONTAP, the Active Directory computer record for the 7-mode system should be deleted while the DNS A and PTR records remain in place. Browsing the system in Windows should then pull up the list of visible shares, assuming you are logged in with an account that has appropriate permissions. At this point my customer could do additional application testing and final checks.
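To verify clients are landing on the new system, the clustered ONTAP counterpart of the earlier 7-mode cifs sessions check is:

  dst2netapp_cluster::> vserver cifs session show -vserver DST2_CIFS_SVM

The session count should climb back toward the figures captured from the 7-mode side earlier.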

Conclusions

With any transition there's some cleanup, such as with snapshot policies. This is your opportunity to ensure snapshot intervals and SnapMirror policies are rationalized for your environment. Snapshots that transitioned with 7mTT may need cleaning up as well, but that's a subject for a follow-up post.

All of the above is very specific to NetApp systems and transitions, but the techniques and overall approach are not. We could use similar methods to move data between many kinds of systems. The tooling and APIs for the source or destination systems may differ, but you should be able to find a way to automate as much of the process as possible. Wherever you're using known good values from a source system (e.g. iSCSI IQNs), you should be able to programmatically grab those values and send them to the destination system, hopefully with minimal need to transform them; automation can take care of any transformation as well. It's also preferable to cutting and pasting, though in some cases even that may be a viable approach provided you are very careful.
