Scott Harney

Using Ansible to Bootstrap My Work Environment Part 2

Besides my standard day-to-day laptop running Ubuntu, I also have a Chromebook I picked up a while back. I like the light weight and the long battery life for traveling around. ChromeOS alone covers a lot of what I do daily, but I do like to have a few additional Linux tools to get things done, so I use crouton. With crouton there are a few manual steps involved in preparing the Chromebook, but they are well documented.

Using Ansible to Bootstrap My Work Environment - Part I

Lately, I’ve been on a journey to learn about all things cloud and “devops”. Like any long-term professional, I like my desktop work environment just so, and you do a lot of configuration along the way. To get a handle on that, the first thing I started doing was maintaining all my configuration files in a private git repo. This DigitalOcean article describes my preferred method for doing that. This was really step zero in the journey.

Translating Unix Shell Processing to Powershell Equivalents

I frequently find myself comparing data sets from Unix hosts and other systems. My go-to for years has been Linux shell commands: I’d ssh into a system, grab some data, and process it with sed, awk, sort, uniq, perl, etc. These days, however, I find myself working with customers who are more comfortable with PowerShell. It became interesting to see how I might translate the Unix shell approach, which is how I tend to think when given a task, into its PowerShell equivalents.

Using Python and Netmiko to Automate SAN Zoning

UPDATE: github repo for this (H/T Scott Lowe). One of the customers I’m currently supporting is performing a migration of their NetApp storage from classic 7-mode to Clustered Data OnTap. It’s a Fibre Channel environment, so lots of FC zoning changes are required. To minimize mistakes and ensure consistency, some form of automation is needed. The Cisco MDS Fibre Channel switches run a version of NX-OS, but it’s relatively old and doesn’t include an accessible API-based interface.
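The full scripts live in the linked github repo; as a rough illustration of the approach, here is a minimal Netmiko sketch that pushes a single zoning change to an MDS switch over SSH. The hostname, credentials, VSAN number, zone and zoneset names, and pWWNs below are hypothetical placeholders, not values from the post.

```python
# A minimal sketch, not the code from the linked repo: push one zoning change
# to a Cisco MDS switch over SSH with Netmiko. Hostname, credentials, VSAN,
# zone name, zoneset name, and pWWNs are hypothetical placeholders.
from netmiko import ConnectHandler

mds = {
    "device_type": "cisco_nxos",  # MDS runs an NX-OS variant; driven here via the NX-OS driver
    "host": "mds9148-a.example.com",
    "username": "admin",
    "password": "changeme",
}

# Classic basic-zoning workflow: define the zone, add it to the zoneset, activate.
zoning_commands = [
    "zone name esx01_hba0_netapp_0a vsan 10",
    "member pwwn 10:00:00:00:c9:aa:bb:cc",
    "member pwwn 50:0a:09:81:00:11:22:33",
    "zoneset name fabric_a vsan 10",
    "member esx01_hba0_netapp_0a",
    "zoneset activate name fabric_a vsan 10",
]

conn = ConnectHandler(**mds)
print(conn.send_config_set(zoning_commands))
conn.disconnect()
```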

Integrating gitlab into my existing Apache

In addition to using github a little bit, I've built my own private gitlab server. Part of my motivation is that some of the things I'm using it for don't belong in a public repo; part of it is education. I already have an Apache server running and wanted to integrate gitlab access into it in a fluid fashion rather than running and exposing gitlab on some alternate port.

K10

In the past we’ve been out of town for 8/29. Not this year. The coverage and discussions have been impossible to avoid. The memories and emotions of those dark days have risen up. I really don’t have much else I want to say beyond what I was saying 10 years ago. My first blog post after the storm - Sept 11, 2005. Picture Album - Pictures of our home and neighborhood from that time.

Link: All-flash storage stumbles on cost per gigabyte

A recent post on the notion of the All-Flash Data Center running up against customer realities. This echoes a previous post of mine. Just as the cost of flash is decreasing, so is the cost of spinning media. Just as flash benefits from dedupe/compression capabilities, so does spinning media, driving down the cost of ‘effective capacity’ further. Flash will of course gain against spinning media for active data, but we still appear to be a long way off from an all-solid-state data center without low-cost, high-density magnetic media.
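As a back-of-the-envelope illustration of the ‘effective capacity’ point: the comparison comes down to raw cost per gigabyte divided by the data-reduction ratio. The figures in this sketch are hypothetical, not numbers from the linked article or my earlier post.

```python
# Hypothetical back-of-the-envelope math for "effective capacity" cost:
# effective $/GB = raw $/GB / data-reduction ratio (dedupe + compression).
# All numbers below are made-up illustrations, not quoted prices.
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    return raw_cost_per_gb / reduction_ratio

flash = effective_cost_per_gb(raw_cost_per_gb=0.50, reduction_ratio=4.0)
disk = effective_cost_per_gb(raw_cost_per_gb=0.05, reduction_ratio=2.0)
print(f"flash: ${flash:.3f}/GB effective, disk: ${disk:.3f}/GB effective")
```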

Powershell Scripts for Backup of cDOT NFS Exports

I was working with a customer recently on their new clustered Data OnTap environment. They upgraded from 7-mode and were working on their Disaster Recovery solution. In the 7-mode world, copying SMB shares and NFS exports from a source FAS to a DR target FAS is really just a file copy. The customer environment is relatively small and they like to keep it simple, so doing a simple file compare of /etc/exports works for them.
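The post itself uses PowerShell, per its title, but the “simple file compare” idea is easy to picture. Here is a minimal Python sketch that diffs two copies of /etc/exports pulled from the source and DR filers; the local file names are hypothetical.

```python
# Minimal sketch of the "simple file compare" approach for 7-mode exports:
# diff a copy of /etc/exports from the source filer against the DR filer's copy.
# The local file names are hypothetical; the post's own scripts use PowerShell.
import difflib
from pathlib import Path

src = Path("exports.source").read_text().splitlines()
dr = Path("exports.dr").read_text().splitlines()

# Print only the export rules that differ between the two filers
for line in difflib.unified_diff(src, dr, fromfile="source", tofile="dr", lineterm=""):
    print(line)
```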