<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Garth Kerr]]></title><description><![CDATA[Work [in|and] Progress - Software Engineer and Operations]]></description><link>https://garthkerr.com/</link><image><url>https://garthkerr.com/favicon.png</url><title>Garth Kerr</title><link>https://garthkerr.com/</link></image><generator>Ghost 1.11</generator><lastBuildDate>Wed, 25 May 2022 22:41:41 GMT</lastBuildDate><atom:link href="https://garthkerr.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Parsing environment variables with JQ]]></title><description><![CDATA[<div class="kg-card-markdown"><p>JQ provides this nifty <code>env</code> input that enables you to reference environment variables using the JQ style query syntax.</p>
<pre><code class="language-shell"># prints all envs in JSON format
jq -n env
</code></pre>
<p>And you can use piping like you would with a normal JSON input.</p>
<pre><code class="language-shell"># print $HOME in raw format
jq -nr 'env | .HOME'</code></pre></div>]]></description><link>https://garthkerr.com/parsing-environment-variables-with-jq/</link><guid isPermaLink="false">604bb259aef4b706d7250c25</guid><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Fri, 12 Mar 2021 18:35:10 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>JQ provides this nifty <code>env</code> input that enables you to reference environment variables using the JQ style query syntax.</p>
<pre><code class="language-shell"># prints all envs in JSON format
jq -n env
</code></pre>
<p>And you can use piping like you would with a normal JSON input.</p>
<pre><code class="language-shell"># print $HOME in raw format
jq -nr 'env | .HOME'
</code></pre>
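<p>Since <code>env</code> is just another JQ value, you can also combine it with JQ's object construction syntax. A small sketch (the key names here are illustrative):</p>

```shell
# build a new JSON object from selected environment variables
jq -n '{home: env.HOME, shell: env.SHELL}'
```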
</div>]]></content:encoded></item><item><title><![CDATA[Bash capture STDOUT to variable without redirecting]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Capture a command's output to a variable in a bash script without redirecting STDOUT.</p>
<pre><code>OUTPUT=$(command --foo &quot;${BAR}&quot; | tee &gt;(cat - &gt;&amp;2))
</code></pre>
</div>]]></description><link>https://garthkerr.com/bash-capture-stdout-to-variable-without-redirecting-output/</link><guid isPermaLink="false">5d66cdbe67df990477fe0cf2</guid><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Wed, 28 Aug 2019 18:59:58 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Capture a command's output to a variable in a bash script without redirecting STDOUT.</p>
<pre><code>OUTPUT=$(command --foo &quot;${BAR}&quot; | tee &gt;(cat - &gt;&amp;2))
</code></pre>
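<p>Why this works: <code>tee</code> copies the stream into a process substitution that writes it back to STDERR, so the terminal still sees it while the command substitution captures STDOUT. A minimal sketch, using <code>printf</code> in place of a real command:</p>

```shell
# STDOUT is captured into OUTPUT; the >() copy is echoed back via STDERR
OUTPUT=$(printf 'hello\n' | tee >(cat - >&2))
echo "captured: ${OUTPUT}"
```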
</div>]]></content:encoded></item><item><title><![CDATA[Search JSON array with JQ]]></title><description><![CDATA[<div class="kg-card-markdown"><p><a href="https://stedolan.github.io/jq/manual/">JQ</a> is a powerful command-line JSON processing tool. It's super fast (written in C), provides solid documentation, and is easy to use. In this example, given a JSON object from a file or a curl response, we look through an array nested inside an object, and return a matching object within the array. I</p></div>]]></description><link>https://garthkerr.com/search-json-array-jq/</link><guid isPermaLink="false">59cfbb493640e718213fbabd</guid><category><![CDATA[Bash]]></category><category><![CDATA[JQ]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Fri, 27 Jan 2017 21:24:22 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><a href="https://stedolan.github.io/jq/manual/">JQ</a> is a powerful command-line JSON processing tool. It's super fast (written in C), provides solid documentation, and is easy to use. In this example, given a JSON object from a file or a curl response, we look through an array nested inside an object, and return a matching object within the array. I want to look up the <strong>id</strong> of an item, given the name.</p>
<pre><code class="language-javascript">{
    &quot;ok&quot;: true,
    &quot;items&quot;: [
        {&quot;id&quot;: 123, &quot;name&quot;: &quot;thing-1&quot;},
        {&quot;id&quot;: 124, &quot;name&quot;: &quot;thing-2&quot;},
        {&quot;id&quot;: 125, &quot;name&quot;: &quot;thing-3&quot;}
    ]
}
</code></pre>
<p>We can pipe one filter's result to the next within a JQ expression using the <code>|</code> character, just like in a standard shell. The <code>select</code> function then returns only the array elements that match the given condition.</p>
<pre><code class="language-bash">cat file.json | jq '.items[] | select(.name == &quot;thing-3&quot;) | .'
</code></pre>
<p>The final select result is piped and formatted as needed. For example, we can use the <code>-r</code> (raw output) option to return unformatted (not quoted) output, and ONLY the <code>id</code> field.</p>
<pre><code class="language-bash">cat file.json | jq -r '.items[] | select(.name == &quot;thing-3&quot;) | .id'
</code></pre>
<p>There are numerous useful operators and functions provided in the documentation. JQ also provides a fantastic playground <a href="https://jqplay.org/s/XfDUheaXFB">here.</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[Iterating over an array of bash variables]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I had a requirement for a bash script to check for required variables before running a function. Rather than creating a conditional block for each required variable (or in my case, needing to dynamically change them), let's do something that's more terse and maintainable. The example covers a few bash</p></div>]]></description><link>https://garthkerr.com/bash-iterating-over-array-variables/</link><guid isPermaLink="false">59cfbb493640e718213fbabc</guid><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Thu, 26 Jan 2017 23:06:04 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I had a requirement for a bash script to check for required variables before running a function. Rather than creating a conditional block for each required variable (or in my case, needing to dynamically change them), let's do something that's more terse and maintainable. The example covers a few bash concepts:</p>
<ul>
<li>Generating an <strong>array</strong> of variable names</li>
<li>Variable name as a string (indirect parameter expansion)</li>
<li>Breaking a loop over an array (control flow)</li>
</ul>
<pre><code class="language-bash">#!/bin/bash

ERROR=0
declare -a REQUIRED=(
  &quot;APP_VAR_1&quot; &quot;APP_VAR_2&quot; &quot;APP_VAR_3&quot;
)
for REQ in &quot;${REQUIRED[@]}&quot;
do
  if [ -z &quot;${!REQ}&quot; ]; then
    ERROR=1
    printf &quot;\n  - ERROR: %s undefined.\n\n&quot; &quot;${REQ}&quot;
    break
  fi
done

run ()
{
  printf &quot;\n  - Validated.\n\n&quot;
}

if [ &quot;${ERROR}&quot; -eq 0 ]; then
  run
fi
</code></pre>
<p>The <code>${REQUIRED[@]}</code> expansion iterates over the array, assigning each element to <code>REQ</code> in turn. We can access the literal name of the variable with the <code>${REQ}</code> syntax, whereas we can access the <strong>value</strong> of the variable with <code>${!REQ}</code> indirect parameter expansion. If none of the required variables is empty per the <code>-z</code> test, the <code>run()</code> function will continue as expected.</p>
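<p>The indirection on its own can be seen in a tiny sketch (variable names are illustrative):</p>

```shell
APP_VAR_1="some-value"
REQ="APP_VAR_1"
echo "${REQ}"    # expands to the variable name: APP_VAR_1
echo "${!REQ}"   # expands to the value it names: some-value
```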
</div>]]></content:encoded></item><item><title><![CDATA[Using Ansible templates to maintain partial file blocks]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Ansible provides some excellent utilities for maintaining single lines and partial blocks of text. Both modules have support for handling template and fact variables, and a variety of options to support your use case. Here are a couple of examples using each module.</p>
<p>Using the <a href="http://docs.ansible.com/ansible/lineinfile_module.html" target="_blank">lineinfile</a> module for a single</p></div>]]></description><link>https://garthkerr.com/using-ansible-template-for-partial-file-block/</link><guid isPermaLink="false">59cfbb493640e718213fbabb</guid><category><![CDATA[Ansible]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Wed, 30 Nov 2016 19:33:15 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Ansible provides some excellent utilities for maintaining single lines and partial blocks of text. Both modules have support for handling template and fact variables, and a variety of options to support your use case. Here are a couple of examples using each module.</p>
<p>Using the <a href="http://docs.ansible.com/ansible/lineinfile_module.html" target="_blank">lineinfile</a> module for a single line. This module will not support inserting newline <code>\n</code> characters:</p>
<pre><code class="language-yaml">- lineinfile: &gt;
    dest=/etc/memcached.conf
    regexp='^-m [\d]*'
    line='-m {{ memory }}'
    state=present
</code></pre>
<p>Using the <a href="https://docs.ansible.com/ansible/blockinfile_module.html" target="_blank">blockinfile</a> module for multiple lines:</p>
<pre><code class="language-yaml">- blockinfile:
    dest: /etc/hosts
    content: |
      127.0.0.1 {{ ansible_hostname }}.local
      ::1       {{ ansible_hostname }}.local
    state: present
</code></pre>
<p>The <code>blockinfile</code> module offers several useful options such as:</p>
<ul>
<li><code>insertafter</code> and <code>insertbefore</code> to manage exactly where you need the block to be inserted. Useful for structured files like XML.</li>
<li><code>marker</code> to customize block markers, allowing you to manage multiple blocks in the same file.</li>
</ul>
<p>In my case, I have a much larger block I'd like to be able to maintain using a separate Jinja2 template file. For this we will need to use the more advanced <code>lookup</code> plugin, and capture the template content.</p>
<pre><code class="language-yaml">- set_fact:
    hosts_content: &quot;{{ lookup('template', 'templates/etc-hosts.j2') }}&quot;

- blockinfile:
    dest: /etc/hosts
    content: '{{ hosts_content }}'
    state: present
</code></pre>
<p>In this manner, you can keep your task configuration concise by storing large blocks of content more appropriately in a separate template file.</p>
<hr>
<h4 id="asinglelineshellsolution">A Single Line Shell Solution</h4>
<p>In some cases, Ansible might be overkill. If you only need to ensure that a single line exists, and the order within the file does not matter, you can use something like this simple one-liner. This will check if the line exists, and append it to the end of the file if it does not.</p>
<p><strong>Append line to file if it doesn't exist:</strong></p>
<pre><code class="language-bash">LN=&quot;127.0.0.1 dev-local&quot;
grep -q -F &quot;${LN}&quot; /etc/hosts || echo &quot;${LN}&quot; | sudo tee -a /etc/hosts
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Delete multiple git branches matching a prefix]]></title><description><![CDATA[<div class="kg-card-markdown"><p><em><strong>Important:</strong> this is a potentially destructive command. Please proceed cautiously and at your own risk.</em></p>
<p>I'll sometimes leave local git branches around a bit longer than I should. And, depending on your workflow, you may occasionally have a handful of ephemeral branches that need to be cleaned up.</p>
<p>In this</p></div>]]></description><link>https://garthkerr.com/batch-delete-multiple-git-branches/</link><guid isPermaLink="false">59cfbb493640e718213fbaba</guid><category><![CDATA[Git]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Mon, 31 Oct 2016 21:13:48 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><em><strong>Important:</strong> this is a potentially destructive command. Please proceed cautiously and at your own risk.</em></p>
<p>I'll sometimes leave local git branches around a bit longer than I should. And, depending on your workflow, you may occasionally have a handful of ephemeral branches that need to be cleaned up.</p>
<p>In this example, we exclude the active branch with grep's invert match (<code>-v</code>) on the <code>*</code> marker, and filter out additional branches to keep (master, staging) with an escaped <code>\|</code> pipe; this is only necessary if your match pattern includes branches you want to keep. Then we match the prefix <code>feature-</code> for the branches we intend to delete.</p>
<pre><code class="language-bash">git branch -D \
  $(git branch | grep -v '*\|master\|staging' | tr -d ' ' | grep -E '^feature-')
</code></pre>
<p>Modify the <code>git branch</code> command <a href="https://git-scm.com/docs/git-branch">options</a> as needed. For example, you can restrict the list to previously merged branches with the <code>--merged</code> option.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Install PHP 7 on Ubuntu 14.04 with Gearman support]]></title><description><![CDATA[<div class="kg-card-markdown"><p>PHP 7 has been out long enough now that it has seen a couple of patch releases, which is about the time I will start evaluating an upgrade. PHP 7 can have some significant performance benefits over previous versions, so I was eager to give it a try. With the</p></div>]]></description><link>https://garthkerr.com/install-php-7-on-ubuntu-14-04-with-gearman-support/</link><guid isPermaLink="false">59cfbb493640e718213fbab9</guid><category><![CDATA[Gearman]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Wed, 04 May 2016 03:11:47 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>PHP 7 has been out long enough now that it has seen a couple of patch releases, which is about the time I will start evaluating an upgrade. PHP 7 can have some significant performance benefits over previous versions, so I was eager to give it a try. With the <code>php-gearman</code> extension <a href="https://github.com/hjr3/pecl-gearman/issues/12">now available</a> for PHP 7, my prerequisites have been met.</p>
<p><strong>Install latest PHP 7.0 on Ubuntu 14.04</strong></p>
<pre><code class="language-bash">sudo add-apt-repository -y ppa:ondrej/php
sudo apt-get update  
sudo apt-get -y install php7.0 php7.0-fpm php-gearman
</code></pre>
<p><strong>Install additional extensions as needed</strong></p>
<pre><code class="language-bash"># common extensions
sudo apt-get -y install php7.0-curl php7.0-json php7.0-mcrypt php7.0-xml
# memcached and mongodb extensions
sudo apt-get -y install php-memcached php-mongodb
</code></pre>
<p>This PPA is maintained by <a href="https://deb.sury.org/">Ondřej Surý</a>. I have used his PHP Debian packages for several years without issue.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Bash completion for Consul nodes on Ubuntu]]></title><description><![CDATA[<div class="kg-card-markdown"><p>After making the jump to Ubuntu as my preferred distribution, I've admittedly become addicted to bash completions (aka autocomplete, tab completion, typeahead). Bash completions provide immediate hints for common commands, and even the options associated with them.</p>
<p>For example, a quick <code>\t\t</code> (double-tap of the <strong>tab</strong> key) after the</p></div>]]></description><link>https://garthkerr.com/bash-completion-for-consul-nodes-on-ubuntu/</link><guid isPermaLink="false">59cfbb493640e718213fbab7</guid><category><![CDATA[Bash]]></category><category><![CDATA[Consul]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Fri, 22 Apr 2016 22:23:06 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>After making the jump to Ubuntu as my preferred distribution, I've admittedly become addicted to bash completions (aka autocomplete, tab completion, typeahead). Bash completions provide immediate hints for common commands, and even the options associated with them.</p>
<p>For example, a quick <code>\t\t</code> (double-tap of the <strong>tab</strong> key) after the service command will provide a snapshot of the services available on the machine, beginning with your query.</p>
<pre><code class="language-bash">someone@example:~$ sudo service a
acpid  anacron  apache2  atd
</code></pre>
<p>As with most things, seemingly small details, like typeahead completions, can make for a very useful productivity improvement.</p>
<hr>
<h4 id="consul">Consul</h4>
<p>I have been utilizing the <a href="https://www.consul.io/" target="_blank">Consul</a> agent from <a href="https://www.hashicorp.com/" target="_blank">HashiCorp</a> for service discovery inside an AWS VPC. Consul is a well-thought-out, distributed service discovery agent that accomplishes a few things quite elegantly:</p>
<ul>
<li>Determines location of services via CLI and DNS</li>
<li>Keeps availability status up-to-date with health checks</li>
<li>Stores facts that can be shared across instances</li>
<li>Provides performant localhost access to everything</li>
<li>Easy to set up, highly configurable, and resilient</li>
</ul>
<p>Though individually these features seem pretty straightforward, the combined elements of service discovery can be incredibly complex in dynamic environments, and challenging to keep synchronized. You can read more about <a href="https://www.consul.io/" target="_blank">Consul</a> on their website.</p>
<hr>
<h4 id="bashcompletion">Bash Completion</h4>
<p>Bash completion on Ubuntu provides a configuration directory for adding your own <code>complete</code> handlers. In this example, you will need to have the consul agent binary accessible in your path.</p>
<pre><code class="language-bash">someone@example:~$ consul members
Node       Address          Status  Type    Build  Protocol  DC
dev-web    10.10.0.17:8301  alive   server  0.6.4  2         dc-east
dev-mysql  10.10.0.44:8301  alive   client  0.6.4  2         dc-east
qa-web     10.10.0.63:8301  failed  client  0.6.4  2         dc-east
</code></pre>
<p>The <code>consul members</code> command will provide a list of nodes alongside a status of: alive, failed, or left. You may choose to filter out different statuses from your completion. I only filter out <code>left</code> because it indicates a proper shutdown and removal for my use case. The <code>failed</code> status nodes are something I would still want access to.</p>
<p><strong>We will create our completion script here:</strong></p>
<pre><code class="language-bash"># /etc/bash_completion.d/ssh_consul
</code></pre>
<p>With the following contents:</p>
<pre><code class="language-bash">#!/bin/bash
_consul_members()
{
  local cur
  local res
  cur=${COMP_WORDS[COMP_CWORD]}
  res=$(consul members | grep -v ' left ' | tail -n +2 | cut -f1 -d ' ' | sort -u)
  COMPREPLY=($(compgen -W &quot;${res}&quot; -- &quot;${cur}&quot;))
}
complete -F _consul_members ssh
</code></pre>
<p>By default, Consul uses the <code>node.consul</code> domain to access nodes. You have some options for resolving the domain:</p>
<ul>
<li>Make the domain accessible using the consul local DNS</li>
<li>Use your own hostnames as the full name of the node</li>
<li>Or, suffix the <code>COMPREPLY</code> variable with an existing domain (hack)</li>
</ul>
<p>I have set up <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" target="_blank">dnsmasq</a> to forward <code>.consul</code> domain requests to the consul DNS server, and added domain search resolution here:</p>
<pre><code class="language-bash"># /etc/resolvconf/resolv.conf.d/base
search node.consul
</code></pre>
<p>This will attempt to resolve hostnames matching the name of the consul node, for example: <code>ssh dev-web</code> would attempt to resolve the hostname <code>dev-web.node.consul</code> through the local DNS server.</p>
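<p>The dnsmasq side of that setup can be a single forwarding rule. A sketch, assuming Consul's DNS interface is listening on its default port 8600 on localhost (the file path is illustrative):</p>

```shell
# /etc/dnsmasq.d/10-consul
# forward *.consul lookups to the local Consul DNS interface
server=/consul/127.0.0.1#8600
```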
<hr>
<h4 id="puttingitalltogether">Putting It All Together</h4>
<p>After saving the completion config, we'll need to reinitialize our bash session for it to take effect. Now our <code>ssh</code> command will provide immediate and accurate insights into what instances are available.</p>
<pre><code class="language-bash">someone@example:~$ ssh dev-
dev-foo-20  dev-foo-21  dev-mysql  dev-web
</code></pre>
<p>Gentoo has provided some excellent documentation for bash completions <a href="https://devmanual.gentoo.org/tasks-reference/completion/index.html" target="_blank">here.</a> You can write some pretty advanced completion scripts, if you're so inclined.</p>
<p>I highly recommend managing your Consul implementation with <a href="http://docs.ansible.com/ansible/index.html" target="_blank">Ansible</a>, or a similar configuration management tool. If you have questions or suggestions, please feel free to <a href="http://garthkerr.com">get in touch</a> via the appropriate social outlet.</p>
</div>]]></content:encoded></item><item><title><![CDATA[OpenSSL CSR generation in a single command]]></title><description><![CDATA[<div class="kg-card-markdown"><p>If you need to generate a CSR, OpenSSL has a helpful prompt interface for completing the required fields one at a time. However, if you are using automation, collecting STDIN is not always an option.</p>
<p>While working with Ansible, I learned that OpenSSL CSR generation allows you to pass the</p></div>]]></description><link>https://garthkerr.com/openssl-csr-generation/</link><guid isPermaLink="false">59cfbb493640e718213fbab5</guid><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Mon, 25 Jan 2016 07:48:18 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>If you need to generate a CSR, OpenSSL has a helpful prompt interface for completing the required fields one at a time. However, if you are using automation, collecting STDIN is not always an option.</p>
<p>While working with Ansible, I learned that OpenSSL CSR generation allows you to pass the appropriate details using the <code>-subj</code> parameter like in this example:</p>
<pre><code class="language-bash"># C:  Country Code
# ST: State (Region)
# L:  Locality (City)
# O:  Organization
# OU: Organizational Unit
# CN: Common Name

openssl req \
  -subj &quot;/C=US/ST=Ohio/L=Columbus/O=Acme Company/OU=Acme/CN=example.com&quot; \
  -newkey rsa:2048 -nodes \
  -keyout example_com.key \
  -out example_com.csr
</code></pre>
<p>Now you can continue uninterrupted without needing to manually enter details anytime you generate a CSR.</p>
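<p>To confirm the subject landed as intended, you can read it back from the generated CSR with the same tool. A quick, self-contained sanity check (it generates a throwaway key and CSR first):</p>

```shell
# generate a throwaway CSR non-interactively, then print its subject back
openssl req -subj "/C=US/ST=Ohio/L=Columbus/O=Acme Company/OU=Acme/CN=example.com" \
  -newkey rsa:2048 -nodes -keyout example_com.key -out example_com.csr 2>/dev/null
openssl req -noout -subject -in example_com.csr
```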
</div>]]></content:encoded></item><item><title><![CDATA[Transport security for HTTP/2 protocol with Nginx]]></title><description><![CDATA[<div class="kg-card-markdown"><p>With Google sunsetting the SPDY protocol, and broad support for HTTP/2 shipping with most modern browsers, I began investigating moving our SPDY support over to HTTP/2.</p>
<p>Nginx recently released official support for HTTP/2 with the mainline repository version 1.9.5. While upgrading to the new release</p></div>]]></description><link>https://garthkerr.com/transport-security-for-http2-protocol-with-nginx/</link><guid isPermaLink="false">59cfbb493640e718213fbab4</guid><category><![CDATA[Nginx]]></category><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Sun, 13 Dec 2015 01:08:56 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>With Google sunsetting the SPDY protocol, and broad support for HTTP/2 shipping with most modern browsers, I began investigating moving our SPDY support over to HTTP/2.</p>
<p>Nginx recently released official support for HTTP/2 with the mainline repository version 1.9.5. While upgrading to the new release and enabling the <code>http2</code> module, we ran into an issue with transport security, as reported by Google Chrome:</p>
<pre><code class="language-bash"># google chrome client error
ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY
# or more generically 
INADEQUATE_SECURITY (0xc)
</code></pre>
<p>According to the HTTP/2 specification, <strong>over TLS 1.2</strong> implementations SHOULD NOT use any of the cipher suites listed in the cipher suite black list, <a href="https://http2.github.io/http2-spec/#BadCipherSuites" target="_blank">found here.</a></p>
<p>The following settings (with changes to the <code>ssl_ciphers</code> directive) addressed the transport security issue reported by the browser.</p>
<pre><code class="language-apacheconf">server {

    listen 9443 ssl http2 proxy_protocol;
    port_in_redirect off;
    real_ip_header proxy_protocol;
    set_real_ip_from 10.0.0.0/16;
    server_name acme.com www.acme.com;

    ssl on;
    ssl_certificate /path/to/full-chain.pem;
    ssl_certificate_key /path/to/private-key.pem;
    
    # disable unsupported ciphers
    ssl_ciphers AESGCM:HIGH:!aNULL:!MD5;
    
    # ssl optimizations
    ssl_session_cache shared:SSL:30m;
    ssl_session_timeout 30m;
    add_header Strict-Transport-Security &quot;max-age=31536000&quot;;

}
</code></pre>
<p>In the example, we are also opting into the <strong>HTTP Strict Transport Security (HSTS)</strong> enhancement to prevent client communications from being sent over standard HTTP. The <code>proxy_protocol</code> configuration in use by the load balancer (ELB) continues to work as expected without any changes. You can read more about using proxy protocol with Nginx <a href="http://garthkerr.com/multiple-ssl-domains-on-elb-with-nginx/">here.</a></p>
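<p>If you're curious which suites a given <code>ssl_ciphers</code> string actually enables on your build of OpenSSL, you can expand it locally (the exact list varies by OpenSSL version):</p>

```shell
# list the cipher suites the string expands to, with protocol details
openssl ciphers -v 'AESGCM:HIGH:!aNULL:!MD5' | head
```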
</div>]]></content:encoded></item><item><title><![CDATA[Meaningful hostnames with Ansible]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Like anyone who spends a good deal of time in a terminal window, switching between machine instances is fairly commonplace. I keep a persistent tmux session open to manage a handful of connections.</p>
<p>Knowing <em>which</em> machine you're currently using is obviously imperative, but can be a little challenging in highly dynamic</p></div>]]></description><link>https://garthkerr.com/meaningful-hostnames-with-ansible/</link><guid isPermaLink="false">59cfbb493640e718213fbab3</guid><category><![CDATA[Ansible]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Sun, 08 Nov 2015 21:34:53 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Like anyone who spends a good deal of time in a terminal window, switching between machine instances is fairly commonplace. I keep a persistent tmux session open to manage a handful of connections.</p>
<p>Knowing <em>which</em> machine you're currently using is obviously imperative, but can be a little challenging in highly dynamic environments like AWS.</p>
<p>I have implemented <a href="http://www.ansible.com/" target="_blank">Ansible</a> for managing configuration on a large number of AWS instances. The following script applies a dynamic hostname to the instances based on a predefined prefix.</p>
<p>The playbook should look something like this:</p>
<pre><code class="language-yaml">- hosts: dev
  remote_user: user
  sudo: yes
  vars:
    env: dev
    hostname: dev-web-*
  roles:
    - common
    - supervisor
</code></pre>
<p>We will dynamically replace the asterisk <code>*</code> character with the tail of the host IP address. For our environment, hosts are commonly identified by their roles and the final octet of the address within the current subnet.</p>
<p>We will add the following tasks to the common role:</p>
<pre><code class="language-yaml"># set the friendly hostname from the playbook variable
- name: set hostname
  hostname: name={{ hostname.replace('*', ansible_all_ipv4_addresses[0].split('.')[3]) }}

# add the new hostname to the hosts file
- name: add to hosts
  lineinfile: &gt;
    dest=/etc/hosts
    regexp='^127\.0\.0\.1'
    line='127.0.0.1 localhost {{ hostname.replace(&quot;*&quot;, ansible_all_ipv4_addresses[0].split(&quot;.&quot;)[3]) }}'
    state=present
</code></pre>
<p>The hostname module will work for just about any distribution you might be using (in my case, Ubuntu). Once you have run the playbook, your hostname should look something like:</p>
<pre><code class="language-bash">ubuntu@dev-web-62:~$
</code></pre>
<p>The new hostname <code>dev-web-62</code> is much more meaningful than the default hostname provided by EC2. You can use this same pattern to configure DNS for each instance, and update the tag <em>Name</em> for better readability in the AWS web console.</p>
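<p>As a sketch of that last idea, the <em>Name</em> tag could be kept in sync from the same play. This assumes the <code>ec2_tag</code> module, the instance id fact gathered by <code>ec2_facts</code>, and a hard-coded region, so adjust for your environment:</p>

```yaml
# illustrative: mirror the generated hostname into the EC2 "Name" tag
- ec2_tag:
    region: us-east-1
    resource: "{{ ansible_ec2_instance_id }}"
    tags:
      Name: "{{ hostname.replace('*', ansible_all_ipv4_addresses[0].split('.')[3]) }}"
```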
</div>]]></content:encoded></item><item><title><![CDATA[Globally install Composer on OS X 10.11 El Capitan]]></title><description><![CDATA[<div class="kg-card-markdown"><p><em><strong>Update:</strong> this also works as expected with macOS Sierra.</em></p>
<p>The El Capitan release of OS X introduces a more strict security model around the concept of root level access to the underpinnings of the operating system. For a typical user, this is great news for avoiding malware, etc.</p>
<p>As a</p></div>]]></description><link>https://garthkerr.com/composer-install-on-os-x-10-11-el-capitan/</link><guid isPermaLink="false">59cfbb493640e718213fbab2</guid><category><![CDATA[PHP]]></category><category><![CDATA[Composer]]></category><category><![CDATA[OS X]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Tue, 06 Oct 2015 16:58:01 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><em><strong>Update:</strong> this also works as expected with macOS Sierra.</em></p>
<p>The El Capitan release of OS X introduces a more strict security model around the concept of root level access to the underpinnings of the operating system. For a typical user, this is great news for avoiding malware, etc.</p>
<p>As a developer, this requires a little more thoughtfulness when trying to behave as root. Even with the <code>sudo</code> command, there are protected locations and processes that will no longer work as expected. To be specific, <code>/usr/bin</code> for this example.</p>
<p>This command will <strong>NOT</strong> work in OS X 10.11:</p>
<pre><code class="language-bash">curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/bin --filename=composer
</code></pre>
<p>This yields an error, unable to write to the global path:</p>
<pre><code class="language-bash">Downloading...
Could not create file /usr/bin/composer: fopen(/usr/bin/composer): failed to open stream: Operation not permitted
Download failed: fopen(/usr/bin/composer): failed to open stream: Operation not permitted
fwrite() expects parameter 1 to be resource, boolean given
</code></pre>
<p>Instead, let's write to the <code>/usr/local/bin</code> path for the user:</p>
<pre><code class="language-bash">curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
</code></pre>
<p>Now we can access the <code>composer</code> command globally, just like before.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Alias a version for Composer]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I previously published instructions on <a href="http://garthkerr.com/using-a-specific-git-commit-hash-for-composer/" target="_blank">using a specific commit hash for Composer</a>. This is a quick and useful way for referencing the most recent version of our work during the development process.</p>
<p>In some cases, specifying a commit hash as the current version may compromise the dependency tree, causing a</p></div>]]></description><link>https://garthkerr.com/alias-a-version-for-composer/</link><guid isPermaLink="false">59cfbb493640e718213fbab1</guid><category><![CDATA[PHP]]></category><category><![CDATA[Composer]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Wed, 04 Mar 2015 19:43:37 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I previously published instructions on <a href="http://garthkerr.com/using-a-specific-git-commit-hash-for-composer/" target="_blank">using a specific commit hash for Composer</a>. This is a quick and useful way for referencing the most recent version of our work during the development process.</p>
<p>In some cases, specifying a commit hash as the current version may compromise the dependency tree, causing a conflict like:</p>
<pre><code class="language-bash">Your requirements could not be resolved to an installable set of packages.
</code></pre>
<p>This can be caused by a conflicting version with a child dependency:</p>
<pre><code class="language-javascript">acme/acme-app @ 1.0.0
requires:
 -- guzzlehttp/guzzle @ 5.0.3
 -- acme/acme-service @ 1.0.0
    requires:
    -- guzzlehttp/guzzle @ 5.0.1
</code></pre>
<p>In this case, we have another dependency, <code>acme/acme-service</code>, that requires the <code>guzzlehttp/guzzle</code> package at a different version than the one required by the repository we are working in. Let's say we need <strong>5.0.3</strong> in the application because it fixes an issue we are currently working on.</p>
<p>As long as the API for the newer version is compatible with the older one (usually the case for most minor/patch changes), we can satisfy the <code>acme/acme-service</code> dependency by telling the package to use version <strong>5.0.3</strong> <em>as</em> <strong>5.0.1</strong>:</p>
<pre><code class="language-javascript">{
    &quot;require&quot;: {
        &quot;guzzlehttp/guzzle&quot;: &quot;5.0.3 as 5.0.1&quot;,
        &quot;acme/acme-service&quot;: &quot;1.0.0&quot;
    }
}
</code></pre>
<p>We can also reference a commit hash as an alias:</p>
<pre><code class="language-javascript">{
    &quot;require&quot;: {
        &quot;guzzlehttp/guzzle&quot;: &quot;dev-master#d61a4539f57d65620785604b8b380890225c518e as 5.0.1&quot;,
        &quot;acme/acme-service&quot;: &quot;1.0.0&quot;
    }
}
</code></pre>
<p>You can learn more about Composer Aliases <a href="https://getcomposer.org/doc/articles/aliases.md" target="_blank">here.</a></p>
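<p>One way to confirm an alias took effect after <code>composer update</code> is to check the resolved version recorded in <code>composer.lock</code>. A rough, self-contained sketch with standard tools (the lock file below is a fabricated minimal example; in a real project, <code>composer show guzzlehttp/guzzle</code> is the proper way to ask):</p>
<pre><code class="language-bash"># fabricate a minimal composer.lock fragment for illustration
printf '{ "packages": [ { "name": "guzzlehttp/guzzle", "version": "5.0.3" } ] }\n' > /tmp/composer.lock.demo

# crudely extract the resolved version
grep -o '5\.[0-9.]*' /tmp/composer.lock.demo   # prints: 5.0.3
</code></pre>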
<p>I do not generally use this type of version aliasing in a production scenario if it can be avoided. There are some obvious pitfalls with testing and general confusion. However, it can be useful for quickly resolving a conflict during the development process.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Multiple SSL domains on AWS ELB with Nginx]]></title><description><![CDATA[<div class="kg-card-markdown"><p><em>Is it possible to serve multiple domains (each with a unique SSL certificate) via HTTPS behind a <strong>single</strong> load balancer on AWS?</em></p>
<p><strong>Yes, you can: with TCP and Proxy Protocol.</strong></p>
<p>Proxy Protocol allows you to safely and transparently forward TCP (layer 4) requests while attaching upstream client address information. More</p></div>]]></description><link>https://garthkerr.com/multiple-ssl-domains-on-elb-with-nginx/</link><guid isPermaLink="false">59cfbb493640e718213fbaaf</guid><category><![CDATA[AWS]]></category><category><![CDATA[Nginx]]></category><category><![CDATA[SSL]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Tue, 17 Feb 2015 17:18:49 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><em>Is it possible to serve multiple domains (each with a unique SSL certificate) via HTTPS behind a <strong>single</strong> load balancer on AWS?</em></p>
<p><strong>Yes, you can: with TCP and Proxy Protocol.</strong></p>
<p>Proxy Protocol allows you to safely and transparently forward TCP (layer 4) requests while attaching upstream client address information. More details about how it all actually works are available in the HAProxy <a href="http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt" target="_blank">abstract</a>. As of July 2013, the Amazon Web Services ELB (Elastic Load Balancer) supports distributing TCP connections with Proxy Protocol.</p>
<p>In a typical ELB/HTTPS setup, the SSL connection is negotiated by the load balancer and HTTP traffic is forwarded on to the web server. If you are only hosting one top-level domain, this setup works just fine. For our use case, we need to handle HTTPS connections for multiple domains on a single application stack. This means multiple certificates, which a single ELB instance does not support.</p>
<p>For our setup, SSL negotiation will be done by nginx on the web server, rather than by the ELB. With nginx, leveraging multiple server blocks, each with its own SSL certificate, is pretty straightforward. Here is what you will need:</p>
<ul>
<li>Nginx 1.6.2</li>
<li>Ubuntu 14.04 LTS</li>
<li>AWS CLI</li>
</ul>
<h4 id="installprerequisites">Install Prerequisites</h4>
<p>The nginx PPA includes the required modules, so there is no need to compile a build. Feel free to adjust to your own requirements.</p>
<pre><code class="language-bash"># install nginx
sudo add-apt-repository -y ppa:nginx/stable
sudo apt-get update
sudo apt-get -y install nginx=1.6.*
</code></pre>
<p>The AWS CLI will require credentials provided by your account.</p>
<pre><code class="language-bash"># install aws cli
sudo apt-get install awscli
aws configure
</code></pre>
<h4 id="createandconfiguretheloadbalancer">Create and Configure the Load Balancer</h4>
<p>If you are also handling standard requests over port 80 (as is likely), you do not <em>need</em> to enable Proxy Protocol for that non-secure traffic. The HTTP traffic can remain unaffected while you add HTTPS to an existing ELB.</p>
<p>First, we need an ELB instance. If you do not already have a load balancer, you can create one using the AWS console, or by following these <a href="http://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer.html" target="_blank">instructions</a> for AWS CLI. In the example, we use <strong>acme-balancer</strong> as the ELB name and we are forwarding to backend port 9443.</p>
<p>The listener port should be created using the <strong>TCP</strong> protocol for both the Load Balancer Protocol and the Instance Protocol. The application layer protocol (HTTPS) is not handled until we reach the nginx instance. In most cases, the public port should be the standard 443.</p>
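<p>The listener itself can also be created from the CLI. A sketch using the same hypothetical names and ports as above (verify the flags against your installed CLI version):</p>
<pre><code class="language-bash"># forward public TCP 443 to backend TCP 9443
aws elb create-load-balancer-listeners \
  --load-balancer-name acme-balancer \
  --listeners Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=9443
</code></pre>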
<pre><code class="language-bash"># create proxy protocol policy
aws elb create-load-balancer-policy \
  --load-balancer-name acme-balancer \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=True

# add policy to elb
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name acme-balancer \
  --instance-port 9443 \
  --policy-names EnableProxyProtocol

# results
aws elb describe-load-balancers --load-balancer-name acme-balancer
</code></pre>
<h4 id="configurenginxwithproxyprotocol">Configure Nginx with Proxy Protocol</h4>
<p>If you have multiple server blocks running on the same port (virtual hosts), note that adding <code>proxy_protocol</code> to a <code>listen</code> directive in your nginx configuration enables Proxy Protocol handling for ALL traffic on that port, not just the traffic matched by that particular server block.</p>
<pre><code class="language-apacheconf"># block for proxy traffic
server {

    # port elb is forwarding ssl traffic to
    listen 9443 ssl proxy_protocol;
    
    # sets the proper client ip
    real_ip_header proxy_protocol;
    
    # aws vpc subnet ip range
    set_real_ip_from 10.0.0.0/16;
    
    server_name acme.com www.acme.com;
    
    ssl on;
    ssl_certificate /etc/ssl/acme/acme.com.crt;
    ssl_certificate_key /etc/ssl/acme/acme.com.key;

}

# block for direct traffic
server {

    listen 443 ssl;
    
    server_name acme.com www.acme.com;
    
    ssl on;
    ssl_certificate /etc/ssl/acme/acme.com.crt;
    ssl_certificate_key /etc/ssl/acme/acme.com.key;

}
</code></pre>
<p>And that's it. If the real IP settings are working correctly, you should not need to set up a custom log format. I have chosen to forward Proxy Protocol requests to a non-standard port (9443 in the example). This gives me access to the standard port outside of the load balancer for troubleshooting and debugging.</p>
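<p>If you do want the proxied address recorded explicitly, nginx exposes it as the <code>$proxy_protocol_addr</code> variable. A sketch, assuming the proxy server block from above (names are illustrative):</p>
<pre><code class="language-apacheconf"># in the http block: a log format that records the Proxy Protocol client address
log_format elb '$proxy_protocol_addr - [$time_local] "$request" $status';

# in the proxy server block (listen 9443 ssl proxy_protocol;)
access_log /var/log/nginx/elb-access.log elb;
</code></pre>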
<p>Creating separate server blocks for direct and proxied traffic is more verbose, but it has a few benefits. It avoids the need for conditional blocks down the road, and I find it easier for others to understand. Lastly, separate blocks let you maintain a distinct configuration for testing direct traffic to production nodes before they are added to the load balancer.</p>
<h4 id="puttingitalltogether">Putting It All Together</h4>
<p>This example is obviously a summary of the required changes and may vary greatly depending on your AWS setup, with respect to subnets, ports, security groups, etc. If you have questions or suggestions, please feel free to <a href="http://garthkerr.com">get in touch</a> via the appropriate social outlet.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Logging client-side JavaScript errors]]></title><description><![CDATA[<div class="kg-card-markdown"><p>JavaScript interaction is not usually optional for most modern websites (try accessing some of your favorite sites with JavaScript disabled). Given the matrix of browsers, devices, software versions and extensions, exhaustive testing can be challenging.</p>
<p>Capturing and logging JavaScript errors that occur on the client can be just as</p></div>]]></description><link>https://garthkerr.com/logging-client-side-javascript-errors/</link><guid isPermaLink="false">59cfbb493640e718213fbaad</guid><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Garth Kerr]]></dc:creator><pubDate>Tue, 11 Nov 2014 18:50:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>JavaScript interaction is not usually optional for most modern websites (try accessing some of your favorite sites with JavaScript disabled). Given the matrix of browsers, devices, software versions and extensions, exhaustive testing can be challenging.</p>
<p>Capturing and logging JavaScript errors that occur on the client can be just as important as inspecting application and server logs.</p>
<p>The following example assumes you have access to jQuery, and adds an error event handler to the current window. In the case of an error event, we dispatch a POST request with the pertinent details.</p>
<pre><code class="language-javascript">(function() {

    var logError = function( e )
    {
        if (window.jQuery)
        {
            var data = {
                f : e.filename,
                l : e.lineno,
                m : e.message
            };
            $.post('/log/error', data);
        }
    };

    if (window.addEventListener)
    {
        window.addEventListener('error', logError, false);
    }
    else
    {
        window.attachEvent('onerror', logError);
    }

})();
</code></pre>
<p>Depending on your application, additional information about the error can be logged server-side: referring URL, user agent, timestamp and possibly session information. Capturing these errors should give you better insight into where visitors may be experiencing an issue.</p>
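<p>As a sketch of what such a server-side record might look like, here is a small Node-style helper (the field names are hypothetical; the header and timestamp values would come from the incoming POST request):</p>
<pre><code class="language-javascript">// hypothetical: merge the POSTed error fields with request metadata
function buildLogRecord(body, headers, now) {
    return {
        file: body.f,
        line: body.l,
        message: body.m,
        referer: headers['referer'] || null,
        userAgent: headers['user-agent'] || null,
        timestamp: now.toISOString()
    };
}
</code></pre>
<p>Persisting a flat record like this keeps the client payload small while still capturing the context needed to reproduce an issue.</p>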
</div>]]></content:encoded></item></channel></rss>