<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Her Dark Materials]]></title><description><![CDATA[A pseudorandom collection of miscellaneous projects and information.]]></description><link>https://agirlhasnona.me/</link><image><url>http://agirlhasnona.me/favicon.png</url><title>Her Dark Materials</title><link>https://agirlhasnona.me/</link></image><generator>Ghost 1.25</generator><lastBuildDate>Thu, 02 Apr 2026 12:44:17 GMT</lastBuildDate><atom:link href="https://agirlhasnona.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Cost of Being Alive and $60k Salaries]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Something that I keep wanting to address, but don't have the words to dissect, is just how much Billions of wealth is in US Dollars (USD). Yes, the B was intentionally capitalized - because it's Big. I can't bring myself to make a Bigly joke now that the Chief MAGAt</p></div>]]></description><link>https://agirlhasnona.me/calculating-60k-salaries/</link><guid isPermaLink="false">608ec467ce0d01071ae4bc05</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Sun, 02 May 2021 16:33:10 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Something that I keep wanting to address, but don't have the words to dissect, is just how much Billions of wealth is in US Dollars (USD). Yes, the B was intentionally capitalized - because it's Big. I can't bring myself to make a Bigly joke now that the Chief MAGAt is no longer in office, just know that the intention was there.</p>
<p>Onto the numbers. I encountered a tweet, no surprises there:</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Elon Musk’s wealth:<br>2020: $24.6 billion<br>2021: $151 billion<br><br>Jeff Bezos’ wealth: <br>2020: $113 billion<br>2021: $177 billion<br><br>Mark Zuckerberg’s wealth:<br>2020: $54.7 billion<br>2021: $97 billion<br><br>Bill Gates’ wealth:<br>2020: $98 billion<br>2021: $124 billion<br><br>Tax the damn rich.</p>&mdash; Public Citizen (@Public_Citizen) <a href="https://twitter.com/Public_Citizen/status/1388543246943535111?ref_src=twsrc%5Etfw">May 1, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>Doing a quick verification of the numbers:</p>
<p>((( steps here )))</p>
<p>Looks like that is correct. A quick tangent on how much a billion means, and how much larger a billion is than a million.</p>
<p>((( stuff here )))</p>
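<p>One way I like to make the gap concrete - my own back-of-the-envelope comparison, not a figure from any of the linked sources - is to count seconds:</p>
<pre><code class="language-python"># How big is a billion, really? Compare a million seconds to a billion seconds.
SECONDS_PER_DAY = 24 * 60 * 60             # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR

print(f"A million seconds is about {million_in_days:.1f} days")    # ~11.6 days
print(f"A billion seconds is about {billion_in_years:.1f} years")  # ~31.7 years
</code></pre>
<p>A million seconds is a week and a half. A billion seconds is a generation.</p>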
<p>Now back to our originally scheduled programming. In the tweet we have some of the most lucrative of the lucrative billionaires. I was reading some articles to help me come up with a truer &quot;cost of living&quot; calculator - as in what it actually costs to be alive with medical debt, student loan debt, etc. This post focuses on the United States and well. Yeah. We don't take care of our own. We trick our own into thinking we take care of our own, but anyway.</p>
<h3 id="studentloandebt">Student Loan Debt</h3>
<p>For student loan debt, the <a href="https://www.valuepenguin.com/average-student-loan-debt">average debt is about $32,000</a>. It's worth mentioning that this can vary pretty significantly based on a variety of factors. Before I get too far from this topic, there are also two scary graphics about student loan debt. The first is how much student loan debt takes up as a percentage of income, based on average income in a student's first year post-graduation:</p>
<img src="http://res.cloudinary.com/value-penguin/image/upload/c_limit,dpr_1.0,f_auto,h_1600,q_auto,w_1600/v1/Student_Loan_Debt_Gender_b5dbgv">
<p>And while you're taking a deep breath about that, also take a look at this:</p>
<img src="http://res.cloudinary.com/value-penguin/image/upload/c_limit,dpr_1.0,f_auto,h_1600,q_auto,w_1600/v1/student-loan-balance_wnt4cf">
<p>So while the average debt may be ~$32k, there's still <strong>~7.8 million</strong> students with debt <strong>greater than $50,000</strong>. According to <a href="https://www.statista.com/statistics/241488/population-of-the-us-by-sex-and-age/">Statista for the same year (2019)</a>, there are about 22 million 20-24 year olds. (Study done with binary gender, no breakdowns on race / etc., only the age brackets.) So with ~8 million who have debt &gt;$50k, that's still about <strong>one third</strong>. One third of students with debt greater than $50k.</p>
<p>Taking another deep breath.</p>
<p>Due to the disparity here, I feel like running numbers for both $32k and $100k, to accommodate the sheer difference between the &quot;bottom&quot; two thirds and the top in terms of debt. Assuming a 4.66% interest rate on student loans, which is another sort of average, what these humans are looking at for monthly student loan payments looks a little like this:</p>
<p>$32k, 10 year loan: $334<br>
$32k, 20 year loan: $205<br>
$100k, 10 year loan: $1044<br>
$100k, 20 year loan: $641</p>
<p>Calculated with this <a href="https://smartasset.com/student-loans/student-loan-calculator#rGFB68qj3W">handy calculator right here</a>.</p>
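<p>If that calculator link ever rots, the numbers above fall straight out of the standard amortization formula. A minimal sketch, assuming monthly compounding on that 4.66% rate:</p>
<pre><code class="language-python">def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for principal in (32_000, 100_000):
    for years in (10, 20):
        payment = monthly_payment(principal, 0.0466, years)
        print(f"${principal:,}, {years} year loan: ${payment:,.0f}/month")
</code></pre>
<p>Note that doubling the term doesn't halve the payment - the $32k, 20 year payment is $205, not $167 - because the interest has twice as long to accrue.</p>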
<h3 id="medicaldebt">Medical Debt</h3>
<p>There is a lot of variance for medical debt, depending on whether or not you're &quot;relatively healthy&quot;, chronically ill, and so on. According to CNBC, adults spend an average of $5,000 per year on medical debt. Due to what I mentioned about varying health concerns, I won't be scaling this number at all for &quot;younger vs older&quot;.</p>
<p>It's also worth mentioning that <em>some</em>, but not all, jobs come with a healthcare benefit where you can estimate your out-of-pocket medical costs and take them out of your income pre-tax. This can end up helping you significantly - <em>if</em> you can successfully estimate this number. I've been burned more than once on this: calculating what I thought my yearly cost was going to be over the course of the year, then changing jobs and 1) having higher or lower out-of-pocket fees at a new job and/or 2) losing the pre-tax money. No, you don't &quot;get it back&quot; at a taxed rate if you miscalculate. It's just a loss.</p>
<p>So much greatness here in America, you just wanna implode, right?</p>
<p>(For those reading this that are unfamiliar, take a look at Flex Savings Accounts and Health Savings Accounts, typically referred to as FSA and HSA respectively.)</p>
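<p>To make the gamble concrete - the contribution, marginal tax rate, and spend below are made-up illustration numbers, and this ignores the grace periods and carryover allowances some plans offer:</p>
<pre><code class="language-python">def fsa_net_benefit(contribution, marginal_rate, actual_spend):
    """Tax saved on the pre-tax contribution, minus anything forfeited.

    FSA funds are broadly use-it-or-lose-it: whatever you contribute
    but don't spend is simply gone (the 'just a loss' above).
    """
    tax_saved = contribution * marginal_rate
    forfeited = max(contribution - actual_spend, 0)
    return tax_saved - forfeited

print(fsa_net_benefit(2_000, 0.22, 2_000))  # guessed right: about +440
print(fsa_net_benefit(2_000, 0.22, 1_500))  # guessed high: about -60
</code></pre>
<p>Guess right and you pocket the tax savings; guess even a little high and the forfeited balance wipes them out entirely.</p>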
<p>As an aside before I get into the next mini-section: I didn't encounter my first FSA or HSA until I was about 5-7 years into adult-jobs, which is something to consider too.</p>
<h3 id="savingsforfuture">Savings for Future</h3>
<p>Cue boomer joke about why this section even exists - just stop eating avocados and you'll be able to afford a house.</p>
<p>((( Now actually write up what the savings recommendations are for here. )))</p>
<h3 id="thebasics">&quot;The Basics&quot;</h3>
<p>((( rent / mortgage / utilities / computer / food )))</p>
<p>((( Less a justification and more a reminder that we <em>need</em> phones and computers now, unless you plan on doing a phone screen without a phone and responding to your potential employer emails with ... not a computer )))</p>
<h3 id="atrulylivingwage">A Truly Living Wage</h3>
<p>A living wage should support your ability to <em>be alive</em>. This means food, water, housing, medical, education, and so on.</p>
<p>((( Based on the above numbers, that number is... )))</p>
<h3 id="selfsustainingincome">Self-Sustaining Income</h3>
<p>This part is for the billionaires. Well, not just for them but also for them.</p>
<p>((( Calculating a minimum number for self sustaining wealth, even for a lucrative life style )))</p>
<p>((( Subtracting everything above that )))</p>
<p>((( Here is the part where I actually do some math, and will roll it into the intro so people can get the quick answer before they get to the &quot;how&quot; )))</p>
</div>]]></content:encoded></item><item><title><![CDATA[(Un)Resolved Issues with the IRS and User Hostile Government Processes]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Top level note: while exhausting, this is nothing compared to the injustices and level of violence that some citizens of our country are exposed to. So while you read, and hopefully enjoy, this process malfunction piece please keep in mind it's not the biggest problem we're facing.</p>
<p>And now to</p></div>]]></description><link>https://agirlhasnona.me/un-resolved-issues-with-the-irs/</link><guid isPermaLink="false">5f32d459ce0d01071ae4bbbf</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Tue, 11 Aug 2020 18:21:04 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2020/08/cristian-palmer-XexawgzYOBc-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2020/08/cristian-palmer-XexawgzYOBc-unsplash.jpg" alt="(Un)Resolved Issues with the IRS and User Hostile Government Processes"><p>Top level note: while exhausting, this is nothing compared to the injustices and level of violence that some citizens of our country are exposed to. So while you read, and hopefully enjoy, this process malfunction piece please keep in mind it's not the biggest problem we're facing.</p>
<p>And now to the post.</p>
<p>I recently had an issue where I needed to call the IRS, multiple times, about the status of both my 2018 and 2019 tax returns. It was ...  a trainwreck, to put it mildly. Yes, there is a pandemic and yes, their offices have been closed for a long time - as the representative on the line reminded me. But these issues aren't confined to the pandemic.</p>
<p>Allow me to begin.</p>
<p>It started when I tried to e-file my 2019 taxes, same as I have been doing for years. Filing was rejected with an error code I couldn't easily look up. I contacted my e-filing provider, which took a while as this was after everything closed down here in the States. In the interim, I tried just filing again - and was unsurprisingly met with the same error code.</p>
<p>I finally heard back from my e-filing provider and apparently the error was that I didn't enter &quot;my PIN&quot;. I don't have a PIN, &quot;you should have a PIN, check your mail&quot;. I don't have a PIN, &quot;have you moved recently?&quot; Not for <em>years</em>. &quot;Ok, paper file and call the IRS.&quot;</p>
<p>Ok.</p>
<p>Calling the IRS is no joke. Especially when everything was closed. I tried calling a local office when I failed to get a human on the main line, but oh right that's closed. (It should be, mind you, people shouldn't be exposed to COVID because I should have a PIN that I don't have.) Then I Duck for &quot;how to reach a human at the IRS&quot; and find some instructions. Doing the straightforward method about &quot;questions about my tax return&quot; resulted in me verifying my SSN, amount of my 2019 return, etc., only to have the phone tree terminate with a &quot;we cannot access your return at this time&quot; message.</p>
<p>Ok. (Again.)</p>
<p>So I manage to reach a human, and was bounced over to Taxpayer Protection (TPP) as apparently there was a fraud alert on my account. Great. I have an abusive ex and lots of accounts locked down as a result, so my anxiety spikes wondering if she could have done something.</p>
<p>But who needs an abusive ex when you have the IRS.</p>
<p>I kid. Kind of.</p>
<p>I finally reach another human at TPP. The summary of the next 20ish minutes is:</p>
<p>Rep: You should have a PIN.<br>
Me: I don't have a PIN.<br>
Rep: Did you check your mail?<br>
Me: ... yes.<br>
Rep: Did you move recently?<br>
Me: ... no.<br>
Rep: I'll check your account again, please hold for 5-7 minutes.<br>
Me: Ok.<br>
Rep: (Returns) Oh. We flagged your account but never sent you a PIN. Sorry about that. Did you paper file?<br>
Me: I did.<br>
Rep: Great, that resolves that. But there's an issue with your 2018 return.<br>
Me: ... what? E-File Provider says that the return was filed successfully, etc.?</p>
<p>Then there was a long session of me Verifying I Am Who I Am, as you might intuit when you're speaking to TPP. What's your mother's full maiden name, what's your father's full name, what's your date of birth, city of birth, and so on.</p>
<p>Ok. (I keep saying it's ok for my own emotional state. This will not be the last Ok in this post, I promise.)</p>
<p>Then they tell me they mailed me something that I didn't answer, describing it as &quot;it might have looked like spam but it wasn't spam&quot;.</p>
<p>Great.</p>
<p>When was that sent out? No exact date. But last year. Sometime.</p>
<p>Ok.</p>
<p>There's confusion about whether or not I received my 2018 refund / return, which I'd think I'd notice, so I leave to check paperwork and get off the multihour phone call, looking for anything on the federal (not state) 2018 return.</p>
<p>This isn't even the end. It's only Mayish still.</p>
<p>The lead time for a paper-filed return is about 45 days. So after hearing nothing on the 2019 return, I call again in July, but before the new tax deadline, to try to verify that my paper return was at least received if not processed. The rep at that point seemed to indicate that it was received, but not processed. (Why I phrased it that way will become clear in a moment.)</p>
<p>I wait another month. At this point I'm due for a change of address, for the first time in years, and need to make sure that the &quot;totally should have arrived by now&quot; returns will go to the correct place.</p>
<p>The joke's on me.</p>
<p>I call the IRS. I had forgotten, somehow, in the intervening weeks that the phone tree terminates in a hangup and no human when I select the options that make sense for questions about the 1040. I Duck, again, for reaching a human and follow the instructions. I spent 2 hours and 16 minutes on the phone. Most of that on hold.</p>
<p>Results?</p>
<p>They still don't know about my 2018 return. They're sending me a letter. They cannot change my address, but the letter <em>should</em> forward if it doesn't arrive before my move out date so I can address it (i.e. call again) at that time.</p>
<p>Great.</p>
<p>Despite the previous rep seeming to be able to verify that my paper return had at least been received <em>somewhere</em>, the current rep could not. That conversation can be summarized as &quot;your paper return would be stored at our processing facility, this is the call center, don't you know those are separate systems?&quot; As someone who works in systems, all I could think of was the metaphorically broken, stripped gears failing to interlock when someone on the phone can't retrieve that kind of data. I don't know which of the multiple filing centers across the US would even have my return so I could call it directly, and received returns cannot be scanned or otherwise logged into a system so that the people who answer the phones can actually answer questions. No wonder the phone tree terminates - that said, it should terminate before having me verify my SSN, return amount, and filing status, as apparently the answer can't even be known. Just put that &quot;Cannot know&quot; response at the top of the tree.</p>
<p>Anyway.</p>
<p>They cannot update my address in their system as it'll revert whenever they process my 2019 return. No they cannot override it. Federal documents, like paper returns, do not forward. I point out that means the new owner of my current residence will then have access to my refund. The rep said she &quot;knows&quot; but that's how it's set up. It cannot forward. I asked how I would know whether or not my return processed, if it won't forward. Apparently the answer is that when my tax return bounces back to them as undeliverable, that will kick off a letter to the <em>same address</em> the return just bounced back from. That letter <em>will</em> forward. At which point I can then call them, they can update my address, and then resend my tax return. Oh, and don't re-file electronically in the interim, that'll flag my account and create More Problems. Or as the rep said, &quot;I cannot advise you to do that at this time.&quot;</p>
<p>As someone who lives in processes and systems, this is just So Much for my brain.</p>
<p>First of all, there doesn't seem to be a way to input that paper filed returns are received. Can't print a barcode to put on the envelope, even if the envelope isn't opened, to be scanned on arrival at a facility? This should be randomized so someone can't (easily) get the tax information from the envelope, literally just a tracking number.</p>
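<p>To sketch what I mean by &quot;randomized&quot; - this is purely my illustration of the idea, not anything the IRS actually does:</p>
<pre><code class="language-python">import secrets

def tracking_id():
    """An opaque, unguessable tracking number: 32 hex characters of
    randomness, carrying no taxpayer information whatsoever."""
    return secrets.token_hex(16)

label = tracking_id()
print(label)  # the number means nothing on its own; it maps to the
              # envelope only inside the receiving facility's database
</code></pre>
<p>Because the identifier is random rather than derived from anything on the return, someone reading the envelope learns nothing; the facility just scans it on arrival and marks it received.</p>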
<p>To the rep's point that &quot;this is the call center, that's the processing facility&quot; - since the users need these systems to talk to each other, they <em>should</em>. Put some tracking and confirmation information in place and make sure it reports into a centralized source of truth <em>that is accessible to the people in the call center</em> so they can answer questions like &quot;did you even receive my return?&quot; In fact, track the &quot;most common questions&quot; that need to be answered by a human, i.e. those you cannot put a canned response in your phone tree for, and make sure that those working the call center are able to actually pull the data they need to answer those caller-specific questions.</p>
<p>Be able to change someone's address. People move all the time, and with the current global crisis that's likely only to become more often not less. Since we're required to verify our identity on the call, allow the rep to override the address with the caller's data with a flag that the override was requested and approved by the person who corresponds with the SSN/taxpayer ID.</p>
<p>Allow mail forwarding for all documents. When I had to temporarily live out of my home with the craziness of abusive ex, I set up a PO Box. I had initially tried to set up a mail forward to the PO Box, then set up a mail hold for anything that didn't forward. Unfortunately, as a different problem, you cannot have both a mail hold and a forward in place. Since there's currently no way to forward certain documents, then people should be able to place a hold on anything that cannot forward. Let me show my photo ID to pick up my mail: license, passport - whatever. Even better, if you can verify my identity for fraud protection use the same level of protection for forwarding sensitive documents. &quot;Regular mail forward&quot; can just be requested, or &quot;all mail forward&quot; requires a photo ID and whatever to verify the requestor is the person who is supposed to be receiving the documents. Because honestly, what's the point of a mail forward that doesn't forward?</p>
<p>Anyway, that's my rant. Thank you for reading this far, and maybe if we defund the police we can use a subset of the funds toward redesigning the process flows that are so horribly broken.</p>
<p><small>Header image: <span>Photo by <a href="https://unsplash.com/@cristianpalmer?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Cristian Palmer</a> on <a href="https://unsplash.com/s/photos/underwater?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></span></small></p>
</div>]]></content:encoded></item><item><title><![CDATA[I'm not dead!]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Not the greatest joke in the world, granted, but the reason I haven't been posting is because I've embarked on the great and wonderful journey called &quot;career switching&quot;. In my particular case, from &quot;infrastructure engineer&quot; (a.k.a. Site Reliability Engineer / SRE, Cloud Engineer, Ops, etc.</p></div>]]></description><link>https://agirlhasnona.me/im-not-dead/</link><guid isPermaLink="false">5d1a2f17487c31356191fba1</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Mon, 01 Jul 2019 16:30:19 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2019/07/HyperboleAndAHalf-NotDead.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2019/07/HyperboleAndAHalf-NotDead.png" alt="I'm not dead!"><p>Not the greatest joke in the world, granted, but the reason I haven't been posting is because I've embarked on the great and wonderful journey called &quot;career switching&quot;. In my particular case, from &quot;infrastructure engineer&quot; (a.k.a. Site Reliability Engineer / SRE, Cloud Engineer, Ops, etc.) to &quot;Developer Advocate&quot;. It's been a lovely, but busy, journey.</p>
<p>I've also picked up some Hebrew along the way:<br>
!שלום</p>
<p>(Note that regardless of the medium, blog or Slack, nothing handles changing carriage returns particularly well. Especially if you're trying to change direction on the same line, rather than a new one. Joy ;) )</p>
<p>In any event it's been a busy yearish. I'll have more posts to come, but to start I had a funny idea wherein I'll post a quick stub about a domain and/or handle idea that I had, that someone else had first and therefore is either in use or, regrettably, just poached. Here are some quick ones:</p>
<p>Twitter: @quintessence<br>
This is pretty straightforward, as it's just my forename, but what &quot;grinds my gears&quot; about this one is that Twitter, unlike Github, does <em>not</em> have a &quot;no parking&quot; policy. Someone made two tweets in 2007 (<em>twelve years ago</em>) and now the handle is indefinitely in use :(</p>
<p>Twitter: @blackpajamas<br>
I had an idea recently that I wanted to start learning how to cook Indian food like some of my snazzy international friends. This involves a lot of tasty and unforgiving spices (stains!), so I had the idea that I should cook in black clothes. In black pajamas, specifically. I was also writing up a blog post on Chef at the time, which means that I had the doublethink that it'd be funny to rename <em>everything</em> (even the blog ... 😅) to &quot;I Code in Black Pajamas&quot;. Additional fun to be had when working from home, because work pajamas <a href="https://legendarysuitjamas.com/"><em>are definitely a thing</em></a>.</p>
<p>URL: quintessence.dev<br>
When the new .dev TLD was about to be released I was waiting on the trigger. I was finally going to have something in <em>my</em> name. It spurred a new era of hope: if I was able to secure &quot;my&quot; Twitter handle, that would mean consistent branding across Twitter, Github, and domain. Trifecta! Alas I was always a poor gambler and didn't want to pay so I waited until it was opened up to the general public. The last cost tier was $125, so non-trivial even for a URL. Alas someone else managed to snag it, I blame bots, and now it is... parked.</p>
<p>Variations of &quot;my (fore)name as a domain&quot; that others put to use first:<br>
quintessence.is -&gt; Redirects to a Tumblr owned by a possibly current (?) Director of Content for Parachute Home, but hasn't been updated since 2015.<br>
quintessence.com -&gt; professional sound equipment. From the looks of it, exceeding even &quot;prosumer grade&quot;. Cool.<br>
quintessence.me -&gt; Parked<br>
quintessence.space -&gt; Parked, but &quot;for sale&quot; for $450 USD. 😅</p>
<p>Anyway, y'all get the picture on the domains. I might reveal some of the other iterations of my name that I tried for things over time, if they're funny.</p>
<p><small>Banner source: &quot;I'm not dead&quot; from the greatly missed <a href="https://hyperboleandahalf.blogspot.com/2010/04/im-definitely-not-dead.html">Hyperbole &amp; A Half</a> and some shameless backdrop by yours truly.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Today I played around with Github & iCloud]]></title><description><![CDATA[Will git repos function as expected when syncing across a shared iCloud folder? Spoiler: don't try this with a repo you care about.]]></description><link>https://agirlhasnona.me/github-icloud/</link><guid isPermaLink="false">5ad26469ad0a0306e0540252</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Sat, 14 Apr 2018 22:39:24 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2018/04/github-like-an-adult-banner.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2018/04/github-like-an-adult-banner.png" alt="Today I played around with Github & iCloud"><p>So as recently as only a few minutes ago, I had a minor panic attack. See, normally when I <code>cd</code> around in shell I <code>tab</code> to auto-complete like a fiend. Just now, I actually typed out a directory and was met with an error: it didn't exist. So then I tried to tab auto-complete. It still. didn't. exist. After suffering a minor panic attack I realized: I cloned the repo on my <em>other</em> laptop. No big. Since it was a repo for my own software projects, I should move the cloned repo to be a subdirectory of a shared iCloud directory and then iCloud magic will save me from my future self. Right?</p>
<p>Wrong.</p>
<pre><code class="language-bash">→  git co my-branch
fatal: unable to read tree ████████████████████████████████████████
</code></pre>
<p>Oh, no problem. I'll just pull the files back down.</p>
<p>Wrong. Again.</p>
<pre><code class="language-bash">→  git pull
error: refs/heads/gh-pages does not point to a valid object!
error: refs/remotes/origin/HEAD does not point to a valid object!
error: refs/remotes/origin/gh-pages does not point to a valid object!
error: refs/heads/gh-pages does not point to a valid object!
error: refs/remotes/origin/HEAD does not point to a valid object!
error: refs/remotes/origin/gh-pages does not point to a valid object!
error: refs/heads/gh-pages does not point to a valid object!
error: refs/remotes/origin/HEAD does not point to a valid object!
error: refs/remotes/origin/gh-pages does not point to a valid object!
error: Could not read ████████████████████████████████████████
error: Could not read ████████████████████████████████████████
error: refs/heads/gh-pages does not point to a valid object!
error: refs/remotes/origin/HEAD does not point to a valid object!
error: refs/remotes/origin/gh-pages does not point to a valid object!
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
fatal: bad object ████████████████████████████████████████
error: github.com:&lt;ORG&gt;/&lt;REPO&gt;.git did not send all necessary objects
</code></pre>
<h2 id="whatsgoingon">What's going on?</h2>
<p>In short? Optimization. See, iCloud storage has a &quot;neat&quot; optimization feature so that &quot;unnecessary&quot; files aren't downloaded locally. The problem with this, as you may have guessed already if you're here, is that <code>git</code> uses a <em>lot</em> of artifacts that get left out of the party and a lack of them causes issues like what I'm seeing above.</p>
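<p>In my poking around, files that iCloud has evicted locally show up as <code>.&lt;name&gt;.icloud</code> placeholder stubs. Assuming that naming convention holds on your machine too, here's a quick audit sketch to see how many <code>git</code> artifacts are stubs rather than real files (the repo path is a hypothetical placeholder):</p>
<pre><code class="language-python">import os

def find_evicted(repo_root):
    """List iCloud placeholder files (*.icloud) under .git -- each one is
    an object or ref that exists in the cloud but not on local disk."""
    evicted = []
    for dirpath, _dirnames, filenames in os.walk(os.path.join(repo_root, ".git")):
        for name in filenames:
            if name.endswith(".icloud"):
                evicted.append(os.path.join(dirpath, name))
    return evicted

missing = find_evicted(os.path.expanduser("~/iCloud/my-repo"))  # hypothetical path
print(f"{len(missing)} git artifacts are stubs, not real files")
</code></pre>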
<h2 id="whattodo">What to do?</h2>
<h3 id="option1disableicloudoptimization">Option 1: Disable iCloud Optimization</h3>
<p>After some research, there's no way to exempt a directory from Optimization like you can with Time Machine. Since optimization is either &quot;on&quot; or &quot;off&quot;, it's up to you whether you'd like to turn it off entirely just to accommodate <code>git</code>. As a one-off, if you don't want to turn off optimization, you can try to trigger a file to download by recursively trying to <code>touch</code> and/or <code>open</code> individual files. Caveat: an initial test of this method was not fruitful.</p>
<h3 id="option2justpushthechanges">Option 2: Just push the changes</h3>
<p>The alternative of course is to just use <code>git</code> as the tool is intended to be used and regularly push your changes. In terms of coolness I do wish I had found a relatively quick-and-easy way to force the iCloud sync situation to work, but in hindsight this would just encourage me to not push my changes as frequently as I so obviously should. And since I have access to my own repos, there's nothing stopping me from using a shell script to clone / pull changes from repos in my org every time I switch between my work and personal computers.</p>
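<p>That script can be tiny. A sketch - the org, repo names, and base path here are hypothetical placeholders, not my actual setup:</p>
<pre><code class="language-python">import os
import subprocess

# Hypothetical placeholders -- swap in your own org's repos and base path.
REPOS = {
    "my-blog": "git@github.com:example-org/my-blog.git",
    "dotfiles": "git@github.com:example-org/dotfiles.git",
}
BASE_DIR = os.path.expanduser("~/src")

def sync_command(path, url):
    # First time on this machine: clone. Otherwise: fast-forward pull.
    if os.path.isdir(os.path.join(path, ".git")):
        return ["git", "-C", path, "pull", "--ff-only"]
    return ["git", "clone", url, path]

def sync_all():
    for name, url in REPOS.items():
        path = os.path.join(BASE_DIR, name)
        subprocess.run(sync_command(path, url), check=True)

# Show what would run for one repo without actually touching the network:
print(sync_command(os.path.join(BASE_DIR, "my-blog"), REPOS["my-blog"]))
</code></pre>
<p>Run <code>sync_all()</code> whenever you switch machines; <code>--ff-only</code> keeps a stale local checkout from silently merging anything you'd rather inspect first.</p>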
<p><small>Banner sources: &quot;Like an adult&quot; image shamelessly stolen from the greatly missed <a href="http://hyperboleandahalf.blogspot.com/2010/06/this-is-why-ill-never-be-adult.html">Hyperbole &amp; a Half</a> comic. Also iCloud and Github logos.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a .sql file with `vim`]]></title><description><![CDATA[<div class="kg-card-markdown"><p>While playing around with MySQL today I had an idea that generating various <code>mysql</code> statements would actually be a good way to practice getting familiar with <code>vim</code>. I remember back when I was getting started that I actually found <code>vim</code> really daunting, almost so much so I nearly didn't get</p></div>]]></description><link>https://agirlhasnona.me/vim-mysql-exercise/</link><guid isPermaLink="false">5a5d48d74f8397281724eb11</guid><category><![CDATA[vim]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Fri, 09 Mar 2018 19:06:05 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>While playing around with MySQL today I had an idea that generating various <code>mysql</code> statements would actually be a good way to practice getting familiar with <code>vim</code>. I remember back when I was getting started that I actually found <code>vim</code> really daunting, almost so much so I nearly didn't get the habit off the ground, but then I found myself doing a ton of sysad-esque tasks and it was just so much easier to use once I got past the fear.</p>
<p>It's worth mentioning that there's a lot of new information for the beginning <code>vim</code> user in this post. The idea is not for you to remember it all, it's to give you an arsenal of common commands that you can make use of by practicing getting a file from practically nothing to a more useful form.</p>
<h2 id="prerequisities">Prerequisites</h2>
<p>To see if your computer already has <code>vim</code>, please open Terminal / iTerm / your terminal app of choice and enter the following command:</p>
<pre><code>vim mysql-vim-playground
</code></pre>
<p>This will create an empty file named <code>mysql-vim-playground</code> if you have <code>vim</code> installed. If you do not, and you see a command not recognized error, please install <code>vim</code> with your package manager (<code>yum</code> / <code>apt</code> for most Linux distros and <code>brew</code> for macOS).</p>
<p><strong>Windows Users</strong></p>
<p>If you are running a Windows machine, this exercise is for the Unix-based OS learner in you. If you are running Windows 10 you can try <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">installing the Windows Subsystem for Linux</a>. Alternatively, if you have access you can spin up a <a href="https://www.digitalocean.com/">DigitalOcean droplet</a> or <a href="https://www.linode.com/">Linode instance</a> with your Linux distro of choice.</p>
<p><strong>A quick note</strong></p>
<p>Although you <em>can</em> copy and paste commands directly from this blog post, to help with your muscle memory I do recommend actually typing them out each time.</p>
<h2 id="putyourrightfootin">Put your right foot in...</h2>
<p>Now that you have <code>vim</code> opened, let's quickly get familiar with the basic-basics. There are two &quot;modes&quot; called &quot;command mode&quot; and &quot;<strong>i</strong>nsert mode&quot; (i.e. &quot;edit mode&quot;). <code>vim</code> will open in command mode by default, so to enter insert mode you simply hit the <code>i</code> key. To return to command mode, hit the ESC key.</p>
<p>To get started, our users are Jayne Cobb, Inara Serra, George Parley, and Antimony Carver. You can copy and paste them, like so:</p>
<pre><code>Jayne Cobb, Inara Serra, George Parley, and Antimony Carver
</code></pre>
<p>When you go to paste them in, make sure you're in insert mode by hitting <code>i</code> first. Otherwise <code>vim</code> will interpret the pasted characters as commands and enter insert mode when it hits the <code>a</code> in <code>Jayne</code>: <code>a</code> is for <strong>a</strong>ppend, which enters insert mode with the cursor moved one position to the right.</p>
<h3 id="makesomenewlines">Make some newlines</h3>
<p>In <code>vim</code> you can make substitutions with <code>:s</code>. Please enter the following when in command mode:</p>
<pre><code>:s/, /\r/g
</code></pre>
<p>We've just swapped the <code>,</code> (that's a comma <em>and</em> a space) with a newline, which is signified by <code>\r</code> (for &quot;carriage <strong>r</strong>eturn&quot;). The <code>g</code> signifies a <strong>g</strong>lobal change on that line, so every <code>,</code> is replaced with a newline instead of only the first.</p>
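<p>As an aside (not part of the original exercise), the same substitution can be tried outside <code>vim</code> with GNU <code>sed</code>, which uses <code>\n</code> in the replacement where <code>vim</code>'s <code>:s</code> uses <code>\r</code>:</p>
<pre><code class="language-bash"># GNU sed analogue of vim's :s/, /\r/g (note \n instead of \r)
echo 'Jayne Cobb, Inara Serra, George Parley, and Antimony Carver' | sed 's/, /\n/g'
</code></pre>
<p>That prints each entry on its own line, just like the <code>vim</code> command does in the buffer.</p>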
<h3 id="peskywords">Pesky words</h3>
<p>Although this was mostly successful, on our last line we have <code>and Antimony Carver</code>. We want just the name, so in command mode navigate to the last line by hitting <code>shift+G</code>, then hit <code>dw</code>.</p>
<p><code>d</code> is the <strong>d</strong>elete character, and <strong>w</strong> means &quot;word&quot;. How <code>vim</code> breaks up words can be a topic unto itself when discussing code lines, but in general when you are using &quot;regular ol' written language&quot; a word is what you'd expect.</p>
<h3 id="makingusernamesoutofusers">Making usernames out of users</h3>
<p>Now to do some next-level deletes. The usernames for these people are going to be their first names and last initials. We're going to delete each line's last word except for its first character. The most brain-friendly way to do this is with a macro, unless you want to also make regex your friend on your first foray into <code>vim</code> 😉</p>
<p>Making a macro will allow you to perform the action on the first line, then repeat it on other lines. To get started recording, when in command mode enter <code>qq</code>. You'll see at the bottom of the screen you now have an <code>@q</code>. The first <code>q</code> is to record, the second <code>q</code> is the name of the register where it's being stored. What's a register? Think of it as &quot;Save to Slot Q&quot;.</p>
<p>On the first line do the following commands:</p>
<ul>
<li><code>w</code></li>
<li>right arrow (to go over one character)</li>
<li><code>dw</code></li>
<li>left arrow</li>
<li><code>x</code></li>
</ul>
<p>Now to stop recording hit <code>q</code> (yes, <em>again</em>). What we just did was go from the start of line one, move forward a word, move forward a character, delete to the end of the word, move back a character, and delete the character under the cursor with <code>x</code>.</p>
<p>Now to repeat.</p>
<pre><code>:2,$norm @q
</code></pre>
<p><code>2,$</code> tells <code>vim</code> to start on line <code>2</code> and repeat until the last line of the file, <code>$</code>. You'll encounter a lot of scenarios where <code>$</code> means &quot;end&quot;. As a quick side exercise, hit <code>0</code> to go to the beginning of your current line and <code>$</code> to go to the end. Oscillate between the two a couple of times to get a feel for it.</p>
<p><code>norm @q</code> plays the macro &quot;stored in slot Q&quot;.</p>
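<p>If you're curious how this compares outside the editor, here's a rough <code>sed</code> analogue of the same edit (keep the last word's first letter, drop the rest and the preceding space); it's just an aside for comparison, not part of the macro workflow:</p>
<pre><code class="language-bash"># Capture the last word's first letter and delete the rest of the
# word plus the space before it: 'Jayne Cobb' becomes 'JayneC'
printf 'Jayne Cobb\nInara Serra\n' | sed 's/ \(.\)[[:alpha:]]*$/\1/'
# prints: JayneC
#         InaraS
</code></pre>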
<h3 id="herdsomecamels">Herd some camels</h3>
<p>At this juncture, there's just one more step to getting our usernames: removing the camel casing. To do this, we're going to make use of <strong>v</strong>isual mode. Go to the beginning of line 1 and do the following:</p>
<ul>
<li><code>v</code></li>
<li><code>$</code></li>
<li>shift+G</li>
<li><code>$</code></li>
<li><code>u</code></li>
</ul>
<p>Visual mode allows you to select a span of text and make changes to the whole selection at once. <code>u</code> makes all the selected characters lowercase and <code>U</code> makes them all uppercase. If you did not want to use visual mode, you could alternatively navigate to each uppercase character and use <code>~</code> to flip its case.</p>
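<p>For comparison outside <code>vim</code> (an aside, not a tutorial step), lowercasing a whole stream is a one-liner with <code>tr</code>:</p>
<pre><code class="language-bash"># tr maps every uppercase character to lowercase, like visual-mode u
echo 'JayneC' | tr '[:upper:]' '[:lower:]'
# prints: jaynec
</code></pre>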
<h2 id="beforeweproceed">Before we proceed</h2>
<p>At this point you may have noticed a few things about <code>vim</code> commands: they usually come as individual letters or short words, like <code>d</code>, <code>w</code>, <code>s</code>, and <code>norm</code>, and they are stackable. For example, <code>dw</code> stacks to delete a word. You can also specify ranges, like <code>2,$</code>, for commands such as <code>s</code> or <code>norm</code>. Keep these properties in mind when you start branching out to learn new <code>vim</code> commands, e.g. <code>e</code>, which hops words like <code>w</code> but lands on the last letter rather than the first.</p>
<h2 id="mysqlcommands">MySQL commands</h2>
<p>We're going to create our users on a MySQL server that hosts multiple databases, then run a few different grants to give our users read-only or read-and-write access to different databases.</p>
<p>Please note that since the primary goal of this exercise is to practice <code>vim</code> with a real world example, no MySQL databases have been set up. You're more than free to practice on your own, though, if you have some handy.</p>
<p>We're going to plan ahead and know that the databases in question are <code>firefly</code> and <code>gunnerkrigg</code> so we'll need to create the users once and apply permissions to two different databases.</p>
<h3 id="donorepeatyourselfwellmaybethisonetime">Do Not Repeat Yourself ... well maybe this one time</h3>
<p>Planning is important. Since we'll need to create users once and then add them to two databases, it'll help us if we copy out our usernames a few times. To <strong>y</strong>ank a few lines of text, in command mode enter the following:</p>
<pre><code>:1,$y
</code></pre>
<p>Navigate to the bottom of the file with <code>shift+G</code> and <em>without entering insert mode</em> paste your copied items with <code>p</code>. Go to the bottom with <code>shift+G</code> and paste with <code>p</code> again. It's important not to just paste twice with <code>pp</code> (although you can) because of where your cursor falls: a couple of usernames would end up out of order.</p>
<p>To make the lines a little more readable, I've inserted newlines between each block of usernames. So my text file now looks like this:</p>
<pre><code>jaynec
inaras
georgep
antimonyc

jaynec
inaras
georgep
antimonyc

jaynec
inaras
georgep
antimonyc
</code></pre>
<p>The line numbers I'll use on subsequent steps will assume you have done the same.</p>
<h3 id="creation">Creation</h3>
<p>Let's set up our creates on lines 1-4. Skipping over the finer points of MySQL, the command for creating a user looks like this:</p>
<pre><code class="language-mysql">create user 'USER'@'%' identified by 'SOOPERPASSWORD';
</code></pre>
<p>So let's do some swapping:</p>
<pre><code>:1,4s/^/create user '/
</code></pre>
<p>Just like <code>$</code> means &quot;end&quot;, <code>^</code> means &quot;beginning&quot;. So here we're replacing the beginning of each line with <code>create user '</code>. Now for the rest:</p>
<pre><code>:1,4s/$/'@'%' identified by 'SOOPERPASSWORD';/
</code></pre>
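<p>The <code>^</code> and <code>$</code> anchors carry over to other regex tools, too. A quick <code>sed</code> illustration for comparison (the decoration text here is arbitrary, just to show the anchors; quotes are omitted to keep the one-liner readable):</p>
<pre><code class="language-bash"># s/^/.../ prepends and s/$/.../ appends, exactly as in vim's :s
printf 'jaynec\ninaras\n' | sed 's/^/user: /; s/$/ (created)/'
# prints: user: jaynec (created)
#         user: inaras (created)
</code></pre>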
<p>Now for the next block. The Firefly crew need RW access to their own ship's database, but maybe the students of Gunnerkrigg do not. They can have RO (read only) access though. The syntax for this, again skipping over MySQL learnings, is:</p>
<pre><code class="language-mysql">grant all privileges on &lt;database&gt;.&lt;table&gt; to 'USER'@'%';
grant select on &lt;database&gt;.&lt;table&gt; to 'USER'@'%';
</code></pre>
<p>For our file, let's enable line numbers with <code>:set nu</code>. There, that'll make this next bit easier! Since in each case we're going to allow access to all tables in a database, instead of naming a specific table for <code>&lt;table&gt;</code> we're going to use the wildcard character <code>*</code>:</p>
<pre><code>:6,$s/^/grant all privileges on firefly.* to '/
</code></pre>
<p>And to finish the command:</p>
<pre><code>:6,$s/$/'@'%';/
</code></pre>
<p>So now your file should look like this:</p>
<pre><code class="language-mysql">create user 'jaynec'@'%' identified by 'SOOPERPASSWORD';
create user 'inaras'@'%' identified by 'SOOPERPASSWORD';
create user 'georgep'@'%' identified by 'SOOPERPASSWORD';
create user 'antimonyc'@'%' identified by 'SOOPERPASSWORD';

grant all privileges on firefly.* to 'jaynec'@'%';
grant all privileges on firefly.* to 'inaras'@'%';
grant all privileges on firefly.* to 'georgep'@'%';
grant all privileges on firefly.* to 'antimonyc'@'%';
grant all privileges on firefly.* to ''@'%';
grant all privileges on firefly.* to 'jaynec'@'%';
grant all privileges on firefly.* to 'inaras'@'%';
grant all privileges on firefly.* to 'georgep'@'%';
grant all privileges on firefly.* to 'antimonyc'@'%';
</code></pre>
<p>Ah, looks like we were a little overzealous with our <code>6,$</code>.</p>
<h3 id="alwaysmovingforward">Always moving forward</h3>
<p>Next step: delete the line where there was no username at all by navigating to line <code>10</code> and entering <code>d$</code>. Now, you recall that we don't want George and Antimony to have unchecked access to all of Firefly's database, so we'll fix that with:</p>
<pre><code>:8,9s/all privileges/select/
</code></pre>
<p>Now to clean up the next batch for <code>gunnerkrigg</code>. First:</p>
<pre><code>:11,14s/firefly/gunnerkrigg/
</code></pre>
<p>And you may have guessed it, but:</p>
<pre><code>:11,12s/all privileges/select/
</code></pre>
<h2 id="yourfinalform">Your final form</h2>
<p>The final form of your file should look like this:</p>
<pre><code class="language-mysql">create user 'jaynec'@'%' identified by 'SOOPERPASSWORD';
create user 'inaras'@'%' identified by 'SOOPERPASSWORD';
create user 'georgep'@'%' identified by 'SOOPERPASSWORD';
create user 'antimonyc'@'%' identified by 'SOOPERPASSWORD';

grant all privileges on firefly.* to 'jaynec'@'%';
grant all privileges on firefly.* to 'inaras'@'%';
grant select on firefly.* to 'georgep'@'%';
grant select on firefly.* to 'antimonyc'@'%';

grant select on gunnerkrigg.* to 'jaynec'@'%';
grant select on gunnerkrigg.* to 'inaras'@'%';
grant all privileges on gunnerkrigg.* to 'georgep'@'%';
grant all privileges on gunnerkrigg.* to 'antimonyc'@'%';
</code></pre>
<p>Hardly recognizable from where we started, is it?</p>
<p>I hope this helps you ease into more <code>vim</code> explorations 😁</p>
</div>]]></content:encoded></item><item><title><![CDATA[Ops Tutorial: SSL Setup for Jenkins]]></title><description><![CDATA[Describing how to set up SSL with Jenkins, with an explanation of what did and did not work.]]></description><link>https://agirlhasnona.me/ops-tutorial-ssl-jenkins/</link><guid isPermaLink="false">5a7d086c85e83208fca505d2</guid><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Sun, 11 Feb 2018 01:27:30 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2018/02/jenkins-ssl-howto-wordcloud.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2018/02/jenkins-ssl-howto-wordcloud.png" alt="Ops Tutorial: SSL Setup for Jenkins"><p>As I was digging around the internet encountering conflicting configurations and advice about how to set up SSL for Jenkins, I decided that I should really split this bit out of the Jenkins Post Mortem (still in progress).</p>
<h2 id="intothetrenches">Into the Trenches</h2>
<p>It took a few iterations of failure before finding a working configuration. Here's what I tried that didn't work, along with the error behavior, to hopefully aid others in their troubleshooting.</p>
<h3 id="usingjavascacertsstore">Using Java's <code>cacerts</code> store</h3>
<p>I started by basically doing what anyone else would do: googling / <a href="https://duckduckgo.com/">ducking</a> for variations of &quot;how to set up SSL for Jenkins&quot;. The first instructions I tried to follow were those provided by <a href="https://support.cloudbees.com/hc/en-us/articles/203821254-How-to-install-a-new-SSL-certificate-">CloudBees</a>. These instructions worked after a fashion, and I've included them below for quick reference:</p>
<pre><code class="language-bash">sudo mkdir $JENKINS_HOME/.keystore

sudo chown jenkins:jenkins $JENKINS_HOME/.keystore

cp $JAVA_HOME/jre/lib/security/cacerts $JENKINS_HOME/.keystore

$JAVA_HOME/bin/keytool -keystore $JENKINS_HOME/.keystore/cacerts -import -alias &lt;YOUR_ALIAS_HERE&gt; -file &lt;YOUR_CA_FILE&gt;
</code></pre>
<p><code>&lt;YOUR_ALIAS_HERE&gt;</code> should be a helpful name, e.g. <code>jenkins-wildcard</code> if you're using a wildcard cert, and <code>&lt;YOUR_CA_FILE&gt;</code> would be the name of your x509 cert, e.g. <code>wildcard-example-com-x509.crt</code>. The <code>crt</code> file itself you download from your certificate provider.</p>
<p>After setting all that up, I ran into a little snag:</p>
<pre><code class="language-bash">→  sudo service jenkins start
Starting Jenkins                                           [  OK  ]

→  sudo service jenkins status
jenkins dead but pid file exists
</code></pre>
<p>Why is Jenkins dead? Well, the instructions I linked / copied above neglect to tell you to disable the HTTP port and set the HTTPS port in <code>/etc/sysconfig/jenkins</code>, like so:</p>
<pre><code class="language-bash">#disable HTTP
JENKINS_PORT=&quot;-1&quot;

#enable HTTPS
JENKINS_HTTPS_PORT=&quot;8443&quot;
</code></pre>
<p><img src="https://agirlhasnona.me/content/images/2018/02/smile-flush-emoji.png" alt="Ops Tutorial: SSL Setup for Jenkins"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<p>Once that's all working, though, you'll still end up with...</p>
<p><strong>Self Signed Cert Error / Warning</strong></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/ssl_self-signed_cert_chrome64.png" alt="Ops Tutorial: SSL Setup for Jenkins"><br>
<small>For the curious, the browser plugins are 1Password, Momentum, AdBlocker Plus, Amazon, and Chromecast in that order.</small></p>
<p>I suspected part of that was perhaps it wasn't able to pull my specific cert out of the store, and I was a bit curious about what all was in there, so I took a look:</p>
<pre><code class="language-bash">→  $JAVA_HOME/bin/keytool -list -keystore $JAVA_HOME/lib/security/cacerts
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 167 entries
{{ snip }}
</code></pre>
<p>Well, if nothing else, there are 167 entries in there, and that's an unwieldy beast to troubleshoot with any certainty. Instead, I tried to create my own store that would only house my wildcard cert. Things got a bit unfortunate here, and I spun my wheels for quite a bit.</p>
<h3 id="tryingtomakemycertstore">Trying to make my cert store</h3>
<p>This was a bit difficult because it seems like some of the errors thrown were red herrings. I followed the instructions <a href="http://sam.gleske.net/blog/engineering/2016/05/04/jenkins-with-ssl.html">here</a> to set up a keystore; the steps were as follows, my errors included:</p>
<pre><code class="language-bash">→  openssl pkcs12 -export -out jenkins_keystore.p12 -passout 'pass:changeit' -inkey wildcard.example.com.key -in wildcard.example.com.crt -certfile ca-bundle.crt -name wildcard-example

→  $JAVA_HOME/bin/keytool -importkeystore -srckeystore jenkins_keystore.p12 -srcstorepass 'changeit' -srcstoretype PKCS12 -srcalias wildcard.example -deststoretype JKS -destkeystore jenkins_keystore.jks -deststorepass 'changeit' -destalias wildcard.example
Importing keystore jenkins_keystore.p12 to jenkins_keystore.jks...
keytool error: java.lang.Exception: Alias &lt;wildcard.example&gt; does not exist

→  $JAVA_HOME/bin/keytool -list -keystore jenkins_keystore.p12
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

wildcard-example, Feb 8, 2018, PrivateKeyEntry,
Certificate fingerprint (SHA1): ██:██:██:██:██:██:██:██:██:██:██:██:██:██:██:██:██:██:██:██

→  $JAVA_HOME/bin/keytool -importkeystore -srckeystore jenkins_keystore.p12 -srcstorepass 'changeit' -srcstoretype PKCS12 -srcalias wildcard-example -deststoretype JKS -destkeystore jenkins_keystore.jks -deststorepass 'changeit' -destalias wildcard-example
Importing keystore jenkins_keystore.p12 to jenkins_keystore.jks...

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using &quot;keytool -importkeystore -srckeystore jenkins_keystore.jks -destkeystore jenkins_keystore.jks -deststoretype pkcs12&quot;.

→  sudo cp jenkins_keystore.jks $JENKINS_HOME/.keystore/

→  sudo chown jenkins:jenkins $JENKINS_HOME/.keystore/jenkins_keystore.jks

→  sudo chmod 600 !$
sudo chmod 600 $JENKINS_HOME/.keystore/jenkins_keystore.jks


→  sudo vim /etc/sysconfig/jenkins

→  sudo service jenkins restart
Shutting down Jenkins                                      [FAILED]
Starting Jenkins                                           [  OK  ]

→  ^restart^status
sudo service jenkins status
jenkins dead but pid file exists
</code></pre>
<p>And what was in the log?</p>
<pre><code class="language-bash">→  sudo tail -n 25 /var/log/jenkins/jenkins.log
        at Main._main(Main.java:294)
        at Main.main(Main.java:132)
Caused by: winstone.WinstoneException: No SSL key store found at /etc/jenkins/jenkins_keystore.jks
        at winstone.AbstractSecuredConnectorFactory.configureSsl(AbstractSecuredConnectorFactory.java:64)
        at winstone.HttpsConnectorFactory.start(HttpsConnectorFactory.java:41)
        at winstone.Launcher.spawnListener(Launcher.java:207)
        ... 8 more
Feb 08, 2018 9:12:28 PM winstone.Logger logInternal
SEVERE: Container startup failed
java.io.IOException: Failed to start a listener: winstone.HttpsConnectorFactory
        at winstone.Launcher.spawnListener(Launcher.java:209)
        at winstone.Launcher.&lt;init&gt;(Launcher.java:150)
        at winstone.Launcher.main(Launcher.java:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at Main._main(Main.java:294)
        at Main.main(Main.java:132)
Caused by: winstone.WinstoneException: No SSL key store found at /etc/jenkins/jenkins_keystore.jks
        at winstone.AbstractSecuredConnectorFactory.configureSsl(AbstractSecuredConnectorFactory.java:64)
        at winstone.HttpsConnectorFactory.start(HttpsConnectorFactory.java:41)
        at winstone.Launcher.spawnListener(Launcher.java:207)
        ... 8 more
</code></pre>
<p>So, a couple of things. You can see at the top there that I typoed my alias, but I left the mistake in so you could see the command to retrieve the alias from the store.</p>
<p>Also, <code>JENKINS_HOME</code> is <code>/var/lib/jenkins</code>. This is the standard configuration. You may notice that the Winstone log is looking in <code>/etc/jenkins</code>, which I didn't set, so it appears, on the surface at least, that this is defaulted somewhere. Noticing the path discrepancy I moved the keystore there, but it unfortunately still threw the same error. I went back to the internet and found <a href="http://jenkins-ci.361315.n4.nabble.com/Unable-to-get-Java-8-generated-keystore-to-be-recognized-by-Jenkins-2-101-Winstone-td4904570.html">this</a>, which I took as a sign that maybe, just maybe, I needed to pursue a different solution.</p>
<h2 id="simplesetupjenkinsnginxreverseproxy">Simple Setup: Jenkins + Nginx Reverse Proxy</h2>
<h3 id="jenkins">Jenkins</h3>
<p>To start, you'll need to set the Jenkins variables in <code>/etc/sysconfig/jenkins</code>:</p>
<pre><code class="language-bash">→  sudo cat /etc/sysconfig/jenkins | grep -v &quot;\#&quot;
JENKINS_HOME=&quot;/var/lib/jenkins&quot;

JENKINS_JAVA_CMD=&quot;&quot;

JENKINS_USER=&quot;jenkins&quot;


JENKINS_JAVA_OPTIONS=&quot;-Djava.awt.headless=true&quot;

JENKINS_PORT=&quot;8080&quot;

JENKINS_LISTEN_ADDRESS=&quot;127.0.0.1&quot;

JENKINS_HTTPS_PORT=&quot;&quot;

JENKINS_HTTPS_KEYSTORE=&quot;&quot;

JENKINS_HTTPS_KEYSTORE_PASSWORD=&quot;&quot;

JENKINS_HTTPS_LISTEN_ADDRESS=&quot;&quot;


JENKINS_DEBUG_LEVEL=&quot;5&quot;

JENKINS_ENABLE_ACCESS_LOG=&quot;no&quot;

JENKINS_HANDLER_MAX=&quot;100&quot;

JENKINS_HANDLER_IDLE=&quot;20&quot;

JENKINS_ARGS=&quot;&quot;
</code></pre>
<p>This is a minimal configuration, so you may have modified some values like <code>JENKINS_ARGS</code> if you have an existing Jenkins setup. This is fine. The main values to focus on here are <code>JENKINS_PORT</code> and <code>JENKINS_LISTEN_ADDRESS</code>. You should not set any of the <code>JENKINS_HTTPS_*</code> variables as the HTTPS configuration, if you choose it, will be handled by nginx.</p>
<h3 id="nginx">Nginx</h3>
<p>Setting up the Nginx proxy was so simple that I wish I had started there, but hey you live and learn. To set this up, install nginx:</p>
<pre><code class="language-bash">sudo yum install nginx -y
</code></pre>
<p>I don't want to use the default nginx configuration as it does a bunch of wizarding that I don't want to inherit or troubleshoot (usually the latter).</p>
<pre><code class="language-bash">→  cd /etc/nginx
→  sudo mkdir nginx_defaults
→  sudo mv * nginx_defaults/
mv: cannot move ‘nginx_defaults’ to a subdirectory of itself, ‘nginx_defaults/nginx_defaults’
→  sudo mv nginx_defaults/mime.types .
→  sudo mkdir certs
→  tree
.
├── certs
├── mime.types
└── nginx_defaults
    ├── conf.d
    │   └── virtual.conf
    ├── default.d
    ├── fastcgi.conf
    ├── fastcgi.conf.default
    ├── fastcgi_params
    ├── fastcgi_params.default
    ├── koi-utf
    ├── koi-win
    ├── mime.types.default
    ├── nginx.conf
    ├── nginx.conf.default
    ├── scgi_params
    ├── scgi_params.default
    ├── uwsgi_params
    ├── uwsgi_params.default
    └── win-utf
</code></pre>
<p>We're going to be using <code>mime.types</code>, so definitely keep that one in the parent directory. Also, in this case ignore the <code>cannot move</code> error since that is expected.</p>
<h4 id="httpconfiguration">HTTP Configuration</h4>
<p>The following is a complete <code>nginx.conf</code> file for the HTTP proxy configuration, just make sure to change <code>jenkins.example.com</code> to your Jenkins URL.</p>
<p>The bulk of this configuration is taken from the <a href="https://wiki.jenkins.io/display/JENKINS/Running+Jenkins+behind+Nginx">Jenkins wiki</a> page with the relevant information added in from the default <code>nginx.conf</code>.</p>
<pre><code class="language-nginx">user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] &quot;$request&quot; '
                      '$status $body_bytes_sent &quot;$http_referer&quot; '
                      '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    index   index.html index.htm;

    server {
      listen          80;       # Listen on port 80 for IPv4 requests

      server_name     jenkins.example.com;  ## FIXME: This should be your URL

      #this is the jenkins web root directory (mentioned in the /etc/default/jenkins file)
      root            /var/run/jenkins/war/;

      access_log      /var/log/nginx/access.log;
      error_log       /var/log/nginx/error.log;
      ignore_invalid_headers off; #pass through headers from Jenkins which are considered invalid by Nginx server.

      location ~ &quot;^/static/[0-9a-fA-F]{8}\/(.*)$&quot; {
        #rewrite all static files into requests to the root
        #E.g /static/12345678/css/something.css will become /css/something.css
        rewrite &quot;^/static/[0-9a-fA-F]{8}\/(.*)&quot; /$1 last;
      }

      location /userContent {
        #have nginx handle all the static requests to the userContent folder files
        #note : This is the $JENKINS_HOME dir
        root /var/lib/jenkins/;
        if (!-f $request_filename){
          #this file does not exist, might be a directory or a /**view** url
          rewrite (.*) /$1 last;
          break;
        }
        sendfile on;
      }

      location @jenkins {
          sendfile off;
          proxy_pass         http://127.0.0.1:8080;
          proxy_redirect     default;
          proxy_http_version 1.1;

          proxy_set_header   Host              $host;
          proxy_set_header   X-Real-IP         $remote_addr;
          proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header   X-Forwarded-Proto $scheme;
          proxy_max_temp_file_size 0;

          #this is the maximum upload size
          client_max_body_size       10m;
          client_body_buffer_size    128k;

          proxy_connect_timeout      90;
          proxy_send_timeout         90;
          proxy_read_timeout         90;
          proxy_request_buffering    off; # Required for HTTP CLI commands in Jenkins &gt; 2.54
      }

      location / {
        # Optional configuration to detect and redirect iPhones
        if ($http_user_agent ~* '(iPhone|iPod)') {
          rewrite ^/$ /view/iphone/ redirect;
        }

        try_files $uri @jenkins;
      }
    }
}
</code></pre>
<h4 id="httpsconfiguration">HTTPS Configuration</h4>
<p>The HTTPS configuration is almost identical, save a few changes:</p>
<ul>
<li>Changing the listen port from <code>80</code> to <code>443</code></li>
<li>Adding the section I've labeled <code># HTTP redirect</code>, so that <code>http://jenkins.example.com</code> is redirected to <code>https://jenkins.example.com</code></li>
<li>Adding the block that I've noted as <code># SSL</code> in the <code>nginx.conf</code> which provides the path to the cert files
<ul>
<li>Make sure to update the paths / filenames to match your setup</li>
</ul>
</li>
<li>Updating the <code>proxy_redirect</code> value</li>
</ul>
<p><strong>Quick cert detour</strong></p>
<p>For your certs: you'll need the key file and the x509 cert. Depending on your provider, these may be hard to identify, so to help I'm going to show how I inspected the certs I downloaded from our cert provider:</p>
<pre><code class="language-bash">→  sha1sum *
f1c██████████████████████████████████c71  wildcard.example.com.apache.crt
819██████████████████████████████████444  wildcard.example.com.ee_x509.crt
304██████████████████████████████████992  wildcard.example.com.i1_issuer.crt
1a8██████████████████████████████████c4c  wildcard.example.com.pkcs7.p7s
f1c██████████████████████████████████c71  wildcard.example.com.plesk.crt
</code></pre>
<p>From this you can see that the top and bottom files are the same, but the middle three are different from each other.</p>
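<p>You can reproduce this duplicate-spotting trick with throwaway files (the <code>/tmp</code> paths here are arbitrary): identical contents hash identically, so matching sums mean the provider shipped the same cert under two names.</p>
<pre><code class="language-bash"># Make two files with identical contents and a third that differs
printf 'same cert\n' | tee /tmp/a.crt | tee /tmp/b.crt
printf 'different\n' | tee /tmp/c.crt
# a.crt and b.crt will share a sum; c.crt will not
sha1sum /tmp/a.crt /tmp/b.crt /tmp/c.crt
</code></pre>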
<p>Inspecting further:</p>
<pre><code class="language-bash">→  openssl x509 -noout -text -in wildcard.example.com.apache.crt
Certificate:
    Data:
        Version: █████
        Serial Number: ████████
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=GeoTrust Inc., CN=GeoTrust Global CA
        Validity
            Not Before: Aug ██ 21:39:32 ████ GMT
            Not After : May ██ 21:39:32 ████ GMT
        Subject: C=US, O=GeoTrust Inc., CN=RapidSSL SHA256 CA - G3
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
{{{ SNIP }}}
</code></pre>
<p>This <code>wildcard.example.com.apache.crt</code> is the intermediary cert issued by &quot;GeoTrust Global CA&quot; (the root) to &quot;GeoTrust RapidSSL SHA256 CA - G3&quot; (the intermediary). An intermediary cert sits between the wildcard cert issued to us/me and the root cert. Usually when you download a bunch of certificate files from your provider, a file like this is included because not all intermediary CAs are trusted by all sources, so in some cases you may need to provide the intermediate cert. To see if your intermediate cert is trusted by your browser, you can check that browser's certificate store.</p>
<p>Quick example for how to check a cert store in Firefox: go to <code>about:preferences</code> in the address bar and scroll to the bottom of the page. Here, you can see that &quot;GeoTrust's RapidSSL SHA256 CA - G3&quot; is trusted by Firefox:</p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/firefox-sslcert-trust.png"><img src="https://agirlhasnona.me/content/images/2018/02/firefox-sslcert-trust.png" alt="Ops Tutorial: SSL Setup for Jenkins"></a><br>
<small>Click image to view full size.</small></p>
<pre><code class="language-bash">→  openssl x509 -noout -text -in wildcard.example.com.ee_x509.crt
Certificate:
    Data:
        Version: █████
        Serial Number: ████████
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=GeoTrust Inc., CN=RapidSSL SHA256 CA - G3
        Validity
            Not Before: Dec ██ 19:21:43 ████ GMT
            Not After : Jan ██ 16:01:28 ████ GMT
        Subject: CN=*.example.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
{{{ SNIP }}}
</code></pre>
<p>The <code>Subject: CN=*.example.com</code> tells us that this is the file we want to use as the <code>ssl_certificate</code> in the nginx configuration. This is actually a type of <code>pem</code> file, so I'm going to change the extension when I copy this file and the key file to <code>/etc/nginx/certs</code>:</p>
<pre><code class="language-bash">→  sudo cp wildcard.example.com.ee_x509.crt /etc/nginx/certs/wildcard.example.com.pem
→  sudo cp wildcard.example.com.key /etc/nginx/certs/
</code></pre>
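<p>If you want to practice the inspection commands from this detour without your real certs, you can generate a throwaway self-signed cert (the <code>/tmp</code> paths and the CN below are placeholders, not the real provider files):</p>
<pre><code class="language-bash"># Generate a 1-day self-signed cert with a wildcard CN for practice
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/demo.key -out /tmp/demo.crt -subj '/CN=*.example.com'
# For a self-signed cert the subject and issuer show the same CN
openssl x509 -noout -subject -issuer -in /tmp/demo.crt
</code></pre>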
<p><strong>The HTTPS Configuration</strong></p>
<p>As before, the complete HTTPS configuration is below, making sure you change <code>jenkins.example.com</code> and the cert paths to match your configuration:</p>
<pre><code class="language-nginx">user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] &quot;$request&quot; '
                      '$status $body_bytes_sent &quot;$http_referer&quot; '
                      '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    index   index.html index.htm;

    # HTTP redirect
    server {
        listen 80;
        server_name jenkins.example.com;
        return 301 https://$host$request_uri;
    }

    server {
      listen          443;       # Listen on port 443 for IPv4 requests

      server_name     jenkins.example.com;

      # SSL
      ssl on;
      ssl_certificate            /etc/nginx/certs/wildcard.example.com.pem;
      ssl_certificate_key        /etc/nginx/certs/wildcard.example.com.key;
      ssl_protocols              TLSv1.2;
      ssl_ciphers                'EECDH+AESGCM:EDH+AESGCM';
      ssl_prefer_server_ciphers  on;
      ssl_session_cache          shared:SSL:10m;

      #this is the jenkins web root directory (mentioned in the /etc/default/jenkins file)
      root            /var/run/jenkins/war/;

      access_log      /var/log/nginx/access.log;
      error_log       /var/log/nginx/error.log;
      ignore_invalid_headers off; #pass through headers from Jenkins which are considered invalid by Nginx server.

      location ~ &quot;^/static/[0-9a-fA-F]{8}\/(.*)$&quot; {
        #rewrite all static files into requests to the root
        #E.g /static/12345678/css/something.css will become /css/something.css
        rewrite &quot;^/static/[0-9a-fA-F]{8}\/(.*)&quot; /$1 last;
      }

      location /userContent {
        #have nginx handle all the static requests to the userContent folder files
        #note : This is the $JENKINS_HOME dir
        root /var/lib/jenkins/;
        if (!-f $request_filename){
          #this file does not exist, might be a directory or a /**view** url
          rewrite (.*) /$1 last;
          break;
        }
        sendfile on;
      }

      location @jenkins {
          sendfile off;
          proxy_pass         http://127.0.0.1:8080;
          proxy_redirect     http://localhost:8080 $scheme://jenkins.example.com;
          proxy_http_version 1.1;

          proxy_set_header   Host              $host;
          proxy_set_header   X-Real-IP         $remote_addr;
          proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header   X-Forwarded-Proto $scheme;
          proxy_max_temp_file_size 0;

          #this is the maximum upload size
          client_max_body_size       10m;
          client_body_buffer_size    128k;

          proxy_connect_timeout      90;
          proxy_send_timeout         90;
          proxy_read_timeout         90;
          proxy_request_buffering    off; # Required for HTTP CLI commands in Jenkins &gt; 2.54
      }

      location / {
        # Optional configuration to detect and redirect iPhones
        if ($http_user_agent ~* '(iPhone|iPod)') {
          rewrite ^/$ /view/iphone/ redirect;
        }

        try_files $uri @jenkins;
      }
    }
}
</code></pre>
<h2 id="cleaningupthelooseends">Cleaning up the loose ends</h2>
<h3 id="fixingthereverseproxyerror">Fixing the Reverse Proxy Error</h3>
<p>Once you have your reverse proxy set up in either case, you'll most likely encounter this error:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/jenkins-reverse-proxy-broken.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>To resolve this, go to Manage Jenkins -&gt; Configure System, or go to <code>https://${YOUR_JENKINS_URL}/configure</code>, and update your configuration to use your new HTTP or HTTPS address, as can be seen in the side by side configuration below:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/jenkins-proxy-fix.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<h3 id="fixinggithuboauth">Fixing GitHub OAuth</h3>
<p>If you have GitHub OAuth configured, you might have a mild heart attack when auth fails to redirect because of your proxy. Not to fear, this is another quick fix. Log into your GitHub account / GitHub org and go to where you've configured OAuth:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-oauth-v2.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>And then go into the configuration and update both the Homepage URL and the Authorization Callback URL to your new HTTP or HTTPS address.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-oauth-fix.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>That's it! You can now auth with GitHub normally.</p>
<h3 id="updatinggithubwebhooks">Updating GitHub webhooks</h3>
<p>If you are using GitHub webhooks, you'll need to update them from something like:</p>
<pre><code>http://jenkins.example.com:8080/github-webhook/
</code></pre>
<p>To either your new HTTP or HTTPS URL, e.g.:</p>
<pre><code>http://jenkins.example.com/github-webhook/
https://jenkins.example.com/github-webhook/
</code></pre>
<p>Make sure you scroll to the bottom and save your configuration!</p>
<h4 id="fixingsslwebhookspeercertificatecannotbeauthenticated">Fixing SSL Webhooks - &quot;Peer certificate cannot be authenticated&quot;</h4>
<p>If you are using SSL, as you likely are if you're reading this, you may encounter the following after updating your webhooks:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-webhook-fail-birdseye.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>Opening up the most recent one to take a look:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-webhook-ssl-fail-detail.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>What does this mean?</p>
<p>If you recall, earlier I checked the Firefox cert store to see if it had &quot;RapidSSL SHA256 CA - G3&quot; in it - this is the intermediary that signed the certificate that I'm using. I also mentioned that not all intermediate sources are trusted. Here, it appears that while Firefox <em>does</em> trust my intermediate source, GitHub does not.</p>
<p>How to fix this? Certificate bundles are just a series of PEM certificates concatenated into a single file, so we need to either prepend or append the intermediary to the cert file that nginx is using.</p>
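<p>As a concrete sketch of what that concatenation looks like (placeholder file names and contents here; in practice the two inputs are your leaf certificate and the intermediary's PEM):</p>

```shell
# Placeholder PEMs standing in for the real certificates:
printf 'LEAF CERTIFICATE\n'         > leaf.pem
printf 'INTERMEDIATE CERTIFICATE\n' > intermediate.pem

# Concatenate into a single bundle. Order matters, as the nginx error
# below illustrates: nginx pairs ssl_certificate_key with the FIRST
# certificate in the file, so the leaf has to come before the intermediate.
cat leaf.pem intermediate.pem > chained.pem
head -1 chained.pem    # → LEAF CERTIFICATE
```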
<p>I quickly tried prepending the intermediary first, and encountered this error when I restarted nginx:</p>
<pre><code class="language-bash">→  sudo service nginx restart
nginx: [emerg] SSL_CTX_use_PrivateKey_file(&quot;/etc/nginx/certs/wildcard.example.com.key&quot;) failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
nginx: configuration file /etc/nginx/nginx.conf test failed
</code></pre>
<p>So I reversed the cert order and voilà, nginx started and the page loaded. Next, I tried resending that last payload to see if SSL verification passed:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-webhook-pass-detail.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p>Excellent.</p>
<p>For completeness, I checked back in after letting a few builds run to verify that the webhook history was green and healthy once more, and indeed it was/is:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/github-webhook-pass-birdseye.png" alt="Ops Tutorial: SSL Setup for Jenkins"></p>
<p><small>Header image: Word Cloud drawn by <a href="https://www.wordclouds.com/">Word Clouds Generator</a></small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Opsfire: Recovering Jenkins after Complete Failure]]></title><description><![CDATA[Backups before every rollback, always.]]></description><link>https://agirlhasnona.me/opsfire-pearshaped-jenkins/</link><guid isPermaLink="false">5a74929185e83208fca505c9</guid><category><![CDATA[opsfire]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Tue, 06 Feb 2018 21:43:39 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2018/02/jenkins-fire-rebuild-feb2018.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><h2 id="holdontoyourseats">Hold on to your seats</h2>
<img src="https://agirlhasnona.me/content/images/2018/02/jenkins-fire-rebuild-feb2018.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><p>Cause this is going to be a bit of a ride. This is a tale of explosions, defeat, perseverance, and ultimate victory.</p>
<p>I'm giving you a spoiler for the ultimate victory because, like all situations of prolonged pain, there were several points at which I didn't think we were going to get there.</p>
<h2 id="hookedhereshowitbegan">Hooked? Here's how it began</h2>
<p>We were having an issue where some of our Jenkins jobs were hanging on the <code>git clone</code>. This is something relatively new that started happening this week, so after manually killing and kicking a couple of jobs until they Just Worked (<a href="https://agirlhasnona.me/opsfire-pearshaped-jenkins/opsfire-the-case-of-the-html-pem-file/">already forgetting the HTML/PEM file lesson</a>) I decided to do a rollback.</p>
<p>This wasn't too huge of a deal, really. We use the weekly Jenkins builds, and I keep the previous week's build handy, so:</p>
<pre><code class="language-shell">sudo service jenkins stop
sudo cp ~/backup/jenkins-v2.103.war /usr/lib/jenkins/jenkins.war
sudo service jenkins start
</code></pre>
<p>Everything came up normally, except for the fact that the jobs still hung on <code>git clone</code>. Oh well. Rinse, repeat the above, replacing <code>jenkins-v2.103.war</code> with <code>jenkins-v2.104.war</code>. Everything came back live, nothing to see here.</p>
<p>A few of the plugins were upgraded this week as well. Since one of them was the Github plugin, and the issue was with <code>git clone</code>, I figured I'd try to roll back that first.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/fire-horizontal-rule.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-11.04.37-AM.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-11.04.37-AM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>Feel free to click so you can view this insanity in all its glory.</small></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/fire-horizontal-rule.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/dear-god-firefly.gif" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: Giphy, Firefly</small></p>
<h2 id="butwaitwhat">But wait, what?</h2>
<p>It's worth now noting what fired through my brain in rapid succession:</p>
<ol>
<li>We use GitHub OAuth</li>
<li>This appears to be something with GitHub core</li>
<li>How could a plugin rollback do this?</li>
<li>I knew this box was fragile, I should have snapshotted it <em>before</em></li>
<li>Is there a snapshot?</li>
<li>...Of course the last one is from <em>freaking December</em>.</li>
</ol>
<h2 id="whattodoact1scenes13">What to do: Act 1, Scenes 1-3</h2>
<p>My first instinct, as any battle hardened person will tell you, was</p>
<p>to</p>
<p>Google</p>
<p>everything.</p>
<p>Filtering out the seemingly unending volumes of advice about how to roll plugin versions forward and backward, <em>using the UI thank you very much for that</em>, I found some advice about how to install plugins using their CLI.</p>
<p>TIL that Jenkins has a CLI.</p>
<p>I went to find where to download it, but lo: you need a working version of Jenkins to download the CLI. The version of the CLI is dependent on the Jenkins release version as well, and you can only download it from your actual Jenkins install, e.g.</p>
<pre><code class="language-shell">wget -O /desired/path/to/jenkins-cli.jar https://${JENKINS_URL}/jnlpJars/jenkins-cli.jar
</code></pre>
<p>You can also go to the latter path in your browser if your installation is up and running.</p>
<p>Which mine was not.</p>
<p>Oh! I know! I'll make a fresh Jenkins box with the same version and download the CLI from there!</p>
<p>I'm not going to detail this part for you (yet), but stay tuned. I got the CLI, but primary Jenkins was <em>so hosed</em> that pointing the CLI at it threw a Java exception. Even without the exception, though, I probably still would have run into difficulties: creating a sandbox Jenkins reminded me that GitHub auth was still enabled, so I would have needed to disable it or create a token for the CLI. Which I would have needed access to the UI to do. (Jenkins is <em>really</em> reliant on having that UI up and running.)</p>
<p>It's worth mentioning that while all <em>that</em> was going on I had concurrently attempted to spin up a new instance using an AMI I had made from the December image, but when I tried to start Jenkins on that instance it also died and threw exceptions. Not in the web browser, though; there I was just presented with a lovely site unreachable error. I <code>ssh</code>ed into the instance to see Jenkins' logs (<code>/var/log/jenkins/jenkins.log</code>) and there were Java exceptions everywhere. Also, importantly, errors referencing missing jobs.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/annie-gunnerkrigg-cool-beans.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.gunnerkrigg.com/">Gunnerkrigg Court</a>. It's an awesome web comic.</small></p>
<p>It was at this point I practiced some deep breathing.</p>
<h2 id="whattodoact2">What to do: Act 2</h2>
<p>So I was half hoping that at least <em>some</em> of the exceptions could be handled by copying over the <code>jobs</code> from the dying dying dead Jenkins to the Dec Jenkins. I did this by shutting off the dead one's EC2 instance, detaching the volume, and attaching it to the December Jenkins instance. Amazon has how to attach a volume in the console <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html">documented very well</a>. It's worth noting you use essentially the same process, when the instances in question are powered off, to detach the volume.</p>
<p>I think it's important to clarify again, since you may encounter instructions for other ways to unmount disk volumes while a system is <em>powered on</em>, e.g. this <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html">EBS doc by Amazon</a>, that in this case those instructions <em><strong>will not work</strong></em> as you can't (safely) detach the root and only volume from a running system.</p>
<p>Anywho, after the volume is attached, just create the mount point, get the volume list, and mount the volume:</p>
<pre><code class="language-shell">→ sudo mkdir /jenkins-defunct

→ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  512G  0 disk
└─xvda1 202:1    0  512G  0 part /
xvdf    202:2    0  512G  0 disk
└─xvdf1 202:3    0  512G  0 part

→ sudo mount /dev/xvdf1 /jenkins-defunct
</code></pre>
<p>Breathing. Ok.</p>
<p>Now it's time to rsync.</p>
<pre><code class="language-shell">→ sudo mv /var/lib/jenkins/jobs{,--bkp}
→ sudo rsync -a /jenkins-defunct/var/lib/jenkins/jobs /var/lib/jenkins/jobs
</code></pre>
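<p>A subtlety worth knowing about <code>rsync</code>: a trailing slash on the source means &quot;copy the contents of this directory&quot;, while no trailing slash copies the directory itself into the destination - which is why the failure output below shows a nested <code>jobs/jobs</code> path. A throwaway demonstration on scratch directories:</p>

```shell
# Set up a scratch source directory with one file in it.
mkdir -p jobs demo1 demo2
touch jobs/config.xml

rsync -a jobs  demo1/   # no slash: creates demo1/jobs/config.xml (dir copied in)
rsync -a jobs/ demo2/   # slash:    creates demo2/config.xml      (contents copied)
```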
<p>And now we wait.</p>
<p>And wait.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/fire-horizontal-rule.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<pre><code class="language-shell">*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/workspace&quot; failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: mkstemp &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/.config.xml.dovJOF&quot; failed: No space left on device (28)
rsync: mkstemp &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/.disk-usage.xml.xnYWQ8&quot; failed: No space left on device (28)
rsync: mkstemp &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/.github-polling.log.RVYdTB&quot; failed: No space left on device (28)
rsync: mkstemp &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/.nextBuildNumber.EtLAV4&quot; failed: No space left on device (28)
rsync: recv_generator: mkdir &quot;/var/lib/jenkins/jobs/jobs/${SOME_PLACEHOLDER_JOB}/workspace@tmp&quot; failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
</code></pre>
<p>After a couple hours of seemingly silent copying bliss I was abruptly introduced to more lines like that than any human would care to count.</p>
<p>But wait, I filled up the whole drive? With jobs?</p>
<pre><code class="language-shell">$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      504G  276G  229G  55% /
devtmpfs        7.9G   64K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvdf1      504G  258G  247G  52% /jenkins-defunct
</code></pre>
<p>I am using... half the drive. What.</p>
<p>One of my coworkers offered to <code>ssh</code> in at this point to see if he saw anything.</p>
<p>But he could not, he was given an out of disk error.</p>
<p>I looked at the CloudWatch metrics for the alarm and thankfully <code>df</code> wasn't lying there: we were using half the disk.</p>
<p>What gives?</p>
<p><strong>inodes</strong></p>
<p>The short version, since they aren't the focus here, is that inodes store file and directory metadata on *nix filesystems. To the passerby, they become important when you have a ton of tiny files, as Jenkins clearly does, and you run out:</p>
<pre><code class="language-shell">$ df -ih
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/xvda1        32M   32M     0  100% /
devtmpfs         2.0M   477  2.0M    1% /dev
tmpfs            2.0M     1  2.0M    1% /dev/shm
/dev/xvdf1        32M   27M  5.7M   83% /jenkins-defunct
</code></pre>
<p>(If you'd like to read more on what inodes are, please check out <a href="http://www.linux-mag.com/id/8658/">this Linux Magazine article</a>.)</p>
<p>Of <em>course</em> I ran out of inodes.</p>
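<p>If you want to see where the inodes went, a rough approach is to count directory entries, since every file, directory, and symlink costs an inode. A quick sketch, run from the suspect parent directory (e.g. <code>/var/lib/jenkins</code>):</p>

```shell
# Count entries under each top-level directory of the current directory;
# the largest counts are your inode hogs.
for d in */ ; do
    printf '%8d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn | head
```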
<p>The quickest way to address this problem is to resize the disk. This is the option I went with, since this is production Jenkins and we're now several hours into this outage.</p>
<p>Since this is a short term solution, I stopped the instance, resized the root volume to 2 TB, rebooted, and restarted the rsync.</p>
<p>By the way, here is an example of why to <code>rsync</code> rather than <code>cp</code>: the latter would just completely start over and write over what it had already done as needed, taking more time, whereas <code>rsync</code> will pick up where it left off.</p>
<p>That said, it still took another hour or two for the sync to complete.</p>
<p>While this was going on, I was pondering my next move. Also: getting really tired as it was late now.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/zzz-horizontal-rule.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<h2 id="whattodoact3alltheclimaticscenes">What to do: Act 3, all the climactic scenes</h2>
<p>I read a bit more on Jenkins admin. Since everything should be hypothetically recoverable from <code>JENKINS_HOME</code>, I decided to try re-installing all the plugins on a new Jenkins instance (don't ask me how many I have running now), and then copying that instance's <code>/var/lib/jenkins/plugins</code> directory back to the original.</p>
<p>This ultimately ended in failure, but I ended up with a very useful shell script that allowed me to install plugins very quickly.</p>
<p>Another spoiler: <code>vim</code> tricks are super handy.</p>
<h3 id="spinningupjenkinsonalinuxami">Spinning up Jenkins on a Linux AMI</h3>
<p>I'm going to go a little detailed here for those new to installing Jenkins.</p>
<p>The instance configuration:</p>
<ul>
<li>Instance Type: <code>m5.xlarge</code></li>
<li>Security: security group with ports <code>22</code>, <code>80</code>, <code>8080</code>, <code>443</code>, and <code>4443</code> open / accessible on our VPC and via VPN.</li>
<li>EBS type: 512 GB GP2
<ul>
<li>In hindsight, though, I recommend using IO1. Cost is similar and would help speed up <code>rsync</code> later.</li>
</ul>
</li>
<li>AMI: Amazon Linux AMI 2017.09.1 (HVM)</li>
</ul>
<p><code>ssh</code> into the instance and:</p>
<ul>
<li>Update</li>
<li>Install some basic tools</li>
<li>Create a group</li>
<li>Use <code>visudo</code> to enable passwordless <code>sudo</code></li>
<li>Create your user</li>
<li>Add your user to the group</li>
<li>Switch to the user account</li>
<li>Install a handy prompt and useful rc files</li>
</ul>
<p>Here we go.</p>
<pre><code class="language-shell">$ sudo yum upgrade -y
$ sudo yum install git tmux tree htop ack unzip -y
$ sudo groupadd admin
$ sudo useradd quintessence
$ sudo usermod -aG admin quintessence
$ sudo EDITOR=vim visudo
$ sudo su - quintessence
Last login: Fri Feb  2 16:49:01 UTC 2018 on pts/0
[quintessence@ip-███-███-███-███ ~]$ sudo ls /etc/
acpi               blkid                      csh.cshrc
{{{ snip }}}

[quintessence@ip-███-███-███-███ ~]$ git clone https://github.com/jhunt/env
Cloning into 'env'...
remote: Counting objects: 713, done.
remote: Total 713 (delta 0), reused 0 (delta 0), pack-reused 713
Receiving objects: 100% (713/713), 128.96 KiB | 18.42 MiB/s, done.
Resolving deltas: 100% (419/419), done.
[quintessence@ip-███-███-███-███ ~]$ cd env/
[quintessence@ip-███-███-███-███ env]$ ./install
setting up dot files in ~
configuring vim...
copying in ~/bin scripts...
  installing jq...
  installing spruce (v1.8.2)...
configuring git...
setting up ~/.bashrc...
hostname: No address associated with name
[quintessence@ip-███-███-███-███ env]$ cd ..
[quintessence@ip-███-███-███-███ ~]$ vim .host
[quintessence@ip-███-███-███-███ ~]$ source ~/.bashrc
+033+16:52:54:8:0 ███.███.███.███/20 quintessence@jenkins ~
→  
</code></pre>
<p>For <code>visudo</code>, here's the magic line to allow <code>admin</code> group passwordless <code>sudo</code>:</p>
<pre><code>%admin        ALL=(ALL)       NOPASSWD: ALL
</code></pre>
<p>As a quick aside: the <code>.host</code> file is used by the prompt to display the <code>hostname</code> if none is set, which is the case for this dev instance. Right now it just has <code>jenkins</code> in it and that's what you see in the prompt above. For the remainder of the blog post when you see <code>→</code>, that's just part of my prompt.</p>
<p><strong>Now for the Jenkins install</strong></p>
<p>I'm going to use <code>yum</code> to install the last stable release of Jenkins to make sure that loads. Here's the needful, spaced out with output:</p>
<pre><code class="language-shell">→  sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
--2018-02-02 16:53:25--  http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
Resolving pkg.jenkins-ci.org (pkg.jenkins-ci.org)... 52.202.51.185
Connecting to pkg.jenkins-ci.org (pkg.jenkins-ci.org)|52.202.51.185|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 85
Saving to: ‘/etc/yum.repos.d/jenkins.repo’

/etc/yum.repos.d/jenkins.repo                                       100%[===================================================================================================================================================================&gt;]      85  --.-KB/s    in 0s

2018-02-02 16:53:26 (30.9 MB/s) - ‘/etc/yum.repos.d/jenkins.repo’ saved [85/85]



→  sudo rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key



→  sudo yum install jenkins -y
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                                                                                                                                                                                                | 2.1 kB  00:00:00
amzn-updates                                                                                                                                                                                                                                             | 2.5 kB  00:00:00
jenkins                                                                                                                                                                                                                                                  | 2.9 kB  00:00:00
jenkins/primary_db                                                                                                                                                                                                                                       |  23 kB  00:00:00
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package jenkins.noarch 0:2.89.3-1.1 will be installed
--&gt; Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================================
 Package                                                          Arch                                                            Version                                                                Repository                                                        Size
================================================================================================================================================================================================================================================================================
Installing:
 jenkins                                                          noarch                                                          2.89.3-1.1                                                             jenkins                                                           71 M

Transaction Summary
================================================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 71 M
Installed size: 71 M
Downloading packages:
jenkins-2.89.3-1.1.noarch.rpm                                                                                                                                                                                                                            |  71 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : jenkins-2.89.3-1.1.noarch                                                                                                                                                                                                                                    1/1
  Verifying  : jenkins-2.89.3-1.1.noarch                                                                                                                                                                                                                                    1/1

Installed:
  jenkins.noarch 0:2.89.3-1.1

Complete!
</code></pre>
<p>When I ran <code>sudo service jenkins start</code> to start Jenkins, I received the following because Jenkins needs Java 8 and apparently Amazon Linux is shipping with Java 7:</p>
<pre><code class="language-shell">→  sudo service jenkins start
Starting Jenkins Jenkins requires Java8 or later, but you are running 1.7.0_161-mockbuild_2017_12_19_23_46-b00 from /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161.x86_64/jre
java.lang.UnsupportedClassVersionError: 51.0
        at Main.main(Main.java:124)
                                                           [  OK  ]
</code></pre>
<p>I'm going to remove Java 7 and install Java 8 (output not included for the <code>yum</code> commands):</p>
<pre><code class="language-shell">→  sudo yum remove java-1.7.0-openjdk -y
→  sudo yum install java-1.8.0 -y
→  sudo service jenkins start
Starting Jenkins                                           [  OK  ]
</code></pre>
<p>I'm also going to add the <code>jenkins</code> service to start on boot:</p>
<pre><code class="language-shell">→  sudo chkconfig jenkins on
</code></pre>
<p>Ok, now that I have all of that: does bare Jenkins load?</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-11.56.54-AM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>Yes.</p>
<p>This is the first moment of relief that I've had in hours.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/relief-emoji.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-2-icons-by-bad-blood/relief-icon.html">IconArchive</a></small></p>
<p>As a result, I breezed through the next bit:</p>
<ul>
<li>Unlocked Jenkins by grabbing the initial admin password, as indicated on the splash page</li>
<li>Selected to have Jenkins Install the Recommended Plugins</li>
<li>Set up my Jenkins username + password (also part of the setup wizard)
<ul>
<li>Made sure my Jenkins username matched my GitHub username to prevent redundancy when hooking up GitHub OAuth</li>
</ul>
</li>
<li>Made sure Jenkins loaded</li>
</ul>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-12.02.33-PM.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-12.02.33-PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>You can click on this image for full size.</small></p>
<p>This is the happy time. So happy, I'm gonna use that emoji again:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/relief-emoji.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>Ok, now to stop the service, update to the latest weekly build so it matches the desired environment, and then restart the bare Jenkins. Not anticipating any issues since this is still a bare environment. Skipping the step where I download the war file:</p>
<pre><code>→  ls
bin  code  env  jenkins-v2.104.war

→  sudo mv /usr/lib/jenkins/jenkins.war{,_old}

→  sudo cp jenkins-v2.104.war /usr/lib/jenkins/jenkins.war

→  sudo ls /usr/lib/jenkins/
jenkins.war  jenkins.war_old

→  sudo service jenkins restart
</code></pre>
<p>On restart this looks mostly the same:</p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-12.02.53-PM.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-12.02.53-PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>You can click on this image for full size.</small></p>
<p>With a crucial difference:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/jenkins-stacked-versions.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>At this point, I think we can upgrade to a full on smile:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/smile-flush-emoji.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood/hope-my-fake-smile-works-again-icon.html">IconArchive</a></small></p>
<h3 id="plugininstalls">Plugin installs</h3>
<p>I discovered very quickly that installing these plugins via the UI was going to be a nightmare because their search feature is a <em><strong>challenge</strong></em>. And not in the &quot;what doesn't kill you makes you stronger&quot; way. It was more like this:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/waste-time-fake-work-paul-graham.jpg" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="https://www.lifehacker.com.au/2010/08/unproductive-work-is-a-more-sinister-time-sink-than-goofing-off/">This LifeHacker AUS article</a>. I'll admit to not verifying the quote because it fits my needs here.</small></p>
<p>I'll show you what I mean, and then I'll show you how I used the Jenkins CLI (remember that?) to get around it.</p>
<p><strong>Searching ... Searching ... Searching ...</strong></p>
<p>You can see the names of the plugins, as Jenkins understands them, in <code>/var/lib/jenkins/plugins</code>. One of the plugins we have is <code>cloudbees-folder</code> so let's try to find that with the UI and install it.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/Screen-Shot-2018-02-02-at-12.42.40-PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>Searching for &quot;folder&quot; by itself was really generic, so I tried to search for &quot;cloudbees&quot;. It was also unhelpful. I did happen to notice, though, the URL for the plugins is actually linked to the download directory:</p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_12_43_06_PM.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_12_43_06_PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>You can click on this image for full size.</small></p>
<p>This is somewhat helpful, as it points me to <a href="http://updates.jenkins-ci.org/download/plugins/">http://updates.jenkins-ci.org/download/plugins/</a> for the download list. Here the plugins appear under the same names they have once installed, rather than the &quot;friendly&quot; or &quot;long&quot; names they are given.</p>
<p>Which is great and all, and sure this is a lot easier to manage than that hot mess of a search, but how to install them?</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/thinking-emoji.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: Emojipedia</small></p>
<p><strong>Jenkins CLI to save the day</strong></p>
<p>It is at this point that I remember the only thing stopping me from using the CLI <em>before</em> was that Jenkins was so hosed it wouldn't even talk to it.</p>
<p>That isn't the case now though. So:</p>
<pre><code class="language-shell">→  wget -O ~/jenkins-cli.jar  https://${JENKINS_PUBLIC_URL}/jnlpJars/jenkins-cli.jar
</code></pre>
<p>Swap out your Jenkins instance's route or public IP for <code>${JENKINS_PUBLIC_URL}</code> and you're in business.</p>
<p>Once I have the CLI, I run it against the local Jenkins and just supply <code>help</code> to see if it has a help page:</p>
<pre><code class="language-shell">→  java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ help

ERROR: You must authenticate to access this Jenkins.
Jenkins CLI
Usage: java -jar jenkins-cli.jar [-s URL] command [opts...] args...
Options:
...
</code></pre>
<p>Ah, ok. I need to auth. This Jenkins doesn't have GitHub OAuth enabled, so I'm still using a username and password. That actually makes my CLI life easy, so I'm going to leave it alone and just run the command with my username and password like so:</p>
<pre><code class="language-shell">→  java -jar jenkins-cli.jar --username quintessence --password █████████████ -s http://127.0.0.1:8080/ help
Neither -s nor the JENKINS_URL env var is specified.
Jenkins CLI
Usage: java -jar jenkins-cli.jar [-s URL] command [opts...] args...
Options:
-s URL       : the server URL (defaults to the JENKINS_URL env var)
{{{ SNIP }}}

→  export JENKINS_URL=http://127.0.0.1:8080/

→  java -jar jenkins-cli.jar help --username quintessence --password █████████████
  add-job-to-view
    Adds jobs to view.
  build
    Builds a job, and optionally waits until its completion.
  cancel-quiet-down
    Cancel the effect of the &quot;quiet-down&quot; command.
  clear-queue
  {{{ SNIP }}}
</code></pre>
<p>Now to test with <code>cloudbees-folder</code>:</p>
<pre><code class="language-shell">→  java -jar jenkins-cli.jar install-plugin cloudbees-folder --username quintessence --password █████████████
Installing cloudbees-folder from update center

→  sudo service jenkins restart
Shutting down Jenkins                                      [  OK  ]
Starting Jenkins                                           [  OK  ]

→  sudo ls /var/lib/jenkins/plugins/cloud*
/var/lib/jenkins/plugins/cloudbees-folder.jpi

/var/lib/jenkins/plugins/cloudbees-folder:
images  META-INF  WEB-INF

→  sudo cat /var/lib/jenkins/plugins/cloudbees-folder/META-INF/MANIFEST.MF
Manifest-Version: 1.0
Archiver-Version: Plexus Archiver
Created-By: Apache Maven
Built-By: jglick
Build-Jdk: 1.8.0_151
Extension-Name: cloudbees-folder
Specification-Title: This plugin allows users to create &quot;folders&quot; to o
 rganize jobs. Users can define custom taxonomies (like
     by project type, organization type etc). Folders are nestable and
  you can define views within folders. Maintained by CloudBees, Inc.
Implementation-Title: cloudbees-folder
Implementation-Version: 6.3
Group-Id: org.jenkins-ci.plugins
Short-Name: cloudbees-folder
Long-Name: Folders Plugin
Url: https://wiki.jenkins.io/display/JENKINS/CloudBees+Folders+Plugin
Compatible-Since-Version: 5.2
Plugin-Version: 6.3
Hudson-Version: 2.60.3
Jenkins-Version: 2.60.3
Plugin-Dependencies: credentials:2.1.11;resolution:=optional
Plugin-Developers:
</code></pre>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_1_09_44_PM.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_1_09_44_PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>You can click the image to view full size, note that the path is now to the plugin doc page.</small></p>
<p>Here we learn a few things:</p>
<ul>
<li>The plugin is installed</li>
<li>&quot;cloudbees-folder&quot; is what's called the &quot;short name&quot;</li>
<li>&quot;Folders Plugin&quot; is what's called the &quot;long name&quot;, which is what the search feature uses. Not very helpful when you only learn this name <em>after</em> you've found your plugin, though.</li>
</ul>
<p>It works! Hooray!</p>
<p>Now for the rest.</p>
<p>First get the <em>complete</em> list of plugins from the <code>/var/lib/jenkins/plugins</code> directory of the defunct Jenkins instance.</p>
<pre><code class="language-shell">→  cd /defunct-jenkins/var/lib/jenkins/plugins
→  ls -d */
ace-editor/                          blueocean-github-pipeline/          cloudbees-folder/         git-changelog/               jenkins-multijob-plugin/      parameterized-trigger/             resource-disposer/  warnings/
amazon-ecr/                          blueocean-git-pipeline/             cobertura/                git-client/                  jira/                         performance/                       run-condition/      windows-slaves/
analysis-core/                       blueocean-i18n/                     codedeploy/               github/                      jira-ext/                     phabricator-plugin/                runscope/           workflow-aggregator/
ansicolor/                           blueocean-jira/                     command-launcher/         github-api/                  jquery-detached/              pipeline-build-step/               saferestart/        workflow-api/
ant/                                 blueocean-jwt/                      conditional-buildstep/    github-branch-source/        jsch/                         pipeline-github-lib/               sauce-ondemand/     workflow-basic-steps/
antisamy-markup-formatter/           blueocean-personalization/          credentials/              github-oauth/                junit/                        pipeline-graph-analysis/           schedule-build/     workflow-cps/
apache-httpcomponents-client-4-api/  blueocean-pipeline-api-impl/        credentials-binding/      github-organization-folder/  ldap/                         pipeline-input-step/               scm-api/            workflow-cps-global-lib/
authentication-tokens/               blueocean-pipeline-editor/          cvs/                      github-pr-comment-build/     liquibase-runner/             pipeline-milestone-step/           script-security/    workflow-durable-task-step/
aws-credentials/                     blueocean-pipeline-scm-api/         disk-usage/               github-pullrequest/          mailer/                       pipeline-model-api/                slack/              workflow-job/
aws-java-sdk/                        blueocean-rest/                     display-url-api/          git-server/                  mapdb-api/                    pipeline-model-declarative-agent/  sse-gateway/        workflow-multibranch/
BlazeMeterJenkinsPlugin/             blueocean-rest-impl/                docker-commons/           git-userContent/             matrix-auth/                  pipeline-model-definition/         ssh/                workflow-scm-step/
blueocean/                           blueocean-web/                      docker-workflow/          greenballs/                  matrix-project/               pipeline-model-extensions/         ssh-agent/          workflow-step-api/
blueocean-autofavorite/              bouncycastle-api/                   durable-task/             handlebars/                  maven-plugin/                 pipeline-rest-api/                 ssh-credentials/    workflow-support/
blueocean-bitbucket-pipeline/        branch-api/                         envinject/                handy-uri-templates-2-api/   memegen/                      pipeline-stage-step/               ssh-slaves/         ws-cleanup/
blueocean-commons/                   build-environment/                  envinject-api/            htmlpublisher/               mercurial/                    pipeline-stage-tags-metadata/      structs/
blueocean-config/                    build-monitor-plugin/               external-monitor-job/     icon-shim/                   metrics/                      pipeline-stage-view/               subversion/
blueocean-core-js/                   build-timeout/                      favorite/                 jackson2-api/                momentjs/                     plain-credentials/                 token-macro/
blueocean-dashboard/                 built-on-column/                    feature-branch-notifier/  jacoco/                      multi-branch-project-plugin/  port-allocator/                    translation/
blueocean-display-url/               chucknorris/                        ghprb/                    javadoc/                     multiple-scms/                postbuild-task/                    variant/
blueocean-events/                    cloudbees-bitbucket-branch-source/  git/                      jenkins-design-language/     pam-auth/                     pubsub-light/                      violations/
</code></pre>
<p><small>Don't forget to scroll sideways...</small></p>
<p>At this point you may also realize &quot;Oh great, all I need to do is run this&quot;:</p>
<pre><code>java -jar jenkins-cli.jar install-plugin ${PLUGIN_NAME} --username quintessence --password █████████████
</code></pre>
<p>....for all of these?</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/sweat-smile-icon.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<p><strong>Itty Bitty Shell Script Saves the Day</strong></p>
<p>Well, as they say: play to your strengths.<br>
<img src="https://agirlhasnona.me/content/images/2018/02/quote-play-to-your-strengths-i-haven-t-got-any-said-harry-before-he-could-stop-himself-excuse-j-k-rowling-40-22-94.jpg" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>I don't know about you, but one of my strengths is using shell scripts of any size to save my sanity. Even better in this case, since minimal typing is required and, true to magical fashion, only a little <code>vim</code> wizardry is needed.</p>
<p>To get started, copy all of the plugins on the above list into a new file in <code>vim</code> and run three commands:</p>
<pre><code>:%s/\//\r/g
:%s/^\s\+//e
:g/^$/d
</code></pre>
<p>These commands will:</p>
<ul>
<li>Change all of the trailing <code>/</code> characters to new lines (<code>\r</code> is carriage return)</li>
<li>Replace all the whitespaces (<code>\s</code>) at the beginning of each line (<code>^</code>) with nothing, eliminating them.</li>
<li>Delete all of the empty lines - i.e., lines where the start of line is immediately followed by the end of line.</li>
</ul>
<p>If you used my plugin list to practice your <code>vim</code> fu, you will, like I did at this point, realize that there are 154 plugins to install. 153, if you don't count the one that's already there.</p>
<p>Keep that file open and run these three commands:</p>
<pre><code>:%s/^/&quot;/
:%s/$/&quot; /
:%s/\n//
</code></pre>
<p>These commands will:</p>
<ul>
<li>Add a <code>&quot;</code> to the beginning of each line</li>
<li>Add a <code>&quot;</code> to the end of each line (don't neglect the space here!)
<ul>
<li>But if you do, run <code>:%s/$/ /</code>, which will add a space to the end of each line</li>
</ul>
</li>
<li>This last one deletes the newline character, so you get a nice handy blob</li>
</ul>
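<p>If you'd rather not do the editing interactively, the same cleanup can be done non-interactively. This is just a sketch of a shell alternative to the <code>vim</code> commands above (the quotes around each name are optional for a bash array, since none of these plugin names contain spaces):</p>
<pre><code class="language-shell"># from the defunct instance's plugin directory: strip the trailing
# slashes, drop any empty lines, and join everything onto one line
cd /defunct-jenkins/var/lib/jenkins/plugins
ls -d */ | sed -e 's#/$##' -e '/^$/d' | tr '\n' ' '
</code></pre>
<p>Paste the resulting line between the parentheses of <code>PLUGIN_LIST=( ... )</code> and you're done.</p>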
<p>You can use that blob to make an array, and then loop through that array as follows:</p>
<pre><code class="language-shell">#!/bin/bash

PLUGIN_LIST=( &quot;ace-editor&quot; &quot;blueocean-github-pipeline&quot; &quot;cloudbees-folder&quot; &quot;git-changelog&quot; &quot;jenkins-multijob-plugin&quot; &quot;parameterized-trigger&quot; &quot;resource-disposer&quot; &quot;warnings&quot; &quot;amazon-ecr&quot; &quot;blueocean-git-pipeline&quot; &quot;cobertura&quot; &quot;git-client&quot; &quot;jira&quot; &quot;performance&quot; &quot;run-condition&quot; &quot;windows-slaves&quot; &quot;analysis-core&quot; &quot;blueocean-i18n&quot; &quot;codedeploy&quot; &quot;github&quot; &quot;jira-ext&quot; &quot;phabricator-plugin&quot; &quot;runscope&quot; &quot;workflow-aggregator&quot; &quot;ansicolor&quot; &quot;blueocean-jira&quot; &quot;command-launcher&quot; &quot;github-api&quot; &quot;jquery-detached&quot; &quot;pipeline-build-step&quot; &quot;saferestart&quot; &quot;workflow-api&quot; &quot;ant&quot; &quot;blueocean-jwt&quot; &quot;conditional-buildstep&quot; &quot;github-branch-source&quot; &quot;jsch&quot; &quot;pipeline-github-lib&quot; &quot;sauce-ondemand&quot; &quot;workflow-basic-steps&quot; &quot;antisamy-markup-formatter&quot; &quot;blueocean-personalization&quot; &quot;credentials&quot; &quot;github-oauth&quot; &quot;junit&quot; &quot;pipeline-graph-analysis&quot; &quot;schedule-build&quot; &quot;workflow-cps&quot; &quot;apache-httpcomponents-client-4-api&quot; &quot;blueocean-pipeline-api-impl&quot; &quot;credentials-binding&quot; &quot;github-organization-folder&quot; &quot;ldap&quot; &quot;pipeline-input-step&quot; &quot;scm-api&quot; &quot;workflow-cps-global-lib&quot; &quot;authentication-tokens&quot; &quot;blueocean-pipeline-editor&quot; &quot;cvs&quot; &quot;github-pr-comment-build&quot; &quot;liquibase-runner&quot; &quot;pipeline-milestone-step&quot; &quot;script-security&quot; &quot;workflow-durable-task-step&quot; &quot;aws-credentials&quot; &quot;blueocean-pipeline-scm-api&quot; &quot;disk-usage&quot; &quot;github-pullrequest&quot; &quot;mailer&quot; &quot;pipeline-model-api&quot; 
&quot;slack&quot; &quot;workflow-job&quot; &quot;aws-java-sdk&quot; &quot;blueocean-rest&quot; &quot;display-url-api&quot; &quot;git-server&quot; &quot;mapdb-api&quot; &quot;pipeline-model-declarative-agent&quot; &quot;sse-gateway&quot; &quot;workflow-multibranch&quot; &quot;BlazeMeterJenkinsPlugin&quot; &quot;blueocean-rest-impl&quot; &quot;docker-commons&quot; &quot;git-userContent&quot; &quot;matrix-auth&quot; &quot;pipeline-model-definition&quot; &quot;ssh&quot; &quot;workflow-scm-step&quot; &quot;blueocean&quot; &quot;blueocean-web&quot; &quot;docker-workflow&quot; &quot;greenballs&quot; &quot;matrix-project&quot; &quot;pipeline-model-extensions&quot; &quot;ssh-agent&quot; &quot;workflow-step-api&quot; &quot;blueocean-autofavorite&quot; &quot;bouncycastle-api&quot; &quot;durable-task&quot; &quot;handlebars&quot; &quot;maven-plugin&quot; &quot;pipeline-rest-api&quot; &quot;ssh-credentials&quot; &quot;workflow-support&quot; &quot;blueocean-bitbucket-pipeline&quot; &quot;branch-api&quot; &quot;envinject&quot; &quot;handy-uri-templates-2-api&quot; &quot;memegen&quot; &quot;pipeline-stage-step&quot; &quot;ssh-slaves&quot; &quot;ws-cleanup&quot; &quot;blueocean-commons&quot; &quot;build-environment&quot; &quot;envinject-api&quot; &quot;htmlpublisher&quot; &quot;mercurial&quot; &quot;pipeline-stage-tags-metadata&quot; &quot;structs&quot; &quot;blueocean-config&quot; &quot;build-monitor-plugin&quot; &quot;external-monitor-job&quot; &quot;icon-shim&quot; &quot;metrics&quot; &quot;pipeline-stage-view&quot; &quot;subversion&quot; &quot;blueocean-core-js&quot; &quot;build-timeout&quot; &quot;favorite&quot; &quot;jackson2-api&quot; &quot;momentjs&quot; &quot;plain-credentials&quot; &quot;token-macro&quot; &quot;blueocean-dashboard&quot; &quot;built-on-column&quot; &quot;feature-branch-notifier&quot; &quot;jacoco&quot; &quot;multi-branch-project-plugin&quot; &quot;port-allocator&quot; &quot;translation&quot; &quot;blueocean-display-url&quot; &quot;chucknorris&quot; 
&quot;ghprb&quot; &quot;javadoc&quot; &quot;multiple-scms&quot; &quot;postbuild-task&quot; &quot;variant&quot; &quot;blueocean-events&quot; &quot;cloudbees-bitbucket-branch-source&quot; &quot;git&quot; &quot;jenkins-design-language&quot; &quot;pam-auth&quot; &quot;pubsub-light&quot; &quot;violations&quot; )

for PLUGIN in &quot;${PLUGIN_LIST[@]}&quot;
do
#    echo &quot;Plugin name: ${PLUGIN}&quot;
    java -jar jenkins-cli.jar install-plugin ${PLUGIN} --username quintessence --password █████████████
done

exit 0
</code></pre>
<p>Note that I only needed to add a few lines around the big blob of text; most of the savings here come from <code>vim</code> manipulations turning a directory listing into a useful blob. The commented-out <code>echo</code> line is there in case you'd like to test by printing the plugin names first - just uncomment it and comment out the <code>jenkins-cli</code> line instead.</p>
<p>Now for the mass plugin install. Fingers crossed.</p>
<pre><code class="language-shell">→  chmod +x plugin-install.sh

→  ./plugin-install.sh
Installing ace-editor from update center
Installing blueocean-github-pipeline from update center
Installing cloudbees-folder from update center
Installing git-changelog from update center
Installing jenkins-multijob-plugin from update center
Installing parameterized-trigger from update center
Installing resource-disposer from update center
Installing warnings from update center
Installing amazon-ecr from update center
Installing blueocean-git-pipeline from update center
Installing cobertura from update center
...
</code></pre>
<p><img src="https://agirlhasnona.me/content/images/2018/02/starry-eyed-icon.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<h3 id="themomentoftruth">The Moment of Truth</h3>
<p>As you recall me mentioning more than once, <code>rsync</code>ing these directories was a time consuming affair - on the order of hours. But I had a moment of inspiration: since copying the <code>plugins</code> directory into borked Jenkins didn't de-bork it, what if I just put the <code>jobs</code> and <code>workspace</code> directories into the <em>working</em> Jenkins instead?</p>
<p>To make testing a little faster (and saner), since I already have the <code>jenkins-defunct</code> volume attached and mounted to the new Jenkins, I decided to start by creating symlinks to the defunct Jenkins' <code>jobs</code> and <code>workspace</code> directories.</p>
<p><em><strong>Important note:</strong></em> This is not production ready, please do not do this in production. This is a drill.</p>
<p>Now to continue: I'm going to back up both the empty <code>jobs</code> directory of the bare Jenkins install and its whole HOME directory so that, if all else fails, I can swiftly get the bare install back. Then I'm going to make the symlinks.</p>
<pre><code>→  sudo mkdir JENKINS_BARE_v2.104_BKP_WITH_PLUGINS

→  sudo cp -r /var/lib/jenkins JENKINS_BARE_v2.104_BKP_WITH_PLUGINS/

→  sudo service jenkins stop
Shutting down Jenkins                                      [  OK  ]

→  sudo mv /var/lib/jenkins/jobs{,--bkp}

→  sudo ln -s /jenkins-defunct/var/lib/jenkins/jobs /var/lib/jenkins/jobs

→  sudo ls -lh /var/lib/jenkins/job*
lrwxrwxrwx 1 root    root      36 Feb  2 19:21 /var/lib/jenkins/jobs -&gt; /jenkins-defunct/var/lib/jenkins/jobs

/var/lib/jenkins/jobs--bkp:
total 0

→  sudo ln -s /jenkins-defunct/var/lib/jenkins/workspace /var/lib/jenkins/workspace

→  sudo ls -lh /var/lib/jenkins/work*
lrwxrwxrwx 1 root    root      41 Feb  2 19:22 /var/lib/jenkins/workspace -&gt; /jenkins-defunct/var/lib/jenkins/workspace
</code></pre>
<p><small>Note: there was no existing <code>workspace</code> directory, as that involves a plugin that we use / that was just installed.</small></p>
<p>Now.</p>
<p>to.</p>
<p>Restart.</p>
<p>Jenkins.</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/sweat-smile-icon.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<p><strong>MOMENT OF TRUTH</strong></p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_2_24_43_PM--sanit.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_2_24_43_PM--sanit.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>JOBS JOBS JOBS</small></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/starry-eyed-icon.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<p>As a bit of a throwback: those two hanging jobs are what, if you still remember the top of this post, inspired the rollback and caused all this drama.</p>
<h2 id="whattodoresolution">What to do: Resolution</h2>
<p>Of course, since this works, that means I need to unlink the symlinks and <code>rsync</code> the actual data to where it belongs. This is a short section by word count, but it still took a good 2-3 hours to do. To unlink:</p>
<pre><code class="language-shell">→  sudo unlink /var/lib/jenkins/jobs

→  sudo unlink /var/lib/jenkins/workspace
</code></pre>
<p>And now for the <code>rsync</code>. I didn't mention it directly before, but the reason I was able to get up and walk away, open other sessions with ease, etc. is that I was using <code>tmux</code> sessions. You may have noticed it was one of the &quot;packages I like to install&quot; above. This is why.</p>
<pre><code class="language-shell">→  tmux new -s rsync
</code></pre>
<p>Here's a <a href="https://gist.github.com/henrik/1967800">tmux cheatsheet</a> if you're new to <code>tmux</code>. If you're using the <code>env</code> I cloned above, there is a <code>tmux</code> configuration in there, and to create new windows you'll use Ctrl+a, then c. If you're using the default / not that <code>env</code>, then I believe the default is Ctrl+b, then c.</p>
<p>In one window:</p>
<pre><code>sudo rsync -a /jenkins-defunct/var/lib/jenkins/jobs /var/lib/jenkins/
</code></pre>
<p>And in the other:</p>
<pre><code>sudo rsync -a /jenkins-defunct/var/lib/jenkins/workspace /var/lib/jenkins/
</code></pre>
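<p>A gotcha worth double checking before kicking off long copies like these: <code>rsync</code> treats a source path <em>without</em> a trailing slash as &quot;copy this directory itself into the destination&quot;, so it's easy to accidentally end up with a nested <code>jobs/jobs</code>. A quick throwaway demonstration (the <code>/tmp/demo-*</code> paths are just examples):</p>
<pre><code class="language-shell"># set up throwaway directories to demonstrate
mkdir -p /tmp/demo-src/jobs /tmp/demo-dst /tmp/demo-dst2
touch /tmp/demo-src/jobs/config.xml

# no trailing slash on the source: the 'jobs' directory itself lands
# inside the destination, as /tmp/demo-dst/jobs/config.xml
rsync -a /tmp/demo-src/jobs /tmp/demo-dst/

# trailing slash on the source: only the directory *contents* are copied,
# so the file lands at /tmp/demo-dst2/config.xml
rsync -a /tmp/demo-src/jobs/ /tmp/demo-dst2/

ls /tmp/demo-dst/jobs/config.xml /tmp/demo-dst2/config.xml
</code></pre>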
<p>And then you wait.</p>
<p>As a quick tip: I mentioned above that I had initially made this instance with a GP2 type SSD in AWS. In hindsight, it would have been nice to have had IO1, and then it would have made more sense to bump the instance type to something beefier, at least for the transfer, so it would have been less slow. There was no quick way to change the volume from GP2 to IO1 though, so I would have needed to snapshot and recreate it with a new instance. Alas.</p>
<p>I can verify that after the <code>rsync</code> completed that Jenkins booted up successfully. Let's take another look at that sweet, sweet image.</p>
<p><a href="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_2_24_43_PM--sanit.png"><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_2_24_43_PM--sanit.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></a><br>
<small>JOBS JOBS JOBS</small></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/starry-eyed-icon.png" alt="Opsfire: Recovering Jenkins after Complete Failure"><br>
<small>Source: <a href="http://www.iconarchive.com/show/yolks-icons-by-bad-blood.html">IconArchive</a></small></p>
<p><strong>GitHub OAuth Note</strong></p>
<p>When I was flipping the routes around to swap the new Jenkins to production, I noticed it kept trying to preserve the old route. There are a couple of ways to add the Jenkins route. One is in the UI, if it's working. To do that go to Manage Jenkins -&gt; Configure System and scroll to the Jenkins location section:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_10_56_19_PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_10_57_48_PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>The other place to do it is by editing the <code>/var/lib/jenkins/jenkins.model.JenkinsLocationConfiguration.xml</code>.</p>
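<p>For reference, that file is a small XML document. A rough sketch of its shape (the values here are illustrative - check your own copy for the exact contents; <code>jenkinsUrl</code> is the element to update):</p>
<pre><code class="language-xml">&lt;?xml version='1.1' encoding='UTF-8'?&gt;
&lt;jenkins.model.JenkinsLocationConfiguration&gt;
  &lt;adminAddress&gt;admin@example.com&lt;/adminAddress&gt;
  &lt;jenkinsUrl&gt;https://jenkins.example.com/&lt;/jenkinsUrl&gt;
&lt;/jenkins.model.JenkinsLocationConfiguration&gt;
</code></pre>
<p>Jenkins reads this at startup, so restart (or use &quot;Reload Configuration from Disk&quot;) after editing it by hand.</p>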
<p>I verified that the URL was correctly set in both of these places; however, Jenkins kept bouncing back to the <code>jenkins-dev.example.com</code> route I had made for it and it also popped a notification that I had a broken reverse proxy. So what gives?</p>
<p>Apparently the culprit was GitHub OAuth. When you configure the OAuth app in GitHub it looks like this:</p>
<p><img src="https://agirlhasnona.me/content/images/2018/02/Screen_Shot_2018-02-02_at_11_02_52_PM.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p>The fields indicated with arrows were still using the old route, so Jenkins kept redirecting to it during authentication.</p>
<h2 id="postmortemaddingresiliencyandwhatnot">Post Mortem: Adding Resiliency and What Not</h2>
<p>So one of the reasons I ended up in this mess is the lack of backups for a single point of Jenkins-shaped failure. There were also some undocumented dependencies and a few other pain points that were uncovered as we redid the first round of jobs. There are actually enough points here that I'm splitting this portion into its own post, which will be released shortly.</p>
<p><img src="https://agirlhasnona.me/content/images/2017/05/opsfire_ribbon_300x300.png" alt="Opsfire: Recovering Jenkins after Complete Failure"></p>
<p><small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
<hr>
<p>Sources for header: <a href="https://wiki.jenkins.io/display/JENKINS/Logo">Jenkins logo</a> and <a href="https://jenkins.io/artwork/">Jenkins art: Fire</a> from Jenkins site, <a href="https://adorabless.deviantart.com/art/Level-2-Health-Potion-OPEN-373505284">Health Potion by adorabless @ DeviantArt</a>, and a <a href="https://www.freepik.com/free-icon/curved-arrow-point-to-down_695368.htm">curved arrow from FreePik</a>. Fiery background is from <a href="https://www.shutterstock.com/g/bernatskaya%20oxana">Shutterstock user Bernatskaya Oxana</a>.<br>
</p>
</div>]]></content:encoded></item><item><title><![CDATA[OpsFire: The case of the HTML PEM file]]></title><description><![CDATA[Spoilers: the PEM file was an HTML document, but still gave the appearance of working.]]></description><link>https://agirlhasnona.me/opsfire-the-case-of-the-html-pem-file/</link><guid isPermaLink="false">5a6f8f5785e83208fca505c5</guid><category><![CDATA[opsfire]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Mon, 29 Jan 2018 23:55:51 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><blockquote>
<p>A tale as old as time: always check your assumptions.</p>
</blockquote>
<p>I was contacted a couple weeks ago by a team member who said they could not <code>ssh</code> into an EC2 instance using what should have been the appropriate key, which I'll refer to as <code>aws.pem</code>. I asked them to try to <code>ssh</code> into the instance and send me the output, so they sent me this:</p>
<pre><code class="language-shell">$ ssh -i aws.pem georgeparley@dev.example.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'aws.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key &quot;aws.pem&quot;: bad permissions
georgeparley@dev.example.com: Permission denied (publickey).
</code></pre>
<p>Seemed easy enough, change the permissions to <code>0400</code> and the problem will resolve, right?</p>
<p>Wrong.</p>
<pre><code class="language-shell">$ ssh -i aws.pem georgeparley@dev.example.com
Load key &quot;aws.pem&quot;: invalid format
georgeparley@dev.example.com: Permission denied (publickey).
</code></pre>
<p>Wait, what?</p>
<p>Since the user was on a Mac, I asked them to send me the contents of their pem file using <code>pbcopy</code> so I could be sure the whole file would be picked up:</p>
<pre><code class="language-shell">cat /path/to/aws.pem | pbcopy
</code></pre>
<p>I'll paste the first couple of lines here:</p>
<pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot; &gt;

&lt;head&gt;
</code></pre>
<p>Well that's a wholly unusual pem key, isn't it? It turns out that the actual contents of what should have been the pem key were buried alllll the way down in a lengthy HTML doc. Not sure how that happened. The user claimed they got the key from someone else and that it &quot;works for them&quot;. 🤔</p>
<p>In any event, I had them rip out the HTML bits and keep only the relevant piece, i.e.:</p>
<pre><code>-----BEGIN RSA PRIVATE KEY-----
{{{{ Key Contents }}}}
-----END RSA PRIVATE KEY-----
</code></pre>
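<p>If you find yourself doing this more than once (foreshadowing), the key block can be carved out with a one-liner instead of hand-editing. A sketch - the file names here are just examples:</p>
<pre><code class="language-shell"># keep only the lines between the BEGIN and END markers, inclusive,
# then lock down the permissions on the cleaned-up key
sed -n '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/p' aws.pem &gt; aws-clean.pem
chmod 400 aws-clean.pem
</code></pre>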
<p>That worked, so hooray.</p>
<p>Why am I telling you this now? Well today I was contacted by a different team member who was having trouble <code>ssh</code>ing into an instance with <code>aws.pem</code>.</p>
<p>Suspicions. Raised.</p>
<pre><code class="language-shell">$ ssh traceysmith@dev.example.com
Load key &quot;/home/traceysmith/.ssh/aws.pem&quot;: invalid format
Permission denied (publickey).
</code></pre>
<p>I again addressed the permission denied first: indeed the file had <code>777</code> and yeah, <code>ssh</code> didn't like that at all. Changed the key to <code>400</code> and asked for the first few lines of the file.</p>
<pre><code class="language-shell">$ head -n3 .ssh/aws.pem 
&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot; &gt;
</code></pre>
<p>Oh, look, another HTML pem file. What?</p>
<p>I asked him to strip out the HTML bits, and lo, the key worked - but he swore up and down that the key had been working before, and that he got it from the same source user. So I decided to track that guy down.</p>
<p>I asked him for the same as above, and yes he had the inception HTML pem file. I asked him if he could <code>ssh</code> to instances and he said yes, so I asked him to try to <code>ssh</code> and send me the output:</p>
<pre><code class="language-shell">→  ssh -i ~/.ssh/aws.pem jaynecobb@dev.example.com
Last login: Mon Jan 29 15:22:10 2018 from █.█.█.█
{{snip}}
</code></pre>
<p>Wait. WHAT?!</p>
<p>So then I asked him to append <code>-vv</code>. And behold, the truth will out. As the muggles say.</p>
<pre><code class="language-shell">$ ssh -i ~/.ssh/aws.pem jaynecobb@dev.example.com -vv
Warning: Identity file /Users/jaynecobb/.ssh/aws.pem not accessible: No such file or directory.
OpenSSH_6.9p1, LibreSSL 2.1.8
debug1: Reading configuration data /Users/jaynecobb/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to dev.example.com [█.█.█.█] port 22.
debug1: Connection established.
debug1: identity file /Users/jaynecobb/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/jaynecobb/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.9
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to dev.example.com:22 as 'jaynecobb'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-ed25519,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-md5-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-md5-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: first_kex_follows 0 
debug2: kex_parse_kexinit: reserved 0 
debug2: kex_parse_kexinit: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
debug2: kex_parse_kexinit: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc
debug2: kex_parse_kexinit: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: first_kex_follows 0 
debug2: kex_parse_kexinit: reserved 0 
debug1: kex: server-&gt;client chacha20-poly1305@openssh.com &lt;implicit&gt; none
debug1: kex: client-&gt;server chacha20-poly1305@openssh.com &lt;implicit&gt; none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:███████████████████████████████████████████
debug1: Host 'dev.example.com' is known and matches the ECDSA host key.
debug1: Found key in /Users/jaynecobb/.ssh/known_hosts:9
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /Users/jaynecobb/.ssh/id_rsa (0x████████████),
debug2: key: /Users/jaynecobb/.ssh/id_dsa (0x█),
debug2: key: /Users/jaynecobb/.ssh/id_ecdsa (0x█),
debug2: key: /Users/jaynecobb/.ssh/id_ed25519 (0x█),
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/jaynecobb/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 535
debug2: input_userauth_pk_ok: fp SHA256:███████████████████████████████████████████
debug1: Authentication succeeded (publickey).
Authenticated to dev.example.com ([█.█.█.█]:22).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug2: callback start
debug2: fd 3 setting TCP_NODELAY
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Mon Jan 29 15:34:04 2018 from █.█.█.█
</code></pre>
<p>To recap: <code>aws.pem</code> was <em>not</em> working - in fact it was sitting in <code>~/Downloads/aws.pem</code>, not <code>~/.ssh/aws.pem</code>, hence the &quot;not accessible&quot; warning at the top of the debug output. His <em>personal</em> key, however, was on the instance in question, so when <code>aws.pem</code> couldn't be found, <code>ssh</code> fell back to the <code>id_rsa</code> key hanging out in his keychain, and <em>that</em> succeeded. And since the <code>ssh</code> command wasn't run verbosely, it did all of this silently and appeared to Just Work.</p>
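One way to guard against this particular silent fallback is to tell `ssh` to offer <em>only</em> the key you name, either with `-o IdentitiesOnly=yes` on the command line or in `~/.ssh/config` (host and path below are the same illustrative ones used above):

```
Host dev.example.com
    IdentityFile ~/.ssh/aws.pem
    IdentitiesOnly yes
```

With `IdentitiesOnly yes`, `ssh` won't fall back to agent or default keys for that host, so a missing or broken <code>aws.pem</code> fails loudly instead of appearing to Just Work.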
<p>Things never Just Work, though. Always check your assumptions.</p>
<p><img src="https://agirlhasnona.me/content/images/2017/05/opsfire_ribbon_300x300.png" alt="OpsFire Badge"></p>
<p><small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[OpsFire: Not All Databases]]></title><description><![CDATA[Your databases aren't replicating. Did you try turning them off and on again? No, really.]]></description><link>https://agirlhasnona.me/opsfire-not-all-databases/</link><guid isPermaLink="false">5a37f5997d330222d155822f</guid><category><![CDATA[opsfire]]></category><category><![CDATA[databases]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Mon, 18 Dec 2017 21:16:41 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2017/12/opsfire-notalldatabases-v2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2017/12/opsfire-notalldatabases-v2.png" alt="OpsFire: Not All Databases"><p>Over the weekend we consolidated some of our databases onto a single RDS instance, as a dual effort to reduce costs as well as to allow for cross database joins.</p>
<p>Our primary RDS instance has a read only replica, so imagine my surprise when the next morning I see this:</p>
<pre><code class="language-sql">primadmin@mysql_prod &gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| prod_db_1          |
| prod_db_2          |
| innodb             |
| mysql              |
| performance_schema |
| sys                |
| tmp                |
+--------------------+
8 rows in set (0.01 sec)

---

readadmin@mysql_prod-ro ]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| prod_db_1          |
+--------------------+
2 rows in set (0.00 sec)
</code></pre>
<p>Looks like something, somewhere is missing from how we added <code>prod_db_2</code> to <code>mysql_prod</code> last night. I took a look at the read only replica properties to verify its parameter group and noticed this:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/rds_mysql_instance_pending_reboot.png" alt="OpsFire: Not All Databases"></p>
<p>Hrm 🤔</p>
<p>Since it's the middle of the day, I needed to schedule a reboot rather than just rebooting the instance. So we're going to cycle back around to this later.</p>
<p>While I await my reboot window, I decide to rule out two other possibilities and see if there are any &quot;surprises&quot; in how AWS implements MySQL v5.7.19. Specifically, I want to know whether the read only replica will pick up a second database in both of the following scenarios, as opposed to only grabbing the primary database:</p>
<ol>
<li>Create the read only replica <em>before</em> creating a second database</li>
<li>Create the read only replica <em>after</em> creating a second database</li>
</ol>
<p>As a quick note about the instance creation process itself: I noticed I couldn't create a new instance with the same version of MySQL that our production instance is running:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/rds_mysql_no_instance_types.png" alt="OpsFire: Not All Databases"><br>
<small>Note the lack of instance types available.</small></p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/rds_mysql_all_the_instance_types.png" alt="OpsFire: Not All Databases"><br>
<small>And here are all the instance types.</small></p>
<p>So to even start my test, I needed to create an instance with 5.7.16, then <em>upgrade</em> to 5.7.19. Lovely.</p>
<p>Skipping past that part of today's mini-🔥, after spinning up the test instance I went through the above test cases. For brevity, the steps for test case #2 are shown below:</p>
<pre><code class="language-sql">primary mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| innodb             |
| mysql              |
| performance_schema |
| sys                |
| testingonetwothree |
+--------------------+
6 rows in set (0.00 sec)

primary mysql&gt; create database iambluedabudee;
Query OK, 1 row affected (0.01 sec)

primary mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| iambluedabudee     |
| innodb             |
| mysql              |
| performance_schema |
| sys                |
| testingonetwothree |
+--------------------+
7 rows in set (0.00 sec)
</code></pre>
<p>Moment of truth: did the read only replica pick up the second database?</p>
<pre><code class="language-sql">replica mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| iambluedabudee     |
| innodb             |
| mysql              |
| performance_schema |
| sys                |
| testingonetwothree |
+--------------------+
7 rows in set (0.00 sec)
</code></pre>
<p>Brilliant 😄 It turns out that in both cases (#1 not being documented above) the replica picked up the second database.</p>
<p>Upon the arrival of the reboot window I was able to reboot the production replica without an issue and found that both databases were now in the replica:</p>
<pre><code class="language-sql">readadmin@mysql_prod-ro ]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| prod_db_1          |
| prod_db_2          |
+--------------------+
3 rows in set (0.00 sec)
</code></pre>
<p>Something else that occurred to me, which I neglected to do before the reboot, is checking the version <em>in console</em> like so:</p>
<pre><code class="language-sql">readadmin@mysql_prod-ro ]&gt; show variables like '%version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.7.19                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| tls_version             | TLSv1,TLSv1.1                |
| version                 | 5.7.19-log                   |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
8 rows in set (0.00 sec)
</code></pre>
<p>It's entirely probable, nay likely, that the version being reported in the <em>AWS console</em> (below) was incorrect:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/rds_mysql_database_version.png" alt="OpsFire: Not All Databases"><br>
<small>Pinocchio versioning.</small></p>
<p>The actual version was probably a prior minor release, the upgrade being applied only on reboot.</p>
<p><img src="https://agirlhasnona.me/content/images/2017/05/opsfire_ribbon_300x300.png" alt="OpsFire: Not All Databases"></p>
<p><small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
<p><small>Source for header: Cloud database image from <a href="https://www.iconfinder.com/icons/379336/cloud_database_icon">Iconfinder</a>, firey background from burnt embers created by <a href="https://www.shutterstock.com/g/bernatskaya%20oxana">Shutterstock user Bernatskaya Oxana</a>.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Defeating Invisicharacters with Pie]]></title><description><![CDATA[Saving your sanity with a code sanitizing dessert.]]></description><link>https://agirlhasnona.me/defeating-invisicharacters-with-pie/</link><guid isPermaLink="false">5a316650dfcfc806cf8ac75a</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Thu, 14 Dec 2017 04:32:36 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2017/12/perl-pie-header.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2017/12/perl-pie-header.png" alt="Defeating Invisicharacters with Pie"><p>Happy Santa Lucia Day to those who celebrate!</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/santa-lucia-wreath-ebay.jpg" alt="Defeating Invisicharacters with Pie"><br>
<small>Source: Table wreath from an eBay blog post, <a href="http://www.ebay.com/gds/How-to-Make-an-Advent-Wreath-/10000000205661025/g.html">here</a>.</small></p>
<h2 id="dessertscansavetheday">Desserts Can Save the Day</h2>
<p>Although Santa Lucia Day is typically celebrated with lussekatter, i.e. rolls of sweet deliciousness, what is going to help us today is actually pie.</p>
<p>That's right, pie.</p>
<p>I recently ran into <a href="https://agirlhasnona.me/carriage-returns-matter/">multiple issues</a> where I had invisible characters in a CSV file. Notably, issues with carriage returns and a <code>feff</code> at the beginning of each line, which is a zero width no-break space. You may recall from that post that I mentioned there is always more than one error - and here we are.</p>
<p>In a subsequent CSV file, I noticed even. more. invisicharacters. Visually, it looked something like this:</p>
<pre><code>12345  ,&quot;some text&quot;
</code></pre>
<p>And I thought: oh, some white spaces. Instead of going right for the kill, my recent invisiperience taught me caution. I moved the cursor over the character and hit <code>x</code>.</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/map-x-marks-the-spot.png" alt="Defeating Invisicharacters with Pie"><br>
<small>Source: X Marks the Spot map, <a href="https://www.earlychildhoodireland.ie/blog/x-marks-spot-following-childrens-interests/">here</a>.</small></p>
<p>Except it doesn't.</p>
<p>This one was a little harder to troubleshoot. I use <a href="https://github.com/jhunt/env">my friend's vimrc</a> configuration, which means that I could use ctrl-H to view the hexadecimal characters. Turns out they weren't white spaces at all:</p>
<pre><code class="language-bash">00000000: 31 32 33 34 35 c2 a0 2c 22 73 6f 6d 65 20 74 65  12345..,&quot;some te$
</code></pre>
<p><code>c2 a0</code> is the UTF-8 encoding of <code>00A0</code>, which is a non-breaking space.</p>
<p>!@#$ invisicharacters.</p>
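If you don't have a hex view handy, plain grep can hunt the byte pair down. A small sketch, building a sample line inline (GNU grep's `-P` is assumed to be available):

```shell
# Build a sample line containing the UTF-8 non-breaking space bytes (c2 a0)
printf '12345\xc2\xa0,"some text"\n' > sample.csv

# Match the raw byte pair; -n reports the offending line numbers
LC_ALL=C grep -nP '\xc2\xa0' sample.csv
```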
<p>Of course, what's in the file right now is the raw byte pair <code>c2 a0</code>, not a decoded UTF-8 character - which means that using <code>perl -CSD</code> in my replacement doesn't work like it did for <code>feff</code>, which <em>was</em> being matched as UTF-8.</p>
<p>Why doesn't it work?</p>
<p><code>perl -CSD</code> is shorthand for <code>perl -CIOEio</code> (<code>S</code> expands to <code>IOE</code> and <code>D</code> to <code>io</code>), which breaks down as:</p>
<pre><code>I          STDIN is assumed to be UTF-8
O          STDOUT is UTF-8
E          STDERR is UTF-8
i          UTF-8 is the default PerlIO layer for input streams
o          UTF-8 is the default PerlIO layer for output streams
</code></pre>
<p>You can read more about this on the <a href="https://perldoc.perl.org/perlrun.html#Command-Switches">Perldoc</a>.</p>
<p>So, if <code>CSD</code> can't help in this case, what will?</p>
<p><code>-pi -e</code> (note the split: <code>-i</code> optionally takes a backup extension, so writing it as the cluster <code>-pie</code> would make Perl treat the <code>e</code> as that extension):</p>
<pre><code>p          Loops through input lines and prints them, similar to sed
i          Edit in place
e          Perl command line expression
</code></pre>
<p>To fix this specifically:</p>
<pre><code class="language-shell">perl -pi -e 's/\x{c2}\x{a0}//' $FILE
</code></pre>
<p>This tells <code>perl</code> to replace the first instance of <code>c2 a0</code> on each line (here, the only one) with nothing, editing <code>$FILE</code> in place.</p>
<p><small>Header source: <a href="https://static.vecteezy.com/system/resources/previews/000/094/018/original/apple-pie-pieces-chart-vector.png">Pie vector from VectEezy</a> and ASCII conversion from <a href="http://picascii.com/">picascii</a>.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Running macOS in Vagrant]]></title><description><![CDATA[Twitter user wanted to know how to run macOS High Sierra in a Vagrant VM. Let the investigation begin!]]></description><link>https://agirlhasnona.me/running-macos-in-vagrant/</link><guid isPermaLink="false">5a27664397a23327bcf903db</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Wed, 13 Dec 2017 03:22:23 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2017/12/macos-vagrant-header-v2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2017/12/macos-vagrant-header-v2.png" alt="Running macOS in Vagrant"><p>Well Holy Hannah, I received my first Twitter DM over a blog post I've written. I'm officially a pro now 🧐</p>
<p>The DM was from a user that was having some issues with the instructions I provided on a post I wrote in 2015 for S&amp;W <a href="http://www.starkandwayne.com/blog/running-a-mac-vm-on-a-mac-using-virtualbox/">here</a> about how to run the then-current release of macOS inside a Vagrant box.</p>
<p>So I thought, what the heck - that blog post is 2 yrs old, and in computer time that's practically middle aged. And Trump's been president for essentially a full year now, and let's not even talk about what that's done to the flow of time.</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/black-hole-spacedotcom.png" alt="Running macOS in Vagrant"><br>
<small>Source: Space.com article, <a href="https://www.space.com/34281-do-black-holes-die.html">here</a>. Article credits NASA/JPL-CalTech.</small></p>
<p>tldr - it's worth revisiting, Me Said To Me</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/evil-miss-piggy-me-to-me.png" alt="Running macOS in Vagrant"></p>
<h2 id="laptopspecs">Laptop Specs</h2>
<ul>
<li>15&quot; 2015 Macbook Pro
<ul>
<li>16 GB of RAM</li>
<li>2.8 GHz CPU</li>
<li>1 TB SSD</li>
</ul>
</li>
<li>macOS High Sierra, 10.13.2</li>
</ul>
<h2 id="downloads">Downloads</h2>
<ul>
<li><a href="https://itunes.apple.com/us/app/macos-high-sierra/id1246284741?mt=12">macOS High Sierra from the App Store</a>
<ul>
<li>Image is approximately 5.2 GB</li>
</ul>
</li>
<li><a href="https://www.vagrantup.com/downloads.html">latest Vagrant from Hashicorp</a>
<ul>
<li>As of this post the latest release is 2.0.1</li>
<li>Image is approximately 70 MB</li>
</ul>
</li>
<li><a href="https://www.virtualbox.org/wiki/Downloads">latest release of VirtualBox from Oracle</a>
<ul>
<li>As of this post the latest release is 5.2.2</li>
<li>Image is approximately 93 MB</li>
</ul>
</li>
<li><a href="https://brew.sh/">Homebrew</a> - a package installer for macOS. Not required but recommended / encouraged.</li>
<li><a href="https://www.iterm2.com/version3.html">iTerm</a> - an alternative to the Terminal app included in macOS. Again, not required but recommended. For the rest of this post &quot;terminal&quot; can be used interchangeably for iTerm or Terminal, whichever you choose.</li>
</ul>
<p>Protip: you can install Vagrant and VirtualBox using Homebrew:</p>
<ul>
<li>Open terminal</li>
<li>Install Homebrew (<code>brew</code>) using the command provided on the link above</li>
<li>Install Vagrant<br><code>brew cask install vagrant</code></li>
<li>Install VirtualBox<br><code>brew cask install virtualbox</code></li>
</ul>
<p>The main benefit of using a package installer like this is that your packages, like Vagrant and VirtualBox, can now be kept updated with <code>brew update</code> followed by <code>brew upgrade</code>, rather than by manually downloading / updating package installers from various websites.</p>
<p>For the VirtualBox install you will be prompted to open <code>System Preferences → Security &amp; Privacy</code> at some point. If you don't do this &quot;fast enough&quot; (by the system's pre-programmed determination) then the install may error out. Fear not! It caches the install like so:</p>
<pre><code class="language-shell">The incomplete download is cached at /Users/quintessence/Library/Caches/Homebrew/Cask/virtualbox--5.2.2-119230.dmg.incomplete
</code></pre>
<p>So if the install halts for this reason, or because of a <code>Connection reset by peer</code> error (typically a sign of a slow or flaky internet connection), or for any other reason, then you can just hit the up arrow to re-run the last command and hit Enter. Just make sure you fix whatever it asks you to first 😉</p>
<p>What this will essentially look like in terminal:</p>
<pre><code class="language-shell">→  brew cask install vagrant
==&gt; Satisfying dependencies
==&gt; Downloading https://releases.hashicorp.com/vagrant/2.0.1/vagrant_2.0.1_x86_64.dmg
######################################################################## 100.0%
==&gt; Verifying checksum for Cask vagrant
==&gt; Installing Cask vagrant
==&gt; Running installer for vagrant; your password may be necessary.
==&gt; Package installers may write to any location; options such as --appdir are ignored.
Password: ██████████████████████████
==&gt; installer: Package name is Vagrant
==&gt; installer: Installing at base path /
==&gt; installer: The install was successful.
🍺  vagrant was successfully installed!

→  brew cask install virtualbox
==&gt; Tapping caskroom/cask
Cloning into '/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask'...
remote: Counting objects: 3962, done.
remote: Compressing objects: 100% (3928/3928), done.
remote: Total 3962 (delta 38), reused 921 (delta 30), pack-reused 0
Receiving objects: 100% (3962/3962), 1.35 MiB | 9.10 MiB/s, done.
Resolving deltas: 100% (38/38), done.
Tapped 0 formulae (3,971 files, 4.2MB)
==&gt; Creating Caskroom at /usr/local/Caskroom
==&gt; We'll set permissions properly so we won't need sudo in the future
Password:
==&gt; Caveats
To install and/or use virtualbox you may need to enable their kernel extension in

  System Preferences → Security &amp; Privacy → General

For more information refer to vendor documentation or the Apple Technical Note:

  https://developer.apple.com/library/content/technotes/tn2459/_index.html

==&gt; Satisfying dependencies
==&gt; Downloading http://download.virtualbox.org/virtualbox/5.2.2/VirtualBox-5.2.2-119230-OSX.dmg
######################################################################## 100.0%
==&gt; Verifying checksum for Cask virtualbox
==&gt; Installing Cask virtualbox
==&gt; Running installer for virtualbox; your password may be necessary.
==&gt; Package installers may write to any location; options such as --appdir are ignored.
Password: ██████████████████████████
==&gt; installer: Package name is Oracle VM VirtualBox
==&gt; installer: Installing at base path /
==&gt; installer: The install failed (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.)
==&gt; Purging files for version 5.2.2-119230 of Cask virtualbox
==&gt; installer: Package name is Oracle VM VirtualBox
==&gt; installer: Installing at base path /
==&gt; installer: The install was successful.
🍺  virtualbox was successfully installed!
</code></pre>
<p><img src="https://agirlhasnona.me/content/images/2017/12/coffee-swirl-divider-full-half.png" alt="Running macOS in Vagrant"><br>
<small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
<h2 id="thegrittybits">The Gritty Bits</h2>
<p>Ok, so now that we have All the Things we need to do ... quite a few steps, actually.</p>
<h3 id="creatingyourisofile">Creating your ISO File</h3>
<p>First, when you download High Sierra, or any other macOS release to date, you are downloading the file as a DMG, which is a proprietary Apple disk image format. In order to run macOS in VirtualBox, we'll need to convert the DMG file to a more general format such as an ISO. To do this, we're going to start by converting the installer DMG that we downloaded from the App Store into an image of the full OS, store that in a DMG, and then use a utility called PowerISO to do the final conversion.</p>
<p>If that just read like a paragraph of panic, don't worry! We're going through it step by step.</p>
<pre><code class="language-shell">hdiutil attach /Applications/Install\ macOS\ High\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -mountpoint /Volumes/HighSierraInstall

hdiutil create -o /tmp/HighSierra.cdr -size 5130m -layout SPUD -fs HFS+J
created: /tmp/HighSierra.cdr.dmg

hdiutil attach /tmp/HighSierra.cdr.dmg -noverify -mountpoint /Volumes/InstallBuild

sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/InstallBuild

mv /tmp/HighSierra.cdr.dmg ~/Desktop/HighSierraToMakeISO.dmg
</code></pre>
<p>The output will look like this:</p>
<pre><code class="language-shell">→  hdiutil attach /Applications/Install\ macOS\ High\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -mountpoint /Volumes/HighSierraInstall
/dev/disk2              GUID_partition_scheme
/dev/disk2s1            EFI
/dev/disk2s2            Apple_HFS                       /Volumes/HighSierraInstall

→  hdiutil create -o /tmp/HighSierra.cdr -size 5130m -layout SPUD -fs HFS+J
created: /tmp/HighSierra.cdr.dmg

→  hdiutil attach /tmp/HighSierra.cdr.dmg -noverify -mountpoint /Volumes/InstallBuild
/dev/disk3              Apple_partition_scheme
/dev/disk3s1            Apple_partition_map
/dev/disk3s2            Apple_HFS                       /Volumes/InstallBuild

→  sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/InstallBuild
Password:
Ready to start.
To continue we need to erase the volume at /Volumes/InstallBuild.
If you wish to continue type (Y) then press return: Y
Erasing Disk: 0%... 10%... 20%... 30%...100%...
Copying installer files to disk...
Copy complete.
Making disk bootable...
Copying boot files...
Failed to copy kernelcache, “prelinkedkernel” couldn’t be copied to “.IABootFiles”.
Done.

→  mv /tmp/HighSierra.cdr.dmg ~/Desktop/HighSierraToMakeISO.dmg
</code></pre>
<h4 id="troubleshooting">Troubleshooting</h4>
<p>What to do if you see the following error:</p>
<pre><code class="language-shell">→  sudo hdiutil attach /Applications/Install\ macOS\ High\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -mountpoint /Volumes/HighSierraInstall
Password:
hdiutil: attach failed - Resource busy
</code></pre>
<p>This most likely means that you've already mounted the <code>InstallESD</code> image. Go into <code>Disk Utility</code> and unmount it:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-umount-error.png" alt="Running macOS in Vagrant"></p>
<p>This likely happened when the High Sierra download completed and immediately opened the installer.</p>
<h3 id="usingyourvhdfilewithvirualbox">Using your VHD file with VirtualBox</h3>
<p>Open VirtualBox:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-find.png" alt="Running macOS in Vagrant"></p>
<p>Click New:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-new.png" alt="Running macOS in Vagrant"></p>
<p>Choose a name, Type is &quot;Mac OS X&quot;, and Version is &quot;macOS 10.13 High Sierra (64-bit)&quot;. Then click &quot;Continue&quot;:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-imagetype.png" alt="Running macOS in Vagrant"></p>
<p>The default memory is 2 GB, I recommend upping it to 4 GB though if you can:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-resources.png" alt="Running macOS in Vagrant"></p>
<p>Choose &quot;Create a Virtual Hard Disk File now&quot; and &quot;Create&quot;:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/12/macos-virtualbox-createvhd.png" alt="Running macOS in Vagrant"></p>
<p><strong>For the curious</strong><br>
The image manipulations above, with arrow / blurring of text / etc., were done with the <a href="https://itunes.apple.com/us/app/skitch-snap-mark-up-share/id425955336?mt=12">Skitch</a> app. You only need an account to save in Evernote - account-less you can just locally save / export to whatever image file format you prefer.</p>
<h2 id="quickestvagranttutoralever">Quickest Vagrant Tutorial Ever</h2>
<pre><code class="language-shell"># Add the Vagrant box you want to use; we'll use Ubuntu 12.04 for this example.
# You can find more boxes on Vagrant Cloud.
$ vagrant box add precise64 http://files.vagrantup.com/precise64.box

# Create a test directory and cd into it, then initialize the Vagrant machine.
$ vagrant init precise64

# Start the machine.
$ vagrant up

# You can now ssh into the machine.
$ vagrant ssh

# Halt the machine when you're done.
$ vagrant halt

# Other useful commands: suspend, destroy, etc.
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Feeding the Line: Carriage Returns Matter]]></title><description><![CDATA[<div class="kg-card-markdown"><p>It all started as a request as easy as any other requests: please take this file and use it to upload data to the database.</p>
<p>Sure, no problem. First, open the Excel sheet ... clean it up a bit ... save as a CSV ... and then I have it in the form</p></div>]]></description><link>https://agirlhasnona.me/carriage-returns-matter/</link><guid isPermaLink="false">5a2eb92297a23327bcf903de</guid><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Mon, 11 Dec 2017 17:38:52 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>It all started as a request as easy as any other requests: please take this file and use it to upload data to the database.</p>
<p>Sure, no problem. First, open the Excel sheet ... clean it up a bit ... save as a CSV ... and then I have it in the form that this handy little Perl script can upload to the database.</p>
<p>But when I run the script, which is handling zip codes for context, all the zip codes are <code>00000</code>. Not only is this not a valid zip code, even if it were the script basically would have taken several hundred unique zip codes and their location data and made them all into <code>00000</code>. 🤔</p>
<p>The first thing that I notice on opening the file is that all the rows are actually on a single line and are separated by <code>^M</code> in <code>vim</code>.</p>
<h2 id="sowhatsmandhowdoifixit">So what's <code>^M</code> and how do I fix it?</h2>
<p>This is where we circle back around to our title about carriage returns and line feeds. A few relevant bits of info to help us get started:</p>
<ul>
<li>A <em>carriage return</em> (CR) moves the cursor to the beginning of the line; it is denoted by <code>\r</code>.</li>
<li>A <em>line feed</em> (LF) moves the cursor to the next line; it is denoted by <code>\n</code>.</li>
</ul>
<p>And a few more bits of information:</p>
<ul>
<li>Windows uses <code>\r\n</code> as its End of Line sequence.</li>
<li>Mac uses <code>\r</code> as its EOL sequence.</li>
<li>Unix uses <code>\n</code> as its EOL sequence.</li>
</ul>
<p>You can see the trouble brewing now, can't you? 🔮</p>
<p>So when I used Excel on a Mac to convert the file to a CSV, it used <code>\r</code> as the newline character. When I uploaded that to the server of interest, running CentOS, the file showed up as one long line because Linux expects <code>\n</code>, not <code>\r</code>. So when I supplied the CSV file to the Perl script, a script that wasn't written to handle all three scenarios, it nope'd right outta there.</p>
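<p>If you want to check which convention a file uses before feeding it to a script, <code>file</code> will tell you. A quick sketch, with made-up filenames:</p>

```shell
# Create one sample file per EOL convention (filenames are illustrative).
printf 'zip,city\r\n' > windows.csv   # CRLF endings
printf 'zip,city\r'   > mac.csv       # CR-only endings
printf 'zip,city\n'   > unix.csv      # LF endings

file mac.csv   # reports: mac.csv: ASCII text, with CR line terminators
```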
<p>But that leads us to our purpose:</p>
<ol>
<li>How do I fix this?</li>
<li>Why <code>^M</code>?</li>
</ol>
<p>For the first, in <code>vim</code> I can do a global replacement on the new line mish-mash with:</p>
<pre><code>:%s/\r/\r/g
</code></pre>
<p>Breaking this down: in the search pattern, <code>\r</code> matches an existing carriage return - in this case, all the <code>^M</code>s. In the replacement, <code>\r</code> means something different: it inserts a proper line break. That solves our new line issue. <code>%s</code> applies this change to all lines of the file and <code>g</code> applies the change to all instances on each line. Without the <code>g</code> only the first instance of <code>^M</code> would be changed per line. (In this single line file, that means it'd only be replaced once.)</p>
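<p>If you'd rather do the fix outside of <code>vim</code>, translating the carriage returns with <code>tr</code> works too. A quick sketch with made-up filenames (note: for Windows-style <code>\r\n</code> files you'd want <code>tr -d '\r'</code> instead, so you delete the <code>\r</code> rather than end up with doubled newlines):</p>

```shell
# A CR-only file, like the one Excel-on-Mac produced.
printf 'one\rtwo\rthree\r' > mac.csv

# Translate every CR to LF; the contents become three proper lines.
tr '\r' '\n' < mac.csv > unix.csv
```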
<p>Now for the <code>^M</code>. If you pull up an <a href="http://www.bluesock.org/~willg/dev/ascii.html">ASCII chart</a>, you'll find that the line feed character, <code>\n</code>, is <code>0xA</code> (or <code>0x0A</code>); whereas the carriage return character, <code>\r</code>, is <code>0xD</code> (or <code>0x0D</code>). The reason <code>vim</code> displayed the carriage returns as <code>^M</code> is because hexadecimal <code>D</code> is 13 (0-9, then A-F for 10-15) and the 13th letter of the English alphabet is ... 🥁 ... M.</p>
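<p>This caret notation isn't unique to <code>vim</code>, either: <code>cat -v</code> renders control characters the same way, which makes it handy for spotting stray carriage returns right from the shell:</p>

```shell
printf 'one\rtwo' | cat -v   # prints: one^Mtwo
```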
<h2 id="butthatsnotyouronlyproblem">But that's not your only problem</h2>
<p>After feeling all happy that I fixed my new lines, I found another problem. Because there's always more than one 😉</p>
<p>When the file was being read in by Perl every line was prefixed with: <code>\x{feff}</code>. This is a zero width no break space.</p>
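<p><code>U+FEFF</code> is better known as the byte order mark (BOM). In UTF-8 it is encoded as the three bytes <code>EF BB BF</code> at the start of the file, which you can spot with <code>od</code>. A quick sketch, with a made-up filename:</p>

```shell
# Write a file that starts with a UTF-8 BOM (the octal escapes
# \357\273\277 are the bytes EF BB BF).
printf '\357\273\277zip,city\n' > bom.csv

# Dump the first bytes as hex; look for "ef bb bf" at the front.
od -An -tx1 bom.csv | head -n 1
```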
<p>Invisicharacters are the bane of my existence today, it seems. A quick way to fix this is:</p>
<pre><code>perl -CSD -pe 's/^\x{feff}//' ${FILENAME}.csv
</code></pre>
<p>Since I had a few files with this problem I just wrapped this into a single line <code>for</code> loop in BASH:</p>
<pre><code>for FILE in *csv; do TMPZIP=$(mktemp zip-XXXXX) &amp;&amp; perl -CSD -pe 's/^\x{feff}//' ${FILE} &gt; ${TMPZIP} &amp;&amp; cp ${TMPZIP} ${FILE} &amp;&amp; rm ${TMPZIP}; done
</code></pre>
<p>A quick explanation of the script:</p>
<ul>
<li><code>FILE in *csv</code> is a <code>foreach</code>, so the loop will perform this action on every file whose name ends in <code>csv</code> - here, all the files with the <code>.csv</code> extension</li>
<li>I used <code>mktemp</code> to make a temp file, to prevent the unlikely event where using a common temporary file name like <code>tmp</code> overwrites a <code>tmp</code> file that I actually needed / wanted. See <code>man mktemp</code> for more about the command.</li>
<li>the <code>perl ...</code> line is reading in the file, replacing the <code>feff</code> hex character with no character (removing it), and writing the output to a new file. This is because the <code>perl</code> line doesn't modify the file itself, it just prints it to screen (<code>stdout</code>).</li>
<li>I copy the <code>TMPZIP</code> file to overwrite the existing <code>FILE</code>.</li>
<li>I remove the <code>TMPZIP</code> file.</li>
</ul>
<p>Note that if you <code>mktemp zip-XXXXX.csv</code> you'll be making a CSV file, which will be pulled into your <code>for</code> loop and will wreak some havoc on what you were hoping would be an easy, clean fix. To see how this works, create some faux CSV files or just some backups of real files, and run the <code>for</code> loop with a temp file that has the CSV extension.</p>
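<p>A quick sketch of that pitfall in action, in a scratch directory:</p>

```shell
# Demonstrate the mktemp pitfall: a temp file ending in .csv
# matches the *csv glob and gets swept into the loop too.
mkdir -p scratch && cd scratch
printf 'real data\n' > real.csv
TMPZIP=$(mktemp zip-XXXXX.csv)   # the temp file itself now ends in csv ...
ls *csv                          # ... so the glob matches it alongside real.csv
```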
</div>]]></content:encoded></item><item><title><![CDATA[Opsfire: Downgrading from Purgatory to Limbo]]></title><description><![CDATA[How to recover when you've fixed nginx only to break it again to fix it again.]]></description><link>https://agirlhasnona.me/opsfire-purgatory-to/</link><guid isPermaLink="false">5a2743d897a23327bcf903d8</guid><category><![CDATA[opsfire]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Wed, 06 Dec 2017 01:32:44 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>After fixing nginx woes earlier you might be feeling great and decide: let's fix this across the board (in dev only because we're not confident in prod yet 😅).</p>
<p>Uh, oh wait.</p>
<p>There's a monitoring alarm that an instance healthcheck failed?</p>
<p>Oh, good. What's running on port 80 and 443?</p>
<pre><code class="language-shell">$ sudo netstat -tlnp | grep '443\|80'
</code></pre>
<p>Oh, good.</p>
<p>First: let's try downgrading:</p>
<pre><code class="language-shell">$ sudo yum downgrade nginx
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                                                                                                                                                                                                | 2.1 kB  00:00:00
amzn-updates                                                                                                                                                                                                                                             | 2.5 kB  00:00:00
Nothing to do
</code></pre>
<p>🙃</p>
<p>Did you know that you can undo <code>yum history</code>? Yeppers! Let's hop on that ASAP.</p>
<pre><code class="language-shell">$ sudo yum history
Loaded plugins: priorities, update-motd, upgrade-helper
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
   107 | EC2 ... &lt;ec2-user&gt;       | 2017-12-05 19:42 | Install        |    1
   106 | EC2 ... &lt;ec2-user&gt;       | 2017-12-05 19:42 | Erase          |    1 EE
   105 | EC2 ... &lt;ec2-user&gt;       | 2017-12-05 15:36 | Update         |    1 EE
   104 | EC2 ... &lt;ec2-user&gt;       | 2017-11-21 12:47 | E, I, U        |   10 EE
   103 | EC2 ... &lt;ec2-user&gt;       | 2017-11-21 12:47 | Erase          |    7
   102 | root &lt;root&gt;              | 2017-11-08 18:21 | Update         |    1 EE
   101 | EC2 ... &lt;ec2-user&gt;       | 2017-11-07 15:04 | E, I, O, U     |   76 EE
   100 | System &lt;unset&gt;           | 2017-11-06 17:49 | Update         |    1 EE
    99 | System &lt;unset&gt;           | 2017-11-02 18:13 | Update         |    1 EE
    98 | System &lt;unset&gt;           | 2017-10-31 19:57 | Update         |    1 EE
    97 | System &lt;unset&gt;           | 2017-10-13 13:10 | Update         |    1 EE
    96 | System &lt;unset&gt;           | 2017-10-10 18:54 | Update         |    1 EE
    95 | EC2 ... &lt;ec2-user&gt;       | 2017-09-19 15:30 | E, I, U        |   27 EE
    94 | System &lt;unset&gt;           | 2017-09-18 16:40 | Update         |    1 EE
    93 | System &lt;unset&gt;           | 2017-09-12 15:54 | Update         |    1 EE
    92 | System &lt;unset&gt;           | 2017-08-28 13:45 | Update         |    1 EE
    91 | EC2 ... &lt;ec2-user&gt;       | 2017-08-15 22:26 | E, I, U        |   47 EE
    90 | EC2 ... &lt;ec2-user&gt;       | 2017-07-28 11:46 | Erase          |    2 EE
    89 | System &lt;unset&gt;           | 2017-07-26 14:55 | Update         |    1 EE
    88 | EC2 ... &lt;ec2-user&gt;       | 2017-07-18 13:53 | I, U           |  183 EE
history list


$ sudo yum history undo 107
Loaded plugins: priorities, update-motd, upgrade-helper
Undoing transaction 107, from Tue Dec  5 19:42:31 2017
    Install nginx-1:1.12.1-1.33.amzn1.x86_64 @amzn-main
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.12.1-1.33.amzn1 will be erased
--&gt; Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================================
 Package                                                      Arch                                                          Version                                                                     Repository                                                         Size
================================================================================================================================================================================================================================================================================
Removing:
 nginx                                                        x86_64                                                        1:1.12.1-1.33.amzn1                                                         @amzn-main                                                        1.4 M

Transaction Summary
================================================================================================================================================================================================================================================================================
Remove  1 Package

Installed size: 1.4 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Erasing    : 1:nginx-1.12.1-1.33.amzn1.x86_64                                                                                                                                                                                                                             1/1
  Verifying  : 1:nginx-1.12.1-1.33.amzn1.x86_64                                                                                                                                                                                                                             1/1

Removed:
  nginx.x86_64 1:1.12.1-1.33.amzn1

Complete!

$ sudo yum history undo 106
Loaded plugins: priorities, update-motd, upgrade-helper
Undoing transaction 106, from Tue Dec  5 19:42:27 2017
    Erase nginx-1:1.8.0-10.25.amzn1.x86_64 @amzn-main
Error: No package(s) available to install
</code></pre>
<p>Oh right, we're no longer using our internal repo.</p>
<pre><code class="language-shell">$ sudo vim /etc/yum.repos.d/internal.repo

$ sudo yum history undo 106
Loaded plugins: priorities, update-motd, upgrade-helper
Undoing transaction 106, from Tue Dec  5 19:42:27 2017
    Erase nginx-1:1.8.0-10.25.amzn1.x86_64 @amzn-main
internal-aws-arched                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
internal-aws-noarch                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
1 packages excluded due to repository priority protections
Error: No package(s) available to install
</code></pre>
<p>😐</p>
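<p>For context, "re-enabling" the internal repo above just meant flipping <code>enabled=0</code> back to <code>enabled=1</code> in the repo file. A sketch of what such a file looks like - the repo name matches the output above, but the URL and other values here are hypothetical:</p>

```ini
# /etc/yum.repos.d/internal.repo (illustrative values only)
[internal-aws-arched]
name=Internal AWS repo (arch-specific)
baseurl=https://repo.internal.example.com/aws/arched/
enabled=1
gpgcheck=1
priority=5
```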
<p>Since the internal repo is re-enabled, let's just do a regular install:</p>
<pre><code class="language-shell"> sudo yum install nginx
Loaded plugins: priorities, update-motd, upgrade-helper
internal-aws-arched                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
internal-aws-noarch                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
1 packages excluded due to repository priority protections
Package nginx is obsoleted by nginx-all-modules, trying to install 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 instead
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx-all-modules.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx-mod-stream(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-mail(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-xslt-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-perl(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-image-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-geoip(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Running transaction check
{{ snip -&gt; you remember this bit from the earlier fire, right? }}
</code></pre>
<p>🤔</p>
<p>Googling for the package, <code>nginx-1.8.0-10.25.amzn1.x86_64</code>, yields a result on <a href="https://www.dynatrace.com/support/help/technology-support/reference/supported-nginx-binaries/">DynaTrace</a>. Great!</p>
<pre><code class="language-shell">$ wget http://packages.eu-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/Packages/nginx-1.8.0-10.25.amzn1.x86_64.rpm
--2017-12-05 20:03:23--  http://packages.eu-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/Packages/nginx-1.8.0-10.25.amzn1.x86_64.rpm
Resolving packages.eu-west-1.amazonaws.com (packages.eu-west-1.amazonaws.com)... 52.218.20.193
Connecting to packages.eu-west-1.amazonaws.com (packages.eu-west-1.amazonaws.com)|52.218.20.193|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 568699 (555K) [binary/octet-stream]
Saving to: ‘nginx-1.8.0-10.25.amzn1.x86_64.rpm’

nginx-1.8.0-10.25.amzn1.x86_64.rpm                                  100%[===================================================================================================================================================================&gt;] 555.37K   491KB/s    in 1.1s

2017-12-05 20:03:25 (491 KB/s) - ‘nginx-1.8.0-10.25.amzn1.x86_64.rpm’ saved [568699/568699]

$ sudo rpm -ivh nginx-1.8.0-10.25.amzn1.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:nginx-1:1.8.0-10.25.amzn1        ################################# [100%]
</code></pre>
<p>Now to start the service and see how it's doing:</p>
<pre><code class="language-shell">$ sudo service nginx start
Starting nginx:                                            [  OK  ]

$ nginx -v
nginx version: nginx/1.8.0

$ sudo ps aux | grep nginx
root     29442  0.0  0.1 110644  4344 ?        Ss   20:05   0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx    29443  0.0  0.1 110648  7408 ?        S    20:05   0:00 nginx: worker process
nginx    29444  0.0  0.1 110648  5844 ?        S    20:05   0:00 nginx: worker process
ec2-user 29484  0.0  0.0 110472  2112 pts/0    S+   20:05   0:00 grep --color=auto nginx

$ sudo netstat -tlnp | grep '443\|80'
tcp        0      0 0.0.0.0:443                 0.0.0.0:*                   LISTEN      29442/nginx
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      29442/nginx
</code></pre>
<p>😅</p>
<p>And that heartbeat monitor should be recovering now too.</p>
<p>Mischief managed.</p>
<p><strong>Quick Addendum</strong></p>
<p>You may have noticed that I googled for the package name <code>nginx-1.8.0-10.25.amzn1.x86_64</code> rather than <code>nginx-1:1.8.0-10.25.amzn1.x86_64</code>, the latter being how the package is identified in the error messages above. The reason is that the <code>1</code> in <code>1:</code> is what is called the Epoch. The Epoch allows you to reset a version if you change your versioning scheme. As a quick example, if you wrote a package called <code>memnommer</code> and provided a release as <code>memnommer-0.12345.6</code> but then provided the subsequent release as <code>memnommer-0.2.1</code>, your package installer would not be able to determine which is the upgrade, since <code>12345 &gt; 2</code>. In this case you would bump your Epoch number, e.g. <code>memnommer-0:0.12345.6</code> to <code>memnommer-1:0.2.1</code>. Now when you try to upgrade with <code>yum upgrade memnommer</code>, <code>yum</code> knows which package to install, as the Epoch takes precedence over the version. Note that since this value is meant for the package manager, it is usually <em>not</em> in the file name of the release itself, which is why it wasn't in my search.</p>
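<p>To make that precedence concrete, here's a tiny shell sketch of the comparison order (a simplification - the real <code>rpmvercmp</code> algorithm segments version strings more carefully - using the made-up <code>memnommer</code> versions from above):</p>

```shell
# Succeeds if EVR $1 ("epoch:version") is newer than EVR $2.
evr_newer() {
  local e1=${1%%:*} v1=${1#*:} e2=${2%%:*} v2=${2#*:}
  if [ "$e1" != "$e2" ]; then
    # Epochs differ: the epoch wins outright.
    [ "$e1" -gt "$e2" ]
  else
    # Same epoch: fall back to a version-aware comparison.
    [ "$v1" != "$v2" ] && [ "$(printf '%s\n%s\n' "$v1" "$v2" | sort -V | tail -n 1)" = "$v1" ]
  fi
}

evr_newer 1:0.2.1 0:0.12345.6 && echo "0.2.1 wins"   # epoch 1 beats epoch 0
```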
<p><img src="https://agirlhasnona.me/content/images/2017/05/opsfire_ribbon_300x300.png" alt="OpsFire Badge"></p>
<p><small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Opsfire: In Dependency Purgatory with Nginx]]></title><description><![CDATA[Recently ran into an issue where I could no longer update nginx. Spoilers: conflicting repos!]]></description><link>https://agirlhasnona.me/opsfire-dependency-purgatory-nginx/</link><guid isPermaLink="false">5a270d6897a23327bcf903d6</guid><category><![CDATA[opsfire]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Tue, 05 Dec 2017 22:11:10 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Recently ran into this issue on <strike>an</strike> a few instances in an environment I'm working in:</p>
<pre><code class="language-shell">$ sudo yum upgrade
Loaded plugins: priorities, update-motd, upgrade-helper
1 packages excluded due to repository priority protections
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.8.1-1.26.amzn1 will be obsoleted
---&gt; Package nginx-all-modules.x86_64 1:1.12.1-1.33.amzn1 will be obsoleting
--&gt; Processing Dependency: nginx-mod-stream(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-mail(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-xslt-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-perl(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-image-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-geoip(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Running transaction check
---&gt; Package nginx-mod-http-geoip.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-image-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-perl.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-xslt-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-mail.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-stream.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64
--&gt; Finished Dependency Resolution
Error: Package: 1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Removing: 1:nginx-1.8.1-1.26.amzn1.x86_64 (@amzn-main)
               nginx(x86-64) = 1:1.8.1-1.26.amzn1
           Obsoleted By: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 (amzn-main)
               Not found
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
</code></pre>
<p>Initially tried the suggested <code>--skip-broken</code> to work around the problem, but alas:</p>
<pre><code class="language-shell">$ sudo yum update --skip-broken
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                                                                                                                                                                                                | 2.1 kB  00:00:00
amzn-updates                                                                                                                                                                                                                                             | 2.5 kB  00:00:00
internal-aws-arched                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
internal-aws-noarch                                                                                                                                                                                                                                      | 2.9 kB  00:00:00
1 packages excluded due to repository priority protections
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.8.1-1.26.amzn1 will be obsoleted
---&gt; Package nginx-all-modules.x86_64 1:1.12.1-1.33.amzn1 will be obsoleting
--&gt; Processing Dependency: nginx-mod-stream(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-mail(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-xslt-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-perl(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-image-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-geoip(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Running transaction check
---&gt; Package nginx-mod-http-geoip.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-image-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-perl.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-xslt-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-mail.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-stream.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.8.1-1.26.amzn1 will be obsoleted

Packages skipped because of dependency problems:
    1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64 from amzn-main
    1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64 from amzn-main
</code></pre>
<p>While not <em>eliminating</em> the problem, it certainly makes it easier to find the source of my woe:</p>
<pre><code class="language-shell">---&gt; Package nginx.x86_64 1:1.8.1-1.26.amzn1 will be obsoleted
---&gt; Package nginx-all-modules.x86_64 1:1.12.1-1.33.amzn1 will be obsoleting
</code></pre>
<p>It looks like at some point after 1.8, AWS split nginx from a single package into separate module packages, with <code>nginx-all-modules</code> as the umbrella package that pulls them all in. In our testing environment I took a look at one of the impacted instances and removed nginx, hoping a fresh install would resolve the issue:</p>
<pre><code class="language-shell">$ sudo yum remove nginx
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.8.1-1.26.amzn1 will be erased
--&gt; Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================================
 Package                                                      Arch                                                          Version                                                                     Repository                                                         Size
================================================================================================================================================================================================================================================================================
Removing:
 nginx                                                        x86_64                                                        1:1.8.1-1.26.amzn1                                                          @amzn-main                                                        1.3 M

Transaction Summary
================================================================================================================================================================================================================================================================================
Remove  1 Package

Installed size: 1.3 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Erasing    : 1:nginx-1.8.1-1.26.amzn1.x86_64                                                                                                                                                                                                                              1/1
warning: /etc/nginx/nginx.conf saved as /etc/nginx/nginx.conf.rpmsave
warning: /etc/logrotate.d/nginx saved as /etc/logrotate.d/nginx.rpmsave
  Verifying  : 1:nginx-1.8.1-1.26.amzn1.x86_64                                                                                                                                                                                                                              1/1

Removed:
  nginx.x86_64 1:1.8.1-1.26.amzn1

Complete!


$ sudo yum install nginx
Loaded plugins: priorities, update-motd, upgrade-helper
1 packages excluded due to repository priority protections
Package nginx is obsoleted by nginx-all-modules, trying to install 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64 instead
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx-all-modules.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx-mod-stream(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-mail(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-xslt-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-perl(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-image-filter(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Processing Dependency: nginx-mod-http-geoip(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-all-modules-1.12.1-1.33.amzn1.x86_64
--&gt; Running transaction check
---&gt; Package nginx-mod-http-geoip.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-image-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-perl.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-http-xslt-filter.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-mail.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64
---&gt; Package nginx-mod-stream.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Processing Dependency: nginx(x86-64) = 1:1.12.1-1.33.amzn1 for package: 1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64
--&gt; Finished Dependency Resolution
Error: Package: 1:nginx-mod-http-xslt-filter-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-image-filter-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-mail-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-perl-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-stream-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
Error: Package: 1:nginx-mod-http-geoip-1.12.1-1.33.amzn1.x86_64 (amzn-main)
           Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
           Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
               nginx(x86-64) = 1.9.2-1
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
</code></pre>
<p>Oh, good. It looks like the package was renamed from <code>nginx</code> to <code>nginx-all-modules</code>, but that's the least of my woes. My main woe is this pair of lines that appear on repeat:</p>
<pre><code>Requires: nginx(x86-64) = 1:1.12.1-1.33.amzn1
Available: nginx-1.9.2-1.x86_64 (internal-aws-arched)
</code></pre>
<p>So the <code>nginx-all-modules</code> package, the new and shiny version, requires nginx 1.12, <em>but</em> yum can only find 1.9. What gives?</p>
<p>Assumptions are bad, but you gotta start somewhere.</p>
<p>Assumption 1: The package installer needs 1.12, but is only finding 1.9. There are probably conflicting repos.</p>
<p>Assumption 2: It is unlikely that AWS has repos that are so broken that this is an AWS problem, because nginx is commonly used and there'd probably be a ton of bug submissions over this.</p>
<p>Assumption 3: There is another repo, somewhere, with an older version of nginx wreaking havoc on my life.</p>
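<p>Before grepping files by hand, yum itself can narrow things down: <code>yum repolist</code> lists the enabled repos, and <code>yum --showduplicates list nginx</code> shows every candidate version alongside the repo it comes from. The file-level hunt can be sketched like so (run against a scratch directory here, since the repo contents below are illustrative stand-ins):</p>

```shell
# Scan repo definitions for each stanza plus its enabled/priority flags.
# (Scratch directory with illustrative contents; the real files live in
# /etc/yum.repos.d/.)
repodir=$(mktemp -d)
cat > "$repodir/internal.repo" <<'EOF'
[internal-aws-arched]
name=Private Internal Repo (arched)
baseurl=http://yum.int.example.com/yum/x86_64/
enabled=1
gpgcheck=0
priority=1
EOF
grep -H -E '^\[|^enabled=|^priority=' "$repodir"/*.repo
```

<p>On a real instance, the same grep pointed at <code>/etc/yum.repos.d/*.repo</code> surfaces the shadowing repo quickly; an enabled repo with <code>priority=1</code> is what lets it win over <code>amzn-main</code> when the priorities plugin is loaded.</p>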
<p>Running <code>cat /etc/yum.repos.d/*</code> I get ... a <em>ton</em> of output. With some focus:</p>
<pre><code class="language-shell">$ cat /etc/yum.repos.d/* | grep 'example\.com'
baseurl=http://yum.int.example.com/yum/noarch/
baseurl=http://yum.int.example.com/yum/x86_64/
$ ack 'example\.com' /etc/yum.repos.d/
/etc/yum.repos.d/internal.repo
3:baseurl=http://yum.int.example.com/yum/noarch/
11:baseurl=http://yum.int.example.com/yum/x86_64/
</code></pre>
<p>Oh, look at that. Internal repos. I open the file and change <code>enabled</code> to <code>0</code> for both:</p>
<pre><code class="language-shell">[internal-aws-noarch]
name=Private Internal Repo (noarch)
baseurl=http://yum.int.example.com/yum/noarch/
enabled=0
gpgcheck=0
priority=1
metadata_expire=1200

[internal-aws-arched]
name=Private Internal Repo (arched)
baseurl=http://yum.int.example.com/yum/x86_64/
enabled=0
gpgcheck=0
priority=1
metadata_expire=1200
</code></pre>
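<p>Hand-editing works, but the flip can be scripted too; here's a minimal sketch against a scratch copy of the repo file (the real file is <code>/etc/yum.repos.d/internal.repo</code>):</p>

```shell
# Flip enabled=1 to enabled=0 in a scratch copy of the repo definition.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[internal-aws-arched]
name=Private Internal Repo (arched)
baseurl=http://yum.int.example.com/yum/x86_64/
enabled=1
gpgcheck=0
priority=1
EOF
sed -i 's/^enabled=1$/enabled=0/' "$repo"
grep '^enabled=' "$repo"
```

<p>If <code>yum-utils</code> is installed, <code>yum-config-manager --disable internal-aws-arched</code> does the same thing, and <code>yum --disablerepo='internal-aws-*' install nginx</code> skips the repos for a single command without touching the file at all.</p>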
<p><em><strong>Now</strong></em> let's see what happens when I try to install nginx:</p>
<pre><code class="language-shell">$ sudo yum install nginx
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--&gt; Running transaction check
---&gt; Package nginx.x86_64 1:1.12.1-1.33.amzn1 will be installed
--&gt; Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================================
 Package                                                      Arch                                                          Version                                                                      Repository                                                        Size
================================================================================================================================================================================================================================================================================
Installing:
 nginx                                                        x86_64                                                        1:1.12.1-1.33.amzn1                                                          amzn-main                                                        561 k

Transaction Summary
================================================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 561 k
Installed size: 1.4 M
Is this ok [y/d/N]: y
Downloading packages:
nginx-1.12.1-1.33.amzn1.x86_64.rpm                                                                                                                                                                                                                       | 561 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 1:nginx-1.12.1-1.33.amzn1.x86_64                                                                                                                                                                                                                             1/1
  Verifying  : 1:nginx-1.12.1-1.33.amzn1.x86_64                                                                                                                                                                                                                             1/1

Installed:
  nginx.x86_64 1:1.12.1-1.33.amzn1

Complete!
</code></pre>
<p>Bam!</p>
<p><img src="https://agirlhasnona.me/content/images/2017/05/opsfire_ribbon_300x300.png" alt="OpsFire Badge"></p>
<p><small>Documented on my <a href="http://agirlhasnona.me/frequently-used-images/">frequently used assets</a> page.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Saturday Burnt Pi]]></title><description><![CDATA[So I decided to upgrade my Pi3 this morning. Lesson learned: always read the release notes!]]></description><link>https://agirlhasnona.me/saturday-burnt-pi/</link><guid isPermaLink="false">5a107e784a5a3706d9af70d1</guid><category><![CDATA[linux]]></category><dc:creator><![CDATA[Quintessence]]></dc:creator><pubDate>Sat, 18 Nov 2017 21:25:57 GMT</pubDate><media:content url="https://agirlhasnona.me/content/images/2017/11/Raspberry-Pi-on-Fire-v4.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://agirlhasnona.me/content/images/2017/11/Raspberry-Pi-on-Fire-v4.png" alt="Saturday Burnt Pi"><p>So I decided to upgrade my Pi3 this morning, a task that I thought would be pretty straightforward. It's worth noting that I haven't upgraded my Pi since I did the initial package installs when I set up the OS and pihole, so ... it was more than due for at least a cursory upgrade.</p>
<p>I logged into my Pi:</p>
<pre><code class="language-bash">self@GreenScreen:~$  ssh pi
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-1009-raspi2 armv7l)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

106 packages can be updated.
1 update is a security update.


*** System restart required ***
Last login: Mon Mar  6 01:11:08 2017 from 192.168.█.█
ubuntu@ubuntu:~$ sudo apt-get update &amp;&amp; sudo apt-get upgrade
Hit:1 http://ppa.launchpad.net/ubuntu-raspi2/ppa-rpi3/ubuntu xenial InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports xenial-updates InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports xenial-backports InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports xenial-security InRelease
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
ubuntu@ubuntu:~$ sudo reboot now
</code></pre>
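<p>That dpkg lock error usually means another apt process - often an unattended upgrade - is still running. Rebooting is one hammer; checking who holds the lock first is gentler (<code>sudo fuser -v /var/lib/dpkg/lock</code>, if psmisc is installed). The locking behavior itself is easy to sketch with <code>flock</code> on a throwaway file:</p>

```shell
# Simulate the dpkg lock: a background process holds an exclusive lock
# while a second attempt tries (and fails) to grab it without blocking.
lockfile=$(mktemp)
( flock -x 9; sleep 2 ) 9>"$lockfile" &   # stand-in for the running apt process
sleep 0.5                                 # give it time to acquire the lock
if flock -n "$lockfile" -c true; then
    state=free
else
    state=held                            # mirrors "Could not get lock"
fi
echo "lock is $state"
wait
```

<p>apt behaves the same way: the second process gives up immediately rather than queueing behind the first.</p>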
<p>Since it was brunch time, I got up, grabbed my peanut butter latte, and sat back down. More than enough time for a reboot.</p>
<p>Or so you would think.</p>
<pre><code class="language-bash">self@GreenScreen:~$  ssh pi -v
OpenSSH_7.5p1, LibreSSL 2.5.4
debug1: Reading configuration data /Users/quintessence/.ssh/config
debug1: /Users/quintessence/.ssh/config line 41: Applying options for pi
debug1: /Users/quintessence/.ssh/config line 138: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug1: Connecting to 192.168.█.█ [192.168.█.█] port 22.
^C

self@GreenScreen:~$  ssh ubuntu@192.168.█.█ -v
OpenSSH_7.5p1, LibreSSL 2.5.4
debug1: Reading configuration data /Users/quintessence/.ssh/config
debug1: /Users/quintessence/.ssh/config line 138: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug1: Connecting to 192.168.█.█ [192.168.█.█] port 22.
debug1: connect to address 192.168.█.█ port 22: Operation timed out
ssh: connect to host 192.168.█.█ port 22: Operation timed out
</code></pre>
<p>Oh.</p>
<p>Good.</p>
<p>Fishing out an HDMI cable, I connect the Pi to my TV and see that U-Boot is trying to PXE boot like so:</p>
<pre><code class="language-bash">(( snip ))
Waiting for Ethernet connection...done
*** ERROR: `serverip' is not set
missing environment variable: bootfile
Retrieving file: pxelinux.cfg/default-arm
Waiting for Ethernet connection...done
*** ERROR: `serverip' is not set
missing environment variable: bootfile
Retrieving file: pxelinux.cfg/default
Waiting for Ethernet connection...done
*** ERROR: `serverip' is not set
(( snip ))
</code></pre>
<p>It looped around like this until I got a prompt, but I wasn't in bash or sh; I was in <a href="https://en.wikipedia.org/wiki/Das_U-Boot">U-Boot</a>, the bootloader. That became obvious when commands like <code>ls</code> and <code>cd</code> did not exist and the output of the <code>help</code> command looked like this:</p>
<pre><code class="language-bash">?      - alias for 'help'
base    - print or set address offset
bdinfo  - print Board Info structure
boot    - boot default, i.e., run 'bootcmd'
bootd  - boot default, i.e., run 'bootcmd'
bootelf - Boot from an ELF image in memory
bootm  - boot application image from memory
bootp  - boot image via network using BOOTP/TFTP protocol
bootvx  - Boot vxWorks from an ELF image
(( snip ))
</code></pre>
<p>Why was I in U-Boot? For some reason the SD card was not being recognized as a boot device, so the Pi tried the next thing it knew: PXE boot. Since I don't have a TFTP server on my network, U-Boot looped forever, requesting PXE config files that don't exist.</p>
<p>As for how I ended up in this mess: I suspect I missed an earlier <code>system restart required</code> message, so not only did the current upgrade fail, something in that initial batch of installs did as well. The combination, well...</p>
<p>In any event, I'm lucky in that since this Pi was really just for running the pihole software, I don't have to waste too much (more) time digging around in there. So I just wiped it clean and wrote a fresh image to it.</p>
<p>After reading up a bit, and learning that the Ubuntu image for the Pi is &quot;unofficial&quot; and has some &quot;upgrade issues&quot; (no, really?), I opted for <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a>, the official Pi distro.</p>
<p>From the instructions, I decided to download <a href="https://etcher.io/">Etcher</a> to write the image to disk, like so:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/etcher-step1.png" alt="Saturday Burnt Pi"></p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/etcher-step2.png" alt="Saturday Burnt Pi"></p>
<p>Since I only have one removable drive connected, it automatically found my SD card, which was appreciated. The timer also lets me know that I have time for another snack - and with how long this is taking, why not...</p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/etcher-step3.png" alt="Saturday Burnt Pi"></p>
<p>Awesome. At this point the SD card is no longer mounted, since there is a setting in Etcher to auto-unmount on success. After a successful boot I enable <code>ssh</code>:</p>
<pre><code class="language-bash">sudo systemctl enable ssh
sudo systemctl start ssh
</code></pre>
<p>I need to remove the <code>192.168.█.█</code> line from my <code>~/.ssh/known_hosts</code> file, since on my first attempt to <code>ssh</code> in I receive an error about a potential man-in-the-middle attack due to the changed host fingerprint. This is expected, though, since there is a new image on the Pi at the same IP address.</p>
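<p>Rather than hand-editing, <code>ssh-keygen -R</code> with the Pi's IP removes the stale entry and keeps a <code>.old</code> backup of the file. A sketch against a throwaway file - the host and key below are fabricated placeholders:</p>

```shell
# Remove a stale host entry from a known_hosts file with ssh-keygen -R.
kh=$(mktemp)
echo '192.0.2.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl' > "$kh"
ssh-keygen -R 192.0.2.10 -f "$kh"
grep '192.0.2.10' "$kh" || echo 'stale entry removed'
```

<p>Without <code>-f</code> it operates on the default <code>~/.ssh/known_hosts</code>, which is what you'd want on the real laptop.</p>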
<p>Moving along, I add my pubkey to <code>~/.ssh/authorized_keys</code> on the Pi and run <code>sudo passwd pi</code> to change the password from the default <code>raspberry</code>:</p>
<pre><code class="language-bash">pi@raspberrypi:~ $ mkdir .ssh
pi@raspberrypi:~ $ vim ~/.ssh/authorized_keys
-bash: vim: command not found
pi@raspberrypi:~ $ vi !$
vi ~/.ssh/authorized_keys
pi@raspberrypi:~ $ sudo passwd pi
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
</code></pre>
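<p>For the pubkey step, <code>ssh-copy-id</code> pointed at the Pi automates all of this. Done by hand, the permissions matter: with sshd's default <code>StrictModes</code>, a group- or world-writable <code>.ssh</code> or <code>authorized_keys</code> gets ignored. A sketch against a scratch directory, with a fabricated placeholder key:</p>

```shell
# Manual equivalent of ssh-copy-id, with the permissions sshd expects.
home=$(mktemp -d)                     # stand-in for the pi user's homedir
pubkey='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl quintessence@GreenScreen'
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"                # directory must be private
printf '%s\n' "$pubkey" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -ld "$home/.ssh" "$home/.ssh/authorized_keys"
```

<p>The append (<code>&gt;&gt;</code>) rather than overwrite is deliberate, since a box can legitimately have several authorized keys.</p>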
<p>I ran all the package updates and installed pihole, which basically involved a <code>curl</code> and then navigating through prompts:</p>
<pre><code class="language-bash">pi@raspberrypi:~ $ curl -sSL https://install.pi-hole.net | bash
::: system networking, it requires elevated rights. Please check the contents of the script for
::: any concerns with this requirement. Please be sure to download this script from a trusted source.
:::
::: Detecting the presence of the sudo utility for continuation of this install...
::: Utility sudo located.

        .;;,.
        .ccccc:,.
         :cccclll:.      ..,,
          :ccccclll.   ;ooodc
           'ccll:;ll .oooodc
             .;cll.;;looo:.
                 .. ','.
                .',,,,,,'.
              .',,,,,,,,,,.
            .',,,,,,,,,,,,....
          ....''',,,,,,,'.......
        .........  ....  .........
        ..........      ..........
        ..........      ..........
        .........  ....  .........
          ........,,,,,,,'......
            ....',,,,,,,,,,,,.
               .',,,,,,,,,'.
                .',,,,,,'.
                  ..'''.

:::
::: You are root.
::: Verifying free disk space...
:::
::: Updating local cache of available packages...
(( snip ))
</code></pre>
<p><img src="https://agirlhasnona.me/content/images/2017/11/pihole-install.png" alt="Saturday Burnt Pi"></p>
<p>Navigated to a few known ad-heavy sites like Forbes and took a look at my dashboard:</p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/pihole-immediate-usage.png" alt="Saturday Burnt Pi"></p>
<p>And now I'm off to a safer, more secure browsing experience.</p>
<p><strong>Updates</strong></p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/pihole-hr1-usage.png" alt="Saturday Burnt Pi"><br>
<small>After 1 hr of use.</small></p>
<p><img src="https://agirlhasnona.me/content/images/2017/11/pihole-hr24-usage.png" alt="Saturday Burnt Pi"><br>
<small>After 24 hrs of use.</small></p>
<p><small>Source for header: firey background from burnt embers created by <a href="https://www.shutterstock.com/g/bernatskaya%20oxana">Shutterstock user Bernatskaya Oxana</a> and the Raspberry Pi logo.</small></p>
</div>]]></content:encoded></item></channel></rss>