Configure Worklytics Export with Terraform

We love Infra-as-code at Worklytics and use Terraform as our preferred solution. To support our customers doing the same, we’ve released two Terraform modules to help you set up exports from your Worklytics account to your own cloud.

AWS and GCP solutions are published now in the Terraform Registry.

In both cases, you’ll need your Worklytics ID as an input variable, which you can obtain through the Worklytics app.

The code for both modules is published on GitHub, under the Worklytics organization.

Titan Key has Grown On Me

It took two years. Maybe three. But I’ve finally come around to using my Titan Security Key regularly. It’s still only really supported by Google-family stuff + GitHub, but that’s what I use most.

The rise in the Titan Key’s utility probably comes down to a few factors, pun intended:

  • my Google Authenticator app now has dozens of services in it, so lots of scrolling is required to find the right MFA code
  • working from home all the time makes the extra dongle less of a hassle. I don’t have to pull it in and out of my laptop, or have it protrude awkwardly.
  • at the start of COVID, I bought a Dell U3812DW monitor, which has USB-C power delivery and effectively acts as a USB hub; so that also frees up a few ports on my MacBook.

I just wish a few more services supported the U2F standard.

Useful Git Command Aliases

Here are a couple of git command aliases I’ve added to my .gitconfig file which may be generally useful. If you use Git auto-complete, these aliases will be included in completions as well.

Quick overview:

  • find-merge / show-merge  – used to help trace problematic merges / conflicts (credit: a Stack Overflow thread on tracing merges)
  • pull-all – pulls and fast-forwards all local branches that can be fast-forwarded.  Keeps me from accidentally making a local commit over some remote change.
  • tags-recent – by default, git tag --list dumps ALL tags, which is pretty painful; this shows you only the 10 most recent.

To make these available, add the following block to the .gitconfig file in your home directory.


[alias]
  # find when commits were merged
  # USAGE:
  #   git find-merge <SHA-1>          // when merged to current branch
  #   git find-merge <SHA-1> master   // when merged to master
  find-merge = "!sh -c 'commit=$0 && branch=${1:-HEAD} && (git rev-list $commit..$branch --ancestry-path | cat -n; git rev-list $commit..$branch --first-parent | cat -n) | sort -k2 -s | uniq -f1 -d | sort -n | tail -1 | cut -f2'"
  show-merge = "!sh -c 'merge=$(git find-merge $0 $1) && [ -n \"$merge\" ] && git show $merge'"
  # pull and fast-forward ALL local branches
  pull-all = !"for b in $(git for-each-ref refs/heads --format='%(refname)') ; do git checkout ${b#refs/heads/} ; git pull --ff-only ; done"
  # list the 10 most recent tags
  tags-recent = !"git tag --list --sort=-refname | head -n 10"


Is 0% APR loan from Affirm a good deal?

Short answer: No.

But it is fairly subtle as to why.

Background

I recently was making a big online purchase, and the merchant offered “12-month 0% APR financing” of the purchase through Affirm.

I thought “why pay for something now if I can pay for it in 12 months?” and decided to look into this.  Since I bothered to think through this, I thought I’d post the analysis in case someone else can benefit.

Analysis

First, the deal seems to be legit.  Going through the process, it genuinely appears that they’ll do the transaction as a 12-month loan at 0% APR instead of an upfront payment. Looking at the structure of the payment plan, you make equal monthly payments, so half the principal is already repaid by the midpoint; in reality, you’re borrowing the full amount for a weighted average of roughly 6 months.

However, even a credit card that you pay off in full each month is borrowing for ~1.5 months in the usual case. On average, you are ~15 days from when your statement next closes, and then the payment is due 30 days after that.  So you’re already getting roughly 1.5 months of interest-free borrowing just by putting the purchase on a credit card.

Benefits:

  • time-value of your money: ~1%. This is the return earned by your money over the time between when you pay back the loan (~6 months) and when you would otherwise pay for it (~1.5 months).  Assume a 3% annual after-tax return, which is arguably optimistic at the moment. Doing the math: 6 – 1.5 = 4.5 months that you’re really earning that return for, so you’d expect to earn 3% × 4.5/12 ≈ 1% of the value of the purchase.

Costs:

  • Forgone credit card rewards: 2%. Because you’ll pay the Affirm loan off with ACH transfers, rather than a credit card.
  • Forgone credit card protections: ??. purchase protection, disputes, etc.
  • Value of your time/energy: ??. Marginal complexity/hassle for you to administer a loan in addition to your credit card.

In summary, if you can pay for the purchase on a credit card with decent rewards that you’ll pay off in full, you lose money from this loan in practice: roughly 1% of the purchase value, depending on your card’s reward rate and your assumed after-tax rate of return.
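To make the arithmetic concrete, here’s a minimal sketch using the assumptions above (a 3% after-tax return, 2% card rewards, ~6 months of borrowing vs. ~1.5 months of credit card float); the figures are just this post’s assumptions, so plug in your own:

public class AffirmLoanAnalysis {

    public static void main(String[] args) {
        double annualReturn = 0.03;      // assumed after-tax annual return on your cash
        double cardRewardRate = 0.02;    // assumed credit card reward rate you forgo

        // 12 equal monthly payments: the money is outstanding for ~6 months on average
        double affirmMonths = 6.0;
        // paying a credit card in full already gives ~1.5 months of interest-free float
        double cardFloatMonths = 1.5;

        double extraMonths = affirmMonths - cardFloatMonths;          // ~4.5 months
        double timeValueBenefit = annualReturn * extraMonths / 12.0;  // ~1.1% of the purchase

        double net = timeValueBenefit - cardRewardRate;               // ~-0.9% of the purchase

        System.out.printf("Time-value benefit: %.2f%%%n", timeValueBenefit * 100);
        System.out.printf("Forgone rewards:    %.2f%%%n", cardRewardRate * 100);
        System.out.printf("Net effect:         %.2f%% of the purchase%n", net * 100);
    }
}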

Dagger Dependency Injection in a Java Servlet

I couldn’t find a great example of using the Dagger dependency injection framework in a Java servlet, so I’m publishing some of the key bits of what I did in the hopes of saving others some hassle.

What’s Dagger? A lightweight dependency injection framework from Square.  It’s particularly popular for Android projects and implements the standard JSR 330 annotations, so it’s easy to move up to Guice later if you need a richer DI solution.

How? The general idea, following this Stack Overflow answer, is to use a ServletContextListener to initialize Dagger, build your object graph, and inject it into the ServletContext.  In the servlet’s init method, you can then use that graph. I’ve tried to provide a more precise outline of this in code below:

Details


import javax.servlet.ServletConfig;
import javax.servlet.http.HttpServlet;

import dagger.ObjectGraph;

public abstract class BaseServlet extends HttpServlet {

    private ObjectGraph graph;

    /**
     * inits the Servlet with the object graph from Dagger;
     * if you override this, be sure your implementation calls that of this super class
     *
     * @param config
     */
    @Override
    public void init(ServletConfig config) {
        this.graph = (ObjectGraph) config.getServletContext().getAttribute(DIListener.ATTR_OBJECT_GRAPH);
    }

    /**
     * used to get injected instances
     *
     * @param arg class of the instance to fetch from the graph
     * @return the injected instance
     */
    public <T> T get(Class<T> arg) {
        return this.graph.get(arg);
    }
}


import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import dagger.ObjectGraph;

/**
 * DIListener – inits the ObjectGraph and sets it in the ServletContext, so it can be accessed in a servlet's init() method
 *
 * @author Erik Schultink <erik@engetc.com>
 */
public class DIListener implements ServletContextListener {

    static final public String ATTR_OBJECT_GRAPH = "ObjectGraph";

    ObjectGraph objectGraph;

    /**
     * called when the servlet context is initialized
     *
     * @see javax.servlet.ServletContextListener#contextInitialized(javax.servlet.ServletContextEvent)
     */
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        //your object graph is initialized here, from your Dagger Module(s)
        this.objectGraph = ObjectGraph.create(new ProductionModule());
        sce.getServletContext().setAttribute(ATTR_OBJECT_GRAPH, this.objectGraph);
    }

    /**
     * no-op
     *
     * @see javax.servlet.ServletContextListener#contextDestroyed(javax.servlet.ServletContextEvent)
     */
    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        //do nothing
    }
}



import javax.servlet.ServletConfig;

public class ExampleServlet extends BaseServlet {

    //some dependency that the servlet needs
    private Dependency dependency;

    @Override
    public void init(ServletConfig config) {
        super.init(config);
        this.dependency = this.get(Dependency.class);
    }

    //you're now free to implement doGet()/etc as you wish;
    //dependency should be defined, with the binding provided by the Dagger Module
}
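One piece the snippets above don’t show is the ProductionModule that DIListener instantiates. Here’s a minimal sketch of what it could look like in Dagger 1; the Dependency binding and the injects list are illustrative assumptions, not the original code:

import javax.inject.Singleton;

import dagger.Module;
import dagger.Provides;

/**
 * Hypothetical module holding the app's production bindings.
 * The injects list declares the types you'll look up via ObjectGraph.get(),
 * such as the Dependency requested in ExampleServlet.init().
 */
@Module(injects = Dependency.class)
public class ProductionModule {

    @Provides
    @Singleton
    Dependency provideDependency() {
        // build the real dependency (configuration, connections, etc.) here
        return new Dependency();
    }
}

Remember that DIListener also has to be registered, either with a <listener> entry in web.xml or a @WebListener annotation on Servlet 3.0+, or contextInitialized() will never run.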

If you’re using Eclipse, getting Dagger’s code generation working properly can also be a bit tricky.  The bit in this answer about the JARs to include under Project Settings -> Java Compiler -> Annotation Processing -> Factory Settings helped me.

An Open Letter to the FCC on the Open Internet

Today (September 15th) is the final day for public comment on the FCC’s proceeding on how to classify and regulate ISPs. You can speak up by commenting publicly on proceeding 14-28 via the FCC’s website.  You can learn more about the importance of Net Neutrality at Battle for the Net. To be clear, “Open Internet” is the term used by ISP lobbyists trying to limit the ability of the FCC to regulate them.  Here’s my view:

Chairman Wheeler,

I have worked for internet companies for the last 10 years, including both social networks and telecoms companies. I have seen how various parties on both sides of net neutrality behave. I fully believe that, however imperfect, regulators like the FCC must act to limit the ability of ISPs to exploit their oligopolies in connectivity to compete in adjacent markets.

Internet connectivity should be treated as a public utility, with ISPs classified as common carriers. In many markets, broadband is a duopoly at best. Consumers have very limited choice. If ISPs are allowed to use access to their captive pool of end users as an asset for strategic aims in adjacent markets, it will further restrict choice and competition in a number of nascent markets for digital services.

If it were technically possible, should General Electric be permitted to buy an electricity utility and slow or degrade the electricity flowing to Whirlpool dishwashers in consumers’ homes in favor of that flowing to its GE Profile brand? In the internet market, analogous behavior is technically possible. An ISP could act to degrade the quality of a competing video-on-demand service in favor of its own solution. Consumers need some regulatory action to protect them from such abuse.

ISPs should be able to charge content providers fair and reasonable prices for interconnection – but these prices must be set without consideration for the nature of the content provider’s business and be proportional to the technical costs the ISP incurs for carrying that content.

ISPs should be able to charge consumers based on the amount of data they use, so long as they don’t differentiate rates based on the source of data.

Within this framework, ISPs must not be able to discriminate based on traffic source (or any other factor that is a proxy for traffic source) when managing the quality or speed at which traffic passes through their network.

Thank you for considering my perspective on this matter.

Sincerely,

Erik Schultink

Google Improves Currency Support in Spreadsheets

In July 2013, I wrote about using custom scripts to improve currency formatting in Google Spreadsheets (Tuenti worked in English, but operated in Euros, which was not well supported by Google’s locale options).  Over the last two months, Google has launched a new version of Spreadsheets which exposes many more currency formatting options – although they’re a bit buried in the UI.  Here’s how to find them:

Step 1: Format → Number → More Formats … → More Currencies


Step 2: use the auto-complete dialog to find Euros

or whatever your preferred currency happens to be.


Step 3: select the specific formatting variant you want

Google now offers numerous options: the symbol (€) or the code (EUR), position before or after the amount (this is language-dependent, not currency-dependent; in English, currency symbols go before the amount), and rounding.  Previously, most of these variants would have required writing custom Apps Script code.


Bonus: Google Sheets remembers recently used formatting options at the bottom of the Format → Number menu

This makes subsequent use of a given format much simpler. Unfortunately, recent formats are only remembered in the context of a given spreadsheet.  You’ll have to repeat the above steps for each new sheet you wish to format.  Ideally, I’d like Google to remember recent number formats across sheets.


This functionality eliminates the need for writing custom number formatting functions in Google Apps Script, as I showed in my previous post – although that can still be convenient in some circumstances, or for older Google Spreadsheets.  There’s not (yet) a way to automatically convert an old Google Spreadsheet to the latest version; you need to copy the content out of the old version into a new one.

Is Amazon Web Services “too big to fail”?

To borrow some financial-market metaphors, it’s hard to argue that cloud providers aren’t a “systemically important” part of the Internet.  If one fails catastrophically, it’s more likely than you might think that others will quickly follow.

Compared to 10 years ago, Infrastructure-as-a-Service (IaaS) has greatly simplified web engineering.  Gone are the days of assembling and racking servers – something every start-up went through in the 90s and even well into the Web 2.0 era.  But any simplification involves trade-offs, and the rise of IaaS is no different.  One to consider is reliability.

With IaaS, you’re outsourcing a good chunk of your reliability to a 3rd party.  They give you an SLA, which at best compensates you for outages, but the compensation is usually limited to what you’re paying them – not the true cost of the outage to your business.  At best, the SLA aligns interests a bit; they suffer when they cause you to suffer, albeit not as much.  In practice, this often works OK.  Cloud providers are pretty reliable, and it’s very difficult for any young engineering team to credibly claim that their home-grown systems architecture is going to be more reliable than Amazon Web Services (AWS).

But what if AWS isn’t reliable enough for you?  A straightforward approach is to avoid being dependent on AWS, which is appealing for cost and lock-in reasons in addition to reliability.  If AWS fails, you’ll quickly fail over to Google App Engine (GAE), Azure, etc., right? But what happens when a lot of other AWS users opt for the same approach? Then an AWS failure becomes a huge load increase on GAE, which could trigger it to fail as well.  The probabilities of AWS failing and GAE failing are not independent.

That is a subtle point: The probability that AWS fails is low.  The probability that GAE fails is low.  The probability that GAE fails, given that AWS fails, is not as low.
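To put hypothetical numbers on it (these probabilities are purely illustrative, not estimates for any real provider), here’s a quick sketch of how correlation changes the picture:

public class CorrelatedFailover {

    public static void main(String[] args) {
        double pPrimaryDown = 0.01;   // hypothetical chance of a major outage at your primary provider

        // If the providers failed independently, your failover plan only breaks
        // when both happen to be down at once.
        double pBackupDownIndependent = 0.01;
        double pBothDownIndependent = pPrimaryDown * pBackupDownIndependent;      // 0.0001

        // If a primary outage floods the backup with everyone else's failover traffic,
        // P(backup down | primary down) is much higher than its standalone rate.
        double pBackupDownGivenPrimaryDown = 0.20;
        double pBothDownCorrelated = pPrimaryDown * pBackupDownGivenPrimaryDown;  // 0.002

        System.out.printf("Both down, assuming independence: %.4f%n", pBothDownIndependent);
        System.out.printf("Both down, assuming correlation:  %.4f%n", pBothDownCorrelated);
        // With these toy numbers, correlation makes the failover plan ~20x less effective.
    }
}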

And it’s hard to predict. How many AWS users have disaster scenarios where they migrate to GAE?   How significant of an AWS failure would it take to cause them to invoke these failover procedures? How much spare capacity does GAE really have?

The logic above applies to all cloud platforms, not just GAE. The general reasoning is simply that there’s a relatively small set of major cloud providers, such that if a big one fails, so might others.  

The more fragmented and commoditized the cloud infrastructure market is, the safer we all are. As long as AWS is the dominant player, you’re better off – from a reliability standpoint – picking someone smaller and relying on AWS as your failover. At the very least, should your primary provider fail, you maximize the chance that the aggregate hit to your failover provider will be manageable.

Separate Work and Play with Multiple Chrome Users

If you use Google Apps both personally and professionally, you should set up a distinct Chrome user for each context. It works much, much better than having multiple Google accounts authenticated within the same browser instance.

The Problem

Although Google supports being simultaneously logged in with multiple Google accounts, the UI for switching between authenticated users in many services isn’t graceful. You go to gmail.com, often find yourself in your personal account, and need 2-3 clicks to switch to your professional account. Even once you’ve done this for Gmail, opening a link to a Google doc might open Google Docs in the other context – and even give you a permissions error that your personal account can’t open that document.

The easiest solution was to log into only one account at a time and to minimize switching between your personal and professional contexts. That’s probably a good habit for other reasons as well, and it was mostly what I did during my years of formal employment.

However, a couple of years ago, when I activated two-factor authentication on all my accounts (which I highly recommend), logging in and out several times a day became a much larger hassle. For the last few months, I haven’t been formally employed, so I’ve been switching between professional and personal contexts even more frequently.

The Solution

Google Chrome supports multiple users. For years, my wife and I used these on shared devices to separate our accounts. I hadn’t before thought to use them to separate my own accounts into multiple users.

I set up multiple Chrome users — one for my personal Google account and another for my professional Google Apps account. Handily, Chrome visually distinguishes users with an avatar, shown in the upper left of any browser window. Clicking the avatar allows you to launch a new browser window in the context of another user, which is simpler and more consistent than how this is implemented within individual Google web apps.


Some Windows-specific stuff:

  • multiple browser windows for the same user stack separately on the taskbar, with the avatar overlaid on the Chrome icon
  • you can create shortcuts that directly launch Chrome in a user context
  • pin the shortcuts to the taskbar by dragging them there; pinning an active browser window will pin the Chrome application itself, which won’t necessarily launch as the user that was active when you pinned it

More Benefits

Clear Visual Distinction between Work and Play

Once I adjusted my bookmarks bar, extensions, and user avatar for each context, I found that the browser windows look visually quite distinct. Mentally, I think this can only help focus and avoid procrastination. My personal email is no longer just one click away, staring at me. It’s less likely that when I get stuck on a coding problem for a moment, or otherwise get distracted, I’ll reflexively click into my email, find something new, and turn a 30-second distraction into 5-10 minutes.

Separate Sets of Bookmarks

I used to mix personal and professional bookmarks into one bookmarks bar, which I sync’d between Chrome installations on different devices. Once you have distinct Chrome users for each account, you can have a set of professional bookmarks and a set of personal ones. It provides more toolbar real estate than you could otherwise allocate to each context.

Limit Extensions by Context

Again, this is useful in supporting a strong mental separation between your work and personal contexts. But it’s also a security issue. A lot of Chrome extensions have permission to see and access every webpage you open. Why show them more than you need to? If some extensions are only relevant in one context or the other, keep them there. At the very least, you’ll minimize the scope of private data that a malicious or invasive extension can see.

5 Tips for a New Codebase

Lately, I’ve been thinking about starting new repos for some projects and looking at a few different start-ups’ code. Below I’ve captured a few quick tips on starting a new codebase:

1. Use English.

The best engineering talent in the world reads and writes English. It’s a prereq to keeping up to date on the latest technologies or contributing to open-source projects.

At some point in the future, you might want to outsource something, have a consultant advise you on some aspect of your code, or even sell your company to someone else. Why limit potential partners to those who only speak your local language?

2. Tabs not spaces.

Or the reverse. I don’t care – the point is, don’t waste time arguing or researching issues that are 90% arbitrary questions of taste. Anyone who emails your team list about changing it should be fired. Or at least forced to buy everyone else a beer.

3. Git not Hg.

It works. More importantly it’s popular and well-known. Why make people learn something new to work for you if the “standard” tool is just as good? I prefer the elegance of Hg, but it just doesn’t have the following that Git does.

4. Use known code standards.

Don’t write your own. Having done so a couple times, trust me: you have more important things to do. Most engineers will imitate what they see in your code base, rather than spend time reading your standard. To find a standard to reference, Google Style Guides is a good place to look; or your favorite open source project; or the engineering blog of your favorite company.

5. Document in code, not in a wiki.

Wikis are chronically incomplete and out-of-date. The best way to write documentation of what your code does is in the code itself:

  • It’s the first place whoever needs the documentation is going to look.
  • The documentation follows the code through your workflow. You don’t need to worry about a developer making changes on a branch that has yet to be merged to live; when that code reaches live, the documentation on live is updated with it.
  • It’s easy for the reviewer to verify that someone changing code has properly changed the documentation.

Conclusion

A common theme from these: while the “clean slate” of a new project or company might seem like a great opportunity to follow your own preferences on issues like the above – be careful. Indulging your personal preferences can come at the cost of lengthening the learning curve for those who join your project in the future.