Pony Foo

Ramblings of a degenerate coder

Designing Front-End Components


Last Monday I published dragula, an open-source library that makes it easy to take control of drag & drop in the front-end, and it has amassed over 2000 stars on GitHub in under a week. Previously I had published a date picker component, Rome, that's somewhat popular among the nerds. In this article we'll discuss how you can build and isolate components that have a simple API.

Dragula had a much faster growth rate than Rome did, and that's mostly because there were basically zero proven libraries that solve drag & drop (and drag & drop alone) while being flexible and simple. I emphasize the fact that it does less as a feature, not a problem. In my experience, the best pieces of code I've ever used tend to follow the UNIX philosophy of doing one thing very well. That kind of focused approach typically isn't the norm for front-end modules. Instead, we often develop solutions that are only useful in our specific use case. For example, we may put together an entrenched AngularJS directive that uses other directives or relies on data-binding provided by Angular.

What's worse, most often we don't just limit ourselves to the scope of the libraries we're currently using, but to the specific scope of the one project we're working on. This means that our library can't be extricated from that codebase without considerable work. For instance, consider that one time you developed a UI component where the user would type tags in plain text and, after typing a delimiter (a comma or a space), they would get visual feedback showing that the tag was accepted. Something like the screenshot below.

Screenshot of Insignia in action

The catch is that, unless we find what we need on the open web in the form of a decoupled module, we sometimes commit the crime of tightly coupling the tagging feature to the framework we're using. Or to a library. Or to our application. Or to a specific part of our application.

It would be best if we were able to identify what it is that we're trying to accomplish (in this case, improve the UX on an input field for tags), isolate that into its own module, and come up with the simplest possible API. Coming up with a simple API is often a dreadful task, and in this article I'll try to put together advice and explain my thought process while designing them.
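To make "simplest possible API" concrete, here's a minimal sketch of the shape I'm aiming for: one factory function that takes a DOM element and hands back a tiny handle. The names and options below are illustrative only, not the actual API of Insignia or dragula.

// Illustrative sketch; not the actual API of Insignia or dragula.
function tags (input, options) {
  options = options || {};
  var delimiter = options.delimiter || ' ';

  function value () {
    return input.value.split(delimiter).filter(Boolean);
  }

  function add (tag) {
    input.value = value().concat(tag).join(delimiter);
  }

  // The consumer only ever sees this handle: no framework, no globals.
  return { value: value, add: add };
}

// Usage: works against any plain <input>, regardless of framework.
var editor = tags(document.querySelector('.tag-input'), { delimiter: ' ' });
editor.add('ponyfoo');
console.log(editor.value());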

Read the full article

The Great Web Module Compendium


For the past few months I’ve developed quite the number of front-end modules. These range from UI components to utility libraries, silly games, and everything in between. I've put together this article briefly describing many of those modules in hopes that somebody puts them to good use.

I've organized the modules into a few different categories. Let me know if you find any of this to be useful!

Read the full article

Leveraging Immutable Deployments


Last time around, we discussed how to create an AMI for every deployment: a crucial step in enabling you to leverage deployment immutability. This time around, we'll learn what it takes to automate autoscaled immutable deployments with zero downtime, while sitting behind a load balancer and using Route 53 for DNS.

Given the immutable images we're now able to build thanks to the last article, we'll lay out an architecture, based on Amazon Web Services, that's able to take advantage of those images.

Architecture Overview

Our journey starts at Route 53, a DNS provider service from Amazon. You'll set up the initial DNS configuration by hand. I'll leave the NS, SOA, and TXT records up to you. The feature that interests me in Route 53 is creating ALIAS record sets. These special record sets allow you to bind a domain name to an Elastic Load Balancer (ELB), which can then distribute traffic among your Elastic Cloud Compute (EC2) server instances.

The beauty of ALIAS record sets is that you're able to use them at the naked, or apex, domain level (e.g. ponyfoo.com), and not just for sub-domains (e.g. blog.ponyfoo.com). Our scripts will set up these ALIAS records on our behalf whenever we want to deploy to a new environment, which is also a very enticing feature.
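The scripts in this series are plain bash over the AWS CLI; as a rough sketch of what creating one of these ALIAS records involves, here's the equivalent call using the AWS SDK for Node.js. The hosted zone ids, domain, and ELB address below are placeholders.

// Rough sketch with the AWS SDK for Node.js; the real scripts use bash and the AWS CLI.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
var route53 = new AWS.Route53();

route53.changeResourceRecordSets({
  HostedZoneId: 'Z1EXAMPLE', // placeholder: your hosted zone
  ChangeBatch: {
    Changes: [{
      Action: 'UPSERT', // create or update the ALIAS record in one go
      ResourceRecordSet: {
        Name: 'ponyfoo.com.', // works at the apex, not just sub-domains
        Type: 'A',
        AliasTarget: {
          DNSName: 'my-elb-1234567890.us-east-1.elb.amazonaws.com.', // placeholder ELB
          HostedZoneId: 'Z2EXAMPLE', // placeholder: the ELB's own hosted zone id
          EvaluateTargetHealth: false
        }
      }
    }]
  }
}, function (err, data) {
  if (err) { throw err; }
  console.log('ALIAS record change submitted:', data.ChangeInfo.Id);
});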

When you finish reading this article, you'll be able to run the commands below and have your application listening at dev01.example.com when they complete, without the need for any manual actions.

NODE_ENV=dev01 npm run setup
NODE_ENV=dev01 npm run deploy

We don't use ELB merely because of its ability to route traffic from naked domains, but also because it enables us to put many web servers behind a single domain, which makes our application more robust, better able to handle load, and highly available.

In order to better manage the instances behind our ELB, we'll use Auto Scaling Groups (ASG). These come at no extra cost; you just pay for the EC2 instances in them. The ASG will automatically detect "unhealthy" EC2 instances: instances that the ELB has deemed unreachable after pinging them with GET requests. Unhealthy instances are automatically disconnected from the ELB, meaning they'll no longer receive traffic. However, we'll enable connection draining so that they gracefully respond to pending requests before shutting down.
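The article configures this from bash, but as a hedged sketch of what the health check and connection draining settings amount to, here they are via the AWS SDK for Node.js. The load balancer name, ping path, and timeouts are placeholder values.

// Sketch only; the real scripts use bash and the AWS CLI. Names are placeholders.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
var elb = new AWS.ELB();

// Health check: ELB pings this path with GET requests and marks
// unresponsive instances as unhealthy.
elb.configureHealthCheck({
  LoadBalancerName: 'ponyfoo-dev01-elb',
  HealthCheck: {
    Target: 'HTTP:80/',
    Interval: 30,
    Timeout: 5,
    UnhealthyThreshold: 2,
    HealthyThreshold: 2
  }
}, onResult);

// Connection draining: instances being taken out of rotation get a grace
// period to finish in-flight requests before they're terminated.
elb.modifyLoadBalancerAttributes({
  LoadBalancerName: 'ponyfoo-dev01-elb',
  LoadBalancerAttributes: {
    ConnectionDraining: { Enabled: true, Timeout: 300 }
  }
}, onResult);

function onResult (err) {
  if (err) { throw err; }
}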

When a new deployment is requested, we'll create a new ASG and provision it with the new carnivore AMI, which you may recall from our last encounter. The new ASG will spin up as many EC2 instances as desired. We'll wait until EC2 reports that every one of those instances is properly initialized and healthy. We'll then wait until ELB reports that every one of those instances is reachable via HTTP. This ensures that no downtime occurs during our deployment. When every new instance is reachable on ELB, we'll remove the outdated EC2 instances from the ELB first, and downscale the outdated ASG to 0. This lets connection draining kick in on the outdated instances before the ASG terminates them. Once all of that is over, we delete the outdated ASG.
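Condensed into code, the swap looks roughly like the sketch below, written against the AWS SDK for Node.js rather than the bash the real scripts use. Every name is a placeholder, and the real scripts additionally deregister the outdated instances from the ELB and only consider instances belonging to the new ASG when polling for health.

// Condensed, hedged sketch of the zero-downtime swap described above.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
var asg = new AWS.AutoScaling();
var elb = new AWS.ELB();

// 1. Create the new ASG, provisioned with the freshly baked AMI through a
//    launch configuration created beforehand (not shown here).
asg.createAutoScalingGroup({
  AutoScalingGroupName: 'ponyfoo-dev01-v2',
  LaunchConfigurationName: 'ponyfoo-dev01-v2-lc',
  MinSize: 2,
  MaxSize: 2,
  DesiredCapacity: 2,
  LoadBalancerNames: ['ponyfoo-dev01-elb'],
  AvailabilityZones: ['us-east-1a', 'us-east-1b']
}, function (err) {
  if (err) { throw err; }
  waitUntilInService(swapOut);
});

// 2. Poll the ELB until every instance reports InService, so the swap never
//    leaves us without reachable servers. (The real scripts only look at the
//    instances that belong to the new ASG.)
function waitUntilInService (done) {
  elb.describeInstanceHealth({ LoadBalancerName: 'ponyfoo-dev01-elb' }, function (err, data) {
    if (err) { throw err; }
    var ready = data.InstanceStates.every(function (i) { return i.State === 'InService'; });
    if (ready) { return done(); }
    setTimeout(function () { waitUntilInService(done); }, 10000);
  });
}

// 3. Downscale the outdated ASG to zero; connection draining lets the old
//    instances finish pending requests before they're terminated.
function swapOut () {
  asg.updateAutoScalingGroup({
    AutoScalingGroupName: 'ponyfoo-dev01-v1',
    MinSize: 0, MaxSize: 0, DesiredCapacity: 0
  }, function (err) {
    if (err) { throw err; }
    // 4. Delete the outdated ASG once it has emptied out (in practice you'd
    //    wait for its instances to terminate before issuing this call).
    asg.deleteAutoScalingGroup({ AutoScalingGroupName: 'ponyfoo-dev01-v1' }, function (err) {
      if (err) { throw err; }
      console.log('Deployment complete.');
    });
  });
}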

This approach might not be blazing fast, but I'll take a speed bump over downtime any day.

Of course, none of this would be feasible if spinning a new instance took 15 minutes while installing software that should've been baked into an image. This is why creating those images was crucial. In a slow process such as this, baking images saves us much appreciated startup time.

Read the full article

Immutable Deployments and Packer


A more accurate title for this series would be something along the lines of "Automating autoscaled zero-downtime immutable deployments using plain old bash, Packer, nginx, Node.js, and AWS", but that would've been kind of long (although it does fit in a tweet). My last article on the subject was two years ago, when I wrote about Deploying Node apps to AWS using Grunt. Here's a long overdue update, containing everything I've learned about deployments in the time since then.

This detailed article series aims to explain:

  • How to provision a ready-made image before every deployment
  • How to make that image set up nginx, node, and your web application
  • How to dynamically update DNS record sets
  • How to let AWS handle scaling on your behalf
  • How to avoid downtime during deployments
  • How to clean up all this mess
  • How to do all of the above in plain old bash
  • Why any of the above matters

In this article I'll start by explaining why doing any of this matters, and then move on to creating immutable images ready for deployment using Packer.

Read the full article

Server-First Apps are a Good Idea


Earlier today, Tom Dale published an article sharing his views on the whole "server-side vs client-side rendered apps" debacle. I was tempted to call this article No. You’re Missing the Point of Server-Side Rendered JavaScript Apps!, but that would've been way too long, and kind of childish. I agree with the first sentence in his article, where he states that there's a lot of confusion about "the push to rendering JavaScript apps on the server-side". There were many other parts of the article I agreed with, and a few I didn't agree with so much. There are also some points that weren't discussed, but that I'd like to raise myself.

In the article, Tom goes on to explain how applications rendered on the server-side are clumsy when it comes to responsiveness. This is a fair point: server-side rendered applications typically rely on the PRG (POST-Redirect-GET) pattern. They have HTML <form>s, users POST some data, the server processes the request and responds with a redirect to another page, and the client GETs that other page. These are pretty much the basics of the web. What's worse, as Tom notes, is that as you start adding AJAX calls to this server-side rendered content, you become a slave to both state (what you initially pulled from the server) and behavior (users clicking on things) when it comes to updating the UI. That is the ultimate nightmare of a web developer.
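For reference, a bare-bones version of that PRG flow looks something like the sketch below, using Express. The route and field names are made up, and there's no validation or escaping; it's only meant to show the POST, redirect, and GET round trip.

// Minimal PRG (POST-Redirect-GET) sketch using Express; routes are illustrative.
var express = require('express');
var bodyParser = require('body-parser');
var app = express();
var comments = [];

app.use(bodyParser.urlencoded({ extended: false }));

// GET renders the current state of the resource.
app.get('/comments', function (req, res) {
  var items = comments.map(function (c) { return '<li>' + c + '</li>'; }).join('');
  res.send('<ul>' + items + '</ul>' +
    '<form method="post" action="/comments"><input name="text"><button>Post</button></form>');
});

// POST mutates state, then redirects so a refresh doesn't re-submit the form.
app.post('/comments', function (req, res) {
  comments.push(req.body.text);
  res.redirect('/comments');
});

app.listen(3000);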

Client-side rendered apps, in contrast, can be way faster than that. Once the initial payload is downloaded, interpreted, and executed, client-side JavaScript can set up its own smart caching on the client-side, avoiding roundtrips to the server for data it already has, and it can also set up routing on the client-side to emulate incredibly fast roundtrips. It can even have the server spewing information downstream as soon as it has any fresh data to offer, using WebSockets. The issue of updating the UI as a slave to many masters is long gone, since you just update the UI as the client-side JavaScript engine demands of you.
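As a sketch of what that client-side caching plus server push might look like, consider the snippet below. The endpoint and message shape are invented for illustration.

// Keep responses we've already seen, and let the server push fresh data over
// a WebSocket. The URL and message format are made up for this example.
var cache = {};
var socket = new WebSocket('wss://example.com/updates');

// Update the cache whenever the server pushes fresh data downstream.
socket.onmessage = function (e) {
  var message = JSON.parse(e.data);
  cache[message.url] = message.data;
};

function getJSON (url, done) {
  if (cache[url]) { // no roundtrip for data we already have
    return done(cache[url]);
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    cache[url] = JSON.parse(xhr.responseText);
    done(cache[url]);
  };
  xhr.send();
}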

And so the story goes...

The answer to a productive and maintainable approach to web development, one that also favors customers, doesn't lie in one or the other, but rather in the combination of both approaches. In this article, I'll explore what all of this means.

Read the full article

Baking Modularity into Tag Editing


For quite some time I've been wanting some sort of input that dealt with user-submitted tags in a reasonable way. I wanted this input to still be half-decent when JavaScript was out of the picture, so this meant starting with the basic building blocks of HTML.
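To make that concrete, here's an illustrative sketch (not Insignia's actual code): the baseline is a plain text field that works without JavaScript, and a script only layers behavior on top when it gets to run.

// Illustrative sketch, not Insignia's actual code. The baseline markup is just:
//   <input type="text" name="tags" placeholder="tags, separated by spaces">
// Without JavaScript, the form still submits space-separated tags as plain text.
var input = document.querySelector('input[name="tags"]');

if (input) {
  enhance(input);
}

function enhance (el) {
  // With JavaScript, we can layer nicer behavior on top of the same field,
  // e.g. tracking how many tags have been entered so we can give visual feedback.
  el.addEventListener('keyup', function () {
    var tags = el.value.split(' ').filter(Boolean);
    el.setAttribute('data-tag-count', tags.length);
  });
}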

Turns out, all the libraries out there (google for "tag input javascript" if you're curious) that do this have at least one of these flaws:

  • Feature bloat
  • Human unfriendly
  • Addiction to dependencies

Instead of creating separate components for, say, tag input and autocompletion, most of these libraries try to do it all, and fail miserably at doing any one thing well. Narrow focus is the defining characteristic of modular development: we define a (narrow) purpose for what we're building and we set out to build the best possible solution to meet that purpose.

Multi-purpose libraries end up doing a lot of things you don't need or want, while single-purpose components typically do exactly what you need.

Being user friendly is why we build these libraries, and yet I couldn't find a single library that I would've used over a simple <input type='text' /> and telling my humans to enter tags separated by a single space. The vast majority of these presented inconveniences such as not being able to move the caret around the input like we do in a regular text field, not being able to insert tags in any position, or not even allowing humans to edit tags once they receive the "this is a tag" treatment.

Human-facing interface elements must be sensitive to the needs of the human. Sometimes we have to wonder if a component is even worth the trouble. Wouldn't a simple <input> be more amicable than an inflexible "tag input" solution? Is the component more useful to your humans?

Finally, most of these libraries tie themselves to a dependency. If I really liked one of them, I probably would've needed to commit to jQuery myself. Or Angular. Or React. It doesn't matter what their "big dependency" is; the point is that it constrains the consumer from easily porting the component to different projects with other assortments of frameworks or libraries.

The fact that most of them count jQuery among their dependencies leaves much to be desired.

This article will explore the approach I took to build Insignia in just one day, as I think you can apply some of these concepts to the development you do on a daily basis.

Read the full article