
Pony Foo

Ramblings of a degenerate coder

The Progressive Web


I've blogged very little about Taunus since I first released it, roughly a year ago. Back then, it only powered ponyfoo.com, but now there are a few cases in the wild where it's being used, and I even got to do some consulting on a project where they're using Taunus! In the year since its release, it has had a whopping 174 releases, but not a whole lot has changed, and its API has remained stable for the most part. Its feature set grew quite a bit, although it remains fairly lightweight at 18.8kB after gzip and minification. Today, it's able to figure out how to submit forms via AJAX automatically, as long as they already work as plain HTML, and it has support for WebSockets via a plugin.

If I were to define Taunus in "elevator pitch" style, I would say:

Taunus is the logical step forward after server-side MVC web frameworks such as Rails or ASP.NET MVC. It turns server-side rendered Node.js (or io.js?) apps into single-page applications after the initial page load by hijacking link clicks and form submissions, and by defining a format you can leverage for realtime communications.

Building an app in a Server-First fashion is important because it means you aren't taking a huge leap of faith in assuming that your customers have a browser capable of supporting all the bleeding-edge features your dedicated client-side application demands.

After the initial load, which should be blazing fast so that your customers are happier (tons of research points to this fact), you can turn the site into a single page application: hijacking links, making AJAX requests that ask for the bare minimum (view models), and then rendering those view models directly on the client side.
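The link-hijacking half of that flow can be sketched in a few lines of plain JavaScript. This is a generic illustration of the technique, not Taunus' actual API; the same-origin check and the `Accept` header below are assumptions on my part.

```javascript
// Decide whether a clicked link should be hijacked into an AJAX navigation
// instead of triggering a full page load. Links that open in a new tab,
// trigger downloads, or point at another origin are left alone.
function shouldHijack (link, origin) {
  if (link.target && link.target !== '_self') { return false; }
  if (link.download) { return false; }
  if (link.href !== origin && link.href.indexOf(origin + '/') !== 0) { return false; }
  return true;
}

// In the browser, you'd wire this up with event delegation, roughly:
//
//   document.addEventListener('click', function (e) {
//     var a = e.target.closest('a');
//     if (a && shouldHijack(a, location.origin)) {
//       e.preventDefault();
//       // ask the server for the bare minimum: a view model, not HTML
//       fetch(a.href, { headers: { Accept: 'application/json' } })
//         .then(function (res) { return res.json(); })
//         .then(renderViewModel);
//     }
//   });
```

Because every hijacked link also works as a plain `<a>` element, the page degrades gracefully whenever JavaScript fails to load.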

Why Server-First Matters

Server-First matters because it's the exact opposite of breaking the web. You establish a bare minimum experience that you know most people can access, a baseline, and you go from there. This baseline isn't just there for SEO purposes or to be more amicable to people turning off JavaScript.

Think of the ways in which your app is shared on the web. What other places is it rendered to? Services that crawl around it. With client-side rendering, Twitter and Facebook display a pile of garbage instead of descriptive metadata and a thumbnail whenever someone links to your site. Humans might think your site is bogus and not even click on links leading to it, because the description on their Facebook feed just shows a bunch of Mustache templates and gibberish.

Search engines other than Google are completely oblivious to your content. Even Google is not as good at crawling client-side rendered apps as you think it is. Oftentimes, you also get penalized for not being fast enough.

Mobile performance degrades substantially in client-side rendered applications as opposed to server-side rendered ones, both because the connection is slower and because the scripts you depend on to actually render your site take a long time to download. Once they do, mobile devices take longer to parse and execute them, because they're not as powerful as the MacBook Pro you use during development.

Demand Progress!

Not doing server-side rendering might be just as bad as not designing a website to be responsive.

It's about time we .shift() "SEO purposes and <noscript>" from our list of excuses for not doing server-side rendering anymore.


Priorities


Peter-Paul Koch (@ppk) recently wrote an article that generated noticeable turmoil. One of the best responses was Jeremy Keith (@adactio)'s article, "Instantiation", where he graced us with a flurry of responses from different bloggers, as well as a timeline and his analysis of the situation as a whole.

Earlier today, @ppk published a follow-up post, where he mentioned Tim Kadlec (@tkadlec)'s article on "Choosing Performance". Tim rightly asserts that the web is not inherently slow, and that if a site is slow it's merely because performance wasn't prioritized by the site's maintainers. Similarly, Jeremy posited that services such as Instapaper or Pocket shouldn't have a reason to exist. Well-designed products that many of us use and love as they may be, Jeremy argues that websites should offer that degree of usability and accessibility on their own.

[content.png: screenshot of an article page crowded with ads]

What if we were to focus just on the content? Are any prominent "apps" as ad-dollar hungry as the screenshot above?

Everyone seems to agree that the crux of the issue stems from media deals and dollar-hungry marketers who have little interest in much else than profits. I tend to agree with that view. Most consumer-facing media rely directly on ad revenue, have little to no regard for their typical user when it comes to the effects these ads have on their experience (hey, we call them "users" for a reason, right?), and simply keep piling on advertisements.

Sure. There's definitely some excellent original work in there — in the middle of all those ads and self-links and chrome and value-added "journalism."

I don't see this trend slowing down anytime soon; quite the contrary. Video advertising is becoming more and more prominent, and huge unoptimized hero images, annoying interstitial dialogs that want you to download a mobile app that's supposed to do what the site you're already on should be doing, and ridiculous laws are all becoming the lingua franca of web publishing.

All of this wouldn't affect us as web workers if it weren't for the fact that all the extra cruft is being blamed on the web platform itself, as @ppk and @tkadlec point out. It's not even just the bloat: we also keep breaking the web by trying to mimic mobile applications, in one of those desperate attempts to become successful by imitating aspects of a business (e.g. Google's 80-20 rule) that have very little to do with its actual success. Scrolljacking, irritating banners that only frustrate the hell out of users who don't want to install your app for no good reason, and similar tactics to drive mobile app usage have done nothing but hurt.

It's time for the web to step up.


Designing Front-End Components


Last Monday I published an open-source library to easily take control of drag & drop in the front-end, dragula, and it has amassed over 2000 stars on GitHub in under a week. Previously I had published a date picker component, Rome, that's somewhat popular among the nerds. In this article we'll discuss how you can build and isolate components that have a simple API.

Dragula had a much faster growth rate than rome did, mostly because there were basically zero proven libraries that solve drag & drop (and drag & drop alone) while being flexible and simple. I emphasize that it does less as a feature, not a problem. In my experience, the best pieces of code I've ever used tend to follow the UNIX philosophy of doing one thing very well. Front-end modules rarely take that kind of focused approach. Instead, we often develop solutions that are only useful in our specific use case. For example, we may put together an entrenched AngularJS directive that uses other directives or relies on data-binding provided by Angular.

What's worse, most often we don't just limit ourselves to the scope of the libraries we're currently using, but to the specific scope of the one project we're working on. This means that now our library can't be extricated from that codebase without considerable work. For instance, consider that one time you developed a UI component where the user would type tags in plain text and, after typing a delimiter (a comma or a space), they would get visual feedback that the tag was accepted. Something like the screenshot below.

Screenshot of Insignia in action

The problem is that, unless we found what we needed on the open web in the form of a decoupled module, we sometimes commit the crime of tightly coupling the tagging feature to the framework we're using. Or to a library. Or to our application. Or to a specific part of our application.

It would be best if we were able to identify what it is that we're trying to accomplish (in this case, improving the UX on an input field for tags), isolate that into its own module, and come up with the simplest possible API. Coming up with a simple API is often a dreadful task, and in this article I'll try to put together advice and explain my thought process while designing them.
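As an exercise, here's what that isolation might look like for the tag input: a framework-free core that knows nothing about the DOM, behind the smallest API I could come up with. Everything here, names and options included, is hypothetical, and not Insignia's actual implementation.

```javascript
// A decoupled tag-parsing core. A thin DOM layer (or an Angular directive,
// or anything else) could sit on top, but this module has no dependencies.
function createTagInput (options) {
  var delimiter = (options && options.delimiter) || ' ';
  var tags = [];
  var listeners = [];
  return {
    // Feed raw text. Complete tags (those followed by the delimiter) are
    // accepted; whatever is still being typed is returned to the caller.
    type: function (text) {
      var parts = text.split(delimiter);
      var rest = parts.pop();
      parts.filter(Boolean).forEach(function (tag) {
        tags.push(tag);
        listeners.forEach(function (fn) { fn(tag); });
      });
      return rest;
    },
    // The only event the component emits: a tag was accepted.
    onTag: function (fn) { listeners.push(fn); },
    value: function () { return tags.slice(); }
  };
}
```

A consumer renders visual feedback by subscribing to `onTag`, while the core stays reusable in any project, with any framework, or with none at all.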


The Great Web Module Compendium


For the past few months I’ve developed quite the number of front-end modules. These range from UI components to utility libraries, silly games, and everything in between. I've put together this article briefly describing many of those modules in hopes that somebody puts them to good use.

I've organized the modules into a few different categories. Let me know if you find any of this to be useful!


Leveraging Immutable Deployments


Last time around, we discussed how to create an AMI for every deployment: a crucial step in enabling you to leverage deployment immutability. This time around we'll learn what it takes to automate autoscaling immutable deployments with zero-downtime, while sitting behind a load balancer and using Route 53 for DNS.

Given the immutable images we're now able to build, thanks to the last article, in this article we'll lay out an architecture, based on Amazon Web Services, that's able to take advantage of those immutable images.

Architecture Overview

Our journey starts at Route 53, a DNS provider service from Amazon. You'll set up the initial DNS configuration by hand. I'll leave the NS, SOA, and TXT records up to you. The feature that interests me in Route 53 is creating ALIAS record sets. These special record sets allow you to bind a domain name to an Elastic Load Balancer (ELB), which can then distribute traffic among your Elastic Cloud Compute (EC2) server instances.

The eye candy in ALIAS record sets is that you're able to use them at the naked, or apex, domain level (e.g. ponyfoo.com), and not just for sub-domains (e.g. blog.ponyfoo.com). Our scripts will set up these ALIAS records on our behalf whenever we want to deploy to a new environment, which is also a very enticing feature.
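For reference, the change batch a script would hand to Route 53 looks roughly like the object below. Every identifier in it is a placeholder; in particular, note that the `HostedZoneId` must be the hosted zone of the ELB itself, not that of your own Route 53 zone.

```javascript
// Shape of a Route 53 change batch binding a naked domain to an ELB
// through an ALIAS record set. All identifiers below are placeholders.
var changeBatch = {
  Changes: [{
    Action: 'UPSERT',
    ResourceRecordSet: {
      Name: 'example.com.',
      Type: 'A',
      AliasTarget: {
        HostedZoneId: 'ZELBEXAMPLE', // the ELB's hosted zone, not yours
        DNSName: 'my-elb-1234567890.us-east-1.elb.amazonaws.com.',
        EvaluateTargetHealth: false
      }
    }
  }]
};
```

You'd pass a batch like this to `aws route53 change-resource-record-sets` (or the equivalent SDK call) from the deployment scripts.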

When you finish reading this article, you'll be able to run the commands below and have your application listening at dev01.example.com when they complete, without the need for any manual actions.

NODE_ENV=dev01 npm run setup
NODE_ENV=dev01 npm run deploy

We don't merely use ELB because of its ability to route traffic from naked domains, but also because it enables us to have many web servers behind a single domain, which makes our application more robust and better able to handle load, becoming highly available.

In order to better manage the instances behind our ELB, we'll use Auto Scaling Groups (ASG). These come at no extra cost; you just pay for the EC2 instances in them. The ASG will automatically detect "unhealthy" EC2 instances, that is, instances that the ELB has deemed unreachable after pinging them with GET requests. Unhealthy instances are automatically disconnected from the ELB, meaning they'll no longer receive traffic. However, we'll enable connection draining so that they can gracefully respond to pending requests before shutting down.
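The health check the ELB runs against each instance is configurable. A setup along these lines, where the endpoint path and thresholds are illustrative assumptions rather than the article's actual values, makes the ELB probe an HTTP endpoint every 30 seconds and flag an instance as unhealthy after two failed probes.

```javascript
// Illustrative classic-ELB health check configuration. Tune the endpoint
// and thresholds for your own application.
var healthCheck = {
  Target: 'HTTP:80/api/status/health', // GET request the ELB will ping
  Interval: 30,          // seconds between probes
  Timeout: 5,            // seconds before a probe counts as failed
  UnhealthyThreshold: 2, // failed probes before marking unhealthy
  HealthyThreshold: 2    // successful probes before marking healthy again
};
```

A configuration like this gets applied with `aws elb configure-health-check`, or the equivalent SDK call, when the load balancer is created.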

When a new deployment is requested, we'll create a new ASG, and provision it with the new carnivore AMI, which you may recall from our last encounter. The new ASG will spin up as many EC2 instances as desired. We'll wait until EC2 reports that every one of those instances is properly initialized and healthy. We'll then wait until ELB reports that every one of those instances is reachable via HTTP. This ensures that no downtime occurs during our deployment. When every new instance is reachable on the ELB, we'll remove the outdated EC2 instances from the ELB first, and then downscale the outdated ASG to 0. This allows connection draining to kick in on the outdated instances, terminating them afterwards. Once all of that is over, we delete the outdated ASG.
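Since each deployment creates a brand new ASG, the scripts need a way to tell the outdated groups apart from the one that was just created. Assuming a naming convention of `<environment>-<timestamp>` (my assumption for illustration, any unique suffix works), that boils down to a simple filter:

```javascript
// Given every ASG name in the account, pick the groups belonging to this
// environment that are NOT the one we just created. Those are the groups
// to downscale and delete once the new instances are reachable on the ELB.
function findOutdatedGroups (groups, environment, current) {
  return groups.filter(function (name) {
    return name.indexOf(environment + '-') === 0 && name !== current;
  });
}
```

Prefix-matching on the environment keeps deployments to `dev01` from ever touching `prod` groups that happen to live in the same account.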

This approach might not be blazing fast, but I'll take a speed bump over downtime any day.

Of course, none of this would be feasible if spinning up a new instance took 15 minutes while installing software that should've been baked into an image. This is why creating those images was crucial. In a slow process such as this, baking images saves us much-appreciated startup time.


Immutable Deployments and Packer


A more accurate title for this series would be something along the lines of "Automating autoscaled zero-downtime immutable deployments using plain old bash, Packer, nginx, Node.js, and AWS", but that would've been kind of long (although it does fit in a tweet). My last article on the subject was two years ago, when I wrote about Deploying Node apps to AWS using Grunt. Here's a long overdue update, containing everything I've learned about deployments since then.

This detailed article series aims to explain:

  • How to provision a ready-made image before every deployment
  • How to make that image set up nginx, node, and your web application
  • How to dynamically update DNS record sets
  • How to let AWS handle scaling on your behalf
  • How to avoid downtime during deployments
  • How to clean up all this mess
  • How to do all of the above in plain old bash
  • Why any of the above matters

In this article I'll start by explaining why doing any of this matters, and then move on to creating immutable images ready for deployment using Packer.
