Migration from WordPress to the Wintersmith static site generator
My blog was on WordPress and I was happy with it, so why did I move away?
- Constant maintenance: it requires a lot of time, as WordPress is constantly targeted by hackers
- Slow, even with caching plugins, and I didn’t want to dedicate a really powerful machine just to run a blog
- Server maintenance
- I didn’t use most of the functionality in the system, and I don’t really need it
I got tired of managing all of this, so I decided it was time to find something that wouldn’t waste so much of my time, that I could use when I needed it, and that wouldn’t leave me worrying about getting hacked or about a slow server.
Let’s look at some of the pros and cons of running a statically generated site.
Pros:
- Extremely fast
- No need to worry about someone finding an exploit for my site; even if I mess up, it’s still just HTML and JavaScript, so there is nothing an attacker can do on the server to change the content
- You can write your posts in Markdown format
Cons:
- You need your computer to publish a new post; you can’t do it from the beach or from your phone. (Though there are ways to automate this and accomplish something similar, like writing a Markdown document and pushing it to git, while the server periodically regenerates the site if there are new changes; see the sketch after this list.)
- You need to rebuild the site and deploy it whenever something changes
- I needed to learn Jade; there are other templating plugins available, but in the end I settled on the default one, as it wasn’t really difficult to learn and it took me about an hour to be able to use it
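As a rough sketch of the automated-publishing idea from the first con above: a cron job on the server could pull the git repository and rebuild the site whenever new commits appear. The paths and branch name here are assumptions for illustration, not my actual setup.

#!/bin/sh
# Hypothetical cron job: pull newly pushed posts and regenerate the site.
cd /srv/blog || exit 1
git fetch origin
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
  git merge origin/master
  wintersmith build --output /var/www/blog   # assumed web root
fi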
Choosing a Static Site Generator
These are some of the generators I evaluated; after looking around the web to see what other people were using, I narrowed the selection down to these:
I played a bit with all of them, and in the end I settled on Wintersmith because it is really easy to use and to extend.
Preparation
First I needed to prepare the environment for Wintersmith.
A lot of people install Node.js from their distribution’s repositories, but I prefer to use Node Version Manager (nvm); it gives me more control over which version I use. For details about how to install it, visit this page.
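For reference, installing and switching to a Node.js version with nvm looks roughly like this; the version number below is only an example, not necessarily the one I used:

nvm install 0.10
nvm use 0.10
node --version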
To get started, you will need to install Wintersmith. Getting Wintersmith installed is fairly easy when using npm. You simply need to type the following:
npm install -g wintersmith
One of the features that was greatly improved in the newer versions of Wintersmith is the ability to generate a new site from the command-line utility. This saves a great deal of time in the early creation process. The command can be utilized as follows:
wintersmith new <project_name>
In my case, I ran the command with a project name of blog, which generated a basic skeleton for me to use in building this blog.
Looking at the skeleton of the blog, it is fairly simple to use and configure.
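For orientation, the generated skeleton looks roughly like the following; the exact files vary between Wintersmith versions, so treat this as an approximation:

blog/
  config.json        # site configuration (locals, plugins, ...)
  contents/          # Markdown articles and other content
    articles/
  plugins/           # custom plugins
  templates/         # Jade templates (layout.jade, article.jade, ...)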
Modding
I needed to add a few things to the site so I could safely migrate from my old WordPress blog to my new statically generated one.
Helper
I needed a function that returns all articles, so I can hook into the generation process and use it to generate the redirects, the sitemap and the feeds.
var _ = require('underscore'),
    util = require('util');

module.exports = function(env, callback) {

  var defaults = {
    articles: 'articles'
  };

  var options = _.extend(env.config.helpers || {}, defaults);

  /**
   * Get all available articles
   * @param {Object} contents
   *
   * @returns {Array} A list of available articles sorted by descending date
   */
  var getArticles = function(contents) {
    return contents[options.articles]._.directories.map(function(item) {
      return item.index;
    }).sort(function(a, b) {
      return b.date - a.date;
    });
  };

  // add the article helper to the environment so we can use it later
  env.helpers.getArticles = getArticles;

  return callback();
};
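For Wintersmith to load this helper it has to be registered as a plugin in config.json. Assuming the file above is saved as plugins/helpers.js (the filename is my choice for this example), the relevant part of the configuration would look something like this:

{
  "plugins": [
    "./plugins/helpers.js"
  ]
}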
Robots
The Robot Exclusion Standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web crawlers and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.
- // robots.txt
| User-agent: *
| Disallow: #{locals.robotsDisallowed}
| Sitemap: #{locals.url}/sitemap.xml
In the main config file, under locals, we create a robotsDisallowed property which holds what is not allowed; in my case it’s empty.
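The relevant part of config.json looks something like the following; the url value is this blog’s, and the exact set of locals will differ for your site:

{
  "locals": {
    "url": "http://blog.matoski.com",
    "robotsDisallowed": ""
  }
}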
So the generated file looks like this:
User-agent: *
Disallow:
Sitemap: http://blog.matoski.com/sitemap.xml
Sitemap
A site map (or sitemap) is a list of pages of a web site accessible to crawlers or users. It can be either a document in any form used as a planning tool for Web design, or a Web page that lists the pages on a Web site, typically organized in hierarchical fashion. There are two popular versions of a site map. An XML Sitemap is a structured format that a user doesn’t need to see, but it tells the search engine about the pages in your site, their relative importance to each other, and how often they are updated. HTML sitemaps are designed for the user to help them find content on the page, and don’t need to include each and every subpage. This helps visitors and search engine bots find pages on the site.
doctype xml
urlset(
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance",
  xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd",
  xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")

  - var buildDate = new Date().toISOString()

  url
    loc= locals.url
    lastmod= buildDate
    changefreq daily
    priority 1.0

  url
    loc= locals.url + "/archive.html"
    lastmod= buildDate
    changefreq daily
    priority 1.0

  - var articles = _.chain(contents.articles._.directories).map(function(item) {
  -   return item.index
  - }).compact().filter(function(article) {
  -   return article.metadata.ignored !== true
  - }).sortBy(function(item) {
  -   return -item.date
  - }).value()

  for article in articles
    - var permalink = locals.url + article.url
    url
      loc= permalink
      lastmod= article.date.toISOString()
      changefreq daily
      priority 0.8
.htaccess
A .htaccess (hypertext access) file is a directory-level configuration file supported by several web servers, that allows for decentralized management of web server configuration. They are placed inside the web tree, and are able to override a subset of the server’s global configuration for the directory that they are in, and all sub-directories.
The reason for this file is so we can redirect the old articles’ URLs to the new ones. This will work on Apache only; for an nginx configuration, see below.
- var articles = env.helpers.getArticles(contents);
| RewriteEngine On
| RewriteBase / #{"\n"}
for article in articles
  if article.metadata.oldurl
    | Redirect 301 #{article.metadata.oldurl} #{article.url} #{"\n"}
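For an article whose metadata contains an oldurl field, the generated .htaccess ends up with lines along these lines (the URLs are made up for illustration):

RewriteEngine On
RewriteBase /
Redirect 301 /2012/05/some-old-post/ /articles/some-old-post/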
nginx.conf
This template generates the redirects from the old articles’ URLs to the new system, but you will have to copy them manually into the host’s configuration file.
- var articles = env.helpers.getArticles(contents);
for article in articles
  if article.metadata.oldurl
    | location #{article.metadata.oldurl} {
    |   rewrite ^(.*)$ #{article.url} redirect;
    | } #{"\n"}
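For the same hypothetical article, the generated snippet to paste into the nginx server block would look roughly like this:

location /2012/05/some-old-post/ {
  rewrite ^(.*)$ /articles/some-old-post/ redirect;
}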
Google Analytics Tracking
At the end of layout.jade, below the footer, I added the following snippet, which adds Google Analytics tracking to my web pages:
script(type='text/javascript').
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', '#{locals.googleTrackingCode}']);
  _gaq.push(['_trackPageview']);
  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();
locals.googleTrackingCode contains my tracking code, and it is set in config.json in the root of the project.
So that’s it, the blog is ready to go.
The next thing to do is to style it up; I’m still deciding which CSS framework to use, probably Foundation, so it should be fun.
All in all, it took about 2.5 hours to move my old blog to the new static system, which is not much considering the speed and stability you get with a static site builder.