Tuesday 11 October 2016

Handling mocks and environmental variables in JS-apps through Webpack

When JS-apps need different variables in production and locally, one simple way to solve that is by using Webpack. Say you work with an app that calls an API using code similar to this:
DoRequest("GET", "http://swaglist.api.dev")
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    this.setState({
      error
    });
  });
We want to use a variable instead of the hardcoded API URL. Using different Webpack configs for dev and prod makes this an easy task.

Setting up Webpack's DefinePlugin

Take a simple Webpack config for a React application, like the following:
var webpack = require('webpack');
var path = require('path');

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css']},
    ]
  },
};

module.exports = config;
Let's presume we have completely different Webpack configs for dev and prod. First we add a global config-object at the top of the file:
var GLOBALS = {
  'config': {
    'apiUrl': JSON.stringify('http://swaglist.api.dev')
  }
};
Don't forget to stringify! Then we add a new plugin in the config section:
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
And now we can use the variable in our application:
DoRequest("GET", config.apiUrl)
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    this.setState({
      error
    });
  });

Adding a mock API

Using this approach, it's very easy to set up a way to temporarily use a mock instead of a real API. This is a great help during development if the API in question is being developed at the same time. Or if you're working on the train without WiFi. :)

I like to use NPM tasks for my build tasks, in those cases where a task runner like Grunt or Gulp is not really needed. My NPM tasks in package.json typically look something like this:
  "scripts": {
    "build:dev": "npm run clean-dist && npm run copy && npm run webpack:dev",
    "webpack:dev": "webpack --config webpack.dev.config.js -w",
    "build:prod": "npm run clean-dist && npm run copy && npm run webpack:prod",
    "webpack:prod": "webpack --config webpack.prod.config",
    "clean-dist": "node_modules/.bin/rimraf ./dist && mkdir dist",
    "copy": "npm run copy-html && npm run copy-mock",
    "copy-html": "cp ./src/index.html ./dist/index.html",
    "copy-mock": "cp ./mockapi/*.* ./dist/"
  },
Now, to add a build:mock-task to use a mock instead of the real API, let's start by adding two tasks in package.json.
"build:mock": "npm run clean-dist && npm run copy && npm run webpack:mock",
"webpack:mock": "webpack --config webpack.dev.config.js -w -mock",
Build:mock does the same as the ordinary build:dev-task, but it calls webpack:mock instead. Webpack:mock adds the flag -mock to the Webpack command. Arguments to Webpack are captured using process.argv. So we just add a line of code at the top of webpack.dev.config.js to catch it:
var isMock = process.argv.indexOf('-mock') > 0;
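To see why indexOf works here, this is roughly what process.argv contains when Webpack starts with our flags (the paths are illustrative):

```javascript
// process.argv when running `webpack --config webpack.dev.config.js -w -mock`:
const argv = [
  '/usr/local/bin/node',
  '/project/node_modules/.bin/webpack',
  '--config', 'webpack.dev.config.js',
  '-w', '-mock'
];

// Index 0 is the node binary itself, so any match above 0 means the flag was passed.
function hasFlag(argv, flag) {
  return argv.indexOf(flag) > 0;
}

console.log(hasFlag(argv, '-mock')); // true
console.log(hasFlag(argv, '-prod')); // false
```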
Now we can change the GLOBALS config-object accordingly. The resulting Webpack config looks like this:
var webpack = require('webpack');
var path = require('path');

var isMock = process.argv.indexOf('-mock') > 0;

var GLOBALS = {
  'config': {
    'apiUrl': isMock
      ? JSON.stringify('./mock-swag.json')
      : JSON.stringify('http://swaglist.api.dev')
  }
};

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css']},
    ]
  },
};

module.exports = config;
The mock is nothing more advanced than a JSON-blob with the same structure as your API:
{
  "swaglist": [
    {
      "thing": "Cats",
      "reason": "Because they're on Youtube."
    },
    {
      "thing": "Unicorns",
      "reason": "Because it's true they exist."
    },
    {
      "thing": "Raspberry Pi",
      "reason": "Because you can build stuff with them."
    },
    {
      "thing": "Cheese",
      "reason": "Because it's very tasty."
    }
  ]
}
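The nice part is that the application code doesn't change between mock and real runs: it parses the response and guards on result.swaglist either way. A small sketch of that parse-and-guard logic (extractSwaglist is an illustrative helper, not from the app itself):

```javascript
// Parse the response and guard on result.swaglist, as the app code does.
function extractSwaglist(data) {
  const result = JSON.parse(data);
  return result && result.swaglist ? result.swaglist : null;
}

// The mock JSON works exactly like a real API response:
const mockResponse = JSON.stringify({
  swaglist: [{ thing: 'Cats', reason: "Because they're on Youtube." }]
});

console.log(extractSwaglist(mockResponse)[0].thing); // Cats
console.log(extractSwaglist('{}')); // null
```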
Now, run the build:mock-task and let the API-developers struggle with their stuff without being bothered. :)

Monday 26 September 2016

Building a faceted search using Redis and MVC.net - part 4: Using Redis in an MVC-app

There are a number of .Net clients available as Nuget-packages. I've chosen StackExchange.Redis, github.com/StackExchange/StackExchange.Redis. It maps well to the commands available in the Redis CLI, it has good documentation and, well, Stack Overflow uses it so it really ought to cover my needs... And of course, it is free.

The demo web for the faceted search is available at hotelweb.azurewebsites.net and code can be found on github.com/asalilje/redisfacets.

Connecting to Redis

Once the StackExchange.Redis nuget package is installed in the .Net-solution, we can try a simple Redis query. We want all hotels that have one star, i.e. all members of the set Stars:1:Hotels.
  var connection = ConnectionMultiplexer.Connect("redishost");
  var db = connection.GetDatabase();
  var list = db.SetMembers("Stars:1:Hotels");
The returned list contains the JSON-blobs we stored for each hotel, so we need to deserialize them to C#-entities using Newtonsoft.
  var hotels = list.Select((x, i) =>
  {
    var hotel = JsonConvert.DeserializeObject<Hotel>(x);
    hotel.Index = i;
    return hotel;
  });
Now, the ConnectionMultiplexer is the central object of this Redis client. It is expensive, it does a lot of work hiding away the inner workings of talking to multiple servers, and it is completely thread-safe. It's designed to be shared and reused between callers, and should not be created per call, as in the code above.

The database object that you get from the multiplexer, on the other hand, is a cheap pass-through object. It does not need to be stored, and it is your access to all parts of the Redis API. One way to handle this is to wrap the connection and the Redis calls in a class that uses lazy loading to create the connection.
  private static ConnectionMultiplexer Connection => LazyConnection.Value;
  private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
    new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("redishost"));

  private static IDatabase GetDb()  {
    return Connection.GetDatabase();
  }

  public static string GetString(string key)  {
    return GetDb().StringGet(key);
  }

Fine tuning the queries

Let's return to the concepts from the earlier parts of this blog series: combinations of sets. Say we want to get all hotels in Germany that don't have a bar, i.e. the members of Bar:False. Just send in an array of the keys that should be intersected.
  var db = GetDb();
  return db.SetCombine(SetOperation.Intersect, 
    new []{"Countries:1:Hotels", "Bar:False"});
The chosen keys in the same category should be unioned before they are intersected with another category. As we did before, we union them and store them in the db to be able to do the intersection directly in Redis. In this case, we also send in the name of the new key to store, compounded from the data it contains.
  var db = GetDb();
  db.SetCombineAndStore(SetOperation.Union, "Countries:1:Countries:2:Hotels", 
    new []{"Countries:1:Hotels", "Countries:2:Hotels"});
  return db.SetCombine(SetOperation.Intersect, 
    new []{"Countries:1:Countries:2:Hotels", "Bar:False"});
If we want to sort the list according to an external key, we just add the by-keyword in the sort-command to point to the correct key, using the asterisk-pattern.
  var db = GetDb();
  db.Sort("Countries:1:Hotels", by: "SortByPrice_*", get: new RedisValue[] {"*"});

Putting it all together

Now we have the concepts and data modelling of Redis, and the Redis client, in place. The rest is basically just putting the pieces together. The filtering buttons are created dynamically according to which options are available in the db. Each time a filter or sorting option is clicked, or a slider is pulled, an event is triggered in javascript that creates a URL based on which buttons are selected.

The call goes via AJAX to the MVC-app that does all the filtering using unions and intersections, fetches and sorts the final list, and disables or enables any affected filter buttons.

All this, as you know, can be done in a number of ways. If you need inspiration or some coding examples, take a look at the code on github.com/asalilje/redisfacets. :)

Friday 23 September 2016

Leader Election with Consul.Net

Microservices are great and all that, but you know those old fashioned batch services, like a data processing service or a cache loader service that should run at regular intervals? They're still around. These kinds of services often end up on one machine, where they keep running their batch jobs until someone notices they've stopped working. Maybe a machine that serves both stage and production purposes, or maybe the service doesn't even run in stage because no one can be bothered; easier to just copy the database from production.

But we can do better, right? One way to solve this is to deploy the service to multiple machines, as you would with a web application. Use Octopus, deploy the package, install and start the service, then promote the same package to production, doing config transforms along the way. The problem then is that we have a service running on multiple machines, doing the same job multiple times. Unnecessary and, if a third party API is involved, probably unwanted.

Leader election to the rescue

Leader election is really quite a simple concept. The service nodes register against a host using a specific common key. One of the nodes is elected leader and performs the job, while the other ones are idle. This lock to a specific node is held as long as the node's session remains in the host's store. When the node's session is gone, the leadership is open for taking by the next node that checks for it. Every time the nodes are scheduled to run their task, this check is performed.

Using this approach, we have one node doing the job while the others are standing by. At the same time, we get rid of our single point of failure. If a node goes down, another will take over. And we can incorporate this in our ordinary build chain and treat these services like we do with other types of applications. Big win!
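The check each node performs on every scheduled run can be sketched like this; a toy in-memory lock store stands in for Consul's session store, and none of these names are the actual Consul API:

```javascript
// Each node runs this on its schedule: if no leader holds the lock, take it;
// if we hold it, do the work; otherwise stay idle.
function runScheduledTask(node, lockStore, key, doWork) {
  if (!lockStore[key]) {
    lockStore[key] = node; // no leader: take the lock
  }
  if (lockStore[key] === node) {
    doWork(); // we hold the lock, so we are the leader
    return 'leader';
  }
  return 'idle';
}

const locks = {};
const work = [];
console.log(runScheduledTask('node-1', locks, 'service/clock', () => work.push(1))); // leader
console.log(runScheduledTask('node-2', locks, 'service/clock', () => work.push(1))); // idle
delete locks['service/clock']; // node-1 goes down, its session expires
console.log(runScheduledTask('node-2', locks, 'service/clock', () => work.push(1))); // leader
```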

An example with Consul.io

Consul is a tool for handling services in your infrastructure. It's good at doing many things and you can read all about it at consul.io. Consul is installed as an agent on your servers, which syncs with one or many hosts. But you can run it locally to try it out.

Running Consul locally

To play around with Consul, just download it from consul.io, unpack it and create a new config file in the extracted folder. Name the file local_config.json and paste in the config below.
{
    "log_level": "TRACE",
    "bind_addr": "127.0.0.1",
    "server": true,
    "bootstrap": true,
    "acl_datacenter": "dc1",
    "acl_master_token": "yep",
    "acl_default_policy": "allow",
    "leave_on_terminate": true
}
This will allow you to run Consul and see the logs of calls coming in. Run it by opening a command prompt, moving to the extracted folder and typing:
consul.exe agent -dev -config-file local_config.json

Consul.net client

For a .Net-solution, a nice client is available as a Nuget-package, https://github.com/PlayFab/consuldotnet. With that, we just create a ConsulClient and have access to all the APIs provided by Consul. For leader election, we need the lock methods in the client. Basically, CreateLock creates the node's session in Consul, Acquire tries to assume leadership if no leader exists, and the lock property IsHeld is true if the node is elected leader and should do the job.
var consulClient = new ConsulClient();
var session = consulClient.CreateLock(serviceKey);
await session.Acquire();
if (session.IsHeld)
    DoWork();

A demo service

Here's a small service running a timer updating every 3 seconds. On construction, the service instance creates a session in Consul. Every time the CallTime-function is triggered, we check if we hold the lock. If we do, we display the time, otherwise we print "Not the leader". When the service is stopped, we destroy the session so the other nodes won't have to wait for the session TTL to end.
using System;
using System.Threading;
using System.Threading.Tasks;
using Consul;
using Topshelf;
using Timer = System.Timers.Timer;

namespace ClockService
{
    class Program
    {
        static void Main(string[] args)
        {
            HostFactory.Run(x =>
            {
                x.Service(s =>
                {
                    s.ConstructUsing(name => new Clock());
                    s.WhenStarted(c => c.Start());
                    s.WhenStopped(c => c.Stop());
                });
                x.RunAsLocalSystem();
                x.SetDisplayName("Clock");
                x.SetServiceName("Clock");
            });
        }
    }

    class Clock
    {
        readonly Timer _timer;
        private IDistributedLock _session;

        public Clock()
        {
            var consulClient = new ConsulClient();
            _session = consulClient.CreateLock("service/clock");
            _timer = new Timer(3000);
            _timer.Elapsed += (sender, eventArgs) => CallTime();
        }

        private void CallTime()
        {
            Task.Run(() =>
            {
                _session.Acquire(CancellationToken.None);
            }).GetAwaiter().GetResult();

            Console.WriteLine(_session.IsHeld 
                ? $"It is {DateTime.Now}" 
                : "Not the leader");
        }

        public void Start() { _timer.Start(); }

        public void Stop()
        {
            _timer.Stop();
            Task.WaitAll(
                Task.Run(() =>
                {
                    _session.Release();
                }),
                Task.Run(() =>
                {
                    _session.Destroy();
                }));
        }
    }
}

When two instances of this service are started, we get this result. One node is active and the other one is idle.


When the previous leader is stopped, the second node automatically takes over the leadership and starts working.


All in all, quite a nice solution for securing the running of those necessary batch services. :)

Saturday 10 September 2016

Building a faceted search using Redis and MVC.net - part 3: Sorted sets for range queries

Storing and combining sets and strings in Redis will get us a nice filtered search. The first three rows of filtering options in the demo at http://hotelweb.azurewebsites.net/ use only sets holding the keys to the hotels. If one or more buttons are clicked in one category, e.g. Countries, we do a union of those sets and store the new set in Redis. The same goes for all categories: the clicked options of a category are unioned and stored as new keys, then intersected with the other categories.


With the possibility to sort the final set using external keys, we have built quite a cool feature with not that much work. But to make it awesome, we want to add some range filters to be able to filter out for instance all hotels in this facet within a certain price range. Not only does it look impressive, it's also easy to achieve with Redis.



Sorted sets

Sorted sets in Redis are like ordinary sets, but with one major difference. Whereas sets can hold only string values, typically the key to some other entity, sorted sets also give each item in the set a numeric score. If the score is the same for all items, the set is sorted and ranged lexicographically instead. There are some very interesting things that can be done with the lexicographical part of sorted sets, but for this demo, we're going to look at the numeric score instead.
Hotels:Prices = [
   1000 "Hotels:1",
   2000 "Hotels:33",
   5000 "Hotels:194",
   3000 "Hotels:233",
    750 "Hotels:299",
   8000 "Hotels:45"
]
The set is always sorted by the score by default. To get the items of a set, the command ZRANGE is used. ZRANGE takes the name of the sorted set and the indexes of where to start and end. To get all items without knowing how big the set is, use -1 as the ending index.
ZRANGE Hotels:Prices 0 -1
  1) "Hotels:299"
  2) "Hotels:1"
  3) "Hotels:33"
  4) "Hotels:233"
  5) "Hotels:194"
  6) "Hotels:45"
To view the score and make sure it's sorted correctly, add WITHSCORES to the command. Here we fetch the items between index 0 and 3.
ZRANGE Hotels:Prices 0 3 WITHSCORES
  1) "Hotels:299"
  2) "750"
  3) "Hotels:1"
  4) "1000"
  5) "Hotels:33"
  6) "2000"
  7) "Hotels:233"
  8) "3000"
Getting a range of items from the sorted set by their index is not enough though. We want to be able to fetch all items between, say 1000 and 2200 SEK. Easy peasy using ZRANGEBYSCORE instead of ZRANGE!
ZRANGEBYSCORE Hotels:Prices 1000 2200 WITHSCORES
  1) "Hotels:1"
  2) "1000"
  3) "Hotels:33"
  4) "2000"
And now things start to fall into place. We have a way of getting the ids of all hotels with a price between 1000 and 2200 SEK. Next, we need to create a new set out of this range, so we can intersect the result with the other sets.
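What ZRANGEBYSCORE does can be mimicked in plain javascript to make the model concrete (the data mirrors the Hotels:Prices set above; this is an illustration only, Redis of course does this server-side):

```javascript
// The Hotels:Prices sorted set, modelled as an array of member/score pairs.
const hotelsPrices = [
  { member: 'Hotels:299', score: 750 },
  { member: 'Hotels:1', score: 1000 },
  { member: 'Hotels:33', score: 2000 },
  { member: 'Hotels:233', score: 3000 },
  { member: 'Hotels:194', score: 5000 },
  { member: 'Hotels:45', score: 8000 },
];

// ZRANGEBYSCORE with inclusive bounds: keep members whose score is in [min, max].
function rangeByScore(sortedSet, min, max) {
  return sortedSet
    .filter(item => item.score >= min && item.score <= max)
    .map(item => item.member);
}

console.log(rangeByScore(hotelsPrices, 1000, 2200)); // [ 'Hotels:1', 'Hotels:33' ]
```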

Combining sorted sets

Creating this set containing only a certain range of prices is different from the sets we created before. There's no single command that will create the set for us. It's not a union, intersection or difference operation we are looking at. We need a subset of the data in the sorted set.

The way to do this is to create a copy of the original sorted set by using the ZUNIONSTORE command. In this case, we don't want to do a union with another set, we just want to copy the whole Hotels:Prices-set. If only one set is given in a union, this is precisely what happens. To define the set in the db, we name it Hotels:Prices:1000:2200 to show which range of prices it will eventually contain.
ZUNIONSTORE Hotels:Prices:1000:2200 1 Hotels:Prices
Now, we can remove the range we're not interested in from this new set using the command ZREMRANGEBYSCORE. All rangeby-commands are inclusive by default, meaning that the scores we provide are included in the range. This was fine in the previous example, where we wanted to include both 1000 and 2200 in our range, but here we want to remove all items with a score less than 1000 or greater than 2200. Luckily this is not a problem, since we can make a bound exclusive by prefixing it with a parenthesis.

So, first we want to remove all items with a score lower than 1000. Since we don't know the lowest score in the set, we use negative infinity (-inf) as the starting point. Then we remove everything greater than 2200 up to positive infinity (inf).
ZREMRANGEBYSCORE Hotels:Prices:1000:2200 -inf (1000
ZREMRANGEBYSCORE Hotels:Prices:1000:2200 (2200 inf

ZRANGE Hotels:Prices:1000:2200 0 -1
  1) "Hotels:1"
  2) "1000"
  3) "Hotels:33"
  4) "2000"
Success! We have a new set, containing only the hotels within the given price range. Now we can intersect this set with the other sets.

Combining sets and sorted sets

If we try to do the same kind of intersection as before between an ordinary set and a sorted set, using SINTER, we'll get a big no-no. A union, intersection or diff that involves a sorted set has to use the special sorted set commands: ZINTERSTORE, ZDIFFSTORE and ZUNIONSTORE. All of these commands store a new set in the db. The reason the commands are different is that the contents of these types of sets are different.

A sorted set does not only contain the string value, it also has the numeric score. When doing an intersection, we have to decide how to treat the scores of the two different sets. Should the scores of intersected items be added, or should we use the minimum or maximum value? If we combine a regular set with a sorted set, the regular set's items are treated as if they have a score of 1.
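The score aggregation can be sketched in plain javascript; ZINTERSTORE lets you pick SUM (the default), MIN or MAX via its AGGREGATE option (zinter here is an illustration, not the Redis implementation):

```javascript
// Intersect two score maps and aggregate the scores with SUM, MIN or MAX.
function zinter(a, b, aggregate) {
  const result = {};
  for (const member of Object.keys(a)) {
    if (member in b) {
      result[member] =
        aggregate === 'MIN' ? Math.min(a[member], b[member]) :
        aggregate === 'MAX' ? Math.max(a[member], b[member]) :
        a[member] + b[member]; // SUM
    }
  }
  return result;
}

const prices = { 'Hotels:1': 1000, 'Hotels:33': 2000 };
// A regular set combined with a sorted set: every member gets score 1.
const germany = { 'Hotels:1': 1, 'Hotels:194': 1 };

// With MAX, the price survives the intersection with the plain set.
console.log(zinter(prices, germany, 'MAX')); // { 'Hotels:1': 1000 }
```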

Next up - using Redis in .Net

Now that we hopefully understand Redis a bit better, it's time to get it up and running in the MVC-app.

Friday 9 September 2016

Building a faceted search using Redis and MVC.net - part 2: Combining and sorting sets in Redis

So far we have used the set and string data types in Redis. The Countries-set holds the keys to all country entities. Each country then has a set of all hotel keys in that country. The hotel key holds a string, which is the JSON-representation of the hotel entity. By setting up keys like this, we can slice our hotels in many ways. Want to fetch all hotels with one star? Just get the members of the Stars:1:Hotels-set using the command SMEMBERS.
Countries = ["Countries:1", "Countries:2", "Countries:3"]
Countries:1 = "Germany"
Countries:2 = "Sweden"
Countries:3 = "Denmark"
Countries:1:Hotels = ["Hotels:1", "Hotels:33", "Hotels:194"]
Hotels:1 = '{\"Name\":\"Hotel 1\",\"Stars\":3,\"PricePerNight\":1000}'
Hotels:33 = '{\"Name\":\"Hotel 33\",\"Stars\":5,\"PricePerNight\":2700}'
Hotels:194 = '{\"Name\":\"Hotel 194\",\"Stars\":1,\"PricePerNight\":235}'
Stars:1:Hotels = ["Hotels:194", "Hotels:200"]

SMEMBERS Stars:1:Hotels

Combining sets

Just getting the different sets one by one won't help us filter our hotel list. What we need to do is combine the sets in different ways. In Redis, you can perform combination operations on sets and get the resulting set as a return value, but you can also store the resulting set as a new set in the database. The lifetime of these stored sets can be either temporary, by setting an expiration, or permanent if that suits your needs better.

Redis performs intersections, unions and difference operations extremely fast, which makes storing the sets and performing these data manipulations in Redis a much better idea than doing them in your application code. These operations are very powerful and can be combined in a multitude of interesting ways.

Union

Union is the operation that returns all unique members of the given sets. If we want to get all hotels in Germany and Sweden, but not Denmark, we do a union of Countries:1:Hotels and Countries:2:Hotels.
SUNION Countries:1:Hotels Countries:2:Hotels
To store the resulting set instead of immediately retrieving it, we use SUNIONSTORE and as the first parameter to the operation provide a name for the new key.
SUNIONSTORE Countries:1:Countries:2:Hotels Countries:1:Hotels Countries:2:Hotels
If the same value exists in both sets, it will only be included once in the resulting set.

Intersect

Intersect combines two or more sets by taking only the values that exist in all sets. An intersection between Countries:1:Hotels and Countries:2:Hotels won't give us anything, unless a hotel can be located in both countries. But doing an intersection between the sets Countries:1:Hotels and Stars:1:Hotels will give us all hotels in Germany with 1 star.
SINTER Countries:1:Hotels Stars:1:Hotels
Here we can of course use that previously stored union set with both German and Swedish hotels.
SINTER Countries:1:Countries:2:Hotels Stars:1:Hotels
If we want to keep on doing combination operations on the result of this operation, we store the set, creating a new compounded key describing the set's content.
SINTERSTORE Countries:1:Countries:2:Stars:1:Hotels 
   Countries:1:Countries:2:Hotels Stars:1:Hotels

Diff

The final combination operation is diff, which, as the name implies, returns the difference between sets. The diff operation is a bit different (haha) from the other combination operations. While union and intersect operate on all the given sets symmetrically, diff performs a difference operation between the first given set and one or more other sets. If we want to see which Swedish and German hotels don't have 2 or 3 stars, we can do a diff operation.
SDIFF Countries:1:Countries:2:Hotels Stars:2:Hotels Stars:3:Hotels
Now, this could be done with an intersect operation as well, but then you would have to first store the union of the sets of 1, 4 and 5 stars and then do the intersection between that union and the countries union.
SUNIONSTORE Stars:1:Stars:4:Stars:5:Hotels Stars:1:Hotels Stars:4:Hotels Stars:5:Hotels
SINTER Countries:1:Countries:2:Hotels Stars:1:Stars:4:Stars:5:Hotels
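The semantics of the three operations can be mimicked in plain javascript, using the hotel keys from the examples above (an illustration only, Redis of course does this server-side):

```javascript
const germany = new Set(['Hotels:1', 'Hotels:33', 'Hotels:194']); // Countries:1:Hotels
const oneStar = new Set(['Hotels:194', 'Hotels:200']);            // Stars:1:Hotels

const union = [...new Set([...germany, ...oneStar])];          // SUNION: all unique members
const intersection = [...germany].filter(x => oneStar.has(x)); // SINTER: members in both
const difference = [...germany].filter(x => !oneStar.has(x));  // SDIFF: in first, not in second

console.log(union);        // [ 'Hotels:1', 'Hotels:33', 'Hotels:194', 'Hotels:200' ]
console.log(intersection); // [ 'Hotels:194' ]
console.log(difference);   // [ 'Hotels:1', 'Hotels:33' ]
```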

Sorting sets

Once we have performed the unions, intersections, diffs and whatnots of our sets, we want to get the result of the final stored key. Even though the set itself only contains string values, the keys of the hotels, it can be sorted in different ways by using external keys and pattern matching.

If we for instance want to sort all hotels in the final set according to their price, we create a new key for that. The key name needs to contain the string value held in the set, i.e. the hotel key, and the key's value is the value to sort by. The sort command then takes a pattern and sorts the set according to the values in the external keys.
Countries:1:Countries:2:Stars:1:Hotels = ["Hotels:194", "Hotels:200"]
SortByPrice_Hotels:194 = 1000
SortByPrice_Hotels:200 = 1200

SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_*
We can use a limit to decide how many items we want to fetch from the sorted list. The limit takes the offset and the size; in this case we start at item 0 and take 4 items.
SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_* LIMIT 0 4
And finally, if the set contains keys to other entities, we can also use the same pattern as for sorting by external keys to actually get that JSON-blob in the same operation. Very clever. Think of the asterisk as being replaced with the individual values in the set. :)
SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_* LIMIT 0 4 GET *
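The way the BY pattern resolves can be sketched in plain javascript: the asterisk is replaced with each member of the set, and the members are sorted by the values found at the resulting keys (an illustration only):

```javascript
// The external sort keys, as plain key/value pairs.
const db = {
  'SortByPrice_Hotels:194': 1000,
  'SortByPrice_Hotels:200': 1200,
};
const members = ['Hotels:200', 'Hotels:194']; // the stored set

// Replace the asterisk with each member and sort by the resolved values.
const sorted = [...members].sort(
  (a, b) => db['SortByPrice_' + a] - db['SortByPrice_' + b]
);
console.log(sorted); // [ 'Hotels:194', 'Hotels:200' ]
```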

Next step

These operations will get us a long way. The basic approach to filtering with Redis is to keep storing the results of unions, intersections and diffs, based on choices made in the GUI, until there is a final set to be sorted and fetched. But we need one more data type in Redis: sorted sets. The sorted set will help us fetch hotels within a certain price interval, distance to beach or distance to shopping.

Wednesday 7 September 2016

Building a faceted search using Redis and MVC.net - part 1: The key is key

Faceted searches on web pages come in many forms, both technically and UX-wise. This one is built using Redis and MVC.net. The data is stored and modelled in Redis to suit this particular need. The web site uses plain javascript to build the calls to the MVC app, which serves the resulting list as an HTML-blob.


Basic concept

A demo can be found on http://hotelweb.azurewebsites.net/. The basic concept is: 2000 hotels in a list, possible to filter by different qualities and sort by column. The filtering facets are updated with the number of hits each option has. All calls for filtering and sorting are server calls going to the Redis DB. The demo is hosted on Azure and the Redis DB on RedisLabs, which makes the demo not quite as fast as it could be. Sorry about that. And it's not responsive. At all.

Data in Redis

Redis is a key-value store that gives you the possibility to use a number of different data types as the value. In this case, we make use of:
  • Strings, a plain string value: color = "blue".
  • Sets, an unordered collection of unique strings: allcolors = "blue"; "red"; "green".
  • Sorted sets, a set where each element has a score for sorting them: bestcolors = "blue", 1; "green", 2; "red", 3.

Being able to use sets and sorted sets is great, but you still have to wrap your head around how to think about data in Redis. It's a bit different compared to working with the relational or document database you might be used to. Let's look at the first example: how to store and connect countries and hotels.

Compounding keys

In Redis, it's all about working with the data, creating all the keys necessary for what you want to do. There is a great pattern for doing this. First, let's look at what we want to be able to do with the countries in this scenario.
  • We want to fetch all the countries available. This sounds like a set.
  • We want to fetch all hotels for a country. This also sounds like a set.

So if we create the set "Countries", holding the values "Germany", "Sweden", "Denmark", that would solve our first task. But how do we solve the second one? Should we have a set called "Germany" that holds the hotel ids? Then how would we know that the set "Germany" is a country? We wouldn't, and maybe that wouldn't be a problem in a small data model, but it would quickly get messy and make the data structure hard to understand.

The solution to this is compound keys (yay!). Let the key reflect the data structure and let the values be other keys leading down to more data.
Countries = ["Countries:1", "Countries:2", "Countries:3"]
Countries:1 = "Germany"
Countries:2 = "Sweden"
Countries:3 = "Denmark"
Countries:1:Hotels = ["Hotels:1", "Hotels:33", "Hotels:194"]
Hotels:1 = '{\"Name\":\"Hotel 1\",\"Stars\":3,\"PricePerNight\":1000}'
Hotels:33 = '{\"Name\":\"Hotel 33\",\"Stars\":5,\"PricePerNight\":2700}'
Hotels:194 = '{\"Name\":\"Hotel 194\",\"Stars\":1,\"PricePerNight\":235}'

Using this pattern, the data structure is fairly simple to grasp just by looking at the key. The set Countries holds the keys Countries:1, Countries:2, Countries:3. These keys lead to the name of the country. By adding another segment to the key you can connect the countries to other sets. Countries:1:Hotels holds the keys to all hotels in that particular country. That hotel key in turn uses the string data type to store a JSON blob containing the hotel information.

Compound keys can of course be used in any direction. Hotels:1:Beaches can hold a set of all the beaches close to a certain hotel, in the same way as Beaches:1:Hotels can hold a set of all hotels close to a particular beach. Slicing and dicing the data in advance is key (no pun intended) when working with Redis.
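The whole key structure can be modelled in plain javascript to see how lookups chain from one key to the next (a toy in-memory store, not Redis):

```javascript
// A toy in-memory version of the key structure above.
const store = {
  'Countries': ['Countries:1', 'Countries:2', 'Countries:3'],
  'Countries:1': 'Germany',
  'Countries:1:Hotels': ['Hotels:1', 'Hotels:33'],
  'Hotels:1': '{"Name":"Hotel 1","Stars":3,"PricePerNight":1000}',
  'Hotels:33': '{"Name":"Hotel 33","Stars":5,"PricePerNight":2700}',
};

// Follow the keys: country name, then hotel set, then hotel JSON blobs.
const country = store['Countries:1'];
const hotels = store['Countries:1:Hotels'].map(key => JSON.parse(store[key]));

console.log(country); // Germany
console.log(hotels.map(h => h.Name)); // [ 'Hotel 1', 'Hotel 33' ]
```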

Next step

Now we have a start to getting the hotels we want when clicking countries and stars (you guessed it, Stars:1:Hotels) in our faceted search. Next up we'll see how we can combine and sort sets.

Sunday 24 July 2016

A pure javascript numbers range slider

Recently I needed to use a range slider in an interface where it was to be used to select an interval of values. I assumed I could use the HTML5 range-control, <input type="range">, but it turned out that it was hard to get the look and feel I wanted. I had to stack two range controls on top of each other, position them and try to style them with all kinds of vendor-specific attributes. When it finally worked and looked OK in Chrome and Firefox, I took a look in Microsoft Edge and gave up.

So I built this little javascript control instead. This simple version, about 200 lines of javascript, is available on github.com/asalilje/rangeslider and a demo of the range slider can be found at asalilje.github.io/rangeslider/.

Elements of the range slider

  • A static background track representing the total range.
  • Static labels for the total range's starting and ending values.
  • Two draggable handles for selecting the interval.
  • An interval track marking the currently selected range between the handles.
  • Input fields for both handles, showing the current value selected. It's also possible to input another value and get the handles to move.


Basic concept

A number of event listeners are added to the elements, responding to touch as well as mouse movements. When a handle is clicked, or touched, it is set to active. When the mouse or finger is moved, a function is triggered that:
  • Changes the position of the active handle so it looks like it's being dragged.
  • Calculates the current value based on the current handle position.
  • Changes the size and position of the interval track.
  • Writes the current values to the input fields.
When the mouse button is released, or the touch ends, the active handle is reset and all actions stopped.
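The flow above can be sketched as a small state machine, stripped of all DOM details (press/move/release are made-up names; the real control on GitHub wires them to mousedown/touchstart, mousemove/touchmove and mouseup/touchend):

```javascript
// DOM-free sketch of the drag state machine described above.
function makeSlider() {
  let activeHandle = null;

  return {
    press(handle) {              // a handle is clicked or touched
      activeHandle = handle;
    },
    move(position) {             // the mouse or finger is moved
      if (!activeHandle) return; // no active handle, nothing to drag
      activeHandle.position = position;
      // ...here the real control also recalculates the current value,
      // resizes the interval track and updates the input fields
    },
    release() {                  // mouse button released or touch ended
      activeHandle = null;
    }
  };
}
```

The real slider keeps two of these handles and derives the interval track from their positions.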

A linear range

The simple version of a range slider just calculates the values in a linear fashion. Say you have a background track that is 500 pixels wide and a numbers range going from 0 to 100. First calculate how many percent of the total track the handle has reached; that same percentage then gives the current value.
const maxValue = 100;
const minValue = 0;
const value = maxValue - minValue; // 100
const trackWidth = 500;
const currentPosition = 125;
const currentPercentage = currentPosition / trackWidth * 100; // 25
const currentValue = value * (currentPercentage / 100); // 25
This calculation is quite simple to implement. But what if you want to use, for instance, the standard deviation to get a more precise scale closer to the median? Maybe the numbers range is very big and the precision should decrease the bigger the numbers get. This was definitely one of my use cases.

A crazy range

To change the scale of the range I decided to make it possible to configure subranges. Giving the slider a set of values and the percentages they represent on the slider meant that I could split the total range into smaller subranges and get different scales for the different parts.

For instance, the values "0,100,500" mapped to the percentages "0,50,100" mean that the values 0 to 100 stretch from 0 to 50% of the range, while 100 to 500 use the remaining 50%. To calculate this we need to find the value range the handle is currently in, calculate its position within that range and then use the ratio of that range compared to the total range.
const prevRangeMaxValue = 0; // no previous range
const rangeStartValue = 0;
const rangeEndValue = 100;
const rangeValue = rangeEndValue - rangeStartValue;  // 100
const rangeStartPercent = 0;
const rangeRatio = 100/50; // total percentage divided by this range's percentage
const trackWidth = 500;
const currentPosition = 125;
const currentPercentage = 125/500*100; // 25
const currentValue = (currentPercentage-rangeStartPercent) * (rangeRatio*rangeValue)
      / 100 + prevRangeMaxValue; // 50
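The same arithmetic can be generalised into a little helper (a sketch; valueAt and its parameter names are made up, not taken from the actual slider code):

```javascript
// Sketch of a generalised subrange calculation. values and percents
// describe the configured subranges, e.g. [0, 100, 500] mapped to
// [0, 50, 100] as in the example above.
function valueAt(currentPercentage, values, percents) {
  // find the subrange the handle is currently in
  let i = 1;
  while (i < percents.length - 1 && currentPercentage > percents[i]) {
    i++;
  }
  const rangeStartPercent = percents[i - 1];
  const rangePercent = percents[i] - percents[i - 1];
  const rangeValue = values[i] - values[i - 1];  // the subrange's value span
  const rangeRatio = 100 / rangePercent;         // total vs subrange percent
  return (currentPercentage - rangeStartPercent) * (rangeRatio * rangeValue)
      / 100 + values[i - 1];                     // add the previous range max
}
```

valueAt(25, [0, 100, 500], [0, 50, 100]) gives 50, matching the worked example above.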
Once we have this calculation in place, we can do all kinds of funny range scales by adding arrays of values and their matching percentages. The values to actually use for the subranges I calculate in the backend, cause you know, I'm not really a frontend coder.

Wednesday 11 May 2016

Being a woman in tech. The bad parts.

There's a lot of talk about women in tech, why there are so few of us and what can be done about it. I don't have the answer to that. I know that I work with things I love and I wish that no one would have to avoid following this path because of discrimination for whatever reason.

Where are the female programmers?

I started working as a programmer in 1997. From 1997 up until 2011, I worked at 8 different companies, ranging from 3 employees to 240. During that time, I only had three other female developer colleagues. I ran into one more, who came in as a short time consultant at my then current company. And I worked with a female DBA. There. That's it. And male developer colleagues? Well, I'd say around 80. From 2011 to 2014, I worked at Thomas Cook in Stockholm who actually had around 15-20% female developers. So whatever they're doing, it's working. :)

And is there a problem?

When I started working as a developer, I never even thought about the fact that I was one of very few female developers. Back then, the Internet had just arrived and we were all trying to learn and keep up with the new technology. I never noticed anyone saying anything or received any remarks about being female. The dotcom-business was booming, everyone was having fun and the industry was taking shape.

But 10 years later, I thought about it a lot. And I know I was wondering if a rookie female developer would walk through these doors, would she stay? Maybe it was because the dotcom-era ended, and we - the happy new Internet people - were scattered and ended up in old school companies where murky dusty views of women and tech were residing.

It's not the small things that idiots say and do

So, I guess I've had the same experience as most women in male dominated environments. I've spoken up because something is not working and blokes loudly comment "Oh, I guess it's PMS-time", followed by big laughs from the others. Or the classic mumbling "I guess someone didn't get some last night". Or maybe the "Look out, she's gonna start crying". Classical condescending bullying. I suppose it can happen anywhere. Or can it? Has it happened to you while standing with your 30yo-something developer peers?

Or yeah, the "What do you prefer, frontend or backend?" from completely unknown blokes at a tech conference, followed by the traditional laughs. That's also a classic. Heard that more than once. Pattern here is men in group. No one would act like that if it was just me and him. But that's just it. As a female developer, it's hard not to bump into men in group.

And of course, not all men. Very few men, actually. Immensely few! But it doesn't have to happen that many times before you start to back away from it all. Not enough to scare me out of tech, but maybe enough to make me avoid crowds and not speak up and draw attention to myself.

It's worse with the more subtle things

You know when you go up to a group of developers that stand around talking and laughing at a conference or meetup and suddenly they get all quiet and the person who was talking won't finish what he was saying? And it gets really awkward and you know that you shouldn't have gone up to them. That feeling.

Again, making me keep to myself at those events. Which I wish I wouldn't do. But for me, it's not about mingling and networking, it's about listening to the sessions and in between, staying invisible and probably leaving early. I just don't feel like I'm part of the developer community.

Then there are the things that are hard to deal with

"We really want to hire women", they say. And then you start working there as their first female dev and at least five male devs say - in one way or another - "We are so eager to hire women, but I think they should have hired the best developer instead." BAM, kind of. It's sad. And hard to handle. And leaving me with the feeling that I ALWAYS ALWAYS have to prove myself, to show that I am good enough to deserve this job.

And that time, when I had been working as a backend developer and architect in a very complex domain for years, and went to an interview for a job in exactly the same domain. I got a no because I had "too much of a frontend profile". And we hadn't discussed frontend at all because I really knew nothing about it. But I guess they thought that my pink laptop bag was full of gifs and jpegs.

But nothing is as hard as being invisible

So I worked in this team as an architect together with two other blokes. One of which, after 3 months in our team, for the first time asks me a question: "what colours and icons should I use for the dashboard?". And I say "I have no idea, talk to the designer". And he says "isn't this what you do?". After 3 months of stand-ups, hadn't he listened or did he just want to push me down? Who knows. I'm just trying hard not to care.

But the worst thing with being invisible is when people from other teams or the business don't talk to me. If there's a bloke around, they talk to him. It doesn't matter if he's the designer or whatever. All questions on technical issues get directed to the man in the room. I've had team mates apologising to me because they've been asked things when they think everyone should know I'm the one to go to. Sometimes it has felt really hopeless. And I've been seriously doubting my competence since people don't talk to me. I still am, not because these things happen to me very often, but because of the bulk of it all during my 20 years as a developer. I work and work and learn and learn, but I'm still no one. Just that person fulfilling the gender quota.

There's more of course. These things are just some samples of everything that has made me have that lump of uncertainty in my belly all these years. But on the whole, it is just a few people, a few companies and a few events that have caused it. The majority of people I've met in tech are lovely, creative people that I've really enjoyed working with. I just wish that at any one of those bad events I've experienced, someone else would've spoken up and said something. Maybe I wouldn't have felt so alone, exposed and so much like not belonging there. If you want to help make the developer community a more inclusive one, start there.

Sunday 24 April 2016

Build a solderless Big Red Button using Raspberry Pi and Node.js

The Big Red Button

This is a simple tutorial for building your own Big Red Button, used to do whatever you come up with. Here, I use Node.js to make the button active at a set interval. If pressed while active, it fetches cat clips from Youtube and posts them on Twitter.


You can of course do whatever you like. If you can reach something through an API, it's easy to work with. Or, maybe use Selenium Webdriver and trigger something by visiting a website? At work, we use the button to listen to our deploy system Octopus and trigger deploys to production when they're available.

What you need:

  • Raspberry Pi with built in wifi/wifi dongle
  • Formatted microSD-card
  • Adafruit Large Arcade button with built in resistor
  • 4 jumper cables, female to whatever
  • 2 faston connectors 4,8 mm
  • 2 faston connectors 6,3 mm
  • Power supply with MicroUSB cable
  • Cardboard or other type of box, minimum 6 cm high (enough to house the button and the pi)
  • Tool to make holes in the box for the button and cable
  • Wire stripper and crimping tool
  • 5mm LED
  • LED holder for attaching to box
  • 3 jumper wires female to female
  • 1 resistor, 330 Ohm

Let's start by setting up the Raspberry Pi

  • Download the latest release of Noobs from https://www.raspberrypi.org/downloads/.
  • Extract the files and copy them onto the formatted SD-card.
  • Connect the Pi to a screen/TV using HDMI. Plug in a keyboard and a mouse.

Follow along with the installation instructions and install Raspbian. When that is done, click Menu > Preferences > Raspberry Pi Configuration and configure the Pi according to your own locale. Set up the Wifi, either the built in or the dongle. Also make sure you tick the checkbox for the boot option "Wait for network".

When you're connected to the network, find your Pi's IP address by opening the console and running the command ip addr. Make a note of it, you'll need it later.

All Pis come with the default user 'pi' and password 'raspberry'. It might be a good idea to change this if you expose the Pi on the internet. When I write piuser in the instruction, I mean the username you use, whether it's 'pi' or something else. :)

Using SSH to access Pi

When you need to write something in the console on your Pi, you can either do it on the Pi, or you can use SSH from another computer. If you're using Windows, you have to install an SSH-client. Install Cygwin with the OpenSSH-package to get access to SSH as well as SCP, which is needed to copy files to the Pi.

Log in to your Pi by opening a console on your computer and writing ssh piuser@pi-ipnumber. For Windows, you use Cygwin. You will be prompted for the password and then you'll be connected. Using SSH you can perform the same commands as if you were looking at the Pi's console. When the instruction here says Pi-console, you can use either SSH on your computer or the console on the Pi.

Using remote desktop connection to access Pi

If you want to be able to connect remotely to the Pi and share screens with your PC/Mac, you can install TightVNCServer on the Pi. This is nice if you, like me, only have a laptop at home. I don't want to go over to the TV just to see the Pi desktop.

Follow the instructions here to set TightVNC up on Pi and your computer, and get it to autostart on the Pi: https://www.raspberrypi.org/documentation/remote-access/vnc/

Once installed, the Pi is accessible on port 5901.

Install Node.js on Pi

Depending on which model of the Pi you have, download and install the correct version of Node.js by opening up a Pi-console and writing:

Pi, Model A:
>wget https://nodejs.org/dist/v4.4.3/node-v4.4.3-linux-armv6l.tar.gz 
>tar -xvf node-v4.4.3-linux-armv6l.tar.gz 
>cd node-v4.4.3-linux-armv6l

Pi2 and Pi3, Model B:
>wget https://nodejs.org/dist/v4.4.3/node-v4.4.3-linux-armv7l.tar.xz
>tar -xvf node-v4.4.3-linux-armv7l.tar.xz 
>cd node-v4.4.3-linux-armv7l

Then copy the files to /usr/local to make them accessible on the path:
>sudo cp -R * /usr/local/

You can remove the downloaded packages if you want to save space on your SD-card. Make sure everything installed correctly by writing node -v in your Pi-console. You should get back the version number, 4.4.3 in this case.

First hardware test: Connect a LED

Let's start with something really simple that explains the concept of the Raspberry Pi. The Pi has 40 pins: https://www.element14.com/community/servlet/JiveServlet/previewBody/73950-102-9-339300/pi3_gpio.png. There are pins for ground and power, and then there are the GPIO-pins (General Purpose Input/Output). In short, and without the gritty details, when you connect something that should be turned on, you want a circuit going through it from power to ground. But you also need a resistor somewhere in that circuit, because otherwise the current flowing through the circuit back to the Raspberry might be too high and fry your Pi.

So for the first test, you need a LED, three jumper wires F/F and a 330 Ohm resistor. Resistors are tricky, use the excellent help at http://www.hobby-hour.com/electronics/resistorcalculator.php to make sure you find the correct one. With a 5-band resistor, 330 Ohm has the code orange, orange, black, black, brown.

If you look at the LED, one leg is longer. This is the anode, which should be connected to power. Start by pressing the anode leg into one end of a jumper wire. The other leg should be connected to ground, but with a resistor in between. Connect the ground leg to a jumper wire, insert a resistor in the other end of it and then another jumper wire after the resistor.


Now connect the anode jumper wire to a 3,3 volt pin on your Pi, and the ground jumper wire to a ground pin. Connect your Pi to power and Voila! Light!

So how do the GPIO pins work? Well, move the jumper wire you connected to power to one of the GPIO-pins instead. You might see that it's still on, but weaker. If so, it's in a state called 'floating'. The GPIO-pins can be in three states:
  • Low, which means it's outputting no current.
  • High, which means it's outputting 3,3v, just like a normal power pin.
  • Floating, which means it's in a random state, affected by electromagnetic fields and whatnot.
After this first test, I'm sure you can see what our programming will actually do. Set GPIO-pins to high or low! Turn things on or off.

First software test: turn the LED on programmatically

For this, we need to get our Node environment up and running. If you don't have Nodejs on your computer, follow the instructions here: https://nodejs.org/en/.

Download the PiNodeStarterProject from http://github.com/asalilje/PiNodeStarterProject. The project contains a package.json setting up the dependencies for manipulating the GPIO-pins, and a deployment file that makes it easy to move all your files to your Pi and start the app. The code is written using ES2015, which demands you have the line "use strict" at the beginning of each file.

Open the app.js file. You should see the following lines:
"use strict";
const GPIO = require('onoff').Gpio;

function exit() {
    process.exit();
}

process.on('SIGINT', exit);

What we do here is just import the dependency on the excellent onoff-package and set up an exit function that's run when the Node process ends, that is, when you press Ctrl + C.

To set a pin high or low, you need to first define the pin and the data direction. If you want to listen to a button it's 'in', but here, it's 'out'. Make sure your LED is connected to ground and to a GPIO-pin on the anode side. Check the number of the pin on the chart.

At the line after requiring the onoff-package, insert the following code (with your correct GPIO number if it's not GPIO21):
const led = new GPIO(21, 'out');
led.writeSync(1);

Also, you want to turn off the light and kill the GPIO-definition when the process exits. Do this by modifying the exit function to look like this:
function exit() {
    led.writeSync(0);
    led.unexport();
    process.exit();
}

That's it. Now our program should turn on the LED properly with a full 3,3v!

To deploy this program to your Pi, go into gulpfile.js and change the hostname, user and password to reflect your Pi's setup. Save the file and in a console, cd to the project directory and write npm install to install the packages needed to develop the app. Don't worry if the Gpio-modules onoff and epoll won't install.

Move files to Pi
If you're on a Mac, running npm run deploy will create a directory on the Pi and copy the files to it using SCP. Sadly, I haven't gotten the copy part to work on Windows yet; there you must run the script inside Cygwin, and you will get a permission denied error. After doing this, the directory is created and you can handle the file copy by running the actual copy command directly in Cygwin: scp app.js package.json piuser@pi-ipnumber:/home/piuser/PiNodeStarterProject. You will be prompted for the user's password and the files are copied onto the Raspberry. Now run npm run modules to install the node modules. When that is done, run npm run app to start the app on the Pi. Hopefully, everything is connected A-OK and the LED lights up.

Note that this app can't be run locally on your computer when the onoff-package is required and called, since it needs access to the pins. If you want to test other logic in your program locally you have to work around that by triggering actions in other ways.
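One possible workaround (my own sketch, not part of the starter project) is to guard the require so a do-nothing GPIO stub is used when onoff can't be loaded locally:

```javascript
// Fall back to a stub GPIO when onoff isn't available on this machine.
let GPIO;
try {
  GPIO = require('onoff').Gpio;   // works on the Pi
} catch (err) {
  // local stub exposing the same methods the app uses
  GPIO = function () {
    return {
      writeSync: function () {},
      watch: function () {},
      unexport: function () {}
    };
  };
}
```

The rest of the app can then call writeSync and watch as usual; on your laptop the calls simply do nothing.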

Time to connect the button!

The arcade button has 4 connectors. The ones on the side are for the LED inside the button. The other two are for the input. To connect the button, you need four jumper wires with one female end. That end is connected to the Pi. The other end needs to be cut and peeled to crimp on a faston connector.


Two 4,8 mm and two 6,3 mm connectors are needed. Since the jumper wires are very thin, it might help to strip off 2 centimetres of insulation so you can fold the copper thread and make it a bit thicker. Make sure you have a good crimping tool (or strong hands).


Connect the button LED
Let's try out the button LED first. Just connect the two wider faston connectors to the connectors on the side of the button. Connect the one on the red side to a 3,3v power pin, and the other to ground. The LED shines! So all we need to do here is the same as we did with the other LED, connect it to a GPIO pin instead of a power pin. The LED has a built in resistor, so you don't have to add any to the circuit.


Connect the button switch
Output pins are easy to configure, but there's a bit more to an input pin. If an input pin is in a floating state, it could behave very randomly. We need a pin that has a pulldown or pullup resistor. Simply put, the pulldown resistor sets the default state of the pin as low and the pullup resistor sets the default state to high. Here, we don't care whether it's high or low, since we only want to listen to the button being pushed and not use the actual state of the button.

The GPIO pins on a Pi can be configured to use pulldown or pullup resistors, but let's just use one of the I2C ones, GPIO2 or GPIO3 since they have built in resistors. Connect the two remaining connectors on the button to the Pi, the lower one to ground and the higher one to GPIO2.


Let's try that out with some Node.js-coding. When we press the button, we want to light the button LED for 1 second. Insert the following code in your app.js file:
const buttonLed = new GPIO(4, 'out');
const buttonPush = new GPIO(2, 'in', 'falling');

buttonLed.writeSync(0);

buttonPush.watch(function(err, value) {
    if (err) throw err;
    console.log("Button pushed, value ", value);
    buttonLed.writeSync(1);
    setTimeout(function() {
        buttonLed.writeSync(0);
    }, 1000);
});

Run npm run deploy to copy the files to your Pi. But instead of doing npm run app, use SSH to connect to your Pi and run the application from there. This will make it possible for you to see the output from the app. cd into /home/piuser/PiNodeStarterProject and write node app.js to start the application. Now push the button and see the LED turn on.

What you will also see is that each push of the button triggers several calls to the button.watch-function. One way to deal with this is to set the button as inactive the first time it's triggered and then activate it again as soon as your button action has been performed.
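A minimal sketch of that inactive/active approach (handlePush and the 1 second timeout are made up for illustration):

```javascript
// Ignore extra watch-triggers while the button action is running.
let buttonActive = true;

function handlePush(action) {
  if (!buttonActive) return false;  // ignore the extra triggers
  buttonActive = false;             // deactivate on the first trigger
  action();
  setTimeout(function () {          // reactivate when the action is done
    buttonActive = true;
  }, 1000);
  return true;
}
```

Call handlePush from inside buttonPush.watch and only the first trigger of each push runs the action.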

And that's basically it. Now you have a button with a built in LED, an extra LED to turn on when the app is running and a function watching for button pushes. What's left to do is decide what you actually want to do with your Big Red Button. And of course assemble the box. If you want to look at a project for inspiration, check out the CatBot that plays around with all these features and connects to Youtube and Twitter. Code is available on http://github.com/asalilje/catbot.

Last step - set up your app to start automatically on the Pi

Once you have your dream app up and running, I'm sure you want to be able to pull the plug on the Pi and get it to start your app automatically when booting next time. Here's how.

Go into your Pi-console and write sudo nano /etc/rc.local. Before the exit-statement, insert:
PATH=$PATH:/usr/local/bin
cd /home/PiUser/YourProject #path to your app
/usr/local/bin/node app.js < /dev/null >/var/tmp/startup.log 2>/var/tmp/startup.err &

This code snippet starts your node-app and logs errors so you can see if anything goes wrong during the startup. Restart your Pi to make sure the app starts.

Good luck, have fun!

Tuesday 9 February 2016

Easy Singletons with CommonJS

Bye bye, singleton wiring and getInstance()!

CommonJS has a nice way of automatically creating singletons out of your modules. It's all in how you write your modules and what you export from them. No more hideous getInstance methods that drown out the actual purpose of the module, sweet! If that's what you want, of course. If not, the behaviour might be a bit confusing... :)

CommonJS and Browserify

To be able to use the same syntax in your browser javascripts as you do in node, using CommonJS-modules with an exports-statement, you have to use a tool like Browserify or Webpack. I'm sure there are others out there too, but these are the ones I'm familiar with. Browserify is very easy to set up for small private projects. I'll try to walk you through it.

Project structure

The structure of this little project is the simplest possible: A source-folder containing Index.html and three js-files. Main.js is the entry point for the javascript source files and uses the colourFetcher.js and colourRepository.js to fetch and display colours. What we want to do is use Browserify to bundle up all the js-dependencies into one file that we can include in Index.html.

Install Browserify

You can find more info about Browserify on browserify.org. You need node on your machine, and then it's a cakewalk to install it:

In your root folder, create a package.json file if you don't already have one. This is done by running the command npm init in the terminal and answering the questions.

Install Browserify by running npm install browserify --save-dev in the terminal. This will install the package and add the dependency to the devDependencies-section of your package.json.

If you want to explore the options available, just type browserify in the terminal and take it from there. What we want to do now is just take all js-files in the src-folder and bundle them into a bundle.js-file placed in a public-folder. Just create a new folder named public under src and in the terminal, run browserify src/*.js -o src/public/bundle.js -d. The first part of the command is the glob-pattern for the files to bundle, the -o is the output location and -d stands for debug and means source maps will be generated. We probably don't want to have to remember this command, so change the script-section in package.json to look like this:
"scripts": {
  "start": "browserify src/*.js -o src/public/bundle.js -d"
}
Now you can handily run the site with the command npm start instead and add the bundle.js as the js-source for your index.html.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Singleton</title>
    <script src="public/bundle.js"></script>
</head>
<body>
</body>
</html>

Add Watchify

To add a nice watch-function that immediately re-bundles your files when they're modified, install Watchify with npm install watchify --save-dev. Modify your package.json again with:
"scripts": {
  "start": "browserify src/*.js -o src/public/bundle.js -d 
     && watchify src/*.js -o src/public/bundle.js -d -v"
}
This will run browserify immediately followed by watchify with the verbose setting on (-v).

Back to the singleton issue

Now, we should have a nice environment up and running for trying out the quirks and wonders of CommonJS. For instance module caching. So let's create some code. First colourRepository.js:
var colourRepository = function () {
    var colours = {
        magenta: "#FF00FF",
        palegreen: "#98FB98",
        chocolate: "#D2691E",
    };

    var list = function (callback, message) {
        if (!message)
            message = "from repo";
        callback(colours, message);
    };

    console.log("new colourrepo");

    return {
        list: list
    };
};
module.exports = colourRepository;
ColourRepository.js just sets up a list of colours and passes them along into the callback function provided by the caller. If there's a message it gets sent back to the callback too, otherwise we add one.

Next up, colourFetcher.js, that requires colourRepository as a dependency and calls it:
var colourRepo = require("./colourRepository")();

var colourFetcher = function() {
    var list = function(callback) {
        colourRepo.list(callback, "from fetcher");
    };

    return {
        list: list
    };
};
module.exports = colourFetcher;
And last, main.js, that requires both of the modules and therefore fetches colour in two different ways:
var colourRepo = require("./colourRepository")();
var colourFetcher = require("./colourFetcher")();

colourRepo.list(listColours);
colourFetcher.list(listColours);

function listColours(colours, message) {
    console.log(message);
    for (var colour in colours) {
        console.log(colours[colour] + '=' + colour);
    }
}
If we run this code, we notice that the console logs "new colourrepo" twice. Once when the repo is required in main.js and once when it's required from colourFetcher.js. This is because we're exporting colourRepository as a function. When we require it, we call the function at the same time using var colourRepo = require("./colourRepository")();. No caching, no singleton.

So, what if we change the code in a couple of places? Instead of exporting a function in colourRepository.js, we change the last line to module.exports = colourRepository();. The exports-statement now returns the called function when the module is loaded. When we require the module in main.js and colourFetcher.js, we can now remove the call to that function: var colourRepo = require("./colourRepository");. As the code is run, the console only logs "new colourrepo" once, and with the tiniest of effort we've turned our colour repository into a singleton. :)

Summary

Modules in CommonJS are cached after the first time they're loaded. This means that require("colourRepository") will return the same object everywhere, as long as the require resolves to the same file. If this is not the wanted behaviour, the exports-statement of the CommonJS-module should return a function instead, and the calling script must call that function.

Thursday 4 February 2016

Writing a pure javascript repository pattern

My javascripting needs work!

I suffer from javascriptus horribilis. I don't know why all my knowledge of object oriented programming and patterns disappears when I write javascript. It feels like I'm coding myself into a corner all the time. But I'm trying to improve. One step was to try and mix a bit of factory and repository thinking into my javascripts. Don't you just feel the weight of the Gang of Four-book sweeping in by now?


Basic train of thought

So the idea is this; think layers in javascript. That sounds like a very non-hipster thing I guess, stiff and overly complicated, but if we think 'components' or 'modules' instead it gets more hip by the minute. What I wanted to do was basically:
  • The calling component calls a repository, with a callback to trigger when the data returns.
  • A repository factory provides the available repositories to the calling component.
  • The repository exposes public methods for CRUD, provides data from some source and calls the callback provided from the calling component. Whether the source is static content, an external service or whatnot is not important, and the calling component doesn't have to know anything about that.
Code is available on Github. I like the Node-way of requiring modules, so I use Browserify to be able to write my code in commonjs-style.


A CRUD Repository

Let's start with creating a basic User Repository, userRepository.js.
var userRepository = function() {

    var get = function(id, callback) {
        callback();
    };

    var list = function(callback) {
        callback();
    };

    var save = function(user, callback) {
        callback();
    };

    return {
        get: get,
        list: list,
        save: save
    };
};

module.exports = userRepository();
The code executes when it is required (module.exports = userRepository()) and the methods get, list and save are exposed. Now we can require and call the repository from our main.js file:
var userRepo = require("./userRepository");
userRepo.get(1, getUser);

function getUser() {
    console.log("got the user");
}
This code requires the user repository and assigns it to the variable userRepo. Now we can call the get-method and send along the callback function that we want the user repository to execute when the data is fetched. For now, the callback just logs a message.


Implementing the CRUD-methods

Now, what the repo actually does can of course be whatever. Fetch data from an array in the repo, from external or internal services or from text files. I just fake a service by fetching the content of a json-file, users.json.
[
  { "id":1, "name":"User Number One" },
  { "id":2, "name":"User Number Two" },
  { "id":3, "name":"User Number Three" },
  { "id":4, "name":"User Number Four" }
]
To fetch these, I want to make an Ajax-call using a promise. Granted, promises are not supported everywhere, for example in IE, but there are nice polyfills out there that will do the trick. The nice thing about promises is that you can chain methods together and the catch-clause catches errors in all of the then-clauses. In userRepository.js I extend my get-implementation to this:
var get = function(id, callback) {
    ajax.makeRequest('GET', 'users.json')
        .then(function (data) {
            var users = JSON.parse(data);
            var user = users.filter(function(user) {
                return user.id === id;
            });
            if (user.length > 0)
                user = user[0];
            callback(user);
        })
        .catch(function (err) {
            console.error('Ouch, there was an error!', 
            err.statusText);
        });
};
I make a request to users.json, and when the promise resolves I parse the data, filter the users by id and return the matching user. Yeah, not the most efficient way to get a user, agreed, but it's there to show that there's a middle layer between the actual Ajax request and the component needing the data. :)
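As a side note: the filter-then-pick-the-first dance can be shortened with Array.prototype.find. It's ES6 though, so like Promise it may need a polyfill in older browsers:

```javascript
var users = [
    { "id": 1, "name": "User Number One" },
    { "id": 2, "name": "User Number Two" }
];

// find returns the first match directly (or undefined),
// no filtering and length-checking needed
var user = users.find(function(user) {
    return user.id === 2;
});
// user is { "id": 2, "name": "User Number Two" }
```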

When the user is filtered and ready, the callback is executed, sending along the user. The Ajax component used in the repo looks like this; it's outside the scope of this post and not really important:
var ajax = function () {

    var createParams = function (params) {
        if (params && typeof params === 'object') {
            params = Object.keys(params).map(function (key) {
                return encodeURIComponent(key) + '=' 
                  + encodeURIComponent(params[key]);
            }).join('&');
        }
        return params;
    };

    var makeRequest = function (method, url, params) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open(method, url);
            xhr.onload = function () {
                if (this.status >= 200 && this.status < 300) {
                    resolve(xhr.response);
                }
                else {
                    reject({
                        status: this.status,
                        statusText: xhr.statusText
                    });
                }
            };
            xhr.onerror = function () {
                reject({
                    status: this.status,
                    statusText: xhr.statusText
                });
            };
            if (params) {
                params = createParams(params);
            }
            xhr.send(params);
        });
    };

    return {
        makeRequest: makeRequest
    }
};

module.exports = ajax();
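One detail worth calling out is createParams, which URL-encodes a plain object into a query/body string and passes strings through untouched. A standalone copy shows what it produces:

```javascript
// standalone copy of createParams from the ajax module
var createParams = function (params) {
    if (params && typeof params === 'object') {
        params = Object.keys(params).map(function (key) {
            return encodeURIComponent(key) + '='
              + encodeURIComponent(params[key]);
        }).join('&');
    }
    return params;
};

var query = createParams({ name: "User One", page: 2 });
// query === "name=User%20One&page=2"
```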


A repository factory, just because factories are cool

After writing a few more repositories, the code in main.js still looks quite nice, but there are a lot of different repositories being required and assigned to variables.
var dom = require("./domManager");
var userRepo = require("./userRepository");
var catRepo = require("./catRepository");
var colourRepo = require("./colourRepository");

userRepo.list(listUsers);
catRepo.list(listCats);
colourRepo.list(listColours);
colourRepo.get("magenta", showColour);

function listUsers(users) {
  users.forEach(function(user) {
      dom(".userList").addListItem(user.name, user.id);
  });
}

function listCats(cats) {
  var catsWithImages = cats.filter(function(cat) {
      return "image" in cat;
  });

  catsWithImages.forEach(function(cat) {
      dom(".catList").addHtml(
       "<div><img src='"+cat.image+"'></div>"
      );
  });
}

function listColours(cols) {
  for (var col in cols) {
      dom(".colourList").addHtml(
       "<div><b style='color:"+cols[col]+"'>"+col+"</b></div>"
      );
  }
}

function showColour(hex) {
  dom(".bestColour").element.style.backgroundColor = hex;
}
The repository factory to the rescue! The easy-breezy repositoryFactory.js takes care of the plumbing:
var repositoryFactory = function() {
    var repos = this;
    var repositories = [
      {name: "users", source: require("./userRepository")},
      {name: "cats", source: require("./catRepository")},
      {name: "colours", source: require("./colourRepository")}
    ];

    repositories.forEach(function(repo) {
       repos[repo.name] = repo.source;
    });
};

module.exports = new repositoryFactory();
The factory contains an array with all available repositories. When it is required, it loops through the repos in the array, requires all of them and assigns them to properties on 'this'. Since they execute when they are required, they're all exposing their public methods and are ready to be used. Neat and tidy. And now we can do this in main.js:
var repos = require("./repositoryFactory");

repos.users.list(listUsers);
repos.cats.list(listCats);
repos.colours.list(listColours);
repos.colours.get("magenta", showColour);
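A nice side effect of the factory is that swapping a data source is a one-line change in the repositories array. Here's a self-contained sketch of the pattern, with inline objects standing in for the require() calls:

```javascript
// inline "modules" standing in for require("./userRepository") etc.
var realUserRepo = { list: function(callback) { callback(["a real user"]); } };
var mockUserRepo = { list: function(callback) { callback(["a mocked user"]); } };

var repositoryFactory = function() {
    var repos = this;
    var repositories = [
        // point source at mockUserRepo during development,
        // at realUserRepo otherwise - main.js never notices
        { name: "users", source: mockUserRepo }
    ];

    repositories.forEach(function(repo) {
        repos[repo.name] = repo.source;
    });
};

var repos = new repositoryFactory();
repos.users.list(function(users) {
    console.log(users); // ["a mocked user"]
});
```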

Wednesday 13 January 2016

Writing a touch enabled responsive slideshow in pure javascript

The issue at hand

I've kind of started to enjoy JavaScript and even CSS lately, and decided to try to write a slideshow in pure JavaScript, no frameworks or frills involved. We use slideshows all the time on different web sites, be it good or bad, but I'm always resorting to some kind of component or framework. And there are plenty of good ones out there, doing lovely stuff, but since I'm not that well versed in advanced JavaScript, I can't really read the code and understand what's actually happening.

Another issue is of course that these awesome frameworks are generic and cover all possible options and devices out there, which in all honesty makes the code a bit bloated. When we started to write our own code at work, I realised it can actually be done. I managed to get the JS part down to 250 lines, yay! Demo is here. Code is found on GitHub.

What I wanted to solve

  • It should be written in pure JavaScript, with no frameworks or mysteries.
  • It must be responsive.
  • It must handle touch events.
  • It must be possible to have several slideshows on the same page, when the slideshow mania hits.
  • It must be able to handle both looping and non-looping slideshows.
  • And just for fun, it must work for people not using JavaScript. Because you never know, JavaScript might disappear, right?

What I didn't care about

It's not like this slideshow is going to take over the world, so I'm not too worried about:
  • Old browsers. Basically, if it works on my machine, a Macbook with OSX and Windows 10, with latest versions of Safari, Chrome, Firefox and Edge on it, I'm happy.
  • Testing on real life phones or tablets. If it works on my Nexus 6p with Chrome, I'm happy.
Which means there's room for improvement, to say the least. :)

Basic idea

The idea behind this slideshow is simple; an outer container holding a number of slides laid out side by side, by setting them as inline-blocks and turning off whitespace wrapping on the outer container. View code at Codepen.

To remove the annoying whitespace gap (typically around 4 pixels) between inline-blocks, there are a number of tricks; I set font-size to 0 on the outer container, which works fine. That's basically it. Now we need some smart behaviour to turn this into a slideshow.

No javascript

This case is easily handled with the markup above. All we do is set the outer container to overflow-x: auto, which makes it horizontally scrollable. So if the class 'no-js' is present on an outer container, we change the CSS of the slideshow: View Codepen. Now we have the world's simplest slideshow, neat. :)

No touch - desktop browser

For desktop browsers, we'll make the outer container hide its overflow and present buttons that move the slides back and forth. There are a number of ways to do the scrolling, but I went with CSS transforms and transitions. View Codepen.

Touch - phones and tablets

For touch devices, we want to still present the buttons, but also make it possible to swipe between slides. For this, there are four events we have to listen to:
  • touchstart - a touch has been detected. We save the pixel position and timestamp.
  • touchmove - the finger is moving across the screen. We want the slide to move with the finger.
  • touchend - the touch ends. We check the position and move to the next or previous slide, or just stay where we are.
  • touchcancel - the touch moves outside the area. We cancel the ongoing touch and set the start position to null.
We'll use CSS transforms again to move the slide along with the finger, but this time without the transition. View Codepen.

For the touchend handler, we want to move to another slide if the touch has been really quick, like a flick, or if the touch has passed a certain percent of the slide. View Codepen.
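The decision in the touchend handler boils down to two numbers: how long the touch lasted and how far it travelled. A hypothetical helper (the names and thresholds below are mine, not taken from the actual code) could look like this:

```javascript
// decide what to do when a touch ends: go to the next or previous
// slide if it was a quick flick, or if the drag passed a percentage
// of the slide width - otherwise snap back and stay
function resolveSwipe(deltaX, elapsedMs, slideWidth) {
    var isFlick = elapsedMs < 300 && Math.abs(deltaX) > 20;
    var passedThreshold = Math.abs(deltaX) > slideWidth * 0.35;

    if (isFlick || passedThreshold) {
        return deltaX < 0 ? "next" : "previous";
    }
    return "stay";
}

resolveSwipe(-250, 600, 400); // dragged over a third of the width -> "next"
resolveSwipe(-30, 120, 400);  // quick flick -> "next"
resolveSwipe(40, 500, 400);   // slow, short drag -> "stay"
```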

Not much more to it, except handling touch details like cancelling vertical swiping and handling what to do when you've reached the end of the slides.

Looping

How to handle looping? Well, I've just added an extra first and last item to the DOM so when we loop we use the extra slides that are placed at the end of the array. Once the transition is done we do another transform without transition effect so the array of slides actually ends up at the "real" first and last item. Confusing? View the code on Github. :)
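In other words: with the clones in place, the DOM holds [clone of last, slide 1, …, slide N, clone of first], and whenever a transition lands on a clone we snap (without transition) to the real slide it mirrors. The wrap logic can be sketched like this (a simplification, not the exact code from the repo):

```javascript
// index 0 is the clone of the last slide,
// index slideCount + 1 is the clone of the first slide
function realIndex(index, slideCount) {
    if (index === 0) return slideCount;      // snap to the real last slide
    if (index === slideCount + 1) return 1;  // snap to the real first slide
    return index;                            // already on a real slide
}
```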

Feature checking without Modernizr

Modernizr is an excellent framework for detecting which features a browser has, but since I'm not using any frameworks we'll do it ourselves. In the HTML file, I set a class named "no-js" on the <html> tag. Further down, I remove it using JavaScript. If JavaScript is not enabled, the class stays. Easy! For the touch/no-touch check I use another small snippet of code. View Codepen.
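As a rough sketch (assumed names, not the exact code from the repo), the two checks could look like this:

```javascript
// the no-js class stays if this script never runs,
// so the CSS fallback kicks in automatically
function stripNoJs(className) {
    return className.replace(/\bno-js\b/, 'js');
}
// in the page:
// document.documentElement.className =
//     stripNoJs(document.documentElement.className);

// a common (if not bulletproof) touch check,
// taking the window object as an argument
function isTouchDevice(win) {
    return 'ontouchstart' in win;
}
```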