Thursday, 15 June 2017

Mob programming for managers

I could give you many, many reasons as to why mob programming is a great way of working. I've practiced it full time for two years now, at two different companies, and I really see no reason to go back to working alone. Together with my two colleagues Håkan Alexander and John Magnusson, I've been speaking about the subject at more than a dozen companies in Stockholm and at a couple of conferences. In short, we're into it. Big time.

At my current assignment, SEB, one of the biggest banks in Sweden, we've been mob programming since I came into the project. In fact, most of the issues I encounter regarding poor quality and late deliveries, I strongly feel can be helped by mob programming.

Of course, working this way, with four or more developers sitting around one computer, can trigger a few questions. Is it really efficient? How much does every line of code cost? Isn't one person doing all the work while the others just sit around looking at their phones? Will management allow it?

Views on mob programming

To be honest, when we're out speaking about mob programming, managers are almost always positive. We speak about better quality, faster deliveries, better throughput, less time spent on fixing bugs.

Developers are more skeptical, especially the senior ones. Some feel that their work is too complex and that they need to solve problems undisturbed and alone.

Junior developers are often very positive, though. Think of the things they could learn sitting together with the senior developers, instead of struggling alone through legacy systems where both the technology and the domain are uncharted territory for them!

It's not that strange, though, that managers encourage it while developers resist. For managers, it won't change their day-to-day job, and it's always easy to encourage someone else to change their ways. For developers, on the other hand, it deeply affects their everyday work.

SEB leader day plans

One day, our lovely agile coach Anna Borgerot came by at SEB and asked me if I wanted to help arrange the SEB IT Leader Day. 120 leaders within the IT organization from Sweden and Lithuania would meet up in Stockholm for a whole day with the theme of Learning. They had come up with the idea of letting the leaders do some coding, inspired by the Hour of Code-movement.

Anna loved the way we were mob programming (she said it warmed her heart to see us :) ) and thought that would be a great way to inspire everyone at the event to actually sit in front of the computer and do some coding, even though they might never have done it before.

So naturally, I said yes. Such a great opportunity to spread the mob programming gospel and to actually observe how people react when faced with a new team, a new way of working and a task way outside of their comfort zone. My view is that this is the natural way of solving problems for us, but once we head out into the work life, we're supposed to be efficient and go it alone. And - surprise - one mind does not think as well as four.

The coding task

To begin with, we realised it could not be just me managing the two-hour slot of introducing mob programming, coding and reflecting. Three more developers from SEB were asked to help: Andreas Frigge, Andreas Berggren and Magnus Crafoord. We thought about what the actual coding exercise should be and settled on Minecraft Designer, a block-based application consisting of 12 steps of different tasks, with short movies in between explaining the coding concepts.

We all tried it and decided it would be something that would work for everyone, coding experience or not. The recognition of coding Minecraft was nice as well, something they might do at home with their kids later.

We also gave them the actual task: when they were done with the 12 mandatory steps, we wanted them to build their own game using what they had learned. There were some requirements, but quite vague ones. So we gave them a fixed deadline of one hour, a new way of working that they hadn't chosen themselves, and vague requirements. Totally realistic, in other words!

Dress rehearsal

In order to see if our idea for the two hours given to us would work out, we did a dress rehearsal two weeks in advance. Anna found 12 willing test pilots to help us, which was incredibly helpful! We learned that my mob programming introduction had to be geared more towards the upcoming coding task, that we had to steer the division into teams better, that the written instructions for setting up the timer and the actual coding had to be much clearer, and that the screens, keyboards and mice at each station had to be checked. Running through it with real non-coding people led us to make small changes to almost everything.

We also noticed something else. They were laughing, pointing, discussing and creating stuff. Everyone participated. We started to feel quite good about the upcoming big day.

The leader day event

When the IT leader day finally arrived, we had the following schedule:
  • Intro to mob programming, 15 minutes.
  • Divide into teams, 4 at each table, 10 minutes. We took care to ensure they didn't work with the people they normally work with.
  • Setup the mob timer and programming environment, 10 minutes. Everyone had their own computers and we had 30 tables with screen, keyboard and mouse. We also asked them to use cool hacker names in the timer, which turned out to be a fun task that got the energy going in the room.
  • The coding task, 60 minutes.
  • Reflection in the team, 10 minutes. We had prepared a sheet of questions to help them.
  • Joint reflection, pass the mic, 10 minutes.

Reflections from my side during the actual event were these: everyone coded. They followed the timer that was set at 7 minutes. They laughed. They were active. They were loudly discussing the problems, solving all the tasks together. As I was walking the room, it was obvious how natural and powerful this way of working was.

At the joint reflection afterwards, one of the participants expressed that he was surprised that they had actually managed to solve this task and it was all due to working together. Another said that she actually felt she participated more when being a navigator than when being a driver. Great reflections, and so true!

More about the event can be found on SEB's website.

Comments afterwards

Getting the written opinions on the mob programming session a couple of days after the event was truly awesome:
  • Mob programming is DA SHIT!
  • Mob programming – WOW!
  • Interesting interaction!
  • Fun to do some programming that also got you to think of ways of working.
  • Good to focus on development and IT-competence.
  • Inspirational to hear about the mob programming method.
  • Great with mob programming (outside my comfort zone which is good for me to be!)
  • The introduction to Mob programming was the best – loved the simplicity and clarity.
  • MOB - great way of working - will try that in my department.
  • Loved the mob programming!
  • Fun/useful to try mob programming.
  • An extremely powerful way of solving problems!
I can't be anything but happy about those comments. The week after, I also started getting bookings in my Outlook calendar from managers wanting me to speak about mob programming in their departments. So yay, great success!

Will SEB start mob programming everywhere now?

Mob programming is something that I personally am very passionate about. But one thing to watch out for of course is this: no one wants to be told how to do their work. The way a team works must come from within the team. Inspiration is great, trying different things is great, but it has to be a team decision.

Showing IT leaders that mob programming is a good way of working mainly achieves this: it might remove any future obstacle of managers thinking it's a waste of money and time. It might give teams the opportunity and possibility to try it out. It might help managers embrace that not all has to be done according to the standard process and beliefs. Hopefully in the end, some teams will get inspired to try it and see the benefits!

Sunday, 12 February 2017

Build an info station using Adafruit Feather

Everyone needs an info station. Press a button to quickly get the information you want! This project uses a Feather Huzzah with Wifi, an arcade button and an OLED i2C display.

What does it do?

  • On startup, the info station connects to your WiFi.
  • It displays a message: 'please press button'.
  • When pressed, the API of your choice is called, the response is parsed and displayed.
  • After a given duration, the display goes back to showing the message: 'please press button'.

In my case, I have a bus stop outside my building. When I press the button, it fetches the real time data showing when the next buses are due. The info updates once a minute for 5 minutes, then goes back into sleep mode. The reason I don't show the data all the time is that the API can only be called a limited number of times every month.

Step 1 - Prettify the button

The arcade button I bought looks nice, but I felt it would look even nicer with an LED light inside. Since a regular LED would be hard to fit in there, I decided to go with an LED sequin instead. I usually use these for wearables, but the sequin is very easy to work with: one end connects to voltage and the other to ground. So start by soldering two wires onto the sequin. Make sure you use different colors for the wiring so you later know which is plus and which is minus.


To test the wiring, just connect your Feather to a computer using a micro USB cable and hold the ends of the sequin wires to the pins for 3V(+) and GND(-). The sequin should light up.

Use a small screwdriver to pry open the button by pressing the clips on both sides. Glue the LED sequin to the inside of the actuator so it will shine through the white plastic. Carefully put the button together again, pulling the wires out through the side slits without damaging the solder joints or wires. Test the sequin again using the Feather. A lovely shiny button! Who could possibly resist pressing it?

Step 2 - Solder pin headers into the Feather pads

In all Arduino and Raspberry Pi projects, one thing to remember is to always test the components before soldering. The easiest way to do that is by using a breadboard. If the pins are just pads, like on the Feather, I usually solder pin headers into them so I can plug everything into a breadboard and try out connections and code.

The Feather either comes with pre-soldered headers, or with a set of headers that you can solder yourself. There's no need to solder all of the pins. The ones used for this project are 3V, GND, GPIO2, GPIO4 and GPIO5. When you solder the headers, plug the long end of the pins in the header strip into the breadboard, place the Feather over the pins and solder the short end of the pin that's poking up through the pad. Now you can connect jumper wires and test out your connections to other components.

Step 3 - Test the Feather

To use the Feather Huzzah, we need to install the ESP8266 board package in the Arduino IDE. Under Preferences >> Additional Boards Manager URLs, add the URL for the ESP8266 package index. Next, use the Boards Manager to install the ESP8266 package.

Restart the IDE and you should now be able to select the board Adafruit HUZZAH ESP8266 in the boards list.

Select the correct USB serial port under Ports and connect the Feather using a micro USB cable. Open a new sketch and insert the following code:
  void setup() {
    pinMode(0, OUTPUT);
  }

  void loop() {
    digitalWrite(0, HIGH);
    delay(500);
    digitalWrite(0, LOW);
    delay(500);
  }
The sketch blinks the built-in red LED on GPIO0 every 500 ms. Save the sketch and press Upload to upload it to your Feather. If you have trouble connecting to the Feather, it can be due to a faulty USB cable (it has to be able to transfer data, not just charge) or issues with discovering the correct serial port. I have one USB cable that I know works well, and many that just won't connect my boards. If your LED blinks on your first attempt, congratulations! :)

Step 4 - Connect and test the button

To connect and try out the button, solder wires onto the gold plated connectors of the button. Use shrinking tube to cover the joints.

Strip and tin about 5 mm at the other end of the wires so you can push them into the breadboard. Connect one wire from the button to GND on the Feather and the other to GPIO2.

In the Arduino IDE, find the example Button under Examples >> 02.Digital. The example lights up an LED when a button is pressed. Most boards have a built-in LED you can use; on the Feather, the built-in red LED is on GPIO0. So change the sketch to use 0 for the LED pin, upload the sketch to the Feather, and make sure the Feather detects the state changes when you press the button.

Step 5 - Connect and test the display

The display I chose is an i2c OLED display with pre-soldered headers and 4 pins, very easy to work with. SPI displays are generally a bit faster but need more pins. Some microcontrollers are better suited for SPI, and some displays need a bit of tinkering to use i2c. Both work fine; it's just a matter of changing the wiring and the number of pins.

Plug the display into the breadboard next to the Feather using the headers. Using male to male jumper wires, connect VCC to 3V, GND to GND, SCL to GPIO5 and SDA to GPIO4.

To communicate with the OLED display we need to install the library Adafruit SSD1306. Go to the Github repo and download a zip-file of the repo. Unpack it, rename it Adafruit_SSD1306 and place the folder in your Arduino/libraries/-folder. If this is the first library, you might have to create the folder libraries. Then do the same with the Adafruit GFX Library. This folder should be named Adafruit_GFX and placed in the same libraries-folder as SSD1306.

Restart the IDE and open File>>Examples. You should now have access to the Adafruit SSD1306-examples.

Pick the example corresponding to your display, in my case the 128x64 i2c one. Since my display does not have a RESET connector, I change OLED_RESET to -1. I also change the initiation of the display to use the correct i2c address, 0x3C. To find out the i2c address of a device, you can use the i2c scanner from Arduino Playground.
  #define OLED_RESET -1
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  
Now upload the sketch to your Feather. Hopefully you have a working connection between the two components, and a working display.

Step 6 - The code

Now that all the parts are working and connected to each other, it's time to try the actual code. My example code is available online. In order to get that exact code to work, you need to register for an API key with the traffic data provider, and add your WiFi SSID and password. You also need to add the ArduinoJson library to the Arduino IDE. But I'm sure the buses at my stop are really irrelevant to you, so do whatever you want here. There are lots of fun APIs to play around with. :)

I chose to do the JSON-parsing on my Feather. In retrospect, I should have built a Node API on a Raspberry Pi that called the external API and fetched the nicely parsed data from there instead.

Whatever you choose to do, make sure your application works as expected before you start to solder and encase the components.

Step 7 - Putting it all together

Think carefully before you start to put all the components together. It's a good idea to solder one component at a time and check after every step that everything still works as expected. Nothing is worse than soldering everything at once and then discovering it doesn't work, with no clue as to where it went wrong. Trust me, I've been there...

Since the button snaps into a hole from the top down, it needs to be mounted before the wires are attached to the Feather. I used a simple small cardboard with a top lid and mounted the button first. Then I soldered the button wires to GND and GPIO2, and the LED sequin wires to GND and 3V. Button done, yay!

I almost always keep the headers when I'm soldering components together, since I think it's easier to get right than soldering wires directly into the pads. I solder the wires onto the pins and then use shrinking tube to cover both joints and pins. Heating shrinking tube with a hair dryer works perfectly!

For the display, solder VCC to 3V, GND to GND, SCL to GPIO5 and SDA to GPIO4. As you've noticed, GND and 3V on the Feather are connected to multiple components, so you might want to twist those wires together and tin them into one before soldering them onto the Feather pin.

That's basically it! Mount your info station on the wall where you need access to your quick info and enjoy the seconds you save by not having to get exactly the same information on your phone. :)

Tuesday, 11 October 2016

Handling mocks and environment variables in JS apps through Webpack

When JS-apps need different variables in production and locally, one simple way to solve that is by using Webpack. Say you work with an app that calls an API using code similar to this:
DoRequest("GET", "")
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    // handle the error
  });
We want to be able to use a variable instead of the hardcoded API URL. Using different Webpack configs for dev and prod makes this an easy task.

Setting up Webpack's DefinePlugin

Take a simple Webpack config for a React application, like the following:
var webpack = require('webpack');
var path = require('path');

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css'] }
    ]
  }
};

module.exports = config;
Let's presume we have completely different Webpack configs for dev and prod. First we add a global config-object at the top of the file:
var GLOBALS = {
  'config': {
    'apiUrl': JSON.stringify('')
  }
};
Don't forget to stringify! Then we add a new plugin in the config section:
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
And now we can use the variable in our application:
DoRequest("GET", config.apiUrl)
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    // handle the error
  });

Adding a mock API

Using this approach, it's very easy to set up a way to temporarily use a mock instead of a real API. This is a great help during development if the API in question is being developed at the same time. Or if you're working on the train without WiFi. :)

I like to use NPM tasks for my build tasks, in those cases where a task runner like Grunt or Gulp is not really needed. My NPM tasks in package.json typically look something like this:
  "scripts": {
    "build:dev": "npm run clean-dist && npm run copy && npm run webpack:dev",
    "webpack:dev": "webpack --config -w",
    "build:prod": "npm run clean-dist && npm run copy && npm run webpack:prod",
    "webpack:prod": "webpack --config",
    "clean-dist": "node_modules/.bin/rimraf ./dist && mkdir dist",
    "copy": "npm run copy-html && npm run copy-mock",
    "copy-html": "cp ./src/index.html ./dist/index.html",
    "copy-mock": "cp ./mockapi/*.* ./dist/"
  }
Now, to add a build:mock-task to use a mock instead of the real API, let's start by adding two tasks in package.json.
"build:mock": "npm run clean-dist && npm run copy && npm run webpack:mock",
"webpack:mock": "webpack --config -w -mock",
Build:mock does the same as the ordinary build:dev task, but calls webpack:mock instead. Webpack:mock adds the flag -mock to the Webpack command. Arguments to Webpack are captured using process.argv, so we just add a line of code at the top of the Webpack config to catch it:
var isMock = process.argv.indexOf('-mock') > 0;
Now we can change the GLOBALS config-object accordingly. The resulting Webpack config looks like this:
var webpack = require('webpack');
var path = require('path');

var isMock = process.argv.indexOf('-mock') > 0;

var GLOBALS = {
  'config': {
    'apiUrl': isMock
      ? JSON.stringify('./mock-swag.json')
      : JSON.stringify('')
  }
};

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css'] }
    ]
  }
};

module.exports = config;
The mock is nothing more advanced than a JSON-blob with the same structure as your API:
{
  "swaglist": [
    {
      "thing": "Cats",
      "reason": "Because they're on Youtube."
    },
    {
      "thing": "Unicorns",
      "reason": "Because it's true they exist."
    },
    {
      "thing": "Raspberry Pi",
      "reason": "Because you can build stuff with them."
    },
    {
      "thing": "Cheese",
      "reason": "Because it's very tasty."
    }
  ]
}
Now, run the build:mock-task and let the API-developers struggle with their stuff without being bothered. :)

Monday, 26 September 2016

Building a faceted search using Redis and - part 4: Using Redis in an MVC-app

There are a number of .Net clients available as NuGet packages. I've chosen StackExchange.Redis. It maps well against the commands available in the Redis client, it has good documentation and, well, Stack Overflow uses it, so it really ought to cover my needs... And of course, it is free.

The demo web for the faceted search and its source code are both available online.

Connecting to Redis

Once the StackExchange.Redis NuGet package is installed in the .Net solution, we can try a simple Redis query. We want all hotels that have one star, i.e. all members of the set Stars:1:Hotels.
  var connection = ConnectionMultiplexer.Connect("redishost");
  var db = connection.GetDatabase();
  var list = db.SetMembers("Stars:1:Hotels");
The list returned contains the JSON blobs we stored for each hotel, so we need to deserialize each one to a C# entity using Newtonsoft.Json.
  var hotels = list.Select((x, i) =>
  {
    var hotel = JsonConvert.DeserializeObject<Hotel>(x);
    hotel.Index = i;
    return hotel;
  });
Now, the ConnectionMultiplexer is the central object of this Redis client. It is expensive to create, does a lot of work hiding away the inner workings of talking to multiple servers, and is completely thread safe. It is designed to be shared and reused between callers, and should not be created per call as in the code above.

The database object that you get from the multiplexer is a cheap pass through object on the other hand. It does not need to be stored, and it is your access to all parts of the Redis API. One way to handle this is to wrap the connection and Redis calls in a class that uses lazy loading to create the connection.
  private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
    new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("redishost"));

  private static ConnectionMultiplexer Connection => LazyConnection.Value;

  private static IDatabase GetDb() {
    return Connection.GetDatabase();
  }

  public static string GetString(string key) {
    return GetDb().StringGet(key);
  }

Fine tuning the queries

Let's return to the concepts from the earlier parts of this blog series: combinations of sets. Say we want to get all hotels in Germany that have a bar. Just send in an array of the keys that should be intersected.
  var db = GetDb();
  return db.SetCombine(SetOperation.Intersect,
    new[] { "Countries:1:Hotels", "Bar:False" });
The chosen keys within the same category should be unioned before they are intersected with another category. As we did before, we union them and store the result in the db, to be able to do the intersection directly in Redis. In this case, we also send in the name of the new key to store, compounded from the data it contains.
  var db = GetDb();
  db.SetCombineAndStore(SetOperation.Union, "Countries:1:Countries:2:Hotels",
    new[] { "Countries:1:Hotels", "Countries:2:Hotels" });
  return db.SetCombine(SetOperation.Intersect,
    new[] { "Countries:1:Countries:2:Hotels", "Bar:False" });
If we want to sort the list according to an external key, we just add the by-parameter to the Sort command, pointing to the correct key using the asterisk pattern.
  var db = GetDb();
  return db.Sort("Countries:1:Hotels", by: "SortByPrice_*", get: new RedisValue[] { "*" });

Putting it all together

Now we have the Redis concepts, the data modelling and the Redis client in place, and the rest is basically just putting things together. The filtering buttons are created dynamically according to what options are available in the db. Each time a filter or sorting option is clicked, or a slider is pulled, a JavaScript event creates a URL based on which buttons are chosen.

The call goes via AJAX to the MVC-app that does all the filtering using unions and intersections, fetches and sorts the final list, and disables or enables any affected filter buttons.

All this, as you know, can be done in a number of ways. If you need inspiration or some coding examples, take a look at the code. :)

Friday, 23 September 2016

Leader Election with Consul.Net

Microservices are great and all that, but you know those old-fashioned batch services, like a data processing service or a cache loader that should run at regular intervals? They're still around. These kinds of services often end up on one machine, where they keep running their batch jobs until someone notices they've stopped working. Maybe a machine that serves both stage and production purposes, or maybe the service doesn't even run in stage because no one can be bothered; it's easier to just copy the database from production.

But we can do better, right? One way to solve this is to deploy the service to multiple machines, as you would with a web application: use Octopus, deploy the package, install and start the service, then promote the same package to production, doing config transforms along the way. The problem then is that we have a service running on multiple machines, doing the same job multiple times. That's unnecessary and, if a third-party API is involved, probably unwanted.

Leader election to the rescue

Leader election is really quite a simple concept. The service nodes register against a host using a specific common key. One of the nodes is elected leader and performs the job, while the others are idle. The lock is held by a specific node for as long as that node's session remains in the host's store. When the session is gone, the leadership is open for the taking by the next node that checks for it. Every time the nodes are scheduled to run their task, this check is performed.

Using this approach, we have one node doing the job while the others are standing by. At the same time, we get rid of our single point of failure. If a node goes down, another will take over. And we can incorporate this in our ordinary build chain and treat these services like we do with other types of applications. Big win!

An example with

Consul is a tool for handling services in your infrastructure. It's good at doing many things, and you can read all about it on the Consul website. Consul is installed as an agent on your servers, which syncs with one or many hosts, but you can also run it locally to try it out.

Running Consul locally

To play around with Consul, download it, unpack it, and create a new config file in the extracted folder. Name the file local_config.json and paste in the config below.
{
    "log_level": "TRACE",
    "bind_addr": "",
    "server": true,
    "bootstrap": true,
    "acl_datacenter": "dc1",
    "acl_master_token": "yep",
    "acl_default_policy": "allow",
    "leave_on_terminate": true
}
This will allow you to run Consul and see the logs of calls coming in. Run it by opening a command prompt, moving to the extracted folder and typing:
consul.exe agent -dev -config-file local_config.json

The .Net client

For a .Net solution, a nice client is available as a NuGet package. With it, we just create a ConsulClient and get access to all the APIs provided by Consul. For leader election, we need the different lock methods in the client. Basically, CreateLock creates the node session in Consul, AcquireLock tries to assume leadership if no leader exists, and the session property IsHeld is true if the node is elected leader and should do the job.
var consulClient = new ConsulClient();
var session = consulClient.CreateLock(serviceKey);
await session.AcquireLock();
if (session.IsHeld)
{
    // This node is the leader, so do the job.
}

A demo service

Here's a small service running a timer that fires every 3 seconds. On construction, the service instance creates a session in Consul. Every time the CallTime function is triggered, we check if we hold the lock. If we do, we display the time; otherwise we print "Not the leader". When the service is stopped, we destroy the session so the other nodes won't have to wait for the session TTL to expire.
using System;
using System.Threading.Tasks;
using Consul;
using Topshelf;
using Timer = System.Timers.Timer;

namespace ClockService
{
    class Program
    {
        static void Main(string[] args)
        {
            HostFactory.Run(x =>
            {
                x.Service<Clock>(s =>
                {
                    s.ConstructUsing(name => new Clock());
                    s.WhenStarted(c => c.Start());
                    s.WhenStopped(c => c.Stop());
                });
            });
        }
    }

    class Clock
    {
        private readonly Timer _timer;
        private readonly IDistributedLock _session;

        public Clock()
        {
            var consulClient = new ConsulClient();
            _session = consulClient.CreateLock("service/clock");
            _timer = new Timer(3000);
            _timer.Elapsed += (sender, eventArgs) => CallTime();
        }

        private void CallTime()
        {
            Task.Run(async () =>
            {
                // Try to take the leadership if it's free, then report status.
                await _session.AcquireLock();
                Console.WriteLine(_session.IsHeld
                    ? $"It is {DateTime.Now}"
                    : "Not the leader");
            });
        }

        public void Start()
        {
            _timer.Start();
        }

        public void Stop()
        {
            _timer.Stop();
            // Destroy the session so another node can take over immediately.
            Task.Run(() => _session.Destroy()).Wait();
        }
    }
}
When two instances of this service are started, one node is active and the other one is idle.

When the previous leader is stopped, the second node automatically takes over the leadership and starts working.

All in all, quite a nice solution for securing the running of those necessary batch services. :)

Saturday, 10 September 2016

Building a faceted search using Redis and - part 3: Sorted sets for range queries

Storing and combining sets and strings in Redis gets us a nice filtered search. The first three rows of filtering options in the demo use only sets holding the keys to the hotels. If one or more buttons are clicked in one category, e.g. Countries, we do a union of those sets and store the new set in Redis. The same goes for all categories: the clicked options within a category are unioned and stored as new keys, then intersected with the other categories.

With the possibility to sort the final set using external keys, we have built quite a cool feature with little work. But to make it awesome, we want to add some range filters, to be able to filter out, for instance, all hotels in this facet within a certain price range. Not only does it look impressive, it's also easy to achieve with Redis.

Sorted sets

Sorted sets in Redis are like ordinary sets, but with one major difference: whereas sets can hold only string values, typically the key to some other entity, sorted sets also give each item in the set a numeric score. If the score is the same for all items, the set is sorted and ranged lexically instead. There are some very interesting things that can be done with the lexical side of sorted sets, but for this demo we're going to look at the numeric score instead.
Hotels:Prices = [
   1000 "Hotels:1",
   2000 "Hotels:33",
   5000 "Hotels:194",
   3000 "Hotels:233",
    750 "Hotels:299",
   8000 "Hotels:45"
]
The set is always sorted by score by default. To get the items of a set, the command ZRANGE is used. ZRANGE takes the name of the sorted set and the indexes where to start and end. To get all items without knowing how big the set is, use -1 as the ending index.
ZRANGE Hotels:Prices 0 -1
  1) "Hotels:299"
  2) "Hotels:1"
  3) "Hotels:33"
  4) "Hotels:233"
  5) "Hotels:194"
  6) "Hotels:45"
To view the scores and make sure the set is sorted correctly, add WITHSCORES to the command. Here we fetch the items between index 0 and 3.
ZRANGE Hotels:Prices 0 3 WITHSCORES
  1) "Hotels:299"
  2) "750"
  3) "Hotels:1"
  4) "1000"
  5) "Hotels:33"
  6) "2000"
  7) "Hotels:233"
  8) "3000"
Getting a range of items from the sorted set by their index is not enough though. We want to be able to fetch all items between, say, 1000 and 2200 SEK. Easy peasy using ZRANGEBYSCORE instead of ZRANGE!
ZRANGEBYSCORE Hotels:Prices 1000 2200 WITHSCORES
  1) "Hotels:1"
  2) "1000"
  3) "Hotels:33"
  4) "2000"
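To make the semantics concrete without a Redis instance at hand, here is a minimal Python sketch of the same range query, modeling the sorted set as a member-to-score dict. The `zrangebyscore` helper is ours, purely for illustration; it is not part of any Redis client.

```python
# Model the Hotels:Prices sorted set as a member -> score mapping;
# Redis keeps it ordered by score, which we emulate by sorting at query time.
hotel_prices = {
    "Hotels:1": 1000, "Hotels:33": 2000, "Hotels:194": 5000,
    "Hotels:233": 3000, "Hotels:299": 750, "Hotels:45": 8000,
}

def zrangebyscore(zset, lo, hi):
    """Return (member, score) pairs with lo <= score <= hi, ordered by score."""
    return sorted(
        ((m, s) for m, s in zset.items() if lo <= s <= hi),
        key=lambda pair: pair[1],
    )

print(zrangebyscore(hotel_prices, 1000, 2200))
# [('Hotels:1', 1000), ('Hotels:33', 2000)]
```

Note that, like ZRANGEBYSCORE, both bounds are inclusive here.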
And now things start to fall into place. We have a way of getting the ids of all hotels priced between 1000 and 2200 SEK. Now we need to create a new set out of this range, so we can intersect the result with the other sets.

Combining sorted sets

Creating this set containing only a certain range of prices is different from the sets we created before. There's no single command that will create the set for us. It's not a union, intersection or difference operation we are looking at. We need a subset of the data in the sorted set.

The way to do this is to create a copy of the original sorted set by using the ZUNIONSTORE command. In this case, we don't want to do a union with another set, we just want to copy the whole Hotels:Prices-set. If only one set is given in a union, this is precisely what happens. To define the set in the db, we name it Hotels:Prices:1000:2200 to show which range of prices it will eventually contain.
ZUNIONSTORE Hotels:Prices:1000:2200 1 Hotels:Prices
Now, we can remove the range we're not interested in from this new set using the command ZREMRANGEBYSCORE. All range-by commands are inclusive by default, meaning that the scores we provide are included in the range. That was fine above, where we wanted to include both 1000 and 2200 in our range, but here we want to remove all items with a score less than 1000 or greater than 2200. Luckily this is not a problem, since a bound can be made exclusive by prefixing it with an opening parenthesis.

So, first we want to remove all items with a score lower than 1000. Since we don't know the lowest score in the set, we use negative infinity (-inf) as the starting point. Then we remove everything greater than 2200 up to positive infinity (inf).
ZREMRANGEBYSCORE Hotels:Prices:1000:2200 -inf (1000
ZREMRANGEBYSCORE Hotels:Prices:1000:2200 (2200 inf

ZRANGE Hotels:Prices:1000:2200 0 -1 WITHSCORES
  1) "Hotels:1"
  2) "1000"
  3) "Hotels:33"
  4) "2000"
Success! We have a new set, containing only the hotels within the given price range. Now we can intersect this set with the other sets.
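The copy-then-trim sequence can be sketched in Python as well, again modeling the sorted set as a plain dict with hypothetical price data:

```python
# Hypothetical price data, standing in for the Hotels:Prices sorted set
hotel_prices = {
    "Hotels:1": 1000, "Hotels:33": 2000, "Hotels:194": 5000,
    "Hotels:233": 3000, "Hotels:299": 750, "Hotels:45": 8000,
}

# ZUNIONSTORE with a single input set is effectively a copy
subset = dict(hotel_prices)

# ZREMRANGEBYSCORE dest -inf (1000 -- drop scores strictly below 1000
for member in [m for m, score in subset.items() if score < 1000]:
    del subset[member]

# ZREMRANGEBYSCORE dest (2200 inf -- drop scores strictly above 2200
for member in [m for m, score in subset.items() if score > 2200]:
    del subset[member]

print(subset)  # {'Hotels:1': 1000, 'Hotels:33': 2000}
```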

Combining sets and sorted sets

If we try to do the same kind of intersection as before between an ordinary set and a sorted set using SINTER, we'll get a big no-no. A union, intersection or diff that involves a sorted set has to use the special sorted set commands: ZINTERSTORE, ZUNIONSTORE and (since Redis 6.2) ZDIFFSTORE. All of these commands store a new set in the db. The commands are different because the contents of these set types are different.

A sorted set does not only contain the string value, it also has the numeric score. When doing an intersection, we have to decide how to treat the scores of intersected items: should the two scores be added (the default), or should we use the minimum or maximum value? This is controlled with the AGGREGATE option. If we intersect a regular set with a sorted set, the regular set's items are treated as if they have a score of 1.
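As a sketch of how those aggregation choices play out, here is a Python model of both set types with hypothetical data. Redis does this server-side; the `zinterstore` helper and the fixed default score for plain-set members (1, per current Redis documentation) are assumptions of this sketch.

```python
# A sorted set of prices and a plain set of (hypothetical) German hotels
hotel_prices = {"Hotels:1": 1000, "Hotels:33": 2000, "Hotels:194": 5000}
germany = {"Hotels:1", "Hotels:194"}

def zinterstore(zset, plain_set, aggregate=sum, default_score=1):
    # A plain set enters the operation as a sorted set where every
    # member carries a fixed default score
    as_zset = {m: default_score for m in plain_set}
    return {m: aggregate([zset[m], as_zset[m]])
            for m in zset.keys() & as_zset.keys()}

print(zinterstore(hotel_prices, germany))                 # SUM is the default
print(zinterstore(hotel_prices, germany, aggregate=max))  # MAX keeps the price
```

With SUM, the price scores are nudged by 1; choosing MAX as the aggregate preserves the original prices, which is usually what you want here.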

Next up - using Redis in .Net

Now that we hopefully understand Redis a bit better, it's time to get it up and running in the MVC-app.

Friday, 9 September 2016

Building a faceted search using Redis and - part 2: Combining and sorting sets in Redis

So far we have used the set and string data types in Redis. The Countries-set holds the keys to all country entities. Each country then has a set of all hotel keys in that country. The hotel key holds a string, which is the JSON-representation of the hotel entity. By setting up keys like this, we can slice our hotels in many ways. Want to fetch all hotels with one star? Just get the members of the Stars:1:Hotels-set using the command SMEMBERS.
Countries = ["Countries:1", "Countries:2", "Countries:3"]
Countries:1 = "Germany"
Countries:2 = "Sweden"
Countries:3 = "Denmark"
Countries:1:Hotels = ["Hotels:1", "Hotels:33", "Hotels:194"]
Hotels:1 = '{\"Name\":\"Hotel 1\",\"Stars\":3,\"PricePerNight\":1000}'
Hotels:33 = '{\"Name\":\"Hotel 33\",\"Stars\":5,\"PricePerNight\":2700}'
Hotels:194 = '{\"Name\":\"Hotel 194\",\"Stars\":1,\"PricePerNight\":235}'
Stars:1:Hotels = ["Hotels:194", "Hotels:200"]

SMEMBERS Stars:1:Hotels

Combining sets

Just getting the different sets one by one won't help us filter our hotel list. What we need to do is combine the sets in different ways. In Redis, you can perform combination operations on sets and get the resulting set as a return value, but you can also store the resulting set as a new set in the database. These stored sets can be either temporary, by setting an expiration, or permanent if that suits your needs better.

Redis performs intersections, unions and difference operations extremely fast, which makes storing the sets and performing these data manipulations in Redis a much better idea than doing them in your application code. These operations are very powerful and can be combined in a multitude of interesting ways.


Union is the operation that returns all unique members of the given sets. If we want to get all hotels in Germany and Sweden, but not Denmark, we do a union of Countries:1:Hotels and Countries:2:Hotels.
SUNION Countries:1:Hotels Countries:2:Hotels
To store the resulting set instead of immediately retrieving it, we use SUNIONSTORE and as the first parameter to the operation provide a name for the new key.
SUNIONSTORE Countries:1:Countries:2:Hotels Countries:1:Hotels Countries:2:Hotels
If the same value exists in both sets, it will only be included once in the resulting set.
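SUNION's semantics map directly onto Python's built-in set union; a small sketch, where the Swedish hotel keys are made up for the example:

```python
# Countries:1:Hotels from the text; the Swedish set is hypothetical
germany_hotels = {"Hotels:1", "Hotels:33", "Hotels:194"}
sweden_hotels = {"Hotels:2", "Hotels:33"}

# SUNION semantics: every unique member of either set, duplicates collapsed
union = germany_hotels | sweden_hotels

print(sorted(union))  # Hotels:33 appears once, although it is in both sets
```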


Intersect combines two or more sets by keeping only the values that exist in all of them. An intersection between Countries:1:Hotels and Countries:2:Hotels won't give us anything, unless a hotel can be located in both countries. But an intersection between the sets Countries:1:Hotels and Stars:1:Hotels will give us all hotels in Germany with 1 star.
SINTER Countries:1:Hotels Stars:1:Hotels
Here we can of course use that previously stored union set with both German and Swedish hotels.
SINTER Countries:1:Countries:2:Hotels Stars:1:Hotels
If we want to keep on doing combination operations on the result of this operation, we store the set, creating a new compounded key describing the set's content.
SINTERSTORE Countries:1:Countries:2:Stars:1:Hotels 
   Countries:1:Countries:2:Hotels Stars:1:Hotels


The final combination operation is diff, which, as the name implies, returns the difference between sets. The diff operation is a bit different (haha) from the other combination operations. While union and intersect operate on all given sets symmetrically, diff subtracts one or more other sets from the first given set. If we want to see which Swedish and German hotels don't have 2 or 3 stars, we can do a diff operation.
SDIFF Countries:1:Countries:2:Hotels Stars:2:Hotels Stars:3:Hotels
Now, this could be done with an intersect operation as well, but then we would first have to store the union of the 1-, 4- and 5-star sets and then intersect that union with the countries union.
SUNIONSTORE Stars:1:Stars:4:Stars:5:Hotels Stars:1:Hotels Stars:4:Hotels Stars:5:Hotels
SINTER Countries:1:Countries:2:Hotels Stars:1:Stars:4:Stars:5:Hotels
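Both routes give the same answer, provided every hotel has exactly one star rating between 1 and 5. A Python sketch with hypothetical star sets makes the equivalence visible:

```python
# Countries:1:Countries:2:Hotels from the text; star sets are made up
country_hotels = {"Hotels:1", "Hotels:33", "Hotels:194", "Hotels:200"}
stars = {
    1: {"Hotels:194", "Hotels:200"},
    2: {"Hotels:7"},
    3: {"Hotels:1"},
    4: {"Hotels:33"},
    5: set(),
}

# SDIFF route: subtract the 2- and 3-star sets from the country union
via_diff = country_hotels - stars[2] - stars[3]

# SUNIONSTORE + SINTER route: union 1/4/5 stars first, then intersect
via_inter = country_hotels & (stars[1] | stars[4] | stars[5])

print(via_diff == via_inter)  # True
```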

Sorting sets

Once we have performed the unions, intersections, diffs and whatnot on our sets, we want to get the contents of the final stored key. Even though the set itself only contains string values, the hotel keys, it can be sorted in different ways by using external keys and pattern matching.

If we, for instance, want to sort all hotels in the final set by price, we create a new key per hotel for that. The key name needs to contain the string value held in the set, i.e. the hotel key, and the key's value is the one used for sorting. The SORT command then takes a pattern and sorts the set according to the values of the matching external keys.
Countries:1:Countries:2:Stars:1:Hotels = ["Hotels:194", "Hotels:200"]
SortByPrice_Hotels:194 = 1000
SortByPrice_Hotels:200 = 1200

SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_*
We can use a limit to decide how many items to fetch from the sorted list. LIMIT takes an offset and a count; in this case we start at item 0 and take 4 items.
SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_* LIMIT 0 4
And finally, if the set contains keys to other entities, we can use the same external-key pattern to fetch the actual JSON blob in the same operation. Very clever. Think of the asterisk as being replaced with each individual value in the set. :)
SORT Countries:1:Countries:2:Stars:1:Hotels BY SortByPrice_* LIMIT 0 4 GET *

Next step

These operations will get us a long way. The basic approach to filtering with Redis is to keep storing the results of unions, intersections and diffs based on choices made in the GUI, until there is a final set to be sorted and fetched. But we need one more data type in Redis: sorted sets. The sorted set will help us fetch hotels within a certain price interval, distance to the beach or distance to shopping.