My Ninja Blocks Setup

Now that the fine folks at Ninja Blocks have already started shipping their next generation IoT controller, the Ninja Sphere, I'd better write about my old school Ninja Blocks setup before it gets too late.

First off is the Watts Clever socket, which I use with Ninja rules to switch a table lamp on at 8.30pm and then off at 10.30pm on weekdays.

The second socket is used to switch my Tivoli radio on and off on the rare occasions when everyone is away travelling and the house is empty. You know, like Home Alone but minus the kid.

I already wrote about my use of the third socket to switch off my old dryer.

Next up is the contact sensor, which is set up to send an email when the door is opened; I'm keeping track of the times when I leave and return home. Too bad there's no easy way to chart the data, at least not one that doesn't involve hacking some code.

The motion sensor on the left is used to detect any movement in the living room when the house is empty, which is definitely not that useful without a security camera. The wireless red button on the right works, but I'm not putting it outside as a doorbell since someone could easily rip it off.

The temperature and humidity sensor is set up to send an email when the house temperature rises above or drops below 30 degrees. In the summer, I would stay back in the city a bit longer when the house is too hot and wait for the evening cool change.

Here's what the block looks like. I wrote about using the RGB LED as a build status indicator; that code now lives in nestor-ninjablocks. I also use this block to test some stuff on Ubuntu.

And here's what the Ninja Remote looks like, set up on my Android phone.

Overall, Ninja Blocks works well for hobbyist projects. Sure, there are some minor annoyances with actuation delays or failures, but it delivers what it promised, and I'm sure the Ninja Sphere will be heaps better.

As for IoT itself, it's still very early days, and the industry has a lot to solve before mainstream adoption becomes a reality. I think the three most critical problems in need of a solution are security (please don't hack my home, ever), energy (I'm sick of replacing batteries every so often), and interoperability (will various IoT controllers and devices work with each other?).

Looking forward to the day when my vacuum cleaner, fridge, coffee machine, rice cooker, washing machine, and dryer, can communicate with one another and coordinate themselves.

Wrapping AEM cURL Commands With Python

If you have ever had the experience (no pun intended) of using Adobe Experience Manager (AEM), you would already know that curl commands are arguably the de facto way of interacting with AEM over HTTP.

Whenever you google for various AEM/CQ HOWTOs, it's easy to find examples with curl commands:
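Two illustrative examples (placeholder names, against a local author instance): stopping an OSGi bundle via the Felix console, and uploading and installing a content package via the package manager.

curl -u admin:admin -F action=stop http://localhost:4502/system/console/bundles/mybundle

curl -u admin:admin -F file=@mypackage.zip -F install=true http://localhost:4502/crx/packmgr/service.jsp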

Naturally, I started integrating those curl commands into my project's application provisioning and deployment automation via Ansible's shell module. However, it wasn't long before I encountered a number of issues:

  • Lack of a consistent response payload format from AEM. Some status messages are embedded within various HTML response bodies, some within JSON objects.
  • Some endpoints return status code 500 for non-server-error results (e.g. when an item to be created already exists), making them hard to differentiate from real server errors.
  • Some endpoints return status code 200 with an error message in the HTML response body.
  • Even though curl --fail exists, it's not fail-safe. There doesn't seem to be any way to identify a success/failure result without parsing the response headers and body.
  • Which means that curl commands could return exit code 0 even when the HTTP status code indicates an error, and Ansible would not fail the task; it would simply continue on to the next one.
  • Printing the response bodies to stdout won't help much either; it's painful for a human to go through a large volume of text to identify an error.

It's obvious that curl commands alone are not enough. I need better error handling, both checking the status code and parsing the response body, and then translating the result into Ansible's success/failed status. So I wrote PyAEM, a Python client for the Adobe Experience Manager (AEM) API.

Why Python? 1) It's first class in Ansible. 2) It's saner to handle the response (status code checking, HTML/JSON parsing) in Python compared to shell. 3) Ditto for code linting, unit tests, coverage checks, and package distribution. Python wins!

PyAEM ended up using pycurl to simplify porting those curl commands into Python. I initially tried Requests instead and managed to port the majority of the curl commands, until I got to the package manager API and kept getting different responses from AEM with Requests compared to the ones with curl commands. Since AEM was a black box and I didn't have any access to its source code, I couldn't tell what libcurl was doing during package upload/download that was missing from Requests. So in the end I stuck with pycurl.
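To give an idea of the mapping, here's roughly how the bundle-stop curl command above translates to pycurl. This is a minimal sketch, not PyAEM's actual internals:

import io
import pycurl

# roughly: curl -u admin:password -F action=stop http://localhost:4502/system/console/bundles/mybundle
buf = io.BytesIO()
curl = pycurl.Curl()
curl.setopt(pycurl.URL, 'http://localhost:4502/system/console/bundles/mybundle')
curl.setopt(pycurl.USERPWD, 'admin:password')
curl.setopt(pycurl.HTTPPOST, [('action', 'stop')])
curl.setopt(pycurl.WRITEFUNCTION, buf.write)
curl.perform()

# unlike a bare curl command, both the status code and the body are
# available here for proper success/failure handling
status_code = curl.getinfo(pycurl.HTTP_CODE)
body = buf.getvalue()
curl.close()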

Here's a code snippet on how to use PyAEM to stop a bundle called 'mybundle':
(check out PyAEM API Reference to find out other actions that PyAEM currently supports)

import pyaem

aem = pyaem.PyAem('admin', 'password', 'localhost', 4502)

try:
    result = aem.stop_bundle('mybundle')

    if result.is_success():
        print 'Success: {0}'.format(result.message)
    else:
        print 'Failure: {0}'.format(result.message)
except pyaem.PyAemException, e:
    print e.message

Better. Now it has success/failure status handling and also error handling by catching PyAemException.

As for Ansible, the next obvious step is to create Ansible modules which utilise PyAEM. These modules serve as a thin layer between Ansible and PyAEM; all they need to worry about is argument passing and status handling.

#!/usr/bin/python

import json
import os
import pyaem

def main():
    module = AnsibleModule(
        argument_spec = dict(
            host = dict(required=True),
            port = dict(required=True),
            bundle_name = dict(required=True)
        )
    )

    host = module.params['host']
    port = module.params['port']
    bundle_name = module.params['bundle_name']

    aem_username = os.getenv('crx_username')
    aem_password = os.getenv('crx_password')

    aem = pyaem.PyAem(aem_username, aem_password, host, port)
    result = aem.stop_bundle(bundle_name)

    if result.is_failure():
        print json.dumps({ 'failed': True, 'msg': result.message })
    else:
        print json.dumps({ 'msg': result.message })

from ansible.module_utils.basic import *
main()

The above module can then be used in an Ansible playbook.

- name: 'Stop com.day.crx.crxde-support bundle'
  aem-stop-bundle: >
    host=somehost.com
    port=4503
    bundle_name=com.day.crx.crxde-support

Too simple!

This can actually be improved further by creating an Ansible role for AEM, distributed through Galaxy. Things like downloading an AEM package file from an artifact repository, uploading the package to AEM, installing it, and then replicating it form a repetitive pattern for AEM package management.
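Sketching it out, such a role could wrap that pattern into tasks along these lines (get_url is Ansible's own module; the aem-* module names are hypothetical, in the same style as the aem-stop-bundle module above):

- name: 'Download AEM package from artifact repository'
  get_url: >
    url={{ package_url }}
    dest=/tmp/{{ package_file }}

- name: 'Upload and install package'
  aem-install-package: >
    host={{ aem_host }}
    port={{ aem_port }}
    file=/tmp/{{ package_file }}

- name: 'Replicate package'
  aem-replicate-package: >
    host={{ aem_host }}
    port={{ aem_port }}
    package_name={{ package_name }}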

PyAEM is still at an early stage, but it's stable enough (we use it in production). It currently only supports the actions that are used in my project. Having said that, I think the package is pretty solid, with 100% unit test coverage, zero lint violations, and an automated Travis CI build on every code change.

Since AEM is a proprietary product, PyAEM currently doesn't have any automated integration tests (think AEM Docker containers :) ). However, it is verified to work with AEM 5.6.1 and Python 2.6.x/2.7.x via the internal project I'm working on.

Want to use it? PyAEM is available on PyPI. Anything missing? Contributions are welcome!
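Installation is the usual one-liner (assuming pip is available):

pip install pyaem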

Update (30/12/2014):

PyAEM has not been under my control since October 2014; the source code was forked to this repository, and I have deleted the original repo under my GitHub account as requested. I cannot comment on the future of PyAEM, but if you're interested, I can find the information for you (ping me at blah@cliffano.com).

If anyone is interested in writing a PyAEM-like library from scratch, I can assure you that it's not hard, but it can be time consuming. Various AEM API documentations are publicly available, so there's nothing that can't be added to the library. I'm happy to help if you have any questions.

Lastly, thanks to the folks who starred the original repo and to those who took the time to tell me that PyAEM was useful. Your feedback is much appreciated!

Roombox – Node Knockout 2013

A few weeks ago I participated in Node Knockout 2013 (NKO4), a 48-hour hackathon with 385 teams competing for the top spot in 7 categories (team, solo, innovation, design, utility/fun, completeness, and popularity).

And here's a video of what I hacked: Roombox, a Roomba vacuum cleaner turned into a boombox using node.js. This demo shows the Roomba playing the Rocky theme, the Beverly Hills Cop theme, Hey Jude (The Beatles), Scar Tissue (Red Hot Chili Peppers), the Super Mario Bros. theme, and the Airwolf theme.


Note: I put the wrong year for The Beatles' Hey Jude in the video. I wanted to fix it, but it was already 1am and I had to go to work in the morning. Sorry, Beatles fans!

The result? Roombox finished 9th in the innovation category and 14th in the solo category. Not bad for an idea that I improvised on the D-day itself. If there were a solo innovation category, Roombox would've finished 1st on that nonexistent leaderboard :).

Comments from some judges and fellow contestants:

Cool hack! I’m also amused by the rickroll fail :)

Hah now I need to get a Roomba. Great hardware project / hack.

This got innovation points for me as it never would have occurred to me to do this. Made me laugh and share with others.

Most out-of-the-world idea on NKO :D

Completely useless but very innovative!

I would have given you 5 stars on innovation, but I once heard a hard drive play Darth Vader’s theme song so there is a precedent.

How does Roombox work? To put it simply, Roombox parses abc notation sheets, maps the music notes to fit the Roomba's note range, splits each song into 4 segments where each segment is registered to a Roomba song slot, and finally instructs the Roomba to play the song. Most of the development effort was spent on finding a suitable music format and on testing the music sheets, because in reality only a few songs sound decent on a vacuum cleaner.
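For the curious, the Roomba side boils down to the SCI SONG (opcode 140) and PLAY (opcode 141) commands, and a song slot holds at most 16 notes, hence the splitting into segments. Here's a minimal sketch of that idea (not the actual Roombox code; the serial port path and the parsed notes are assumptions):

var serialport = require('serialport'),
  port = new serialport.SerialPort('/dev/ttyUSB0', { baudrate: 57600 });

// notes is an array of [midiNumber, duration] pairs (duration in 1/64ths
// of a second), e.g. parsed from an abc notation sheet and clamped to the
// Roomba's 31-127 note range
function playSong(notes) {
  var SLOT_SIZE = 16, segments = [];
  for (var i = 0; i < notes.length; i += SLOT_SIZE) {
    segments.push(notes.slice(i, i + SLOT_SIZE));
  }
  segments.forEach(function (segment, slot) {
    // register the segment to a song slot (SONG command, opcode 140)
    var bytes = [140, slot, segment.length];
    segment.forEach(function (note) {
      bytes.push(note[0], note[1]);
    });
    port.write(new Buffer(bytes));
    // play the slot (PLAY command, opcode 141); a real implementation
    // would wait for each segment to finish before playing the next
    port.write(new Buffer([141, slot]));
  });
}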

Here’s a sketch I scribbled after deciding on how I would hack Roombox:

Huge thanks to Mike Harsch for writing mharsch/node-roomba, and Sergi Mansilla for writing sergi/abcnode. And an apology to my wife and brother for suffering through the weekend listening to dozens of horrible songs being tested :p.

Update (08/12/2013):

DBrain told me about DJ Roomba from Parks and Recreation. If iRobot ever upgrades Roomba's sound system, the Roombox code would be totally useful for achieving 'music player on a moving vacuum cleaner' a la DJ Roomba.

NodeUp 53: NodeUp Listeners On NodeUp

About a month ago, I joined D-Shaw, Nizar Khalife, Erik Isaksen, and Matt Creager on NodeUp 53, where we discussed the NodeUp podcast and the node.js community from the NodeUp listeners' point of view, and I also talked a bit about Australia, kangaroos, and node. Thanks to Rod Vagg for pinging me about this particular episode.

Recording the show itself was an interesting experience :). For one, it started at 4am Melbourne EST. I totally missed the two alarms I set, and was finally woken by my mobile's push notification of dshaw's tweet telling me to accept the Skype invitation, about two minutes before 4. I ran down the stairs, and my head spun a bit for the first hour lol.

Here's the transcription of NodeUp 53, thanks to Noah Collins. I made a mistake on the show: I thought I had said that Flickr Photo migrated to node.js (as davglass tweeted), but I actually said Facebook Photo. It should be Flickr Photo. My bad, I'm sorry folks.

An Old Dryer, A Watts Clever, and A Ninja Blocks

This was another quick weekend hack to fix my old dryer’s busted timer problem (busted timer = having to stay around when it’s time to switch off the dryer).

Step one was to use a Watts Clever Easy-off Remote Control Socket, which allowed me to switch the power on and off remotely. This product comes with a remote control, which saved me from having to get out of the house to reach the garage during winter. But that's not all…

Step two was to program the socket on a Ninja Blocks, which gave remote control ability via the web. This allowed me to turn off the dryer all the way from my office.

Step three was to write a node.js script that talks to Ninja Blocks, which in turn switches the power socket on and off. This script was then executed from a scheduled Jenkins job.
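The script itself is tiny. Here's a minimal sketch of what it can look like, assuming the ninja-blocks npm module and the socket registered as an rf433 subdevice named 'dryer off' (the name and token are placeholders):

var ninjaBlocks = require('ninja-blocks'),
  app = ninjaBlocks.app({ user_access_token: 'your-ninjablocks-token' });

// find the power socket subdevice and actuate it, switching the dryer off
app.devices({ device_type: 'rf433' }, function (err, devices) {
  var subDevices = app.utils.findSubDevice({ shortName: 'dryer off' }, devices);
  Object.keys(subDevices).forEach(function (key) {
    app.device(subDevices[key].guid).actuate(subDevices[key].data);
  });
});

The Jenkins job simply runs node on this script, with a cron-style schedule (e.g. 0 22 * * *) set to the desired switch-off time.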

Voila, the old dryer had a new timer, albeit a long-winded one :p.

Monitor Jenkins From The Terminal

Here’s how I’ve been monitoring my Jenkins setup…

A combination of Nestor + watch + Terminator gives me one view for monitoring failing builds, one view for executor status, and one view for the job queue: a summary of Jenkins status info on a small amount of screen estate that I can place at the corner of my workspace.

If you want to set up something similar, here are the commands (assuming JENKINS_URL is already set):

  • watch -c "nestor dashboard | grep FAIL"
  • watch nestor executor
  • watch nestor queue

DataGen Workers Optimisation

I released DataGen v0.0.9 during lunch break yesterday. This version includes support for limiting how many workers can run concurrently, something I've wanted to add since day one. I finally got the time to do it last weekend, and it turned out to be an easy task thanks to Rod Vagg's worker-farm module.

Why is this necessary?

The problem with previous versions of DataGen was that if you wanted to generate 20 data files, 20 worker processes would be created and run concurrently. It's obviously not a great idea to have 20 processes fighting over 2 CPUs.

With v0.0.9, you can specify this limit using the new -m/--max-concurrent-workers flag (if unspecified, it defaults to the number of CPUs):

datagen gen -w 20 -m 2
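Under the hood, this maps neatly onto worker-farm's maxConcurrentWorkers option. Roughly like this (a sketch, not DataGen's actual code; ./worker is a hypothetical module that generates one data file):

var workerFarm = require('worker-farm'),
  workers = workerFarm(
    { maxConcurrentWorkers: 2 }, // the -m value, defaults to the number of CPUs
    require.resolve('./worker')
  );

// queue 20 data file jobs; only 2 worker processes run at any one time
var pending = 20;
for (var i = 0; i < 20; i++) {
  workers('data' + i + '.out', function (err) {
    if (err) { console.error(err); }
    if (--pending === 0) { workerFarm.end(workers); }
  });
}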

When I first wrote about DataGen last year, I mentioned that I still needed to run some tests to verify my assumption about the optimal number of workers. So here it is one year later…

The first test is on a Linux box with 8 cores, where each data file contains 500,000 segments, each segment contains a segment ID, 6 strings, and 3 dates.

The second test is on an OSX box with 2 cores, where each data file contains 500,000 segments, but this time each segment only contains a segment ID.

As you can see, the performance is almost always best when the number of concurrently running worker processes is limited to the number of available CPUs (8 max concurrent workers on the first chart, and 2 on the second chart).

When you specify 20 workers and your laptop only has 2 CPUs, only 2 workers will generate data files concurrently at any one time, and you can be sure that this will be faster than having 20 workers generating 20 data files at the same time. And that's why DataGen's default setting allows as many concurrent workers as there are available CPUs.

Introducing Repoman

Q: How do you clone 30 repositories from your personal GitHub accounts and 150 repositories from your organisation GitHub accounts in just one line?

A: repoman --github-user myuser1,myuser2 --github-org myorg1,myorg2 config && repoman init

Q: How do you execute a set of commands against all repositories in just one line?

A: repoman exec 'git stash && git pull --rebase && git stash apply'

I wrote Repoman back in 2011 and I've been using it ever since. It was my solution to the annoyances involved in working on multiple machines, multiple OSes, multiple SCMs, and multiple repositories that depend on each other.

Repoman works against a list of repositories in a .repoman.json file. You can use repoman config to generate a sample file, or add the --github-user / --github-org flags to generate a list of GitHub repositories. This .repoman.json file can be placed in either the user home directory or the current directory (your workspace). The rest of the Repoman commands, like init, get, exec, etc., can then be run from that workspace directory.
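The config format is essentially a map of repository names to their SCM type and URL, roughly like this (illustrative only; run repoman config to see the exact format):

{
  "repo1": { "type": "git", "url": "git@github.com:someuser/repo1.git" },
  "repo2": { "type": "svn", "url": "http://svn.example.com/repo2/trunk" }
}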

Problem: switching between multiple laptop and desktop machines.

After working with multiple machines for a while, I ended up with some repositories existing on only some of the machines, never on all of them. And whenever I had to use a different machine, I had to manually clone the repositories that didn't yet exist on that machine. One by one.

With Repoman, I only needed to maintain a .repoman.json file containing all the repositories that I worked on, store it in a remote repository, and clone it over to all machines. From then on, I could simply run repoman init to clone all repositories and repoman get to make sure I had the latest code of all repositories on each machine.

Problem: identifying unfinished changes.

Sometimes I code on the train, on the way to and from work. The thing about coding on the train is that I often had to stop not when I finished a piece of change, but when I arrived at my destination. This resulted in unfinished changes across several repositories on whichever machine I used at the time, and I often forgot about those changes until the next time I worked on those repositories.

With Repoman, I built a habit of running repoman changes to identify unfinished changes before working on anything else.

Problem: working with Git and Subversion repositories.

I had some repositories hosted on GitHub, Gitorious, Bitbucket, and Google Code. This of course meant that I had to switch between Git and Subversion commands.

With Repoman, I only needed to run its simple commands repoman init | get | changes | save | undo, which cover the majority of my coding activities (note: Repoman does not aim to cover all Git and Subversion commands). Those commands are mapped to their Git or Subversion equivalents accordingly, so the same command works regardless of the underlying SCM.

Problem: executing a custom command on all repositories.

This used to annoy me so much. I had a number of repositories and from time to time I had to add the same file to all of them, let’s say a .travis.yml file or a .gitignore file.

With Repoman, I just needed to create the file once at /tmp/file, then run repoman exec 'cp /tmp/file . && git add . && git commit -m "Add file" && git pull --rebase && git push'. Voila, all repositories had the new file.

Problem: grouping repositories by project.

I often had to switch between projects, where each project consisted of several repositories. When I worked on a particular project, I would like to update its repositories to the latest. Ditto when I moved to the next project.

With Repoman, I created a config file for each project, e.g. .project1.json and .project2.json, then symlinked .repoman.json to the project I was working on (e.g. ln -s .project1.json .repoman.json). Or, if I needed to switch between the projects often, I would use Repoman with a custom config file: repoman -c .project1.json get.

Check out the README on GitHub for more usage examples, and npm install -g repoman away!

Voice-Controlled Lamp Using Ninja Blocks + MacBook

Here’s a video of my latest quick weekend hack, using voice to switch a lamp on and off:

OK, so it's actually a combination of Watts Clever + Ninja Blocks + Node.js + Automator + Speakable Items. Speakable Items takes the voice commands via the MacBook's internal microphone and calls the Automator applications, which run a Node.js script (whose output gets spoken by the Automator applications), which in turn tells Ninja Blocks to actuate the Watts Clever power socket.

Here’s how I set it up:

Configure the Watts Clever remote RF signals on the Ninja Blocks dashboard (hat tip: @james and @Jeremy over at the forum).

Create this simple Node.js script file. I saved it as rf433.js.

var ninjaBlocks = require('ninja-blocks'),
  app = ninjaBlocks.app({
    user_access_token: 'your-ninjablocks-token'
  });

app.devices({ device_type: 'rf433' }, function (err, devices) {
  var name = process.argv[2],
    subDevices = app.utils.findSubDevice({ shortName: name }, devices);
  console.log('Switching ' + name);
  Object.keys(subDevices).forEach(function (key) {
    var subDevice = subDevices[key];
    app.device(subDevice.guid).actuate(subDevice.data);
  });
});

Create two Automator applications, one called ‘Lamp on’, the other ‘Lamp off’, each containing:

  • Run Shell Script, which is used to run rf433.js.
  • Speak Text, which is used to notify when device#actuate is about to be called (that’s the console.log('Switching ' + name); line from the above Node.js script).

These applications must be available from ~/Library/Speech/Speakable Items/.
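The Run Shell Script action itself only needs to invoke the script with the subdevice name as the argument, along these lines (the node path, script location, and subdevice name are illustrative):

/usr/local/bin/node ~/rf433.js 'lamp on'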

Configure the Mac’s System Preferences -> Accessibility -> Speakable Items on Lion and Mountain Lion, or System Preferences -> Speech on Snow Leopard.

Done.

Overall, this is just an experiment to prove that it can be done. Speakable Items is obviously _not_ Siri, so you can’t expect the same quality of speech recognition. And having to open a MacBook every time I want to use this is obviously too troublesome.

It would be awesome for Ninja Blocks to have, or work with, something like Ubi. Every home automation solution needs at least a voice- or gesture-based input mechanism :).

Note to self: I’m totally looking forward to the future where every single thing in the house is powered by renewable energy-based wireless electricity, each running a tiny low-powered Node.js server which talks to one another via HTTP. Life would be much more efficient!

Update (14/12/2014): Ninja Blocks’ Dan Friedman showed a demo video of Ubi integration with Ninja Sphere. W00t w00t!

Jenkins Build Status On Ninja Blocks RGB LED

Nestor v0.1.2 is out, and one of its new features is nestor ninja for monitoring Jenkins and displaying the latest build status on the Ninja Blocks RGB LED device (if you have a block, it's the ninja's eyes).

Here’s a usage example:
export JENKINS_URL=<url>
export NINJABLOCKS_TOKEN=<token_from_https://a.ninja.is/hacking>
nestor ninja

Red for build failure, green for build success, yellow for build warning, and white for unknown status. The yellow light looks quite similar to green, and the white one does look blue-ish.

And the best place to run nestor ninja? On the block itself of course!

ssh ubuntu@ninjablock.local
sudo apt-get install upstart
sudo npm install -g nestor
sudo cp /usr/lib/node_modules/nestor/conf/ninja_upstart.conf /etc/init/nestor_ninja.conf
sudo vi /etc/init/nestor_ninja.conf # and change JENKINS_URL and NINJABLOCKS_TOKEN values
sudo shutdown -r now

Log messages will then be written to /var/log/nestor_ninja.log
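To keep an eye on it from the block:

tail -f /var/log/nestor_ninja.log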