Docker - Clean up disk space

When using Docker Toolbox on Mac OS X, Docker starts complaining about "no space left on device" after a while, so we need to clean up disk space.

Delete exited containers

docker rm -v $(docker ps -a -q -f status=exited)  

Remove dangling images

docker rmi $(docker images -f "dangling=true" -q)  

Remove dangling volumes

docker volume rm $(docker volume ls -qf dangling=true)  
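The three commands above can be combined into a small cleanup script. This is only a sketch of my own; the DRY_RUN guard is an addition so you can preview what would be removed before actually running the docker commands.

```shell
#!/bin/sh
# docker-cleanup.sh - remove exited containers, dangling images and
# dangling volumes in one go.
# Set DRY_RUN=1 to only print the commands instead of running them.

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

cleanup() {
  run docker rm -v $(docker ps -a -q -f status=exited)
  run docker rmi $(docker images -f "dangling=true" -q)
  run docker volume rm $(docker volume ls -qf dangling=true)
}
```

Run it as `DRY_RUN=1 cleanup` first to see what would be deleted. Note that recent Docker releases also provide `docker system prune`, which covers most of this in a single command.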


AngularJS - Insert Text at Caret Position

In AngularJS, if you want to insert text at the current caret position, you can use the following service. The code is written in CoffeeScript.

angular  
  .module('text-insert', [])
  .service('TextInsert', () -> 
    {       
      insert: (input, text) ->
        return if !input
        scrollPos = input.scrollTop
        pos = 0
        browser = if (input.selectionStart || input.selectionStart == '0') then 'ff' else (if document.selection then 'ie' else false)
        if browser == 'ie'
          input.focus()
          range = document.selection.createRange()
          range.moveStart('character', -input.value.length)
          pos = range.text.length
        else if browser == 'ff'
          pos = input.selectionStart
        front = (input.value).substring(0, pos)
        back = (input.value).substring(pos, input.value.length)
        input.value = front + text + back
        pos = pos + text.length
        if browser == 'ie'
          input.focus()
          range = document.selection.createRange()
          range.moveStart('character', -input.value.length)
          range.moveStart('character', pos)
          range.moveEnd('character', 0)
          range.select()
        else if browser == 'ff'
          input.selectionStart = pos
          input.selectionEnd = pos
          input.focus()
        input.scrollTop = scrollPos
        angular.element(input).trigger('input')
        ''
    }
  )

The first argument of the insert method is the raw DOM node, which can be an input or a textarea. The second argument is the text to insert. The following code shows how to use it.

TextInsert.insert(angular.element('#input1')[0], 'hello')  
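The core string manipulation is framework-independent. Here is a minimal plain-JavaScript sketch of the same idea, leaving out the legacy IE document.selection branch and the scrollTop handling; the function name insertAtCaret is my own, not part of the service above.

```javascript
// Insert text at the caret of an input/textarea-like element and
// move the caret to the end of the inserted text.
function insertAtCaret(input, text) {
  if (!input) return;
  var pos = input.selectionStart || 0;
  var front = input.value.substring(0, pos);
  var back = input.value.substring(pos);
  input.value = front + text + back;
  // Collapse the selection right after the inserted text.
  input.selectionStart = input.selectionEnd = pos + text.length;
  if (input.focus) input.focus();
}
```

For example, calling `insertAtCaret(el, ', ')` on an input whose value is `helloworld` with the caret at position 5 yields `hello, world` with the caret at position 7.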

This AngularJS service is based on another CodePen project.

A complete example can be found in this CodePen project. If you want to use JavaScript, you can click 'View Compiled' in CodePen to see the compiled JavaScript code.

See the Pen Angular Text Insert at Caret Position by Fu Cheng (@alexcheng) on CodePen.

Spring Async Task Executor with Event Bus

I have an operation that talks to the database, so it may be slow. I was looking for a way to make it asynchronous, and discovered that Spring 4 has an async task executor with ListenableFuture, which works well with our current Google Guava EventBus.

We create a new AsyncListenableTaskExecutor first. SimpleAsyncTaskExecutor does not reuse threads; it starts a new thread for each invocation. But it's good enough for this case.

private final AsyncListenableTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor("my task");  

Then we submit a task to the executor.

this.taskExecutor.submitListenable(() -> {  
    saveToDB();
    return null;
}).addCallback(
    (result) -> this.eventBus.post(new SaveOKEvent()),
    (ex)     -> this.eventBus.post(new SaveFailedEvent(ex))
);

Then we add listeners for both SaveOKEvent and SaveFailedEvent to handle the success and failure cases. Done!
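Setting Spring and Guava aside, the shape of this pattern - submit work on another thread, then translate success or failure into an event - can be sketched with JDK classes only. In this sketch CompletableFuture stands in for ListenableFuture, and a plain listener list stands in for Guava's EventBus; all class names here are my own, not from the original code.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class AsyncSaveDemo {
    // Minimal stand-in for Guava's EventBus: every listener sees every event.
    static class SimpleEventBus {
        private final List<Consumer<Object>> listeners = new CopyOnWriteArrayList<>();
        void register(Consumer<Object> listener) { listeners.add(listener); }
        void post(Object event) { listeners.forEach(l -> l.accept(event)); }
    }

    static class SaveOKEvent {}
    static class SaveFailedEvent {
        final Throwable cause;
        SaveFailedEvent(Throwable cause) { this.cause = cause; }
    }

    static CompletableFuture<Void> saveAsync(SimpleEventBus bus, Runnable saveToDb) {
        // Run the slow operation on another thread, then translate the outcome
        // into an event, mirroring addCallback(success, failure) above.
        return CompletableFuture.runAsync(saveToDb)
                .whenComplete((v, ex) -> {
                    if (ex == null) bus.post(new SaveOKEvent());
                    else bus.post(new SaveFailedEvent(ex));
                });
    }

    public static void main(String[] args) {
        SimpleEventBus bus = new SimpleEventBus();
        bus.register(e -> System.out.println("got event: " + e.getClass().getSimpleName()));
        saveAsync(bus, () -> { /* pretend to talk to the database */ }).join();
    }
}
```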

JGit Flow Maven plugin integration with Bamboo

JGit Flow is a good plugin to apply git-flow practice with Maven projects. Since it's a pure Java implementation, it's very easy to integrate with most CI servers.

However, if you are using Atlassian Bamboo, a few issues require workarounds.

Git repository url

Bamboo uses a fake Git repository after checkout; the repository's URL is something like file:///nothing, so JGit Flow cannot perform actual Git operations on it. You can:

1) Set repository url in plugin configuration

<configuration>  
    <defaultOriginUrl>[repository url]</defaultOriginUrl>
    <alwaysUpdateOrigin>true</alwaysUpdateOrigin>
</configuration>  

2) Use Git command to update repository url

${bamboo.capability.system.git.executable} remote set-url origin ${bamboo.repository.git.repositoryUrl}

Git repository authentication

You can use -Dusername and -Dpassword with the JGit Flow plugin to set the repository's username and password. To execute Git commands from a Bamboo shell script, a .netrc file with authentication details needs to be created. This can be done via the agent start script or using echo in an inline script.

machine bitbucket.org  
login <username>  
password <password>  
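Creating the .netrc file from a script can be sketched like this. The function name and the target-directory parameter are my own additions (in a real Bamboo inline script you would write to the build user's home directory), and the username/password values are of course placeholders.

```shell
#!/bin/sh
# write_netrc - create a .netrc file with Git hosting credentials.
# Usage: write_netrc <dir> <machine> <username> <password>
write_netrc() {
  dir="$1"; machine="$2"; user="$3"; pass="$4"
  cat > "$dir/.netrc" <<EOF
machine $machine
login $user
password $pass
EOF
  # Keep credentials private; some tools refuse a world-readable .netrc.
  chmod 600 "$dir/.netrc"
}
```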

Clean old release branches

After finishing a release with release-finish, the remote release branch is deleted by default, but the branch may still exist locally. These old release branches should be removed; otherwise the next release-start goal will fail.

${bamboo.capability.system.git.executable} fetch --prune --verbose

${bamboo.capability.system.git.executable} branch -vv | awk '/: gone]/{print $1}' | xargs ${bamboo.capability.system.git.executable} branch -d 2> /dev/null

echo 'stale branches deleted'  

Elasticsearch - Delete documents by type

If you want to delete documents in Elasticsearch by type using Java API, below are some options:

  • For Elasticsearch 1.x, use the deprecated prepareDeleteByQuery method of Client. 2.x has removed this method.
  • For Elasticsearch 2.x, use delete-by-query plugin.

Or use the scroll/scan API as shown below.

SearchResponse scrollResponse = this.client.prepareSearch(INDEX_NAME)  
        .setTypes(type)
        .setSearchType(SearchType.SCAN)
        .setScroll(new TimeValue(60000))
        .setQuery(QueryBuilders.matchAllQuery())
        .setSize(100)
        .get();
final BulkRequestBuilder bulkRequestBuilder = this.client.prepareBulk().setRefresh(true);  
while (true) {  
    if (scrollResponse.getHits().getHits().length == 0) {
        break;
    }

    scrollResponse.getHits().forEach(hit -> bulkRequestBuilder.add(
        this.client.prepareDelete(INDEX_NAME, type, hit.getId()))
    );
    scrollResponse = this.client.prepareSearchScroll(scrollResponse.getScrollId())
            .setScroll(new TimeValue(60000))
            .get();
}
if (bulkRequestBuilder.numberOfActions() > 0) {  
    bulkRequestBuilder.get();
}

Properties ordering of Groovy JsonSlurper parsing

Groovy JsonSlurper is a useful tool for parsing JSON strings. For a JSON object, the parsing result is a Map. In certain cases, we want the iteration order of the Map's properties to match the encounter order in the original JSON string.

By default, JsonSlurper uses a TreeMap, so the properties are actually sorted. Given the following program, the result will be obj => {a=0, x=2, z=1}.

import groovy.json.JsonSlurper;

import java.util.Map;

public class Test {  
    public static void main(String[] args) {
        String jsonString = "{\"obj\": {\"a\": 0, \"z\": 1, \"x\": 2}}";
        JsonSlurper jsonSlurper = new JsonSlurper();
        Map map = (Map) jsonSlurper.parseText(jsonString);
        map.forEach((k, v) -> System.out.println(String.format("%s => %s", k, v)));
    }
}

To keep the original property ordering, you can add -Djdk.map.althashing.threshold=512 as a JVM argument; then the output will be obj => {a=0, z=1, x=2}.

The reason is in the source code of groovy.json.internal.LazyMap, used by JsonSlurper (see it on GitHub). If the jdk.map.althashing.threshold system property is set, LazyMap uses a LinkedHashMap implementation instead of a TreeMap, which keeps the property ordering.

private static final String JDK_MAP_ALTHASHING_SYSPROP = System.getProperty("jdk.map.althashing.threshold");

private void buildIfNeeded() {  
   if (map == null) {
        /** added to avoid hash collision attack. */
        if (Sys.is1_7OrLater() && JDK_MAP_ALTHASHING_SYSPROP != null) {
            map = new LinkedHashMap<String, Object>(size, 0.01f);
        } else {
            map = new TreeMap<String, Object>();
        }

        for (int index = 0; index < size; index++) {
            map.put(keys[index], values[index]);
        }
        this.keys = null;
        this.values = null;
    }
}

Please note this solution should be treated as a hack, as it depends on Groovy's internal implementation details; the behavior may change in a future version of Groovy.

Note for Java 8

The jdk.map.althashing.threshold system property was removed in Java SE 8, but this hack still works on Java 8 because the implementation only checks for the existence of the system property and does not actually use its value.
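The ordering difference itself is plain JDK behavior and easy to demonstrate without Groovy: a TreeMap iterates in key-sorted order, while a LinkedHashMap preserves insertion order. A small demo (class and method names are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderDemo {
    // Insert the same keys (a, z, x) into the given map and
    // return its iteration order as a comma-separated string.
    static String order(Map<String, Integer> map) {
        map.put("a", 0);
        map.put("z", 1);
        map.put("x", 2);
        return String.join(",", map.keySet());
    }

    public static void main(String[] args) {
        System.out.println(order(new TreeMap<>()));       // a,x,z (sorted by key)
        System.out.println(order(new LinkedHashMap<>())); // a,z,x (insertion order)
    }
}
```

This mirrors why the JsonSlurper output changes: the same key/value pairs, stored in a different Map implementation, iterate in a different order.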

Install Ghost on CentOS 7

I just moved my personal blog from Jekyll on Heroku to Ghost on Digital Ocean. Although Digital Ocean provides a 1-click application image for Ghost, I decided that I wanted to install Ghost myself, so I can have more control over the instance and application.

Node version

After creating a droplet with CentOS 7, the first thing to install is Node.js. The recommended Node version for Ghost is >0.10.40, so I used nvm to install it.

When installing nvm, it's a good idea to set NVM_DIR to a shared path. By default, nvm installs to the current user's home directory, which could be the root user's home directory; this can cause file permission issues when Ghost is started as another user.

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | NVM_DIR="/var/nvm" bash  

Then use nvm install 0.10.43 to install Node. Now we can use which node to find the actual path of Node binary.

Service script

To make sure Ghost is started after a system restart, we need to add a service script. Create the file /etc/init.d/ghost with the following content. The script is based on what I found in this article.

The most important part of this script is the command that starts Ghost: /var/nvm/v0.10.43/bin/node index.js >> /var/log/ghost/ghost.log &. Here I use the Node binary installed by nvm to start Ghost, and run it as the ghost user.

Use chkconfig --add ghost to have the script run at startup. Use service ghost start to start Ghost and service ghost stop to stop it.

#!/bin/sh
#
# ghost - this script starts the ghost blogging package
#
# chkconfig:   - 95 20
# description: ghost is a blogging platform built using javascript \
#              and running on nodejs
#

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/var/nvm/v0.10.43/bin/node index.js >> /var/log/ghost/ghost.log &"  
prog="ghost"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

start() {  
    #[ -x $exec ] || exit 5
    echo -n $"Starting $prog: "
    # if not running, start it up here, usually something like "daemon $exec"
    export NODE_ENV=production
    cd /var/data/ghost/
    daemon --user=ghost $exec
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {  
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    pid=`ps -u $prog -fw | grep $prog | grep -v " grep " | awk '{print $2}'`
    kill -9 $pid > /dev/null 2>&1 && echo_success || echo_failure
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {  
    stop
    start
}

my_status() {  
        local base pid lock_file=

        base=${1##*/}

        # get pid
        pid=`ps -u $prog -fw | grep $prog | grep -v " grep " | awk '{print $2}'`

        if [ -z "${lock_file}" ]; then
        lock_file=${base}
        fi
        # See if we have no PID and /var/lock/subsys/${lock_file} exists
        if [[ -z "$pid" && -f /var/lock/subsys/${lock_file} ]]; then
                echo $"${base} dead but subsys locked"
                return 2
        fi

        if [ -z "$pid" ]; then
                echo $"${base} is stopped"
                return 3
        fi

        if [ -n "$pid" ]; then
                echo $"${base} (pid $pid) is running..."
                return 0
        fi

}

rh_status() {  
    # run checks to determine if the service is running or use generic status
    my_status $prog
}

rh_status_q() {  
    rh_status >/dev/null 2>&1
}



case "$1" in  
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    status)
        rh_status
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|status}"
        exit 2
esac  
exit $?  

Nginx

Install Nginx by following this guide.

Add the Ghost Nginx config as /etc/nginx/conf.d/ghost.conf. Also make sure the default server config is removed from /etc/nginx/nginx.conf.

server {  
    listen 0.0.0.0:80;
    server_name midgetontoes.com;
    access_log /var/log/nginx/midgetontoes.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://127.0.0.1:2368;
        proxy_redirect off;
    }
}

New Book - A Practical Guide for Java 8 Lambdas and Streams

This book is not the first book about Java 8 lambda expressions and streams, and it's definitely not the last. Java 8 is a platform upgrade that the community had been looking forward to for a long time, and lambda expressions and streams quickly gained popularity among Java developers. There are already a lot of books and online tutorials about them. This book tries to explain lambda expressions and streams from a different perspective.

  • For lambda expressions, this book gives a detailed explanation based on JSR 335.
  • For streams, this book covers the fundamental concepts of the Java core library.
  • This book provides how-to examples for lambda expressions and streams.
  • This book also covers the important utility class Optional.

Lambda expressions and streams are easy to understand and use. This book tries to provide some insights about how to use them efficiently.

Buy this book

New Book - Build Mobile Apps with Ionic and Firebase

With the prevalence of mobile apps, more and more developers want to learn how to build them. Developers can target the iOS or Android platforms, but learning Objective-C/Swift or Java is not an easy task. The web development languages (HTML, JavaScript and CSS) are easier to understand and learn. Apache Cordova makes it possible to build mobile apps with them by creating a new type of mobile app: hybrid mobile apps. Hybrid mobile apps actually run in an internal browser inside a wrapper created by Apache Cordova. With hybrid mobile apps, we can have a single code base for different platforms, and developers can use their existing web development skills.

The Ionic framework builds on top of Apache Cordova and provides out-of-the-box components that make developing hybrid mobile apps much easier. Ionic uses Angular as its JavaScript framework and has a nice default UI style with a look and feel similar to native apps. Firebase is a realtime database that can be accessed in web apps using JavaScript. With Ionic and Firebase, you only need to develop front-end code; you don't need to manage any back-end code or servers.

This book is an introductory, sample-driven guide to building hybrid mobile apps using Ionic and Firebase. In this book, we build a Hacker News client app that can view top stories in Hacker News, view the comments of a story, add stories to favorites, etc. This book covers various topics in mobile app development:

  • Local development environment setup
  • Ionic quickstart
  • Work with Firebase
  • State transition
  • Common UI components: lists, cards, modals, popups
  • Forms & inputs
  • User authentication
  • Publish apps

The source code of the sample app is available on GitHub. You can view screenshots of the sample app here.

Buy this book

NodeJS API proxy with CORS support

Our application's backend is Java-based and exposes a REST API; the frontend is AngularJS-based. During frontend development, we use Grunt connect to start the development server for CoffeeScript/LESS and static files. To let AngularJS access the API running on a different port, we need a proxy with CORS support, so I created a simple proxy server using connect and node-http-proxy.

The proxy code is very simple. In the code below, the API server is running on port 8080 and the proxy server on port 8000. The proxy server sets Access-Control-* headers to enable CORS support, and also adds a basic authentication header.

var connect = require('connect'),  
  httpProxy = require('http-proxy');

var app = connect();

var proxy = httpProxy.createProxyServer({  
  target: 'http://127.0.0.1:8080'
});

proxy.on('proxyReq', function(proxyReq, req, res, options) {  
  proxyReq.setHeader('Authorization', 'Basic YWRtaW46cGFzc3dvcmQ=');
});

proxy.on('error', function(e) {  
  console.log(e);
});

app.use(function(req, res, next) {  
  if (req.headers['origin']) {
    res.setHeader('Access-Control-Allow-Origin', req.headers['origin']);
    res.setHeader('Access-Control-Allow-Methods', 'POST, PUT, GET, OPTIONS, DELETE');
    res.setHeader('Access-Control-Max-Age', '3600');
    res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, Authorization, Content-Type');
  }
  if (req.method !== 'OPTIONS') {
    next();
  }
  else {
    res.end();
  }
});

app.use(function(req, res) {  
  proxy.web(req, res);
});

app.listen(8000);  
console.log('Proxy server started.')  

AngularJS needs to have cross-domain requests enabled.

app.config(function($httpProvider) {  
  $httpProvider.defaults.useXDomain = true
});

Then you should be able to access the API.
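The header-setting logic in the middleware above can also be factored into a small pure function, which makes it easy to test in isolation. This is a sketch of my own; corsHeaders is an invented name, and the header values simply mirror those used in the proxy code.

```javascript
// Build the CORS response headers for a given request Origin header.
// Returns an empty object when there is no Origin (a non-CORS request).
function corsHeaders(origin) {
  if (!origin) return {};
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'POST, PUT, GET, OPTIONS, DELETE',
    'Access-Control-Max-Age': '3600',
    'Access-Control-Allow-Headers': 'X-Requested-With, Authorization, Content-Type'
  };
}
```

The middleware would then iterate over the returned object and call res.setHeader for each entry, keeping the request-handling code free of header details.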