Thursday, December 20, 2012

Set up Java 7, Eclipse and Netbeans on Retina Macbook Pro

Updates:

- Bad news: Netbeans 7.4 now requires Java 7 to run, which means the hack to force the IDE to use Apple's Java 6 no longer works.

- Good news: According to Bug 215141, the Retina display issue is finally fixed in JDK 7u40+ and early access build of JDK 8! I just installed JDK 7u40 and Netbeans 7.4 RC2, and I can confirm that Retina display finally works!!!

I can happily announce that you can skip the rest of the post now. Simply download the latest JDK 7 (7u40+) or JDK 8, then install either Eclipse or Netbeans, and Retina display should work as expected. No more blurriness!!!

Just got my laptop upgraded by my generous employer; it is a Retina Macbook Pro (rMBP). It is the highest-performance computer I have ever used, and the most expensive one too.

Here are the steps to set up Java 7 (J2SE 7u10), Eclipse (4.2.1) and Netbeans (7.2.1) on the rMBP as of this writing.

1. Install Java 7.


Starting from Mountain Lion, Java is not installed by default. So, if you run a Java application, you might be prompted to install Apple's Java 6, as this screenshot shows:


Unless you have Java applications that must use Java 6, I really don't see the point of installing both Java 6 and 7, especially now that Oracle already provides J2SE 7 for Mac. There was a time when Oracle had not officially released J2SE 7 for Mac and I had to install OpenJDK to try out the new Java 7 features.

So, simply go to the Oracle Java 7 download page and download the "Mac OS X x64" package; as of this writing it is jdk-7u10-macosx-x64.dmg. Install it and it will end up in:

/Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk 
To verify that you have installed it successfully, simply type "java -version" and you should see the following:

$ java -version
java version "1.7.0_10"
Java(TM) SE Runtime Environment (build 1.7.0_10-b18)
Java HotSpot(TM) 64-Bit Server VM (build 23.6-b04, mixed mode)
You also need to add the JAVA_HOME environment variable by inserting this line into ~/.bash_profile:

export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
java_home is a command that returns the Java home directory for the current user; -v 1.7 filters for Java version 1.7.

2. Install Eclipse.


Installing Eclipse without installing Java 6 is quite tricky. I am simply surprised that Eclipse does not support Java 7 out of the box yet. If you haven't installed Java 6, Eclipse will simply show the above error message and won't start at all. There are bugs filed for this issue (bug 382972, bug 374212, etc.) and it is very disappointing that this problem has not been solved yet.

After some online searching, here is the hacky workaround:

- Download and install Eclipse; I got Eclipse 4.2.1.

- Create a symbolic link with the name of a Java 6 JDK to trick Eclipse into thinking you have Java 6 installed:

sudo mkdir /System/Library/Java/JavaVirtualMachines
sudo ln -s /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk /System/Library/Java/JavaVirtualMachines/1.6.0.jdk
After this hack, Eclipse will start correctly (you will need to do the ctrl+click trick since it is not downloaded from the App Store and is considered to be from an untrusted source).


The above screenshot shows the Installed JREs in Eclipse Preferences, so funny that the symbolic link also shows up ;-)

To be frank, I am really disappointed by this Eclipse installation process. First, it should be installable right from the App Store. Second, it should be bundled with JDK/JRE 7 directly (there are actually bugs filed for that, e.g. bug 374791, but they are not fixed yet).

3. Install Netbeans.


The installation process for Netbeans is much smoother than Eclipse's. You simply download and install it, it automatically picks up the Java 7 installed in /Library/Java/JavaVirtualMachines/, and it works out of the box! No wonder lots of people have actually switched from Eclipse to Netbeans.

4. The Retina Fix.

Note: currently, this Retina fix only works for Apple Java 6, not Oracle Java 7. Hopefully a similar fix for Java 7 will be released soon. However, if you are running Apple Java 6, you can try this fix now. This is the bug to track the progress of Oracle fixing the issue.

Maybe because the rMBP is too expensive, many applications do not yet support Retina display out of the box. In the case of Eclipse and Netbeans, you need to hack their Info.plist files a bit; here is how:

- First, install Apple's JDK 6. You will need to remove the symbolic link created in Step 2, then download the JDK from Apple and install it.

- Locate Eclipse.app and Netbeans.app; they are the ones that you double-click to run. Eclipse.app should be in the directory you untarred from the Eclipse tar.gz package. Netbeans by default installs into /Applications/NetBeans.

- Right-click on the .app and click "Show Package Contents". Use a text editor to edit Contents/Info.plist. Insert two lines before the closing </dict></plist> tags:

...
    <key>NSHighResolutionCapable</key>
    <true/>
  </dict>
</plist>

- Now, we need to make Eclipse and Netbeans run using Apple's JDK 6 instead of the default Oracle Java 7.

To make Eclipse use Apple's Java 6 instead of Java 7, you need to update eclipse.ini. Using the same way to show the contents of Eclipse.app, edit Contents/MacOS/eclipse.ini and add the -vm option:

-vm
/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/bin/java
An alternative is to edit Contents/Info.plist and add the following inside <key>Eclipse</key><array>...</array>:
<string>-vm</string><string>/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java</string>
To verify that Eclipse started with the correct JDK/JRE, go to "About Eclipse" -> "Installation Details" -> "Configuration" in Eclipse and check "java.runtime.version".

Maybe because the Mac caches the previous Eclipse.app, changing Contents/Info.plist does not take effect immediately. You need to make a copy of Eclipse.app and name it something like Eclipse-retina.app. Double-click Eclipse-retina.app and enjoy Eclipse in Retina.

Here is a comparison of the info for Eclipse.app and Eclipse-retina.app (right-click, then select "Get Info"); notice that for Eclipse-retina.app, the "Open in Low Resolution" checkbox is unchecked!




To make Netbeans use Apple's Java 6, similarly edit the Netbeans config file. Show the contents of Netbeans.app, then edit Contents/Resources/Netbeans/etc/netbeans.conf and add the netbeans_jdkhome option:

netbeans_jdkhome="/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home"
After saving the edit, restart your Mac and Netbeans. You should see Retina display working with Netbeans now. No idea why Netbeans does not need to be copied into a new app for this to work, but it is certainly much easier than Eclipse.



Monday, October 1, 2012

Use HTTP PATCH method for partial updates

Had an interesting problem at work: designing a RESTful API for partial updates of resources. We want an API to partially update an existing resource, e.g. some existing properties get updated or deleted, some new properties get added, etc. The request body is in JSON format.

There are a few choices:

  • Use POST, add a query parameter to capture the operation (delete, add, update), put the affected properties in the POST body.
  • Use PUT, similar to the above POST design.
  • Use POST, but instead of using a new query parameter, design the body to capture the operation and the data for the operation. For example, use delete, add, and update as the first level properties, then put the actual properties under the corresponding operations.
  • Use PUT, similar to the above POST design.
Then a colleague suggested using PATCH for partial updates to make it really RESTful. Yeah, I vaguely remembered there is an HTTP PATCH method, but what does it do and why should we care?

First, let's see the difference between POST and PUT. One misconception is that POST is used to create a resource and PUT is used to update a resource. Actually, both POST and PUT can be used for creation. Here are the major points based on my study:
  • PUT is idempotent, POST is not. This means you can send the same PUT request multiple times and the result should remain the same as if you had sent it only once.
  • A PUT URL uniquely identifies the representation in the request body; a POST URL identifies the service that processes the request body. For example, a PUT URL is like the address on a letter, which uniquely identifies the recipient, while a POST URL is like the address of the post office, which identifies the service that processes the mail. The result of POST request handling on the server side does not necessarily create new representations/resources that can be identified by a URL.
  • A PUT response is not cacheable; in addition, a PUT response should invalidate the cached copies of the representation identified by the PUT URL in the intermediate caches the response passes through. A POST response, however, is cacheable if it contains "freshness" cache control headers. A cached POST 303 response contains a Content-Location header redirecting the User Agent to fetch the cached copy.
So, a PUT URL identifies a complete representation to be updated or created. For example, you can use PUT to overwrite an entire representation. How about partial updates, which are more common? If you only want to do a partial update, according to the HTTP spec, you need to use a different URL that identifies the partial content and send the partial content as the request body. Or you need to use the PATCH method (defined in RFC 5789).

Mark Nottingham has a nice piece explaining why PATCH is good for a RESTful design. He is also working on a JSON PATCH draft (rev 05 was updated days ago), which defines the semantics in JSON format, exactly what we are looking for ;-) Note it has a new content type of "application/json-patch".

The PATCH request body should contain operations (such as add, remove, replace, etc.), a path relative to the URL that identifies the entire representation, and a value. Here is an example from the JSON PATCH draft:

[
       { "op": "test", "path": "/a/b/c", "value": "foo" },
       { "op": "remove", "path": "/a/b/c" },
       { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
       { "op": "replace", "path": "/a/b/c", "value": 42 },
       { "op": "move", "path": "/a/b/c", "to": "/a/b/d" },
       { "op": "copy", "path": "/a/b/c", "to": "/a/b/e" }
]

In fact, PATCH is going to be the main method for updates starting in Rails 4.0. However, the PATCH method may not be as well supported as POST or PUT. In case your framework or server does not support it, you will probably have to fall back to one of the choices mentioned at the beginning of the post.
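
For illustration, here is a minimal Node.JS sketch (my own example, not from the draft) of sending such a JSON Patch document with the built-in http module; the host, path, and patch operations are hypothetical:

var http = require('http');

// a hypothetical JSON Patch document for a partial update
var patch = JSON.stringify([
    { "op": "replace", "path": "/email", "value": "new@example.com" },
    { "op": "remove", "path": "/nickname" }
]);

var options = {
    host: 'api.example.com',   // hypothetical host
    path: '/users/123',        // URL identifying the entire representation
    method: 'PATCH',
    headers: {
        'Content-Type': 'application/json-patch',
        'Content-Length': Buffer.byteLength(patch)
    }
};

var req = http.request(options, function (res) {
    console.log('status: ' + res.statusCode);
});
req.on('error', function (err) { console.error(err); });
req.write(patch);
req.end();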



Friday, July 13, 2012

Setting up VJET as your Eclipse Node.JS IDE

First of all, after going through the setup process and finding out some of VJET's limitations, I am not really sure I'd recommend it to anyone. The Eclipse JavaScript Development Tools (JSDT), part of the Web Tools Platform (WTP), is pretty much enough as a JS IDE.

Here is a quick rundown of the pros and cons of VJET (I only spent 1-2 hours playing with it):


pros:
- free and backed by eBay
- relatively active (last release was 3 months ago)
- code assist for Node.JS and many other common JS libraries

cons:
- the Node.JS type library is outdated, currently from node v0.4 I believe
- causes launcher issues with the Chrome JS debugger (this is the biggest con to me)
- does not support Eclipse 4.2 Juno yet (has some plugin dependency on Jetty, so it might not be caused by VJET itself)

For those still interested in trying it out, here are the quick instructions to set it up:



1. download Eclipse Indigo (VJET does not support Juno yet); I picked the JavaScript development distro

2. install VJET plugin:
- add new update site: https://www.ebayopensource.org/p2/vjet/eclipse/
- install VJET as Eclipse plugin

3. download VJET JavaScript Type Libraries in zip files, each zip is an Eclipse project
- http://www.ebayopensource.org/p2/vjet/typelib/
- you can download only NodejsTL.zip if you don't need the others (it is for node v0.4; last modified 4/18/2011)

4. import NodejsTL.zip as project into the workspace
- File->Import...
- General->Existing Projects into Workspace
- Select archive file->Browse to NodejsTL.zip

5. create a sample helloworld VJET project and add the NodejsTL project to its build path
- create a new vjet project helloworld
- select helloworld project
- right-click -> Build Path -> Configure Build Path
- under the Projects tab, add the NodejsTL project to the build path

6. configure an external tool to run under Node.JS (assuming you have node installed already)
- configure external tool (see screenshot)
- select the JS file to run
- run the external tool and you should see output in the Eclipse console (a sample script is shown below)
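
For a quick smoke test, here is a trivial script you could point the external tool at (the file name hello.js is my own choice, not part of the VJET instructions):

// hello.js - minimal script to verify the Node.JS external tool setup
console.log('Hello from Node.JS ' + process.version);
console.log('script arguments: ' + process.argv.slice(2).join(' '));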



Bonus: JSHint Plugin


The JSDT plugin provides some realtime syntax validation, but there is a really nice jshint plugin as well. The best part is that it allows per-project checks; it even allows you to switch to jslint instead of jshint (not sure if you would want to).

Note that the JSHint plugin is not enabled by default; you need to turn it on per project:
- select the project
- open project properties, select JSHint and select JS files (see screenshot)



Thursday, June 28, 2012

Careful with synchronous operations in async iterators

We use the async flow-control library for Node.JS a lot at work. It provides various convenient functions to guide you through the async programming mess. But lately, we learned an interesting lesson using the async library:

Better to avoid synchronous operations inside the iterator; otherwise, when the number of items to iterate over is big enough, you will exceed the maximum call stack size.

For example, this code snippet gives the basic idea (although realistically, you don't really need async just to print a large array ;-)

var async = require('async');
var a = [];

for (var i = 0; i < 3040000; i++) {
    a.push(i);
}

async.forEachSeries(a,
    function (item, cb) {
        console.log(item); // non-async operation
        cb();
    },
    function () {
        console.log('all done');
    }
);


When you run it, you will get:

0
1
2
...
RangeError: Maximum call stack size exceeded

The problem here is that we have a synchronous operation inside the iterator: because the callback is invoked synchronously, each iteration adds new frames to the call stack, which eventually maxes out.

If you really cannot avoid mixing synchronous and asynchronous code in an iterator (most of the time you can!), one simple workaround is to wrap the synchronous code inside a process.nextTick call, so the current stack frame is cleaned up instead of the stack growing with each iteration.


var async = require('async');
var a = [];

for (var i = 0; i < 3040000; i++) {
    a.push(i);
}

async.forEachSeries(a,
    function (item, cb) {
        process.nextTick(function () {
            console.log(item);
            cb();
        });
    },
    function () {
        console.log('all done');
    }
);

This issue applies not only to forEachSeries, but also to other functions like mapSeries, whilst, until, etc. Here is a more detailed discussion thread, where people proposed a patch to add async.unwind to fix the error.

Wednesday, April 25, 2012

JSLint: don't make functions within a loop

Interestingly, I got the JSLint warning "don't make functions within a loop" today, scratched my head a bit, and realized it is because functions created inside a loop are very error-prone. You expect each function to capture a different value per iteration, but they all end up with the value from the last iteration. This post explains it well with a straightforward example.
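
As a quick illustration (my own example, not from the linked post), here is the pitfall and one common fix:

// pitfall: every function created in the loop closes over the same variable i
var callbacks = [];
for (var i = 0; i < 3; i++) {
    callbacks.push(function () {
        console.log(i); // prints 3, 3, 3 -- not 0, 1, 2
    });
}
callbacks.forEach(function (cb) { cb(); });

// fix: create the function in a separate scope so each one captures its own value
function makeCallback(value) {
    return function () {
        console.log(value); // prints 0, 1, 2
    };
}
var fixed = [];
for (var j = 0; j < 3; j++) {
    fixed.push(makeCallback(j));
}
fixed.forEach(function (cb) { cb(); });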

Another post suggests that creating functions inside a loop also degrades performance; quite interesting experiments.

Mockery: easy mocking in Node.JS

Many JavaScript mocking frameworks are not specifically designed for Node.JS, so when we need to mock built-in Node.JS modules or npm modules, things get quite ugly and hacky.

One approach is to expose __module from the module and replace the things you want to mock with your mock objects. I am not sure if this would work for built-in modules, and it is quite dangerous to mess with __module. Here is an example borrowed from this StackOverflow post:

// underTest.js
var innerLib = require('./path/to/innerLib');

function underTest() {
    return innerLib.toCrazyCrap();
}

module.exports = underTest;
module.exports.__module = module;

// test.js
function test() {
    var underTest = require("underTest");
    underTest.__module.innerLib = {
        toCrazyCrap: function() { return true; }
    };
    assert.ok(underTest());
}


Another approach is to use Dependency Injection (DI). Basically, you pass in the mock object through an overridden module.exports. This post explains it well with an example, which I borrow below. But do you really want to change the way you invoke require? What about when you are using someone else's library that you cannot even change? What if there are many mock objects you need to pass in?

// overwrite the require with one that accepts a mock object
module.exports = function(http) {
  http = http || require('http'); // fall back to the real module when no mock is given
  // private functions and variables go here...

  //return the public functions
  return {
    twitterData: function(callback) {
     http.createClient(...etc...);
    }
  };
}

// use it *normally*
var twitter = require('twitter')();

// use it in the test
var mockHttp = { createClient: function() { assert(something); } };
var twitter = require('twitter')(mockHttp);
//do some tests.


So, finally, a more practical solution needs to modify how Node looks up and loads modules. We were initially using hacks like these:

// Mock native modules, e.g. http
var mockHttp = { request : function () {} };
require.cache['http'] = { exports: mockHttp };

// Mock non-native modules
var path = './test';
var absPath = require.resolve(path);
var mockTest = {...};
require.cache[absPath] = { exports : mockTest };

But this is not always reliable: you need to pay attention to the order in which you overwrite the modules, be careful with nested requires, and it is difficult to reuse the mock objects between different tests.


Finally, a colleague, Martin Cooper, implemented an elegant solution that makes unit testing with mock objects a breeze. It is called Mockery.

- It supports nested require cases.
- It warns about modules that are not mocked out (you can use registerAllowable if you are sure you don't need to mock them).
- It manages the life cycle of mock objects cleanly. You can easily use different mock objects for the same module in different tests (Node only loads a module or mocked module once per process, and Mockery can help clean it up).

Here is a quick example of how to use it (with YUITest, but you can use Mockery with any testing framework of your choice).

- YUITest can be installed through "npm install yuitest".
- Mockery is installed through "npm install mockery".
- To run the test, do "node node_modules/yuitest/cli.js fsclient.test.js".

///////////////////
//fsclient.js
///////////////////
var fs = require('fs');

function getDate() {
    var today = new Date();
    return today.toUTCString();
}

function getFileContent(filename, callback) {
    fs.readFile(filename, function (err, content) {
        if (err) {
            callback(err);
        } else {
            callback(null, getDate() + "\n" + content);
        }
    });
}
module.exports.getFileContent = getFileContent;
///////////////////
//fsclient.test.js
///////////////////
var YUITest = require('yuitest');
var Assert = YUITest.Assert;
var TestCase = YUITest.TestCase;
var mockery = require('mockery');
var sut = '../fsclient';
var client;

var fsMock = {
    readFile: function (filename, callback) {
        if (filename === 'error') {
            callback(
                new Error('error reading file: ' + filename)
            );
        } else {
            callback(null, 'file content: hello!');
        }
    }
};

var tc = new TestCase({
    'name': 'demo yuitest testcase for fs mocking',

    setUp: function () {
        mockery.enable();
        //replace fs with our fsMock
        mockery.registerMock('fs', fsMock);
        //explicitly telling mockery using the actual fsclient is OK
        //without registerAllowable, you will see WARNING in test output
        mockery.registerAllowable('../fsclient');
    },

    tearDown: function () {
        mockery.deregisterAll();
        mockery.disable();
    },

    testGetFileContentError: function () {
        client = require(sut);
        client.getFileContent('error', function (err, content) {
            Assert.isInstanceOf(Error, err);
            Assert.isTrue(err.message.indexOf('error reading file') !== -1);
        });
    },

    testGetFileContentSuccess: function () {
        client = require(sut);
        client.getFileContent('success', function (err, content) {
            Assert.isNull(err, 'should not get error');
            Assert.areSame((new Date()).toUTCString() + "\nfile content: hello!", content);
        });
    }
});

YUITest.TestRunner.add(tc);
YUITest.TestRunner.run();

There are several other Node.JS mocking frameworks, like node-sandboxed-module and injectr, which are also worth a look.


Martin also has another weapon called Sidedoor, which exposes private functions that are not exported publicly. It really helps to test the code thoroughly and improve your code coverage. When your boss tells you code coverage needs to be 90%+ and some error branches are really hard to mock, what do you do?

Use Mockery+Sidedoor!


Other references:

- "Testing private state and mocking dependencies" by Vojta Jina
- "Mockery: hooking require to simplify the use of mocks" discussion thread on Node.JS group
- YUITest, now supporting Node.JS testing as well; it also provides the yuitest-coverage tool, which generates code coverage reports that integrate with a Hudson/Jenkins CI environment
- Node.JS module: exports vs. module.exports

Wednesday, February 15, 2012

Janus with jslint Vim plugin

Nowadays, I write quite a bit of JavaScript and use the JSLint command-line tool quite a lot. A co-worker recommended a nice Vim plugin that validates your JavaScript code as you edit and when you save.

While I was trying to install it for Janus (a MacVim distribution that I use), I ran into two issues:

1. By default, when you run rake inside the cloned git directory, the plugin gets installed into ~/.vim. For Janus, user-customized plugins should go into ~/.janus, and Janus will load them automatically. For details, please check out the Customization section of the Janus documentation.

So, to work around this:

- create a jslint directory in ~/.janus
- edit the Vim plugin's Rakefile and replace File.expand_path("~/.vim") on line 38 with File.expand_path("~/.janus/jslint"); you get the idea
- run rake from inside the Vim plugin directory and it should install into the correct Janus directory

2. The second issue is that I kept getting a warning when I started trying out the plugin, something like "s:cmd" not defined. I did some poking around, and it seems ftplugin/javascript/jslint.vim tries to find a JavaScript interpreter (lines 62 to 75 for *UNIX systems) and somehow fails.

I am on Snow Leopard, which comes with an acceptable interpreter, "jsc", at /System/Library/Frameworks/JavaScriptCore.framework/Resources/jsc, so I am not really sure why it did not work. I ended up just installing Node.JS (for Mac users, I highly recommend using Homebrew; just follow the instructions here) and adding node to my PATH, and that took care of the errors.

Now, enjoy the jslint plugin and be a good JS developer ;-)


P.S. For people not happy with Crockford's personal style rules (some of them don't make sense to me either), you can update the options in ~/.jslintrc (see examples on the jslint.vim site), or simply use the jshint Vim plugin instead. JSHint is a more relaxed and reasonable fork of JSLint.

Sorry for going off topic, but it is quite funny to read about why the original developer forked JSLint; the comments are especially entertaining. For example, Crockford's response to JSHint:
When asked for his "feelings on JSHint" Crockford replied "There are many stupid people in this world, and now there is a tool for them."

Saturday, January 28, 2012

Lua Development Tools Koneki, IDE based on Eclipse

Just got a comment from the developers of the Lua IDE Koneki on my previous post about setting up an Eclipse-based Lua IDE. So I gave it a try and it is quite neat; I really hope the developers keep it moving and make it a great default IDE for Lua developers ;-) As of now, I feel it is not yet as convenient as the LuaEclipse setup I posted about a while back. But the project is under active development and will get better every day, I am sure.

Here is a quick rundown of what I have tried so far:

1. Installation: they offer both a standalone package and an update site; I installed through the update site following their instructions. I am on Eclipse 3.7.1, btw. I also have lua and luarocks installed through Homebrew (by default they are installed in /usr/local/bin).

2. Create a new Lua project: create a new Lua project and add source files under src, similar to typical Java projects.

3. Ready to run? I felt stuck at this step at first since there is no launch configuration that lets me configure the local Lua environment. After running through several threads, it seems Koneki does not support local launch configurations yet; it only supports a "remote debug launch configuration".

But the developer also provides a workaround using "External Tools", and I got it to run and show the results in the Console view. This is the sample configuration (you need to select main.lua before running this external tool configuration). For the meaning of Eclipse variables like ${workspace_loc} and ${resource_loc}, see the Eclipse external tools documentation.


4. Debugging. LDT supports remote debugging via DBGP; you can follow the LDT user guide to set it up.

Many thanks to the LDT developers for this nice IDE. I wish a launch configuration could be added so beginners like me can get started with Lua and LDT quickly. Also, I wish the debugging and remote debugging configuration could be integrated into LDT.

Monday, January 23, 2012

Unbound function wrappers

I was reading "JavaScript Garden" (highly recommend for JS beginner/intermediate) and could not understand the concept of the "fast, unbound wrappers" for functions. See Function arguments for the example.

function Foo() {}

Foo.prototype.method = function(a, b, c) {
    console.log(this, a, b, c);
};

// Create an unbound version of "method" 
// It takes the parameters: this, arg1, arg2...argN
Foo.method = function() {

    // Result: Foo.prototype.method.call(this, arg1, arg2... argN)
    Function.call.apply(Foo.prototype.method, arguments);
};


Luckily, I found this StackOverflow post that explains the idea behind it. It took me a while to wrap my head around Function.call.apply ;-)

So, the basic idea is that we have a function defined on a class, but we want to use it without binding it to a specific object, somewhat like a static method in Java. So, instead of creating an object and invoking the function on that object, we define the function as a property of the constructor itself, not on its prototype, and pass the receiver in explicitly, as the sketch below shows.
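
Here is a minimal usage sketch (my own example, not from JavaScript Garden) showing how the unbound wrapper gets called:

var foo = new Foo();

// the normal bound call: `this` inside method is foo
foo.method(1, 2, 3);                  // logs: foo, 1, 2, 3

// the unbound wrapper: the receiver is passed explicitly as the first argument
Foo.method(foo, 1, 2, 3);             // logs: foo, 1, 2, 3
Foo.method({ name: 'bar' }, 4, 5, 6); // works with any object as `this`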