Abstract Distractions by Neal Ford

Neal Ford distills his personal experience into ten valuable lessons about the abstractions we use in programming. I just watched his presentation on abstraction distractions, and I think it is worth the time. Here is a summary of the key points he presents:

Lesson #1 Don’t mistake the abstraction for the real thing.

Lesson #2 Understand one level below your usual abstraction.

Lesson #3 Once internalized, abstractions are hard to shake off.

Lesson #4 Abstractions are both walls and prisons.

Lesson #5 Don’t name things that expose underlying details.

Lesson #6 Your abstraction isn’t perfect.

Lesson #7 Understand the implications of rigidity.

Lesson #8 Good APIs are not merely high-level or low-level; they’re both at the same time.

Lesson #9 Generalize the 80% cases; get out of the way for the rest.

Lesson #10 Don’t be distracted by your abstractions.

The future of a missed deadline

My recent paper was accepted at COORDINATION 2013.

In this paper, we introduce a real-time actor-based programming language and provide a formal but intuitive operational semantics for it. The language supports a general mechanism for handling exceptions raised by missed deadlines and the specification of application-level scheduling policies. We discuss the implementation of the language and illustrate the use of its constructs with an industrial case study from the distributed e-commerce and marketing domain.

PDF Version

REST as C in MVC

It’s been a couple of years since REST-style HTTP resources became one of the first choices for exposing HTTP services on different platforms. They are often referred to as RESTful web services. The fact that the REST style makes it much simpler to expose a service can also have a side effect: one may gradually drift away from the standard HTTP semantics, specified in RFC 2616, when implementing a REST service. Let’s consider an example. I have an interface that defines an abstraction for a bank account service:

interface BankAccountService {
    double getBalance(String id);
    double deposit(String id, double amount);
    double withdraw(String id, double amount);
}

Quite simple. What implementations of this interface come to mind?

  • A data-backend implementation such as JPABankAccountService
  • An in-memory (dummy) implementation for testing purposes such as InMemoryBankAccountService
  • A REST implementation such as BankAccountServiceResource

So, what’s wrong with the third option? It looks neat: it serves as a true implementation of the interface and, at the same time, exposes the service through REST. Let’s have a closer look.

When thinking in resources, I should be able to model a bank account as a resource. To identify a bank account, I should probably use the account number; so the path should look roughly like /accounts/112233, which resolves to the REST resource identified by that account number. Having the resource, I can get the balance and perform other operations, right? Everything looks fine until, for instance, I use a path like the above for which there is no bank account. Per the RFC, this should produce a standard 404 response code. Bam! I’m stuck! My implementation observes the interface, whose methods return doubles, so it cannot return HTTP response codes!

The problem is rooted in the fact that a REST resource should not directly implement an interface’s behavior. REST should be treated like the “controller” layer in MVC. A controller is a layer over different service implementation objects, so it should not itself provide the service operations (update the model). So, to have a REST resource for the bank account service, you should already have a service implementation. This actually makes the REST resource more flexible, as you can have different REST resources over different implementations. For instance, we can have a REST resource that uses the JPA backend to provide the services:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

public class BankAccountServiceResource {
    private final BankAccountService service;

    public BankAccountServiceResource(BankAccountService service) {
      this.service = service;
    }

    @GET
    @Path("/accounts/{id}")
    public Response getBalance(@PathParam("id") String id) {
      double balance = service.getBalance(id);
      if (balance == -1) {
        // No such account: the resource is now free to answer with HTTP semantics.
        return Response.status(Response.Status.NOT_FOUND).build();
      }
      return Response.ok(balance).build();
    }
}
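
With this shape, putting a different backend behind the same resource is trivial; for instance, wiring in the hypothetical JPABankAccountService from the list above:

BankAccountService backend = new JPABankAccountService();
BankAccountServiceResource resource = new BankAccountServiceResource(backend);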

I wrote this piece because I’ve come across many mindsets, my own included, that start by exposing a REST resource directly from the business interface.

My story with Git and Subversion

Subversion is the main code repository at the company I currently work for. The same story has happened to me as to many other people: maintaining Subversion branches is difficult and annoying!

When I searched the net for something to use on top of Subversion, I found a huge amount of content explaining how to maintain branches on the server side. What I really needed was different:

  • I do not have administrative privileges, so I cannot touch the Subversion server in this regard.
  • I did not aim to completely mix Git with Subversion.
  • I want to keep Subversion as my main code repository, and I need to maintain local branches to track my development with feature branching and other useful concepts.
  • When my local work is done, I just update the Subversion repository with my code and optionally purge the local Git branches.

During my searches, I came across a very useful post by Rein Henrichs; it talks about how to use Git’s branching features to manage branching workflows in agile teams. The good thing about such a workflow is that it says nothing about the server side of the code repository. After some playing around, I came up with a workflow based on the following principles:

  • I do not use git-svn extension utility.
  • I do not use Git on top of my Subversion.
  • The main code repository is Subversion.
  • I have a local Git server to maintain my local changes.

So, my current daily workflow is roughly as follows. I start by checking out code from the main Subversion repository:

$ svn co svn+ssh://server/project project

On the side, I have installed an instance of GitLab on a machine in the network (let’s call it the local Git server) and created a project for the code there. Next, I initialize a Git repository inside the Subversion checkout and point it at GitLab:

/project $ git init
/project $ git remote add origin <your-gitlab-url>/project.git

Having the project defined in GitLab, I can now push my original Subversion checkout as the initial commit to the master branch of my local Git server:

/project $ git add . && git commit -m "Initial import from Subversion"
/project $ git push origin master

It goes without saying that .git is ignored in the Subversion configuration and .svn likewise in the Git configuration. So, in the big picture, I have a code base that is updated from the main Subversion repository, worked on, pushed to a local Git server, and finally committed back to the main Subversion repository.

Here comes the sweet part! Git branches are managed in a way that Subversion has no clue about. So, I just follow Rein’s post on how to create and maintain local branches for the different tickets I work on. Occasionally, when there are Subversion updates, I simply do the following (sketched in commands after the list):

  • Checkout master
  • Push the Subversion updates to the origin master
  • Switch to a local branch
  • Perform a git rebase origin/master
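
A minimal sketch of that update cycle, where MY_LOCAL_BRANCH is the branch in question and the commit message is a placeholder:

$ git checkout master
$ svn update
$ git add -A && git commit -m "Sync with Subversion trunk"
$ git push origin master
$ git checkout MY_LOCAL_BRANCH
$ git rebase origin/master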

Merging is really that simple, and it keeps the local branches just as up-to-date as the main Subversion repository. Again following Rein’s post, when my local work is done I simply do the following (again sketched in commands below):

  • Make sure the current branch is up to date with master
  • Perform git rebase -i origin/master, choosing the commits that I really want to keep
  • Checkout master
  • Perform a final git merge MY_LOCAL_BRANCH
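
In commands, the finishing sequence looks roughly like this (the commit message is again a placeholder):

$ git checkout MY_LOCAL_BRANCH
$ git rebase -i origin/master
$ git checkout master
$ git merge MY_LOCAL_BRANCH
$ svn commit -m "Finished MY_LOCAL_BRANCH"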

And now I have a master (the Subversion trunk) that is ready to be committed to the Subversion repository. Since I started following this workflow, I must admit that life with Subversion has become much, much more enjoyable. And I haven’t even gotten started on all the other good stuff Git gives me. More interestingly, people seem to be getting interested in trying GitLab together and seeing how branching works. I look forward to this as motivation for the team to migrate to Git very soon!

How ‘>’ instead of ‘>=’ can destroy everything

Lately, I’ve been working on a maximization algorithm that looks trivially simple. A method is supposed to choose a _server_ to run some job on the cloud. Very simple: each server gets a score, and the one with the highest score is chosen:

Server best = null;
double bestFitness = Double.NEGATIVE_INFINITY;
for (Server s : servers) {
  double fitness = computeFitness(s);
  if (fitness > bestFitness) {
    // update the best server
    bestFitness = fitness;
    best = s;
  }
}

Happily, I committed the code, and the product shipped to the live environment. BAM! The first observation was that only one server was ever being selected! There were plenty of servers to choose from, so why was that one always picked?

The explanation was simple. Suppose the first server in the collection gets the highest score under certain circumstances, while servers after it score at least the same. Since “>” is used, the best server never gets updated on a tie, so the first one wins every time. The algorithm should guarantee that, in the worst case, it behaves like round-robin. That’s where “>=” comes into the picture: when there are ties in server scores, the last highest is selected, spreading the selection over the servers. Quite an experience with something that was treated as trivial in all those bachelor’s algorithm courses!
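
The fix is a one-character change in the comparison; a sketch reusing the same Server and computeFitness names from above:

Server best = null;
double bestFitness = Double.NEGATIVE_INFINITY;
for (Server s : servers) {
  double fitness = computeFitness(s);
  // '>=' lets a later server with an equal score take over the selection.
  if (fitness >= bestFitness) {
    bestFitness = fitness;
    best = s;
  }
}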

Maven 3 and java.net Maven 1 artifacts

Recently I needed to debug some features in JOSSO, so I checked out the source and started to build the project. At some point, the build complains that it cannot resolve an artifact because it wants to download a dependency from a repository with the java.net Maven 1 layout:

(http://download.java.net/maven/1): No connector available to access repository java.net

After a couple of hours of searching and trying different things, the workaround for me was to download Apache Maven 2 and use it to build the project; I had originally tried building the source with Apache Maven 3. The point is that although a project may mark a repository as having the legacy layout, Maven 3 no longer supports that layout.
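
For context, a repository declaration using the old layout looks roughly like the following in a pom.xml; the legacy layout element is what Maven 2 still honors and Maven 3 rejects (the id is made up):

<repositories>
  <repository>
    <id>java.net-m1</id>
    <url>http://download.java.net/maven/1</url>
    <layout>legacy</layout>
  </repository>
</repositories>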

LDAP authentication with restricted security principal

Lately, I have been setting up Atricore’s ID Provider with LDAP authentication, and I had a specific issue that took me a couple of days to figure out. In LDAP terminology, you can use a BIND DN to authenticate and search for users and objects in an LDAP directory. It is basically also a user with specific permissions, e.g. to authenticate a user and search for user properties. However, it seems to be common practice that this specific LDAP user is restricted from accessing other users’ passwords.

On the other hand, the common Java libraries and frameworks that connect to LDAP use a search filter to fetch a user’s basic properties, such as username and password, and then attempt the authentication for that user. This creates a problem: since the BIND DN user is not allowed to fetch the user’s password, the authentication cannot proceed.

Usually, what is done is to add more specific ACLs to the LDAP configuration so that such applications are allowed to read the user’s password, and this resolves the issue. On a side note, I am also starting to like Apache DS.
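
For illustration, here is a minimal JNDI sketch of the alternative “search, then re-bind as the user” flow, which avoids reading the password attribute altogether; every URL, DN, and filter below is made up:

import java.util.Hashtable;
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapAuthSketch {
  static boolean authenticate(String uid, String password) throws Exception {
    // An empty password would turn the second bind into an anonymous bind.
    if (password == null || password.isEmpty()) {
      return false;
    }
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, "ldap://localhost:10389");
    env.put(Context.SECURITY_AUTHENTICATION, "simple");
    env.put(Context.SECURITY_PRINCIPAL, "cn=binduser,ou=system"); // restricted BIND DN
    env.put(Context.SECURITY_CREDENTIALS, "bindpassword");

    // Step 1: search for the user's DN with the restricted BIND DN account.
    // (In real code, escape 'uid' before putting it into the filter.)
    DirContext ctx = new InitialDirContext(env);
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    NamingEnumeration<SearchResult> results =
        ctx.search("ou=users,dc=example,dc=com", "(uid=" + uid + ")", controls);
    if (!results.hasMore()) {
      ctx.close();
      return false; // no such user
    }
    String userDn = results.next().getNameInNamespace();
    ctx.close();

    // Step 2: re-bind as the user's own DN; the directory checks the password.
    env.put(Context.SECURITY_PRINCIPAL, userDn);
    env.put(Context.SECURITY_CREDENTIALS, password);
    try {
      new InitialDirContext(env).close();
      return true;
    } catch (AuthenticationException e) {
      return false;
    }
  }
}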

Debugging Surefire tests on Jenkins with eclipse

Today I faced an interesting issue: one of my unit tests passed on my local machine but failed on our Jenkins server. So I decided to debug the unit test remotely to understand what the problem was. As the Jenkins documentation mentions, you can use the typical

-Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=5001

setting to enable remote debugging when starting a job on Jenkins. However, if you take a look at the Maven Surefire page on debugging tests, it shows a slightly different usage:

-Dmaven.surefire.debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE"

to enable remote debugging. To get this working with Jenkins, note that you must provide Surefire’s setting in the Goals option of the “Build” section of the job configuration page. If you put it in MAVEN_OPTS in the advanced section of the build section, Eclipse will not be able to hit the breakpoints.
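
In the job configuration, the Goals field then ends up looking something like this (the goal names are just whatever your job already runs):

clean test -Dmaven.surefire.debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE"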

Installing Sakai OAE from source with MySQL

Recently, I started to install a version of "Sakai OAE":http://sakaiproject.org/node/2239. I am writing this post to share my experience with it, simply because I think that although Sakai is a very powerful and nice product, it does not have straightforward documentation. Generally, you can take two approaches: (1) download the application and try to configure and deploy it, or (2) build from source with your own configuration. For a while I tried (1), but I never managed to get what I wanted. So, I turned to (2).

h3. Check out source

You can find the source code for *nakamura*, the core product from the Sakai project, on GitHub. I checked out the source into a directory I call @SRC_ROOT@:

cd $SRC_ROOT
git clone git://github.com/sakaiproject/nakamura.git

This creates a @nakamura@ directory with the checked-out source.

h3. Choose your version

Since there is ongoing development, I decided to use a release version, so I chose version 1.1:

cd $SRC_ROOT/nakamura
git checkout 1.1

In this post, version 1.1 will also be used for the other modules.

h3. MySQL Configuration

"This page":https://confluence.sakaiproject.org/display/3AK/Installing+and+Configuring+MySQL+5.5+for+Nakamura+0.11 describes how to configure the MySQL JDBC bundle at deployment time. But we will do the configuration before building the application, so that the final artifact includes a default deployment of the MySQL JDBC bundle. I’ll follow the same page, with some modifications.

h4. Including MySQL JDBC Bundle in the default build process

First, we need to install this bundle as a prerequisite for @nakamura@. Verify and edit @$SRC_ROOT/nakamura/contrib/mysql-jdbc/pom.xml@ so that the beginning of the file looks like this:


<parent>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>base</artifactId>
  <version>1.1</version>
  <relativePath>../../pom.xml</relativePath>
</parent>

In the wiki page there is a typo: @relativeUrl@ instead of @relativePath@. Now we need to include the bundle in the default build process. Edit @$SRC_ROOT/nakamura/app/src/main/bundles/list.xml@ and add the following to the section with @startLevel="1"@:


<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.mysqljdbc</artifactId>
  <version>1.1</version>
</bundle>

h4. Configuring JackRabbit to use MySQL instead of default Apache Derby

JackRabbit needs to be configured to use MySQL; it uses Apache Derby by default. Let’s first make a back-up of the original configuration:

cp \
	$SRC_ROOT/nakamura/bundles/server/src/main/resources/repository.xml \
	$SRC_ROOT/nakamura/bundles/server/src/main/resources/repository.xml.derby

Now, edit @repository.xml@ at the above path. In the configuration section for @Workspace@, replace the default Derby configuration with the following (the values match the database settings used later in this post; adjust them for your installation):

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://localhost/nakamura"/>
  <param name="user" value="root"/>
  <param name="password" value="root"/>
  <param name="schema" value="mysql"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>

And, in the section for @Versioning@:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://localhost/nakamura"/>
  <param name="user" value="root"/>
  <param name="password" value="root"/>
  <param name="schema" value="mysql"/>
  <param name="schemaObjectPrefix" value="version_"/>
</PersistenceManager>

Remember to change the database name, user, and password for your installation. Also, the wiki page mentions the class @org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager@, which has been deprecated by JackRabbit; the configuration above uses its replacement.

h3. Installing the application in a local repository

The configuration is complete; now build and install the application:

cd $SRC_ROOT/nakamura
mvn install

You can verify that the application is installed at @$MAVEN_REPO_PATH/org/sakaiproject/nakamura/org.sakaiproject.nakamura.app/1.1@.

h3. Preparing for deployment

To start and run nakamura, you need a working directory. So, create a directory somewhere; let’s call it @WORK_ROOT@.

h4. Configuration directory

Whenever nakamura starts, it looks at a special directory, @$WORK_ROOT/load@. We will create it and place some initial configuration under it. As far as I understand, there are two standard ways to configure nakamura at runtime:

# @$WORK_ROOT/load@: In this directory you place @*.cfg@ files holding different properties. These are standard Java properties files. The configuration is done using the "fully qualified name" of the class that will read the configuration. We will see examples in the following sections.
# @$WORK_ROOT/sling/config@: When you start the application, it creates a Sling home directory at @$WORK_ROOT/sling@ by default. Under @sling/config@ there can be different @*.config@ files. The directory path structure here is derived from the fully qualified names used in the @load@ directory. Note that the @*.config@ files are *not* standard Java properties files; they are OSGi configuration containers. So, if you need to modify them, pay attention to the different way a property must be introduced.

“This page”:https://confluence.sakaiproject.org/display/3AK/OAE+Configuration+and+Deployment also provides an overview of the same topics before deployment.

h4. Server protection service configuration

Create @org.sakaiproject.nakamura.http.usercontent.ServerProtectionServiceImpl.cfg@ under the @load@ directory with the following content:

trusted.secret=MY_OAE_SECRET
trusted.hosts=MY_SERVER_NAME:8080=http://MY_SERVER_NAME:8082

*Note* that @MY_SERVER_NAME@ must be a publicly valid IP/address or a local network name; invalid local IPs *do not* work.

h4. JDBC storage client configuration

Create @org.sakaiproject.nakamura.lite.storage.jdbc.JDBCStorageClientPool.cfg@ with the following content:

service.pid=org.sakaiproject.nakamura.lite.storage.jdbc.JDBCStorageClientPool
jdbc-driver=com.mysql.jdbc.Driver
jdbc-url=jdbc:mysql://localhost/nakamura?autoReconnectForPools=true
password=root
username=root

Remember to match the information here to what you provided in @repository.xml@.

h3. Running nakamura

Before running the application, make sure that the database for the application is created with proper permissions and that there are no tables in it. As the final step, copy the application JAR file to @$WORK_ROOT@:

cp \
	$MAVEN_REPO_PATH/org/sakaiproject/nakamura/org.sakaiproject.nakamura.app/1.1/org.sakaiproject.nakamura.app-1.1.jar \
	$WORK_ROOT/
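
As for the empty database mentioned above, a minimal MySQL sketch matching the earlier configuration could be (adjust the name if you changed it in @repository.xml@):

mysql> CREATE DATABASE nakamura DEFAULT CHARACTER SET utf8;

If you use a dedicated user instead of root, also grant it full privileges on the @nakamura@ database.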

You can run the application:

java -server -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -jar org.sakaiproject.nakamura.app-1.1.jar &

After a couple of log messages, you can follow what’s happening with:

tailf $WORK_ROOT/sling/logs/error.log

When the application has started, there should be around 28 tables in the database. You can browse the root of the application at @/index.html@ on the address you specified in the configuration.

I’m still having problems with this configuration, including getting the e-mail service to work and the huge *WARN* messages from JackRabbit. Anyway, I hope this is helpful for those starting with Sakai, like me.