Thursday, December 26, 2013

Resolving duplicate collection reference with @NamedQuery

Consider a scenario where I have 2 tables: one called user, the other called authority. There is a one-to-many relationship between them, which means each user could have multiple roles. The tables below illustrate how this relationship is constructed in the database.

Users table

+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| username | varchar(10) | NO   | PRI |         |       |
| password | varchar(32) | YES  |     | NULL    |       |
| enabled  | int(11)     | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+

Authority table

+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| username  | varchar(10) | NO   | MUL | NULL    |       |
| authority | varchar(10) | NO   |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+ 
On the Java side, each table is mapped to a corresponding Java entity as illustrated below:

User entity

@NamedQuery(
 name="findByUser",
 query="select u from User u join u.roleList r where r.userId = :username ")
@Entity
@Table(name="users")
public class User {
 ...

 @OneToMany(targetEntity = Role.class, cascade = {CascadeType.ALL})
 @JoinColumn(name="username")
 private List<Role> roleList;

 ...

Role entity

@Entity
@Table(name="authority")
public class Role {
 ...
With this setup, I didn't see any defect in the first run because the test data consisted of one user paired with one role. The problem only surfaced when one user had two roles: I noticed that the roleList contained duplicate data. To prove my statement, I did some scanning on the data retrieved from the query.
  ...

  User theUser = emf.createEntityManager().createNamedQuery("findByUser", User.class).setParameter("username", username).getSingleResult();
  System.out.println(theUser.getRoleList());
  ...
The code snippet above shows that theUser.getRoleList() holds duplicate data, as the console prints the same object twice: [org.huahsin.Role@1b332d22, org.huahsin.Role@1b332d22]. The cause is that the Authority table is missing a primary key; adding a primary key to the Authority table fixes this problem. Lesson learned: whenever working with JPA, it is best to declare the primary key in the database so that JPA has a clear view of the relationship.

Wednesday, December 25, 2013

Evolving query from database to JAVA

Previously, if I wanted to make a query through the Session interface, I would invoke createQuery as shown below:
session = getSession();
Query query = session.createQuery("from org.huahsin.Book");
List<Book> booklist = query.list();
This only tackles one single table, and it needs to work together with a Hibernate mapping file. If the mapping file is missing, an error like this would be seen:

org.hibernate.hql.ast.QuerySyntaxException: Book is not mapped [from Book]

If I want something that can stand alone without any dependency on Hibernate mapping files, createSQLQuery is a great help. This API is friendly enough to let me invoke a native SQL query, and it can also tackle multiple tables free of charge. So nice.
session = getSession();
Query query = session.createSQLQuery("select ... from ...");
But a free thing doesn't mean good quality; sometimes this can be (very) error prone. Besides that, there is another problem when retrieving the result set, and it is horrible. See the code below.
for( Object row : query.list() ) {
 Object[] col = (Object[]) row;
    
 System.out.println((String)col[0]);
 System.out.println((String)col[1]);
 ...
}
Be cautious when working with the code snippet above: I have to ensure I never run out of the array bounds and that I know which column index I'm working with. Since the retrieved values are plain Object references, I have no idea which type to cast to (String or BigDecimal), which can easily lead to a ClassCastException.
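To make the hazard concrete, here is a minimal, framework-free sketch (the class name, row data, and column layout are made up for illustration) of the kind of defensive checking this style forces on every column:

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

public class RowCastDemo {

    // format one raw result row; col[0] and col[1] are only known to be Objects
    static String formatRow(Object[] col) {
        // a blind cast like (String) col[1] would throw ClassCastException here,
        // so each column has to be checked before it is used
        String name = (String) col[0];
        Number count = (col[1] instanceof Number) ? (Number) col[1] : null;
        return name + " " + count;
    }

    public static void main(String[] args) {
        // simulated rows from a native query; values are made up for illustration
        List<Object[]> rows = Arrays.asList(
            new Object[]{ "huahsin", new BigDecimal("42") }
        );
        for (Object[] row : rows) {
            System.out.println(formatRow(row));
        }
    }
}
```

Every column needs this kind of ceremony, which is exactly why a typed mapping layer is attractive.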

Fortunately, software engineering keeps evolving, and it didn't stop there. A better resolution has come into the picture: the Java Persistence API. With this technology, I'm not required to create an XML mapping and I can retain my work in POJOs; only some new elements are introduced. For example, as the sample below shows, I need to declare which table this POJO interacts with, and how each column is mapped onto which field. In this case the User POJO class is mapped to the users table, and the username field is the primary key, corresponding to what has been declared in that table.
@Entity
@Table(name="users")
public class User {

 @Id
 private String username;

 ...
}
This is what has been declared in the users table:

+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| username | varchar(10) | NO   | PRI |         |       |
...
...
+----------+-------------+------+-----+---------+-------+
When making a query, @NamedQuery comes into play. This is how the query is configured, right at the top of the POJO class:
@NamedQuery(
 name="findByUser",
 query="select u from User u where u.username = :username ")
@Entity
@Table(name="users")
public class User {
 ...
To invoke this query:
 List<User> userList;
 userList = emf.createEntityManager().createNamedQuery("findByUser", User.class).setParameter("username", username).getResultList();
 ...
Some things to note when working with this method; I went through some pain due to my mistakes when I first used it:
  1. The query syntax is very similar to SQL, except that it is based on the object entities, not the tables declared in the database.
  2. When something is not right in the query, the error shows up immediately after the web app is loaded. This is what they call fail-fast: it never waits for the error to happen only when the query is invoked.

Implementing a test stub pattern

First things first, I didn't create this pattern. I got the idea while reading about unit testing strategy on this blog. I like this test pattern because the code is so clean and very object-oriented in style. (Before this, my unit test code was messy, just like spaghetti.) This way, I can easily isolate my mock code into a standalone stub class without messing with the real method.

Before I start using this pattern, let's see how it could fit into my project. I picked the reporting template module for this experiment. It is my favourite module because it is like a sergeant who is able to tackle different types of reporting problems. The UML diagram below shows the real design of the report template module.

In order to utilize its reporting service, one must inherit HibernateReportService, because this class is declared abstract and it is the only interface that has contact with the outside world; all the initialization work is done through the constructor. The query method is the only entry point that allows customized query selection in order to complete the reporting job. The code snippet below shows its use:
public class MyReportService extends HibernateReportService<Object> {

    public MyReportService(...) {
        ...
    }

    protected void query() {
        ...
    }

}
There is a difference when it comes to initializing the report template for unit testing. According to the design, I don't inherit HibernateReportServiceStubber from HibernateReportService; instead, I make it a generic template over the HibernateReportService type. This idea is great as it allows me to isolate the mock code, or fake implementation, without messing with the real implementation in HibernateReportService. The code snippet below shows the primary piece that makes up this stub class:
public class HibernateReportServiceStubber<T extends HibernateReportService<Object>> {

 private T daoReportService;

 public HibernateReportServiceStubber(T daoService) throws Exception {
  
  // capture the DAO that is going to be tested
  this.daoReportService = PowerMockito.spy(daoService);
 }
 
 public T getReportService() {
  return this.daoReportService;
 }

 // more mock method down the road
} 
Here comes the unit test guy. TheReportingServiceTest is the one that takes up the responsibility to execute the test. Again, this guy doesn't inherit from HibernateReportServiceStubber; he instantiates it and executes the fake implementation provided by the stub class. Below is a sample of his hard work:
public class TheReportingServiceTest {

 private HibernateReportServiceStubber<MyReportService> reportServiceStubber;
 
 @Before
 public void setUp() throws Exception {
  reportServiceStubber = getHibernateReportService();
 }

 private HibernateReportServiceStubber<MyReportService> getHibernateReportService() throws Exception {
  return new HibernateReportServiceStubber<MyReportService>(new MyReportService (...));
 }

 @Test
 public void test(){
  reportServiceStubber.mockCodeA();
  reportServiceStubber.mockCodeB();

  ...
 }

} 
At the end of this experiment, I feel that I'm actually running a behavioural test more than unit testing every single function or method in detail.

Return a mock value when an object is being spied

My objective is to develop a test stub that will return me a true value whenever the following method is invoked.
public class ReportService {

    protected boolean readInRawReport(Vector<String> inputFileNameList) {
        boolean found = true;

        for( String inputFileName : inputFileNameList ) {
            ...
            ...
        }
        ...
        return found;
    }
}
I was unit testing the code snippet above using the stub code shown below; unfortunately the test failed due to a NullPointerException on inputFileNameList.
public class HibernateReportServiceStubber<T extends HibernateReportService<Object>> {

 private T daoReportService;

 public void readInRawReportReturnTrue() throws Exception {
  PowerMockito.when(this.daoReportService, PowerMockito.method(ReportService.class, "readInRawReport"))
  .withArguments(Mockito.any(Vector.class))
  .thenReturn(true);
 }

 ...
}

This is not supposed to happen when an object is mocked. During the investigation, while tracing the code in debug mode, I found it interesting that the code actually flowed into the real method, and inputFileNameList was showing NULL. My first question to myself was: why do I need to bother with the implementation details if I'm doing mocking? Does that mean Mockito.any() is not doing its job? But later I found out I was actually spying on the test object, not mocking it. Oops... To prove my justification was correct, I made some changes to the code like this:
  Vector<String> v = new Vector<String>();
  v.add("ASD");

  PowerMockito.when(this.daoReportService, PowerMockito.method(ReportService.class, "readInRawReport"))
  .withArguments(v)
  .thenReturn(true);

Now I can see inputFileNameList has one element, which shows ASD in it. The return value has nothing to do with thenReturn(); that call is effectively disabled because control has already been transferred into the hands of the real method when the test object is spied on. I wondered whether this was the right way to do it. Fortunately, I was lucky enough to find the right way. Thank God. The correct way to mock the return value when a test object is being spied on is shown below:
  PowerMockito.doReturn(true)
  .when(this.daoReportService, PowerMockito.method(ReportService.class, "readInRawReport"))
  .withArguments(Mockito.any(Vector.class));
This is like English grammar: tweaked from an active sentence into a passive one.

Wednesday, December 18, 2013

Performing unit test on non-public method with PowerMockito

Recently I've been running a series of unit tests on my own work. I found it quite difficult because my code consists of abstract classes, protected fields and protected methods. I had been using Java Reflection for this before I found the solution in PowerMockito. I find it easier to implement with PowerMockito because the number of LOC is smaller, even though, as I read in this post, it is much more expensive.

Here is the detailed implementation for accessing a private method through PowerMockito.
objectUnderTest = PowerMockito.spy(new TheRealImplementation());

PowerMockito.doNothing()
            .when(objectUnderTest, PowerMockito.method(TheTargetClass.class, "thePrivateMethod"))
            .withNoArguments();

Assuming TheRealImplementation class is as shown below:
public class TheRealImplementation {

    private void thePrivateMethod() {
        ...
        ...
    }
    ...
}

getDeclaredMethod(String, Class...) never accepts null

One day I was playing around with unit tests using Java Reflection. I found it interesting that when I pass in null, as shown in the code snippet below, a warning is shown in the Eclipse IDE.
private Method funcA;

funcA = TheClassA.class.getDeclaredMethod("funcA", null);
funcA.setAccessible(true);
Here is the warning message shown in the Eclipse IDE:
The argument of type null should explicitly be cast to Class<?>[] for the invocation of the varargs method getDeclaredMethod(String, Class<?>...) from type Class<TheTargetClass>. It could alternatively be cast to Class<?> for a varargs invocation
Maybe I overlooked getDeclaredMethod's declaration, which shows:

Method java.lang.Class.getDeclaredMethod(String name, Class... parameterTypes)


The second argument accepts class types rather than objects. On top of that, it is a variable-length argument list. Thus it is still acceptable if I do this:


funcA = TheClassA.class.getDeclaredMethod("funcA");
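Putting it together, here is a small self-contained sketch (the class and method names are made up for illustration) of looking up and invoking a private no-arg method without ever touching null:

```java
import java.lang.reflect.Method;

public class ReflectDemo {

    static class TheClassA {
        // a private no-arg method we want to reach from a test
        private String funcA() { return "funcA called"; }
    }

    public static void main(String[] args) throws Exception {
        // for a no-arg method, simply omit the varargs instead of passing null
        Method funcA = TheClassA.class.getDeclaredMethod("funcA");
        funcA.setAccessible(true);
        System.out.println(funcA.invoke(new TheClassA()));
    }
}
```

Omitting the second argument entirely passes an empty Class<?>[] array, which is what the varargs declaration expects, and the Eclipse warning goes away.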

Alternate way of connecting log4j to database

There is always a difference between the production server and the development server on my local PC. Take the audit trail module for example: this module provides an audit service to an application, and this service redirects log4j logging activity to a database rather than storing logs in a file. The code in SVN always adheres to the production server configuration, and this is how the log4j configuration looks:
log4j.appender.SEC = org.huahsin.JDBCConnectionPoolAppender
log4j.appender.SEC.jndiName = jdbc/myDatasource
If I don't have the data source configured on my local server (I'm using WebSphere Application Server Liberty Profile), executing that code causes the server to complain with this error:
[err] javax.naming.NameNotFoundException: jdbc/myDatasource
[err]     at com.ibm.ws.jndi.internal.ContextNode.lookup(ContextNode.java:242)
[err]     at [internal classes]
This isn't that bad actually. If I insist on not configuring a data source on my local server, the following changes need to be made in the log4j configuration file:
log4j.appender.SEC=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.SEC.URL=jdbc:informix-sqli://xxx.xxx.xxx.xxx:2020/csdb:INFORMIXSERVER=online
log4j.appender.SEC.driver=com.informix.jdbc.IfxDriver
log4j.appender.SEC.user=xxx
log4j.appender.SEC.password=xxx

Monday, December 9, 2013

Compile GLUT with Cygwin required Cygwin/X X Server

Great news for today. I have resolved the problem which occurred a few weeks ago. This note is a follow-up on the problem from the previous note, published in Revisit OpenGL with GLUT on Linux. I have an OpenGL program developed with GLUT using the Eclipse IDE on a Windows box. Somehow the EXE doesn't work due to the following error:

This application failed to start because cygwin1.dll was not found. Re-installing the application may fix this problem.

The missing cygwin1.dll is actually located under the <CYGWIN_HOME>/bin folder. This error can be resolved easily by putting this path into the PATH environment variable; this is Windows's SOP. After this, another error comes in, complaining that:

freeglut (PROGRAM_NAME): failed to open display ''

This error is a bit tricky. It was due to the Cygwin/X X Server not being installed. Ensure the xinit package is installed in Cygwin, then launch a Cygwin terminal and type the command startxwin to launch the X terminal. Only then can the EXE run. I did try to execute the EXE right after the xinit package was installed, and also tried executing it under the Cygwin terminal, but it still failed. I think this is because the EXE was compiled with the Cygwin compiler, so its dependencies also depend on the X Server library, and thus the EXE must run on top of the Cygwin/X X Server.

To prove my statement, I ran the same piece of code built with VS Express 2012, and the EXE works like a charm.

3D drawing tool for programmer

Not long ago, I dropped a question about which 3D character drawing tools are easy for a programmer. I don't want anything complex or fantastic artwork, just enough for me to present the basic look and feel of a game character. I also want it to render on any graphics card, as well as on the integrated graphics chip on the board. This is what I got from the experts:
  1. Wings 3D
  2. Blender
  3. MakeHuman

Sunday, December 1, 2013

Revisit OpenGL with GLUT on Linux

I remember in the year 2002, when I began my game programming journey, the first thing I learned in order to program OpenGL was Win32, and because I was running on a Windows machine, Win32 was the perfect option to start Windows programming with. Today I'm revisiting the OpenGL code on Linux; unfortunately, Win32 does not run on Linux. What am I supposed to do next?

I discovered that it's still doable with the GLUT library. One thing that impresses me is that the code chops off nearly 70% of the LOC just on the window creation. The code snippet below shows the basic code framework done with GLUT:
int main(int argc, char **argv) {
 glutInit(&argc, argv);
 //Simple buffer
 glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB );
 glutInitWindowPosition(50,25);
 glutInitWindowSize(500,250);
 glutCreateWindow("Green window");
 glutMainLoop();
}
Compared with the code done in Win32, that is a bloody disaster. I didn't show the Win32 code in this post because it is bloody long; it is reachable at NeHe's link. There are variations in the GLUT setup between Windows and Linux. On Linux, the required files are reachable through the Linux repository of my choice; the header files are always located under /usr/include, and the library files under /usr/lib.

On Windows, however, it is a little tricky and very dependent on the choice of compiler. I tried 3 compilers: Cygwin, MinGW, and VS 2012 Express. I failed to make it run on Cygwin and MinGW because a proprietary DLL file was missing at run-time. Luckily it is workable on VS 2012 Express. I initially got bad news when working on VS, where the Microsoft SDK doesn't come with the GLUT library; it has to be downloaded separately. I was lucky enough to find this link that drove me all the way from setup to running the program. One thing to note is that I'm running a Windows 8 machine: putting glut32.dll under C:\windows\system32 is not going to work; it has to be put under C:\windows\SysWOW64.

In conclusion, Linux is always the best choice for a developer because it is a development box from the beginning, with header and library files ready on the path.

Friday, November 22, 2013

May I know why the header file name must tally with the CPP file name?

It must be a very long time since I last coded C++. Why did this error happen yesterday? In my memory, it is not supposed to be an error. Does it only happen in Eclipse? Is it because I'm a long-time Visual Studio fan?

The problem is very simple. I have a base class with a virtual destructor declared in Base.h:
    #ifndef BASE_H_
    #define BASE_H_

    class Base {
        public:
            Base();
            virtual ~Base();
    };
    #endif

And then I have a Child class inheriting from the Base class, declared in Child.h:
    #ifndef CHILD_H_
    #define CHILD_H_

    #include "Base.h"

    class Child : public Base {  // (1)
        public:
            Child();
    };
    #endif
Now make a main.cpp and put the implementation of the Base class constructor and virtual destructor there:
    #include "Base.h"

    Base::Base() {}

    Base::~Base() {}
When building the source code, there is an error complaining about an undefined reference to 'Base::Base()' at (1). If I rename main.cpp to Base.cpp, the error is gone. There are 2 possibilities: either this is a new C++ specification, or there isn't a compilation rule defined for main.cpp in the makefile. Later I found that the second option makes more sense. I didn't resolve the problem, since the makefile is auto-generated; if I modify it, I'm afraid it will generate another problem and my development time will drag.

Monday, November 18, 2013

JPA configuration in Spring without persistence.xml

I'm just so lucky. I finally got JPA configured with Spring without the need for persistence.xml. I couldn't tell whether this is the correct configuration, but it just works. The motivation for the JPA integration is to seek an alternate resolution besides a pure DAO implementation with Hibernate.

This is how I configure JPA in Spring.
 ...

 <bean class="org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor">
 </bean>
 
 <bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver">
  </property>
  <property name="url" value="jdbc:mysql://localhost:3306/test">
  </property>
  <property name="username" value="root">
  </property>
  <property name="password" value="root">
  </property>
 </bean>
 
 <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="entityManagerFactory">
  <property name="jpaVendorAdapter" ref="jpaVendorAdapter">
  </property>
  <property name="dataSource" ref="dataSource">
  </property>
  <property name="packagesToScan" value="org.huahsin">
  </property>
 </bean>
 
 <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" id="jpaVendorAdapter">
  <property name="databasePlatform" value="org.hibernate.dialect.MySQLInnoDBDialect">
  </property>
  <property name="showSql" value="true">
  </property>
 </bean>
The primary object in this configuration is the LocalContainerEntityManagerFactoryBean. According to the documentation, this is the most powerful way to set up a shared JPA EntityManagerFactory in a Spring application context. It takes 3 parameters in this configuration:
  1. dataSource - the bean that establishes the connection to the database.
  2. jpaVendorAdapter - the optional parameters from persistence.xml.
  3. packagesToScan - the packages where the entities reside.
On the DAO side, there are 2 ways to invoke a query. The first is to obtain an instance of EntityManagerFactory through @PersistenceUnit injection, and from there obtain an EntityManager to run the query.
@Repository
public class UserDao implements IUserDao {

 @PersistenceUnit
 private EntityManagerFactory emf;
 
 public User findByUsername(String username) {
  
  EntityManager em = emf.createEntityManager();
  try {
   List<User> l = (List<User>) em.createQuery("from User").getResultList();
   
   // verify on the list of users
   for( User u : l ) {
    System.out.println(u.getUsername());
   }

   return null; // simply return NULL for testing purpose
  }
  finally {
   em.close(); // close the EntityManager, not the shared factory
  }
 }
}
The second option is to obtain an instance of EntityManager through @PersistenceContext injection.
@Repository
public class UserDao implements IUserDao {

 @PersistenceContext
 private EntityManager em;
 
 public User findByUsername(String username) {
  
  List<User> l = (List<User>) em.createQuery("from User").getResultList();
  
  // verify on the list of users
  for( User u : l ) {
   System.out.println(u.getUsername());
  }

  // a container-managed EntityManager must not be closed manually
  return null; // simply return NULL for testing purpose
 }
}
Do not mess up the injections. If I accidentally do this to an EntityManagerFactory:

@PersistenceContext
private EntityManagerFactory emf;

This error would be seen when the bean is triggered at run-time:
Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager]
 at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:134)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.<init>(PersistenceAnnotationBeanPostProcessor.java:620)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:381)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:322)
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:830)
 ... 27 more
The same goes for EntityManager; if I mistakenly do this:

@PersistenceUnit
private EntityManager em;

This will be the result at run-time:
Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManager] is incompatible with resource type [javax.persistence.EntityManagerFactory]
 at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:134)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.<init>(PersistenceAnnotationBeanPostProcessor.java:620)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:381)
 at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:322)
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:830)
 ... 27 more
There is a nice comment explaining why this shouldn't be done, on this question at stackoverflow.com. The user claims that:
An entity manager can only be injected in classes running inside a transaction. In other words, it can only be injected in a EJB. Other classe must use an EntityManagerFactory to create and destroy an EntityManager. - Andre Rodrigues

Wednesday, November 13, 2013

Overloading resolution in Java

Overloading is a very interesting topic in object-oriented programming. I knew the what, when, why, and how; the one thing I didn't know was the rules. Yes, there are rules defining how the compiler should resolve an overloaded method in Java. I got this piece of information while reading the Programmer's Guide to SCJP. This is what the book mentions:
The algorithm used by the compiler for the resolution of overloaded methods incorporates the following phases:
  1. It first performs overload resolution without permitting boxing, unboxing, or the use of a varargs call.
  2. If phase (1) fails, it performs overload resolution allowing boxing and unboxing, but excluding the use of a varargs call.
  3. If phase (2) fails, it performs overload resolution combining a varargs call, boxing, and unboxing.
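The priority of the phases can be observed with a small, self-contained experiment (the method names here are made up): with an int, an Integer, and a varargs overload available, the compiler stops at the first phase that yields a match:

```java
public class PhaseDemo {

    static String f(int x)       { return "int"; }       // exact primitive match
    static String f(Integer x)   { return "Integer"; }   // exact reference match
    static String f(Object... x) { return "varargs"; }

    public static void main(String[] args) {
        // phase 1: the int literal matches f(int) without boxing
        System.out.println(f(10));
        // phase 1: an Integer matches f(Integer) without unboxing
        System.out.println(f(Integer.valueOf(10)));
        // phases 1 and 2 fail for two arguments, so phase 3 picks the varargs overload
        System.out.println(f(10, 20));
    }
}
```

Each call resolves at compile time, so the printed labels show exactly which phase won.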
When I first read this, I never knew such rules existed. Does this only happen in Java, or have they been there since object orientation was invented? Anyhow, do not overlook the rules mentioned above; there is a priority when resolving an overloaded method. The condition defined in the first rule is always verified before the second and third rules. If it fails, the compiler moves on to the next rule until it finds a match. Consider the following situation:
public static void funcA(String str, Object... obj) {    // (1)
 ...
}

public static void funcA(String str, Integer... i) {     // (2)
 ...
}
I have 2 overloaded methods in the code above; if I do this:

funcA("STRING", 10, 10);

The compiler resolves the overloaded method easily by picking method (2). Now if I change it to this:
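The first case can be verified with a tiny self-contained program: since phases 1 and 2 exclude varargs calls, resolution falls through to phase 3, where Integer... is more specific than Object...:

```java
public class OverloadDemo {

    public static String funcA(String str, Object... obj) { return "(1)"; }

    public static String funcA(String str, Integer... i)  { return "(2)"; }

    public static void main(String[] args) {
        // both overloads are applicable in phase 3; (2) is more specific and wins
        System.out.println(funcA("STRING", 10, 10));
    }
}
```

Running it prints (2), confirming the compiler's choice.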
public static void funcA(String str, int x, Object... obj) {    // (1)
 ...
}

public static void funcA(String str, Integer... i) {            // (2)
 ...
}
This brings trouble to the compiler, as both overloaded methods could be the best fit. The compiler is unable to identify the most specific overloaded method, and thus it is forced to issue a compile error.

Tuesday, November 12, 2013

What is Delegating Constructor in C++?

Ahh... It has been so long; I haven't written any C++ code since 2010. While I was reading a C++ article published on IBM's website, I came across this term, which I found very interesting and new to me. Consider the following code snippet, where all constructors share a common initializer for the member variable:
class ClsA {
    private:
        int var;
        
    public:
        ClsA() : var(0) {}
        ClsA(int x) : var(x) {}
        
        ...
};
This is what I usually did in the old school way. Now there is a slight improvement to the constructor syntax. Remember how the syntax is applied when a member variable is initialized by a constructor; the same syntax can be applied to trigger another constructor to perform the member variable initialization. This is what is called a delegating constructor. Consider:
class ClsA {
    private:
        int var;
        
    public:
        ClsA() : ClsA(123) {}    // (1)
        ClsA(int x) : var(x) {}  // (2)
        
        ...
};
When I do this:

ClsA clsA;

The constructor at (1) gets invoked and further calls the constructor at (2), where the member variable var gets initialized to 123. The constructor at (1) is the delegating constructor, and (2) is the target constructor. The programmer still has the flexibility to invoke the constructor at (2) directly to perform the initialization. More details on this specification can be found at open-std(dot)org.
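As a side note, Java has long had the same idea in the form of constructor chaining with this(...); here is a minimal sketch for comparison (the class name is made up to avoid clashing with the C++ one):

```java
public class ClsAJava {

    private final int var;

    // the "delegating" constructor forwards to the target constructor
    public ClsAJava() { this(123); }

    // the "target" constructor does the actual initialization
    public ClsAJava(int x) { this.var = x; }

    public int getVar() { return var; }

    public static void main(String[] args) {
        System.out.println(new ClsAJava().getVar());
    }
}
```

Just as in the C++ version, calling the no-arg constructor ends with var initialized to 123.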

Sunday, November 10, 2013

Variation on authentication-manager in Spring Security

During the feasibility study on the authentication module, I had been thinking of implementing a BO/DAO pattern. While working on the POC, I kept asking myself: "Is this the right way of doing this?" Without re-inventing the wheel, I searched the web to see whether there is any existing framework out there that specializes in authentication. Yes, I found Acegi Security and Spring Security. Interestingly, Spring Security actually originated from Acegi Security. Anyway, since I'm already using the Spring framework from the earlier stage of the development, I will continue with the Spring family.

The configuration is pretty straightforward to set up. One thing that caught my attention is the usage of authentication-manager. The code snippet shown below is the typical usage for a quick POC demo with a static account. In this case the login ID is huahsin and the password is 1234.
 ...
 <authentication-manager alias="authenticationManager">
  
  <authentication-provider>
   <password-encoder hash="plaintext"/>
   <user-service>
    <user authorities="ROLE_USER" name="huahsin" password="1234"/>
   </user-service>
  </authentication-provider>
 </authentication-manager>
In the real world, user accounts are usually stored in a database for later verification. The code snippet below shows that user-service has been replaced by jdbc-user-service. Notice that users-by-username-query is responsible for retrieving the user, whereas authorities-by-username-query is the follow-up query that retrieves the user's roles.
 ...
 <authentication-manager alias="authenticationManager">
  ...
  <authentication-provider>
   <jdbc-user-service authorities-by-username-query="select users.username, authority.authority as authority from users, authority where users.username = ? and users.username = authority.username" data-source-ref="dataSource" users-by-username-query="select username, password, 'true' as enabled from users where username=?"/>
  </authentication-provider>
 </authentication-manager>
This was my initial idea, but there were still objections to it, since it exposes the risk of the SQL code being published to other developers. This is very subjective from company to company, but in my company there is an IT governance team looking after us. They are very concerned about this and don't like any sensitive data being retrievable so easily. Thus I had to move this into Java code like this:
package org.huahsin.security;

import java.util.ArrayList;
import java.util.List;

import org.springframework.security.authentication.AuthenticationProvider;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.web.authentication.switchuser.SwitchUserGrantedAuthority;

public class AuthServiceProvider implements AuthenticationProvider {

 private User user;
 
 private Role role;

 @Override
 public Authentication authenticate(Authentication authentication) throws AuthenticationException {
  
  String name = authentication.getName();
  String password = authentication.getCredentials().toString();
  
  /*
   * Assuming there is a DAO object retrieving the particular user's information
   * from the table and storing it inside the User entity.
   */
  
  if( user.getUsername().equals(name) && user.getPassword().equals(password) ) {
   List<GrantedAuthority> grantedAuths = new ArrayList<GrantedAuthority>();
   grantedAuths.add(new SwitchUserGrantedAuthority(role.getRole(), authentication));
   
   return new UsernamePasswordAuthenticationToken(name, password, grantedAuths);
  }
  else {
   return null;
  }
 }

 @Override
 public boolean supports(Class<?> authentication) {
  
  return authentication.equals(UsernamePasswordAuthenticationToken.class);
 }

}
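The comment in authenticate() assumes a DAO look-up. A minimal, self-contained stand-in for such a DAO (all names here are hypothetical, and an in-memory map replaces the real users table) might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical entity mirroring the users table (username, password).
class User {
    private final String username;
    private final String password;

    User(String username, String password) {
        this.username = username;
        this.password = password;
    }

    String getUsername() { return username; }
    String getPassword() { return password; }
}

// Hypothetical DAO: an in-memory map stands in for the real database table.
class UserDao {
    private final Map<String, User> table = new HashMap<String, User>();

    void save(User user) {
        table.put(user.getUsername(), user);
    }

    // Returns null when the username is unknown, mirroring an empty result set.
    User findByUsername(String username) {
        return table.get(username);
    }
}
```

In authenticate(), the provider would call findByUsername(name) and compare the stored password against the submitted credentials.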

And trigger this bean from Spring like this:
 ...
 <authentication-manager alias="authenticationManager">
  <authentication-provider ref="authServiceProvider"/>
 </authentication-manager>

 <beans:bean class="org.huahsin.security.AuthServiceProvider" id="authServiceProvider"/>
 ...
Notice that the code above implements the AuthenticationProvider interface; this is the general usage during the authentication process, as it provides wider flexibility in identifying a user. There is an alternative solution that allows me to authenticate a user, which is by implementing UserDetailsService. This interface declares one method, loadUserByUsername(), which retrieves the specific user during the authentication process, as shown in the code snippet below:
@Service("authServiceProvider")
public class AuthServiceProvider implements UserDetailsService {

 private User user;
 
 private Role role;

 @Override
 public UserDetails loadUserByUsername(String username)
   throws UsernameNotFoundException, DataAccessException {
  
  /*
   * Assuming there is a DAO object retrieving the particular user's information
   * from the table and storing it inside the User entity.
   */
  
  // The last constructor argument must be a collection of GrantedAuthority,
  // so the role name is wrapped with AuthorityUtils
  // (org.springframework.security.core.authority.AuthorityUtils).
  return new org.springframework.security.core.userdetails.User(user.getUsername(), 
      user.getPassword(),
      true,
      true,
      true,
      true,
      AuthorityUtils.createAuthorityList(role.getRole()));
 }
}
Since this class implements UserDetailsService, there is a slight difference when wiring this bean in Spring: the bean is now registered with user-service-ref.
 ...
 <authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="authServiceProvider"/>
 </authentication-manager>
Usually the user credentials stored in the database are hashed, say with SHA-512. In that case I just need to declare the hashing algorithm in Spring like this:
 ...
 <authentication-manager alias="authenticationManager">
  <authentication-provider ref="daoAuthenticationProvider"/>
 </authentication-manager>

 <beans:bean class="org.springframework.security.authentication.dao.DaoAuthenticationProvider" id="daoAuthenticationProvider">
  <beans:property name="userDetailsService" ref="authServiceProvider"/>
  <beans:property name="passwordEncoder" ref="passwordEncoder"/>
 </beans:bean>

 <beans:bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder" id="passwordEncoder">
  <beans:constructor-arg index="0" value="512"/>
 </beans:bean>
Spring is very kind to me: there is not a single piece of Java code for the password hashing, it is all handled automatically.
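For the curious, what ShaPasswordEncoder does under the hood is roughly the following. This is only a sketch using the plain JDK (the real encoder can also merge a salt into the input before digesting):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha512Sketch {

    // Hashes a raw password with SHA-512 and returns lowercase hex,
    // roughly what an unsalted ShaPasswordEncoder(512) would produce.
    public static String hash(String rawPassword) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-512");
            byte[] bytes = digest.digest(rawPassword.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-512 is guaranteed to exist on any compliant JRE.
            throw new IllegalStateException(e);
        }
    }
}
```

The stored value is then compared against the hash of the submitted password, so the plaintext never needs to be kept in the database.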

Tuesday, November 5, 2013

AnnotationException No identifier specified for entity

Sometimes a foolish mistake can cost me a few hours to resolve even though it is simple. At one time I was rushing to commit my code, and to my surprise this error was thrown during a unit test, and I had no idea what was really happening:
Caused by: org.hibernate.AnnotationException: No identifier specified for entity: org.huahsin.Rocket.Entity
 at org.hibernate.cfg.InheritanceState.getElementsToProcess(InheritanceState.java:243)
 at org.hibernate.cfg.AnnotationBinder.bindClass(AnnotationBinder.java:663)
 at org.hibernate.cfg.Configuration$MetadataSourceQueue.processAnnotatedClassesQueue(Configuration.java:3406)
 at org.hibernate.cfg.Configuration$MetadataSourceQueue.processMetadata(Configuration.java:3360)
 at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1334)
 at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1724)
 at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1775)
 at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:251)
 at org.springframework.orm.hibernate4.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:372)
 at org.springframework.orm.hibernate4.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:357)
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
 ... 14 more
I spent a few hours searching on this and found a silly mistake in my datasource-spring.xml:
 ...
 <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="entityManagerFactory">
  <property name="jpaVendorAdapter" ref="jpaVendorAdapter"/>
  <property name="dataSource" ref="dataSource"/>
  <property name="packagesToScan" value="org.huahsin.Rocket.*"/>
 </bean>
packagesToScan expects a package name as its value. org.huahsin.Rocket is a valid package name, whereas org.huahsin.Rocket.* is not. I'm old. My eyes aren't as good as when I was young, and they couldn't spot the tiny little star.
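With the wildcard dropped, the corrected entry looks like this:

```xml
 <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="entityManagerFactory">
  <property name="jpaVendorAdapter" ref="jpaVendorAdapter"/>
  <property name="dataSource" ref="dataSource"/>
  <property name="packagesToScan" value="org.huahsin.Rocket"/>
 </bean>
```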