Thursday, December 31, 2015

Two classes have the same XML type name?

I generate the SEI stub classes from a WSDL using the following plugin configuration:
   <plugin>
       <groupId>org.codehaus.mojo</groupId>
       <artifactId>jaxws-maven-plugin</artifactId>
       <executions>
         <execution>
           <goals>
             <goal>wsimport</goal>
           </goals>
           <configuration>
             <wsdlUrls>
               <wsdlUrl>http://localhost:8080/ws2?wsdl</wsdlUrl>
             </wsdlUrls>
             <keep>true</keep>
             <packageName>org.huahsin.jaxws.staff</packageName>
             <sourceDestDir>${basedir}/src</sourceDestDir>
           </configuration>
         </execution>
       </executions>
   </plugin>
I wasn't happy with the package name, so I decided to give it a better one by renaming the package by hand. But when the client program tried to access the service:
 URL url = new URL("http://localhost:8080/ws2/SeniorManager?wsdl");
 QName qname = new QName("http://staff.huahsin.org/", "SeniorManagerService");
 SeniorManagerService service = new SeniorManagerService(url, qname);
 ISeniorManager manager = service.getSeniorManagerPort();
 System.out.println(manager.getID());
I hit this error:
Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContext
 at com.sun.xml.internal.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:156)
 at com.sun.xml.internal.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:84)
 at com.sun.xml.internal.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:234)
 at com.sun.xml.internal.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:673)
 at com.sun.xml.internal.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:661)
 at com.sun.xml.internal.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:330)
 at com.sun.xml.internal.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:313)
 at com.sun.xml.internal.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:295)
 at javax.xml.ws.Service.getPort(Service.java:119)
 at org.huahsin.jaxws.SeniorManagerService.getSeniorManagerPort(SeniorManagerService.java:72)
 at org.huahsin.Client.main(Client.java:23)
Caused by: java.security.PrivilegedActionException: com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
Two classes have the same XML type name "{http://staff.huahsin.org/}getID". Use @XmlType.name and @XmlType.namespace to assign different names to them.
 this problem is related to the following location:
  at org.huahsin.jaxws.GetID
  at public javax.xml.bind.JAXBElement org.huahsin.jaxws.ObjectFactory.createGetID(org.huahsin.jaxws.GetID)
  at org.huahsin.jaxws.ObjectFactory
 this problem is related to the following location:
  at org.huahsin.jaxws.staff.GetID
Two classes have the same XML type name "{http://staff.huahsin.org/}getIDResponse". Use @XmlType.name and @XmlType.namespace to assign different names to them.
 this problem is related to the following location:
  at org.huahsin.jaxws.GetIDResponse
  at public javax.xml.bind.JAXBElement org.huahsin.jaxws.ObjectFactory.createGetIDResponse(org.huahsin.jaxws.GetIDResponse)
  at org.huahsin.jaxws.ObjectFactory
 this problem is related to the following location:
  at org.huahsin.jaxws.staff.GetIDResponse

 at java.security.AccessController.doPrivileged(Native Method)
 at com.sun.xml.internal.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:143)
 ... 10 more
Caused by: com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
Two classes have the same XML type name "{http://staff.huahsin.org/}getID". Use @XmlType.name and @XmlType.namespace to assign different names to them.
 this problem is related to the following location:
  at org.huahsin.jaxws.GetID
  at public javax.xml.bind.JAXBElement org.huahsin.jaxws.ObjectFactory.createGetID(org.huahsin.jaxws.GetID)
  at org.huahsin.jaxws.ObjectFactory
 this problem is related to the following location:
  at org.huahsin.jaxws.staff.GetID
Two classes have the same XML type name "{http://staff.huahsin.org/}getIDResponse". Use @XmlType.name and @XmlType.namespace to assign different names to them.
 this problem is related to the following location:
  at org.huahsin.jaxws.GetIDResponse
  at public javax.xml.bind.JAXBElement org.huahsin.jaxws.ObjectFactory.createGetIDResponse(org.huahsin.jaxws.GetIDResponse)
  at org.huahsin.jaxws.ObjectFactory
 this problem is related to the following location:
  at org.huahsin.jaxws.staff.GetIDResponse

 at com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:91)
 at com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:442)
 at com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:274)
 at com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:125)
 at com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1127)
 at com.sun.xml.internal.bind.v2.ContextFactory.createContext(ContextFactory.java:173)
 at com.sun.xml.internal.bind.api.JAXBRIContext.newInstance(JAXBRIContext.java:95)
 at com.sun.xml.internal.ws.developer.JAXBContextFactory$1.createJAXBContext(JAXBContextFactory.java:98)
 at com.sun.xml.internal.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:151)
 at com.sun.xml.internal.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:143)
 ... 12 more
At first I was clueless. I spent day and night chasing every clue that appeared in the stack trace and found nothing. Only late at night did I find the root cause: never rename the generated stub's package by hand. If the package name bothers you, delete the whole generated tree and regenerate it.
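In other words, if a different package is wanted, set it in the plugin configuration and let wsimport regenerate the stubs instead of renaming them by hand. A minimal sketch based on the configuration above (only the relevant part shown, package name illustrative):
           <configuration>
             <wsdlUrls>
               <wsdlUrl>http://localhost:8080/ws2?wsdl</wsdlUrl>
             </wsdlUrls>
             <keep>true</keep>
             <!-- choose the package here instead of renaming generated classes by hand -->
             <packageName>org.huahsin.jaxws.staff</packageName>
             <sourceDestDir>${basedir}/src</sourceDestDir>
           </configuration>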

Like it or not, this is the way.

JAXWS and EJB can live together

If the SEI code is accessible to the client within the same application, do I still need the following code in order to reach the service?
   URL url = new URL("http://localhost:8080/ws1?wsdl");
   QName qname = new QName("http://webmethod.huahsin.org/", "HelloWorldImplService");
   HelloWorldImplService service = new HelloWorldImplService(url, qname);
   IHelloWorld manager = service.getHelloWorldImplPort();
   System.out.println(manager.sayHelloWorld());
Imagine I have the following SEI code, and it all sits in the same application:
package org.huahsin.webmethod;
 
…
 
@WebService
@SOAPBinding(style=Style.DOCUMENT)
public interface IHelloWorld {
 
 @WebMethod
 String sayHelloWorld();
}
 
 
package org.huahsin.webmethod;
 
…
 
@WebService(endpointInterface="org.huahsin.webmethod.IHelloWorld")
public class HelloWorldImpl implements IHelloWorld {
 
 @Override
 public String sayHelloWorld() {
  return "Hello World";
 }
 
}
It just feels a bit weird to do this when both the client code and the server code live in the same application. I searched the forums and learned that the web service code can also be accessed as an EJB. Just add @Stateless and @Remote on top of the SEI and we are done.
@Remote
@WebService
@SOAPBinding(style=Style.DOCUMENT)
public interface IHelloWorld {
 
 @WebMethod
 String sayHelloWorld();
}
 
 
package org.huahsin.webmethod;
 
…

@Stateless
@WebService(endpointInterface="org.huahsin.webmethod.IHelloWorld")
public class HelloWorldImpl implements IHelloWorld {
 
 @Override
 public String sayHelloWorld() {
  return "Hello World";
 }
 
}
To access the web service code the EJB way, do the following (in my case, from a JSF managed bean):
@ManagedBean
@RequestScoped
public class HelloWorldController {

 @EJB
 private IHelloWorld helloWorld;
 ...
 ...
Sounds cool? The best part is that there is no more stub code to generate.
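For completeness, a minimal sketch of how the injected bean might be used from the controller; the getGreeting() method is my own illustration, not part of the original code:
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

@ManagedBean
@RequestScoped
public class HelloWorldController {

 @EJB
 private IHelloWorld helloWorld;

 // Illustrative getter: delegates straight to the injected EJB.
 public String getGreeting() {
  return helloWorld.sayHelloWorld();
 }
}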

Saturday, December 26, 2015

Asynchronous web service is a real thing

I have waited a long time for this wish to come true, and it is finally proven that an asynchronous web service can be done. First, I write a regular SEI:
package org.huahsin.webmethod;

…

@WebService
@SOAPBinding(style=Style.DOCUMENT)
public interface IHelloWorld {

 @WebMethod
 String sayHelloWorld();
}


package org.huahsin.webmethod;

…

@WebService(endpointInterface="org.huahsin.webmethod.IHelloWorld")
public class HelloWorldImpl implements IHelloWorld {

 @Override
 public String sayHelloWorld() {
  return "Hello World";
 }

}
Then I add the following wsimport plugin configuration to the POM to generate the Java artifacts. But this alone only generates synchronous methods.
    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>jaxws-maven-plugin</artifactId>
        <executions>
            <execution>
                <goals>
                    <goal>wsimport</goal>
                </goals>
                <configuration>
                    <wsdlUrls>
                        <wsdlUrl>http://localhost:8080/ws1?wsdl</wsdlUrl>
                    </wsdlUrls>
                    <bindingDirectory>${basedir}/resources/jaxws</bindingDirectory>
                    <keep>true</keep>
                    <packageName>org.huahsin.jaxws</packageName>
                    <sourceDestDir>${basedir}/src</sourceDestDir>             
                </configuration>
            </execution>
        </executions>
    </plugin>
To generate asynchronous methods, wsimport needs an additional binding file. It is just a regular XML file located at the path that <bindingDirectory> points to, with the following content:
<bindings
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    wsdlLocation="http://localhost:8080/ws1?wsdl"
    xmlns="http://java.sun.com/xml/ns/jaxws">

    <!-- applies to wsdl:definitions node, that would mean the entire wsdl -->
    <enableAsyncMapping>false</enableAsyncMapping>

    <!-- wsdl:portType operation customization -->
    <bindings node="wsdl:definitions/wsdl:portType [@name='IHelloWorld']/wsdl:operation[@name='sayHelloWorld']">
        <enableAsyncMapping>true</enableAsyncMapping>
    </bindings>
  
</bindings>
Its job is to locate the sayHelloWorld operation using XPath and enable asynchronous mapping for that method only, leaving the rest of the methods synchronous so their behaviour is unaffected. Once everything is ready, I first need a publisher to bring the endpoint up, so that wsimport can reach the WSDL:
public class Publisher {
 public static void main(String args[]) {
  Endpoint.publish("http://localhost:8080/ws1", new HelloWorldImpl());
 }
}
Then fire mvn compile to generate the Java artifacts. Compared to the default generation, two additional methods are produced for the asynchronous calls, as shown in the snippet below.
public interface IHelloWorld {
    public Response<SayHelloWorldResponse> sayHelloWorldAsync();

    public Future<?> sayHelloWorldAsync(
        @WebParam(name = "asyncHandler", targetNamespace = "")
        AsyncHandler<SayHelloWorldResponse> asyncHandler);

    ...
}
Now, does my code really work? I wrote this simple program to fire the asynchronous methods:
...

import org.huahsin.jaxws.HelloWorldImplService;
import org.huahsin.jaxws.IHelloWorld;
import org.huahsin.jaxws.SayHelloWorldResponse;

...


public class Client {
 static private String msg = "";
 
 public static void main(String[] args) … {
  Client e = new Client();
  System.out.println("before: " + msg);
  e.sayHelloWorldMethod();
  Thread.sleep(1000);
  System.out.println("after: " + msg);
 }

 private void sayHelloWorldMethod() throws MalformedURLException, InterruptedException, ExecutionException {
  URL url = new URL("http://localhost:8080/ws1?wsdl");
  QName qname = new QName("http://webmethod.huahsin.org/", "HelloWorldImplService");
  HelloWorldImplService service = new HelloWorldImplService(url, qname);
  IHelloWorld hello = service.getHelloWorldImplPort();
  
  // Polling style: block until the response is available.
  Response<SayHelloWorldResponse> res = hello.sayHelloWorldAsync();
  SayHelloWorldResponse output = res.get();
  System.out.println(output.getReturn());
  
  // Callback style: the handler is invoked when the response arrives.
  hello.sayHelloWorldAsync(new AsyncHandler<SayHelloWorldResponse>() {

   @Override
   public void handleResponse(Response<SayHelloWorldResponse> res) {
    try {
     setMessage(res.get().getReturn());
    }
    catch (InterruptedException | ExecutionException e) {
     e.printStackTrace();
    }
   }
   
  });
 }

 private void setMessage(String msg) {
   Client.msg = msg;
 }

}

Thursday, December 24, 2015

HornetQ unable to validate the user

Oh shit!! Error again.
[WARNING] 
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
 at java.lang.Thread.run(Thread.java:745)
Caused by: javax.jms.JMSSecurityException: HQ119031: Unable to validate user: null
 at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:399)
 at org.hornetq.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:880)
 at org.hornetq.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:789)
 at org.hornetq.core.client.impl.ClientSessionFactoryImpl.createSession(ClientSessionFactoryImpl.java:324)
 at org.hornetq.jms.client.HornetQConnection.authorize(HornetQConnection.java:654)
 at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:676)
 at org.hornetq.jms.client.HornetQConnectionFactory.createQueueConnection(HornetQConnectionFactory.java:119)
 at org.hornetq.jms.client.HornetQConnectionFactory.createQueueConnection(HornetQConnectionFactory.java:114)
 at org.huahsin.jms1.MetaData.main(MetaData.java:30)
 ... 6 more
Caused by: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: null]
 ... 15 more
This time HornetQ is unable to validate the user. I hit it while running some JMS test code.

The cause is that createQueueConnection() is called with no arguments. (At the time of writing, the HornetQ JMS client is pulled in through Maven.)

There is actually an overloaded method that takes two additional parameters for authentication. What frustrates me is that IntelliSense only shows it as createQueueConnection(String arg0, String arg1), with no meaningful parameter names, which feels rather irresponsible of whoever published it. OK, enough of that. Passing in the username and password solves the problem, as sketched below.
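A minimal sketch of the fix, assuming the connection factory is looked up from JNDI and an application user exists on the server; the lookup name and credentials below are illustrative, not from the original code:
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.naming.InitialContext;

public class ConnectDemo {
 public static void main(String[] args) throws Exception {
  // Assumes the JNDI environment is configured (e.g. via jndi.properties).
  InitialContext ctx = new InitialContext();
  QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");

  // createQueueConnection() with no arguments is what triggers HQ119031 when security is on.
  // Passing the application user's credentials authenticates the connection:
  QueueConnection connection = factory.createQueueConnection("jmsuser", "jmspassword");
  connection.close();
 }
}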

There is also a workaround: disable HornetQ security in standalone-full.xml.
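Roughly, that setting looks like this, assuming the standard messaging subsystem layout (only the relevant element shown):
<hornetq-server>
    <!-- disables authentication and authorization checks for the whole server -->
    <security-enabled>false</security-enabled>
    ...
</hornetq-server>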

But I can't be so irresponsible as to disable security, right? I opted out of this workaround.

Better solution to exec:java in Maven

Hey! There is a simpler way to execute a Java program with Maven. Looking back at my earlier development journey, I was doing this the complicated way. Actually, I don't need anything more than this:
<plugins>
      <plugin>
       <groupId>org.codehaus.mojo</groupId>
       <artifactId>exec-maven-plugin</artifactId>
       <version>1.4.0</version>
       <executions>
         <execution>
          <id>Meta</id>
          <goals>
            <goal>java</goal>
          </goals>
         </execution>
       </executions>
       <configuration>
         <mainClass>org.huahsin.jms1.QBorrower</mainClass>
       </configuration>
      </plugin>
      ...
      ...
</plugins>
Then just run the goals clean compile exec:java (for instance as an Eclipse Maven run configuration) to execute the Java program.

Thursday, December 17, 2015

Wrong component? EJB can't be used as a web component?

Shit!! Something is wrong! When I deploy my web app to JBoss, it throws this error:
(MSC service thread 1-2) MSC000001: Failed to start service jboss.deployment.unit."ejbWeb1.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."ejbWeb1.war".PARSE: JBAS018733: Failed to process phase PARSE of deployment "ejbWeb1.war"
 at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:127) [jboss-as-server-7.3.4.Final-redhat-1.jar:7.3.4.Final-redhat-1]
 at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1]
 at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_79]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_79]
 at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_79]
Caused by: java.lang.RuntimeException: JBAS018043: org.huahsin.ejb.Authentication has the wrong component type, it cannot be used as a web component
 at org.jboss.as.web.deployment.component.WebComponentProcessor.deploy(WebComponentProcessor.java:120)
 at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:120) [jboss-as-server-7.3.4.Final-redhat-1.jar:7.3.4.Final-redhat-1]
 ... 5 more
The clue is that org.huahsin.ejb.Authentication cannot be used as a web component. OK, fine. What is the root cause? Let's find out...

I got an EJB code as stated below:
package org.huahsin;

@Remote
public interface AuthenticationRemote {
    public int status();
}

package org.huahsin;

@Stateless(name="authenticate")
@LocalBean
public class Authentication implements AuthenticationRemote {
    @Override
    public int status() {
        return 1234;
    }
}
This code lives in a standalone EJB project. Then I have another set of code that accesses the EJB:
package org.huahsin;

@WebServlet("/authentication")
public class Authentication extends HttpServlet {

 @EJB
 private AuthenticationRemote authenticationRemote;

 public Authentication() {
  super();
 }

 @Override
 protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
  int result = authenticationRemote.status();
  ...
  ...
 }
 
 @Override
 protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
  
 }
}
This second piece lives in a separate web app project. To make the two projects work together, I did the following configuration on the web project:
  1. Reference the EJB project in Project References.
  2. Add EJB project in Web Deployment Assembly.
With the code laid out side by side, I think I have spotted the error. Notice that both the web and the EJB project use the same class name (Authentication) and package name (org.huahsin); this is where the error comes from. Reworking the code so the EJB project uses org.huahsin.ejb and the web project uses org.huahsin.web solves the issue.

Tuesday, December 15, 2015

Filter file name with Boost regular expression

The use case goes like this: given file names following the pattern dummyx.txt, where x is a running number, how can I efficiently filter out the names that don't match the pattern? My initial answer was this:
set<wstring> files = ... // assume the list of file names is returned and kept in files

set<wstring>::iterator it;

for( it = files.begin(); it != files.end(); it++ ) {
    if ((*it).find(L"dummy") != string::npos) {   // rough substring check
        /***** proceed with the flow *****/
    }
}
This piece just feels rough. Instead of matching the file name with a substring search, I started thinking of a more elegant way to get it done. The first idea that popped into my mind was a regular expression; here comes the second revision:
boost::regex expr("([\\:/\\\\\\w]+)?dummy(\\d{1,3})(\\.txt)$");
set<wstring>::iterator it;

for( it = files.begin(); it != files.end(); it++ ) {
    string s((*it).begin(), (*it).end());
    if (boost::regex_match(s, expr)) {
        /***** proceed with the flow *****/
    }
}
Much better? At least the code looks more elegant than the previous piece. Not only does it filter out names that don't match the pattern, it also rejects names that contain only dummy without a running number.

Sunday, December 13, 2015

Install OpenGL libraries with NuGet Package Manager

Actually, there is a better way to install the OpenGL libraries in Visual Studio 2013 Express Edition. Only now do I realize there is a powerful tool for this: the NuGet Package Manager. One requirement for this feature to work is that a project must be created first, before the download can proceed, because the libraries are installed under that particular project's path rather than under Program Files.

When I first opened the Package Manager Console, I was greeted by a welcome message:
Each package is licensed to you by its owner. Microsoft is not responsible for, nor does it grant any licenses to, third-party packages. Some packages may include dependencies which are governed by additional licenses. Follow the package source (feed) URL to determine any dependencies.

Package Manager Console Host Version 2.8.60610.756

Type 'get-help NuGet' to see all available NuGet commands.

PM>
And then type in Install-Package nupengl.core. Once done, this should be seen:
PM> Install-Package nupengl.core
Attempting to resolve dependency 'nupengl.core.redist (≥ 0.1.0.1)'.
Installing 'nupengl.core.redist 0.1.0.1'.
Successfully installed 'nupengl.core.redist 0.1.0.1'.
Installing 'nupengl.core 0.1.0.1'.
Successfully installed 'nupengl.core 0.1.0.1'.
Adding 'nupengl.core.redist 0.1.0.1' to OpenGl1.
Successfully added 'nupengl.core.redist 0.1.0.1' to OpenGl1.
Adding 'nupengl.core 0.1.0.1' to OpenGl1.
Successfully added 'nupengl.core 0.1.0.1' to OpenGl1.

PM>
When I first used this feature, I didn't realize the project (solution) must stay open. I ended up with this error:
PM> Install-Package nupengl.core
Install-Package : The current environment doesn't have a solution open.
At line:1 char:16
+ Install-Package <<<<  nupengl.core
    + CategoryInfo          : InvalidOperation: (:) [Install-Package], InvalidOperationException
    + FullyQualifiedErrorId : NuGetNoActiveSolution,NuGet.PowerShell.Commands.InstallPackageCommand
Ahhhh... so nice. This is far better than last time; everything is automated. Once I was done with the installation, the first thing I did was test the GL version installed on my machine:
int main(int argc, char** argv)
{
 ...

 glewInit();
 if (glewIsSupported("GL_VERSION_4_5"))
  std::cout << "GLEW version is 4.5" << std::endl;
 else
  std::cout << glGetString(GL_VERSION) << std::endl;

}
Unfortunately, I couldn't get my expected output. The output I got was 4.3.0 - Build 10.18.10.3995. Hmmm...

Clumsy mistake on Hibernate Filter

Before I met the Hibernate Filter, the where clause was my best friend for filtering a query. Assume I have a use case that filters a date range from a result set; this is what I usually did:
public void hibernateQuery() throws SQLException
{
 Configuration cfg = new Configuration();
 cfg.configure("hibernate.cfg.xml");
 SessionFactory fac = cfg.buildSessionFactory();
 Session s = fac.openSession();
 
 SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
 Calendar cal = Calendar.getInstance();
 cal.set(2015, 0, 27);
 
 Query res = s.createQuery("from TheTable where to_char(parseDateTime(theDate, 'yyyy-mm-dd'), 'yyyy-mm-dd') = '" + 
               sdf.format(cal.getTime()) + 
               "'");
 List<TheTable> l = res.list();
 
 /***** process your data *****/

 s.close();
}
And this is the Hibernate mapping of the table:
<hibernate-mapping>
    ...
    <class catalog="TEST" name="org.huahsin.model.TheTable" table="THE_TABLE">
        <property name="theDate" type="timestamp">
            <column length="23" name="THE_DATE" not-null="true"/>
        </property>
    </class>
</hibernate-mapping>
Notice that the where clause is attached to the query and works against the mapped object's properties. With a Hibernate Filter, the where clause is detached from the query and the filtering is defined in the Hibernate mapping instead. Just like this:
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
Filter filter = s.enableFilter("FilterByDate");

Calendar cal = Calendar.getInstance();
cal.set(2015, 0, 28);
filter.setParameter("beforeDate", sdf.format(cal.getTime()));

Calendar cal2 = Calendar.getInstance();
cal2.setTime(cal.getTime());
cal2.add(Calendar.DATE, 1);
filter.setParameter("afterDate", sdf.format(cal2.getTime()));

Query res = s.createQuery("from TheTable");
And the Hibernate mapping would be like this:
<hibernate-mapping>
    ...
    <class name="org.huahsin.model.TheTable" table="THE_TABLE" catalog="TEST">
        ...
        <filter name="FilterByDate" condition="to_char(parseDateTime(THE_DATE, 'yyyy-mm-dd'), 'yyyy-mm-dd') between :beforeDate and :afterDate" />
    </class>
    <filter-def name="FilterByDate">
        <filter-param name="beforeDate" type="string"/>
        <filter-param name="afterDate" type="string"/>
    </filter-def>
</hibernate-mapping>
Notice that the condition attribute (which plays the role of the where clause) is written in plain SQL against the real column name. The clumsy mistake I made was to use the Hibernate property in the condition attribute instead of the column, which made the result set come back empty, as illustrated below. I'm just so clumsy!
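To make the mistake concrete, my broken filter looked roughly like this, with the mapped property where the SQL column should be (reconstructed from memory, not copied from the original mapping):
        <!-- Wrong: theDate is the Hibernate property, not a column the database knows about -->
        <filter name="FilterByDate" condition="to_char(parseDateTime(theDate, 'yyyy-mm-dd'), 'yyyy-mm-dd') between :beforeDate and :afterDate" />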

Sunday, December 6, 2015

Hijacking the logger, so rude!

Last Friday might be my happiest day of 2015 because I figured out how to override a log4j logger in a unit test. Take the use case below as an example, where the logger is declared as a private static member:
import org.apache.log4j.Logger;

public class Helper {
   private static Logger jdbcLogger = Logger.getLogger("jdbcLogger");

   ...
   ...
}
This was a nightmare because the JDBC logger depends heavily on the database's availability, especially when I've been told to freeze any updates to the database. I ended up creating a local database just for this purpose, which in turn required reconfiguring the JDBC connection in my source code. That still isn't a clean solution, since I don't want Jenkins to have to touch the database configuration in the raw code. Then I started thinking about hijacking the logger, and this is what I did:
private void hijackJdbcLogger() {
   Logger l = Whitebox.getInternalState(Helper.class, "jdbcLogger");
   l.removeAllAppenders();
   l.addAppender(appender);

   Whitebox.setInternalState(Helper.class, "jdbcLogger", l);
}
But later I found a flaw in this solution: jdbcLogger must first be initialized by the Helper class before I can change its value. What surprised me is that I can instead replace the whole logger with a mock:
@BeforeClass
public static void setUpBeforeClass() throws NoSuchFieldException, SecurityException, IllegalArgumentException, IllegalAccessException {
   Logger mockLogger = Logger.getLogger("unitTestLogger");
   mockLogger.setLevel(Level.DEBUG);
   mockLogger.setAdditivity(false);
   mockLogger.addAppender(EasyMock.createMock(ConsoleAppender.class));

   Field logger = Helper.class.getDeclaredField("jdbcLogger");
   logger.setAccessible(true);
   logger.set(null, mockLogger);
}
Now this solution feels right. Still, as a rule of design, hijacking is very rude; don't do this. When something is declared private, it is not supposed to be exposed to the outside world. A code review deserves higher priority in such a case.

Tuesday, December 1, 2015

No last call on EasyMock

I have a static function that is as simple as the following:
public class Utility {
   public static String funcA() {
      return "ABC";
   }
}
I was shocked to see the following error while unit testing the Utility class:
java.lang.IllegalStateException: no last call on a mock available
 at org.easymock.EasyMock.getControlForLastCall(EasyMock.java:560)
 at org.easymock.EasyMock.expect(EasyMock.java:538)
 at org.huahsin.unittest.theprocess(SUT.java:72)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
 at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
 at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
 at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:86)
 at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
 at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:33)
 at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:45)
 at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
 at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
 at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
 at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
 at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
 at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192
The error message is quite confusing; what it actually indicates is that EasyMock was unable to mock the call because the function is declared static. Since EasyMock alone cannot handle static methods, PowerMock is the best fit here.

To let EasyMock see the static method, I have to announce to the framework that I'm sending in a class with static methods and I'm about to unit test it.
@RunWith(PowerMockRunner.class)
@PrepareForTest(Utility.class)
public class SUT {
   @BeforeClass
   public static void setUpBeforeClass() {
      PowerMock.mockStatic(Utility.class);
   }

   @Test
   public void theprocess() throws InvalidDataFileException {
      EasyMock.expect(Utility.funcA()).andReturn("genius");

      PowerMock.replayAll();

      /***** start test code when mock code are done *****/
   }
}
Only then does EasyMock know what to do. Remember to call PowerMock.replayAll() to get the mock objects ready, otherwise the expectations won't take effect.

Sunday, November 29, 2015

Accessing std::set container by index position

Like it or not, accessing an element of a set container by index position is different from a vector, because set doesn't support operator[]. There is a slightly odd way to do it, though. Before I became aware of this shortcut, I was doing it like this:
   set<wstring> s;
   set<wstring>::iterator is;
   for (is = s.begin(); is != s.end(); is++)
      ...
This loops until my desired string is found. Unlike vector, I cannot do something like s[0]; the compiler will never let that through. Sigh! If it's not supported, I have to find another way. From what I found, this seems to work:
   ...

   str = (*std::next(s.begin(),4)).c_str();
   ...
With this, I can access the element at index 4 of s. I had better be sure the set actually holds that many elements, otherwise this dereferences past the end and blows up at runtime.

Thursday, November 26, 2015

Trick to iterate queue in C++

Oh shit! An iterator is not part of the queue interface, so how am I supposed to iterate over the contents of a queue?

While unit testing my code, I wanted to check whether the queue contained a particular search string. So I have a QueuePath that stores some text, and iterating it the following way failed:
typedef queue<wstring> QueuePath;
QueuePath list;                   // the queue holding the text to check
QueuePath::iterator it;           // error: std::queue does not define an iterator type
for( it = list.begin(); it != list.end(); it++ ) {
   …
}
This hits a compilation error because an iterator isn't part of queue's interface; a deque would be a better fit. But since my earlier design started with queue, I didn't really want to change it now. While searching for a solution, I discovered a neat trick on ideone.com for iterating a queue: a workaround that exposes an iterator by reaching into the queue's underlying container.
#include <deque>
#include <iostream>
#include <queue>

using namespace std;

template< typename T, typename Container=std::deque<T> >
class MyQueue : public queue<T,Container>
{
public:
	typedef typename Container::iterator iterator;

	iterator begin() { return this->c.begin(); }
	iterator end() { return this->c.end(); }
};
int main(int argc, char* argv[])
{
	MyQueue<int> q;
	for (int i = 0; i < 10; i++)
		q.push(i);

	for (auto it = q.begin(); it != q.end(); it++)
		cout << *it << endl;

	return 0;
}
Although this workaround is really cool, I was reluctant to change my code. And think about it: do I really need this just for unit testing? Since I'm only working in a unit test, something like the following gets the job done:
while( !list.empty() ) {
        …
        list.pop();
}
Remember the objective? This unit test only needs to verify that the content of the queue is correct.

Tuesday, November 24, 2015

JSF session in JMeter

Just a quick note: when running performance tests against JSF with JMeter, a fresh JSF ViewState value has to be captured every time a thread starts, just to get a single thread (one user) to execute successfully, never mind multiple threads (more than one user) at a time. Otherwise the test fails, even though it was recorded through the Recording Controller. To capture the new JSF ViewState value, I created a Regular Expression Extractor post-processor under the GET request with the following details:

Reference Name: jsfViewState
Regular Expression: id=\"javax.faces.ViewState\" value=\"(.+?)\"
Template: $1$
Match No. (0 for Random): 1

Then, in the POST request that fails, replace the javax.faces.ViewState parameter value with ${jsfViewState}. On top of that, an HTTP Cookie Manager has to be placed in the thread group to make it all work.

Sunday, November 22, 2015

My new reinforcement on C++ unit test

A few weeks ago, while trying out CppUnit for unit testing, I found that it wouldn't work because I don't have the MFC framework installed on my Windows machine. Now I have discovered Boost.Test for this critical mission. For first contact with the new discovery, I had the following code ready to charge:
#define BOOST_TEST_MODULE Hello
#include <boost/test/unit_test.hpp>

int add(int i, int j)
{
    return i+j;
}

BOOST_AUTO_TEST_CASE(Case1)
{
    BOOST_CHECK(add(2,2) == 4);
}
Interestingly, the test didn't execute; instead my own main entry point, int main(int argc, char* argv[]), was called. I spent the whole day reading through the documentation without getting a clue. Only when I removed my own main did something show up on the screen; defining BOOST_TEST_MODULE makes Boost.Test supply its own main, so the test program must not define one.
Running 1 test case...

*** No errors detected
Press <return> to close this window...
This is pretty exciting: my first unit test is up and running. Planning ahead, I will need a separate project just for the unit tests.

Saturday, November 14, 2015

initializationError occurred in unit testing

What a bad day.

It has been a long time since I last ran my unit tests, and many revisions have gone into the source code since then. According to the experts, unit tests must be run once in a while to ensure the code still works as expected. But today when I executed the unit tests, they showed an initializationError. What the heck!! What is wrong with my code? The error occurred before any test code even ran.

According to the forums, this can be caused by the classpath containing two different versions of Hamcrest. Looking at my classpath, I don't see two Hamcrests, but I do see two JUnit libraries: one provided by Eclipse and another pulled in by PowerMock. Could that be the root cause? Hmmm... removing the one provided by Eclipse makes it work! What the heck!

The conclusion: use either the JUnit provided by Eclipse or my own imported JUnit, but not both.

Tuesday, November 10, 2015

Duplicate section has different size

Looking at the following error, I think I have messed up the build.
C:\Tool\boost_1_54_0\stage\lib\libboost_filesystem-mgw48-mt-1_54.a(operations.o):-1: 
error: duplicate section `.rdata$_ZTSN5boost6detail17sp_counted_impl_pINS_10filesystem6detail11dir_itr_impEEE[__ZTSN5boost6detail17sp_counted_impl_pINS_10filesystem6detail11dir_itr_impEEE]' 
has different size
I had rebuilt the Boost unit test framework, and once that was done, this error appeared when I tried to build my application. According to the forums, it happens when object files and libraries are out of date with each other, so the linker can no longer combine them and complains.

I don't have much choice here; rebuilding everything is the only way.

Hibernate pagination

The pagination pattern is very popular in application development. Its purpose is to reduce the memory footprint on the application server when retrieving huge amounts of data from the database. In SQL (Oracle-style rownum), it can be done this way:
select * from (
   select row, blah, blah, blah from (
      select rownum as row, tabA.* from tableA tabA
   )
   where row < pageNumber*rowPerPage
)
where row >= (pageNumber-1)*rowPerPage
Unfortunately, I wasn’t using Spring JdbcTemplate for the work. To recap, I have my jdbcTemplate being configured in such a way:
    <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName" value="java:/comp/env/jdbc/MyApp"/>
        <property name="lookupOnStartup" value="false"/>
        <property name="cache" value="true"/>
        <property name="proxyInterface" value="javax.sql.DataSource"/>
    </bean>

    <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource" ref="dataSource"/>
    </bean>
Instead, I was using Hibernate for the work. To recap, the Hibernate session factory is configured like this:
    <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">

        <property name="dataSource">
            <ref bean="dataSource"/>
        </property>

        <property name="hibernateProperties">
            …
        </property>

       <property name="annotatedClasses">
            …
        </property>
    </bean>

    <bean id="theDAO" class="org.huahsin.theDAOImpl">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
To achieve the same thing as the SQL above, the Hibernate query is done the following way:
public List getFunction(int pageNumber, int rowPerPage) {
   Criteria criteria = theDAO.getCurrentSession().createCriteria(ModelA.class, "modA");
   criteria.setMaxResults(rowPerPage);
   criteria.setFirstResult((pageNumber - 1) * rowPerPage);
   return criteria.list();
}

Saturday, November 7, 2015

CppUnit requires the MFC library on a Windows environment?!!

Has anyone done unit testing in C++? Or am I talking to myself?

Well, I was influenced by the unit testing I did in Java. After many years of working on C/C++ programs, only now do I realize I have been missing unit testing in my development. I'm curious why nobody mentioned it before. Maybe they felt unit testing was a waste, or they simply didn't care? In the Java world, people are very concerned about unit testing. Anyhow, I downloaded a copy of the unit testing framework CppUnit and put it into my very personal, private, secret project.

Just a side note when getting the CppUnit source: don't download it from the website, it's a trap, because those files are suffixed with ",v" in the file extension. The real source can be downloaded through SVN. Once that's done, build it. There is no free lunch in the open source world.

When I was about to build: oh no, how do I build on a Windows machine? configure and make only work on Linux...

Well, the instructions are in the files whose names start with INSTALL, and there are several of them. Since I'm building in a Windows environment, I looked for INSTALL-WIN32.txt and followed the instructions in that file. Unfortunately, the CppUnit build failed because afxwin.h is missing: I'm using Visual Studio Express Edition, which doesn't ship MFC, so it doesn't have that header file.

Crap! What a bad day.

Thursday, November 5, 2015

Automatically trigger event when user has done editing

In the existing behavior of this use case, the update event is triggered when the user presses the Enter key while the cursor is still focused on the text field.
<h:form id="theform">
   <p:inputText id="search" value="#{controller.text}" placeholder="Press enter to search" onkeypress="if(event.keyCode == 13) { searchCommand(); return false; }" />
   <p:remoteCommand name="searchCommand" actionListener="#{controller.doSearch}" update="dataTable" />
   <p:commandButton onclick="searchCommand()" value="search" update="dataTable" />
</h:form>
When the field is cleared, the user has to press Enter again to trigger the update. I found this behavior annoying; it would be much better if the field automatically triggered the update event when the user finished editing. So I used an Ajax listener to do this.
<h:form id="theform">
   <p:inputText id="search" value="#{controller.searchText}" placeholder="Press enter to search">
      <p:ajax listener="#{controller.doSearch}" update="dataTable" />
   </p:inputText>
   <p:remoteCommand name="searchCommand" actionListener="#{controller.doSearch}" update="dataTable" />
   <p:commandButton onclick="searchCommand()" value="search" update="dataTable" />
</h:form>
AHhhh~ This was nice, now I feel complete.

Sunday, November 1, 2015

Building Boost with MinGW compiler

Is it really that hard to build the Boost libraries with the MinGW compiler on Windows?

I have the Windows Boost libraries built for the Microsoft compiler, but linking them in Qt Creator as shown below failed, even though I had configured the build kit to use the Microsoft compiler in Qt Creator.
win32 {
   INCLUDEPATH += C:/Tool/boost_1_54_0
   LIBS += -LC:/Tool/boost_1_54_0/lib64-msvc-11.0 -lboost_filesystem-vc110-1_54
}
Looking around the Internet, most people use a MinGW build of Boost when working with Qt, not the Microsoft build. Building a MinGW version of Boost, however, wasn't as straightforward as the following commands:
c:\tool\boost_1_54_0>bootstrap.bat

c:\tool\boost_1_54_0>b2.exe --toolset=gcc
I did try the commands above, but nothing ended up being generated in stage/lib. I then found a workaround for building the MinGW version of Boost:

Step 1
Go to <BOOST_ROOT>/tools/build/v2/engine, and fire the command: build.bat mingw. This will generate bin.ntx86 folder under the same path.

Step 2
Set the environment variable: set PATH=%PATH%;<BOOST_ROOT>/tools/build/v2/engine/bin.ntx86. Without this, Windows will not know where to find bjam.

Step 3
Set the environment variable: set PATH=%PATH%;<MinGW_ROOT>/bin. Without this, bjam will not be able to find gcc.

Step 4
Fire the command: bjam toolset=gcc. This should generate a bunch of libraries with file name that contain mgw with .a extension.

One last note: bjam requires mingw32-libz to be installed alongside the MinGW compiler before Boost starts building. Once done, reconfigure the .pro file with the following:

win32 {
   INCLUDEPATH += C:/Tool/boost_1_54_0
   LIBS += -LC:/Tool/boost_1_54_0/stage/lib -lboost_filesystem-mgw48-mt-1_54
}

Friday, October 30, 2015

Hibernate session is closed!

WHAT! The session was closed? Hibernate has closed the shop? No more business?
2015-10-29 12:29:04 ERROR javax.faces.event.MethodExpressionActionListener[180]: org.hibernate.SessionException: Session is closed!
 at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:129)
 at org.hibernate.internal.SessionImpl.createCriteria(SessionImpl.java:1576)
 at org.huahsin.MyBoImpl.filterBottle(MyBoImpl.java:123)
 ...
 ...
 ...
This is so ridiculous!! Maybe I have overlooked something. Remember, I have the TransactionInterceptor defined this way:
<bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
        <property name="transactionManager">
            <ref bean="transactionManager"/>
        </property>
        <property name="transactionAttributes">
            <props>
                <prop key="add*">PROPAGATION_REQUIRED</prop>
                <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop>
            </props>
        </property>
    </bean>
Any method defined in the BO/DAO whose name doesn't start with add or get can expect to see this error, because no transaction (and therefore no session) is opened for it. In my case, as the stack trace shows, it is filterBottle() in MyBoImpl. To fix this, I need to define an additional entry under transactionAttributes in the TransactionInterceptor, as sketched below.
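A sketch of the extra entry, assuming the filtering methods share a filter prefix (adjust the key to whatever the real method names are):
        <property name="transactionAttributes">
            <props>
                <prop key="add*">PROPAGATION_REQUIRED</prop>
                <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop>
                <!-- covers filterBottle() and any other filter* method -->
                <prop key="filter*">PROPAGATION_REQUIRED,readOnly</prop>
            </props>
        </property>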

Thursday, October 29, 2015

I want the app to be flexible enough during run-time

In the Eclipse IDE, the simplest way to configure the classpath is to create a source folder. It is clean and easy. But when using ANT to package a JAR, the classpath is set in the manifest like this:
<project ...>
   <target name="build">
      <jar destfile="./program.jar">
         <manifest>
            <attribute name="Main-Class" value="..."/>
            <attribute name="Class-Path" value="..."/>
         </manifest>
      </jar>
   </target>
</project>
Usually the classpath is used to reference library paths. But I never thought it could also be used to reference configuration files, the files that are usually in XML or properties form. Assuming the configuration files are located at config_path, the Class-Path entry would look like this:
<project ...>
   <target name="build">
      <jar destfile="./program.jar">
         <manifest>
            <attribute name="Main-Class" value="..."/>
            <attribute name="Class-Path" value="config_path/ lib_path/the.jar ..."/>
         </manifest>
      </jar>
   </target>
</project>
With this approach, I have much greater flexibility to change the application's behavior at any time at run-time, simply by editing the files under config_path. As sketched below, the code can then load those files straight off the classpath.
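A minimal sketch of what that looks like from the application side; the file name app.properties is my own example and assumes such a file sits under config_path:
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigLoader {
   public static Properties load() throws IOException {
      Properties props = new Properties();
      // Because config_path/ is on the manifest Class-Path, the file is resolved
      // through the classpath rather than from a hard-coded location.
      try (InputStream in = ConfigLoader.class.getClassLoader().getResourceAsStream("app.properties")) {
         if (in != null) {
            props.load(in);
         }
      }
      return props;
   }
}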

Sunday, October 25, 2015

There are two JNDI bindings for EJB

This is ridiculous; only now do I realize there are two types of EJB JNDI context available for connecting a client to the server. Assume I have the following JNDI bindings ready:
   java:global/ejb1/AuthenticationImpl!org.huahsin.AuthenticationRemote
   java:app/ejb1/AuthenticationImpl!org.huahsin.AuthenticationRemote
   java:module/AuthenticationImpl!org.huahsin.AuthenticationRemote
   java:jboss/exported/ejb1/AuthenticationImpl!org.huahsin.AuthenticationRemote
   java:global/ejb1/AuthenticationImpl
   java:app/ejb1/AuthenticationImpl
   java:module/AuthenticationImpl
The first is org.jboss.naming.remote.client.InitialContextFactory, which requires an additional library, jboss-client.jar, on the classpath. This is also the most hassle-free and easiest to set up. The following code shows how it is done:
   Properties props = new Properties();
   props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
   props.put(Context.PROVIDER_URL, "remote://127.0.0.1:4447");

   InitialContext context = new InitialContext(props);

   AuthenticationRemote bean = (AuthenticationRemote) context.lookup("ejb1/AuthenticationImpl!org.huahsin.AuthenticationRemote");
The second is org.jboss.ejb.client.naming, which consists of two parts. The first part is the setup in code, as shown below:
   Hashtable props = new Hashtable();
   props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
  
   InitialContext context = new InitialContext(props);

   String appName = "";
   String moduleName = "ejb1";
   String distinctName = "";
   String beanName = AuthenticationImpl.class.getSimpleName();
   String interfaceName = AuthenticationRemote.class.getName();
   String name = "ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + interfaceName;
  
   AuthenticationRemote bean = (AuthenticationRemote) context.lookup(name);
The second part is the configuration file jboss-ejb-client.properties, which must be placed on the classpath. Without it, no connection to the server can be established. Its content is as follows:
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
 
remote.connections=default
 
remote.connection.default.host=127.0.0.1
remote.connection.default.port = 4447
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
Phew! Finally, I've got things clear.

Thursday, October 22, 2015

org.jboss.naming.remote.client.InitialContextFactory was not found in JBoss server runtime

The same piece of code, executed on different PCs, gives me different results.
 Properties props = new Properties();
 props.put("java.naming.factory.url.pkgs", "org.jboss.ejb.client.naming");
 props.put("java.naming.factory.initial", "org.jboss.naming.remote.client.InitialContextFactory");
 props.put("java.naming.provider.url", "remote://127.0.0.1:4447");
 props.put("jboss.naming.client.ejb.context", "true");
 props.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT","false");

 InitialContext context = new InitialContext(props);

The code above compiled and executed successfully without error on one PC. But when I moved it to another PC, it failed at runtime with the following error:
Caused by: java.lang.ClassNotFoundException: org.jboss.naming.remote.client.InitialContextFactory
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:274)
 at com.sun.naming.internal.VersionHelper12.loadClass(VersionHelper12.java:72)
 at com.sun.naming.internal.VersionHelper12.loadClass(VersionHelper12.java:61)
 at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:671)
 ... 4 more
Although I already have JBoss EAP 6 configured as the server runtime in my project, unfortunately none of its JARs contain the org.jboss.naming.remote.client.InitialContextFactory class. The class actually lives in jboss-client.jar, which unluckily is not part of the JBoss server runtime classpath. Sigh... So I have to add this JAR to the classpath explicitly; it is easily found under <jboss_root>\bin\client.

Sunday, October 18, 2015

I'm so confused with WSDL

I’m confused, I’m screwed.

I’m confused because there seems so many things need to be done in order to generate a WSDL file. I need to create the POJO class; I need to create the XSD; I need to build the WSDL and link XSD together; I need to create publisher class to publish the web service. And then generate the JAX-WS JAVA artifacts… No, I don’t need that at the moment. Let’s see how I begin this mess.

With the bottom-up approach, one way to get at the WSDL is to publish the service through the Endpoint class, like I did in the following code:
@WebService(serviceName="HelloWorldService", portName="HelloWorldPort", endpointInterface="org.huahsin.ws.HelloWorld")
public class HelloWorldEndpoint {
 public static void main(String args[]) {
  HelloWorld inst = new HelloWorld();
  Endpoint.publish("http://localhost:8080/wsAsync", inst);
  System.out.println("service published");
 }
}
Assuming I have the HelloWorld POJO declared in this way:
@WebService(serviceName="HelloWorldService")
public class HelloWorld {
 @WebMethod(operationName="hello")
 public String hello(@WebParam(name="name") String name) {
  return "Hello " + name;
 }
}
Execute the program, point a web browser at http://localhost:8080/wsAsync?wsdl, and the WSDL content is shown. Then save the WSDL by selecting all the text in the page, pasting it into Notepad, and saving it as a file with a .wsdl extension. Needless to say, this isn't a good way to generate the WSDL file.

Actually, there is a better approach using the wsgen command. The usage is shown below:

C:\project\ws>wsgen -verbose -wsdl -keep -cp target\classes org.huahsin.HelloWorld

Don’t confuse with –cp option, it is referring to the path where the compiled class of org.huahsin.HelloWorld is stored (I thought it was the JDK path). And the –wsdl option is to tell the wsgen command to generate a WSDL file. I feel much better with the tool now. Besides that, JBoss does have the similar utility, named wsprovide, and the usage is as follow:

C:\project\ws>wsprovide -w -k -c target\classes org.huahsin.HelloWorld

But make sure its \bin directory is registered in the PATH environment variable before it is usable.

Thursday, October 1, 2015

wsimport wasn't covered by lifecycle configuration

I was working on a project that requires Maven to generate Java artifacts from a WSDL, and this is the first time I'm doing it with Maven. Somehow Eclipse m2e doesn't seem to support the wsimport lifecycle. My Maven configuration is as follows:
<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>jaxws-maven-plugin</artifactId>
   <executions>
      <execution>
            <goals>
               <goal>wsimport</goal>
            </goals>
            <configuration>
               <wsdlLocation>http://huahsin.org/wsasync</wsdlLocation>
               <wsdlDirectory>${basedir}/resources</wsdlDirectory>
               <keep>true</keep>
               <packageName>org.huahsin.ws</packageName>
               <sourceDestDir>${basedir}/src</sourceDestDir>    
            </configuration>
         </execution>
      </executions>
</plugin>
This is the original error message:
  Plugin execution not covered by lifecycle configuration: 
  org.codehaus.mojo:jaxws-maven-plugin:1.12:wsimport (execution: default, phase: generate-sources)
I thought adding the generate-sources phase to the configuration would help. In fact, it didn't.
   …
 
   <executions>
      <execution>
         <phase>generate-sources</phase>
            <goals>
   … 
Sigh~ why am I still scratching my head over a problem when m2e already provides the solution? With the POM file open in Eclipse, hover over the error and click Discover new m2e connectors to retrieve the jaxws-maven-plugin connector. This connector is specifically designed to handle wsimport for me. Worry free.

Sunday, September 27, 2015

Duplicate class while generating artifacts from XSD

Given the following XSDs:

Request.xsd
 <xsd:element name="HelloWorldReqType">
  <xsd:complexType>
   <xsd:sequence>
    <xsd:element id="name" name="name" maxOccurs="1" minOccurs="1"/>
    <xsd:element id="gender" name="gender" maxOccurs="1" minOccurs="1"/>
   </xsd:sequence>
  </xsd:complexType>
 </xsd:element>
Response.xsd
 <xsd:element name="HelloWorld_response">
  &lt xsd:complexType>
   <xsd:sequence>
    <xsd:element id="greetings" name="greetings" maxOccurs="1" minOccurs="1"/>
   </xsd:sequence>
  </xsd:complexType>
 </xsd:element>
I should not explicitly define the same set of request and response elements in the WSDL when I have chosen to import them. Unfortunately, this is what I did to the WSDL:
  <wsdl:types>
   <xsd:schema targetNamespace="http://www.example.org/HelloWorld/">
     <xsd:import namespace="http://www.example.org/HelloWorldReq" schemaLocation="../resources/HelloWorldReq.xsd"/>
     <xsd:import namespace="http://www.example.org/HelloWorldRes" schemaLocation="../resources/HelloWorldRes.xsd"/>
     <xsd:element name="helloWorld_request">
      <xsd:complexType>
       <xsd:sequence>
        <xsd:element name="name" type="xsd:string" maxOccurs="1" minOccurs="1"/>
        <xsd:element name="gender" type="xsd:string" maxOccurs="1" minOccurs="1"></xsd:element>
       </xsd:sequence>
      </xsd:complexType>
     </xsd:element>
     <xsd:element name="helloWorld_response">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="greetings" type="xsd:string" maxOccurs="1" minOccurs="1"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>    
    </xsd:schema>
  </wsdl:types>
Lines 3 and 4 are the import statements that bring the two XSDs into the WSDL, whereas starting from line 5 I define the same set of elements again as in the XSDs. This produces an error complaining about duplicate classes while generating the Java artifacts, as shown below:
parsing WSDL...


[ERROR] A class/interface with the same name "org.huahsin.ws.HelloWorldResponse" is already in use. Use a class customization to resolve this conflict.
  line 23 of file:/home/kokhoe/workspace/wsAsync/resources/HelloWorld.wsdl

[ERROR] (Relevant to above error) another "HelloWorldResponse" is generated from here.
  line 8 of file:/home/kokhoe/workspace/wsAsync/resources/HelloWorldRes.xsd

[ERROR] Two declarations cause a collision in the ObjectFactory class.
  line 8 of file:/home/kokhoe/workspace/wsAsync/resources/HelloWorldRes.xsd

[ERROR] (Related to above error) This is the other declaration.   
  line 23 of file:/home/kokhoe/workspace/wsAsync/resources/HelloWorld.wsdl 
The lesson learned here: never be so greedy. Choose one side, either the XSD or the WSDL, in which to define the elements.

Thursday, September 24, 2015

Initialization list doesn't work for virtual constructor?

A virtual base class is always initialized before the other derived classes. This is known behavior, and I'm aware of it. But what surprised me is that when an intermediate class passes a parameter to the virtual base class constructor in its member initialization list, that initialization is ignored. The following piece proves this statement.
#include <iostream>
using namespace std;

class Parent {
public:
 Parent() : param(0) {
  cout << "Parent constructor" << endl;
 }

 Parent(int param) : param(param){
  cout << "Parent constructor(param)" << endl;
 }

 int getParam() { return param; }

private:
 int param;
};

class Base : virtual public Parent {
public:
 Base() : Parent(5373) {
  cout << "Base constructor" << endl;
 }
};

class Xtends : public Base {
public:
 Xtends() {
  cout << "Xtends constructor" << endl;
 }
};
When I execute the following piece, the param value would be 0:
   Xtends x;
   cout << "param: " << x.getParam() << endl;
Notice that Base is an intermediate class here. The value 5373 passed to the Parent(int) constructor is simply ignored, because the most derived class (Xtends) is responsible for initializing the virtual base, and since Xtends does not mention Parent in its own initializer list, Parent's default constructor runs and param stays 0. If Xtends were written as Xtends() : Parent(5373) { ... }, the value would be used. When Base is instead the most derived class being constructed, as in the following code, the value 5373 shows up:
   Base b;
   cout << "param: " << b.getParam() << endl;

Tuesday, September 22, 2015

Insufficient memory to create a queue

I'm a poor kid. I can't afford to buy extra memory for my queue. Too bad. And too sad.

After the root access issue was resolved in WebSphere MQ Explorer, I was able to create a queue, but a prompt showed the following details to me:
****************************************
* Command: /opt/mqm/bin/crtmqm  Q1
****************************************
WebSphere MQ queue manager created.
Directory '/var/mqm/qmgrs/Q1' created.
The queue manager is associated with installation 'Installation1'.
AMQ6024: Insufficient resources are available to complete a system request.
exitvalue = 36 
I wasn't sure whether this was a general greeting or welcome message after a queue has been created, but I verified the queue was there under /var/mqm/qmgrs. To confirm this message isn't friendly to me, I issued another command, sudo strmqm Q1, and got the same result:
$ sudo strmqm Q1
The system resource RLIMIT_NOFILE is set at an unusually low level for WebSphere MQ.
WebSphere MQ queue manager 'Q1' starting.
The queue manager is associated with installation 'Installation1'.
AMQ6024: Insufficient resources are available to complete a system request.
Clearly, this isn't a good thing. According to expert advice, there should be an error log located at /var/mqm/errors/. When I open the file located in that path, I see something like the below:
+-----------------------------------------------------------------------------+
|                                                                             |
| WebSphere MQ First Failure Symptom Report                                   |
| =========================================                                   |
...
...
...
| Comment1          :- Failed to get memory segment: shmget(0x00000000,       |
|   73834496) [rc=-1 errno=22] Invalid argument                               |
| Comment2          :- Invalid argument                                       |
| Comment3          :- Configure kernel (for example, shmmax) to allow a      |
|   shared memory segment of at least 73834496 bytes                          |
|                                                                             |
+-----------------------------------------------------------------------------+
This reminds me that there are additional settings for WebSphere MQ on Linux systems that need to be configured. According to the guide, the minimum configuration required for WebSphere MQ is:
    kernel.shmmni = 4096
    kernel.shmall = 2097152
    kernel.shmmax = 268435456
    kernel.sem = 500 256000 250 1024
    fs.file-max = 524288
    kernel.pid-max = 120000
    kernel.threads-max = 48000
Among the settings, only shmmax and sem are not up to par. Below is what I have on the current system:
$ cat /proc/sys/kernel/shmmax
33554432
$ cat /proc/sys/kernel/sem
250 32000 32 128
The following steps are what I did to fix this issue:
  1. Open the file /etc/sysctl.conf.
  2. Append the required configuration for shmmax and sem to the end of the file.
  3. Reload the configuration with the command sysctl -p.

Wednesday, September 2, 2015

Where is MQ_INSTALLATION_PATH?

Sometimes I do wonder where the installation path is. There is a shortcut for this: issue the command dspmqver and it shows detailed information about WebSphere MQ. From there I can see InstPath showing where the tool is installed:
Name:        WebSphere MQ
Version:     8.0.0.2
Level:       p800-002-150303.DE
BuildType:   IKAP - (Production)
Platform:    WebSphere MQ for Linux (x86-64 platform)
Mode:        64-bit
O/S:         Linux 3.13.0-62-generic
InstName:    Installation1
InstDesc:    
Primary:     No
InstPath:    /opt/mqm
DataPath:    /var/mqm
MaxCmdLevel: 801
LicenseType: Developer
Or maybe I had overlooked it; the documentation did mention where MQ_INSTALLATION_PATH is. Below is the text extracted from the guide:
The location where WebSphere MQ is installed is known as the MQ_INSTALLATION_PATH. The default location for the WebSphere MQ product code is shown in the following table:

Platform                     Installation Location
Linux, HP-UX, and Solaris    /opt/mqm
AIX®                         /usr/mqm
Windows 32-bit               C:\Program Files\IBM\WebSphere MQ
Windows 64-bit               C:\Program Files (x86)\IBM\WebSphere MQ

On UNIX and Linux systems, working data is stored in /var/mqm, but you cannot change this location.

Tuesday, September 1, 2015

Tracing Apache HTTP server installation path with RPM

Things get complicated when everything is messed up on AIX. How could I know which Apache HTTP server version is installed on my UNIX box? Initially I thought it was as simple as typing httpd -v; unfortunately the system returned ksh: httpd: not found. Why can’t the system locate the httpd command? Then I tried commands such as locate httpd and whereis httpd to find out where the hell this package is installed, but both gave me a negative response.

I’m not going to be defeated so easily; I still have one last hope to trace back where httpd is installed. Since httpd was installed through RPM, rpm should be able to trace back the installation path. To verify my assumption is correct, I use rpm -qa to make sure httpd was installed through rpm. Next, rpm -ql httpd | grep httpd lists every installed path that contains httpd in it. Then I’ll know exactly where httpd was installed.

Have fun with AIX.

Monday, August 31, 2015

WebSphere MQ Explorer requires root access

WebSphere MQ Explorer is my favorite tool for its ease of use compared to the command line. This tool is a separate installation apart from the MQv8 installation; the component file name has the pattern MQSeriesExplorer_<suffix>-8.0.0-2.x86_64.rpm. Unfortunately, it requires a root user with the mqm group granted in order to create a queue. Otherwise a prompt shows the following error, stopping me from proceeding further.
****************************************
* Command: /opt/mqm/bin/crtmqm  Q1
****************************************
Access not permitted. You are not authorized to perform this operation. (AMQ4036)
This tool is installed on Ubuntu 14.04 and launched from the Unity Launcher. On the command line, sudo would be my best friend for handling root access, but since this is a graphical environment I need a graphical sudo to help me. Here is the clue: open the file /usr/share/applications/IBM_WebSphere_MQ_Explorer-Installation1.desktop (this file requires root access as well), and replace the line:
...
Exec=/opt/mqm/bin/MQExplorer
...
with this:
...
Exec=gksudo -k -u root /opt/mqm/bin/MQExplorer
...

Saturday, August 29, 2015

Permission denied on crtmqm

This is so not good. I'm not able to create a queue after the WebSphere MQ installation. The following error was seen when I issued the command:
$ crtmqm Q1
bash: /usr/bin/crtmqm: Permission denied
$ sudo crtmqm Q1
AMQ7077: You are not authorized to perform the requested operation.
It was quite a struggle at first, but later I conquered the fear. As mentioned in the reference guide, I got a strong sense that I had not assigned my user ID to the mqm group. Below is the text extracted from the reference guide:
In WebSphere MQ, user id "mqm" and any ID which is a part of "mqm" group are the WebSphere MQ administrative users. WebSphere MQ queue manager resources are protected by authenticating against this user. Since the queue manager processes use and modify these queue manager resources, the queue manager processes will require "mqm" authority to access the resources. Hence, WebSphere MQ queue manager support processes are designed to run with the effective user-id of "mqm".
Since crtmqm refers to /usr/bin/crtmqm and requires root access, it is wise to grant the mqm group to the root user account instead of my own user account.

Wednesday, August 26, 2015

MQv8 installation hit user limit error

Sigh~ There is another error while the MQv8 installation is under way. This time the error was logged to a file; the part of the content causing the installation to fail is shown here:
...

Current User Limits (root)
  nofile       (-Hn)  4096 files                         IBM>=10240        FAIL
  nofile       (-Sn)  1024 files                         IBM>=10240        FAIL
This reminded me that somewhere in the documentation the limits were mentioned. In my case, just append the following configuration to /etc/security/limits.conf. This file may require root access in order to edit it.
...

mqm             hard    nofile          10240
mqm             soft    nofile          10240
# End of file
This should be the last error I face during the installation.

Tuesday, August 25, 2015

Cannot open Packages database during MQ installation

Continuing the journey from the previous WebSphere MQ installation. This time IBM has offered WebSphere MQ free of charge; the link to the site is here. As of this writing, I'm getting MQv8 for Linux. No more waiting! This is damn great news for a poor guy like ME!

While I was about to install MQ after the step ./crtmqpkg suffix, something "abnormal" was blocking my way.
kokhoe@KOKHOE:/var/tmp/mq_rpms/LKH/x86_64$ sudo rpm -ivh --force-debian MQSeriesRuntime_LKH-8.0.0-2.x86_64.rpm 
error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch
error: cannot open Packages index using db5 -  (-30969)
error: cannot open Packages database in /home/kokhoe/.rpmdb
error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch
error: cannot open Packages database in /home/kokhoe/.rpmdb
I don't think this error is MQ-specific, because it happened with another package installer too. The clue I found was a hidden folder, .rpmdb, which has been in conflict since a previous installation. Simply removing that folder makes the rpm program smile again.

Friday, August 21, 2015

log4cpp::Category::xxx() do accept c_str()

Just got to know that logging the value of a boost::filesystem::path with log4cpp is as simple as this:
   #include <log4cpp/Category.hh>
   #include <log4cpp/PropertyConfigurator.hh>
   #include <boost/filesystem.hpp>
   using namespace log4cpp;
   using namespace boost::filesystem;

   Category *pRoot = NULL;
   PropertyConfigurator::configure("log4c.properties");
   pRoot = &(Category::getRoot());

   path targetPath("./the_path");
   pRoot->info("%s", targetPath.c_str());
Assuming I have the following content in log4c.properties:
   log4cpp.rootCategory=DEBUG, rootAppender

   log4cpp.appender.rootAppender=ConsoleAppender
   log4cpp.appender.rootAppender.layout=PatternLayout
   log4cpp.appender.rootAppender.layout.ConversionPattern=%d [%p] %m%n

   ...
But before I got to know this, I heard people mention that converting the path to a const char* would not be straightforward, and that it would require wcstombs() to do the conversion. Thus I came out with this:
   ...

   char pathName[50];
   memset(pathName, '\0', sizeof(pathName));
   // note: the third argument is sizeof a size_t here, not the actual string length
   wcstombs(pathName, targetPath.wstring().c_str(), sizeof(targetPath.native().length()));
   ...
Is this what they meant? Or did I misunderstand something? No worries, log4cpp::Category::xxx() does accept c_str().

Monday, August 17, 2015

Setting up log4cpp with CMake

I remember that back when I was writing software in C, this is what I did to diagnose defects in the application.
#ifdef __DEBUG__
    // logging code here
#endif
This piece is everywhere, trapping malicious bugs. The usage is like this: if I want the logs to show up, I turn on the __DEBUG__ switch with #define __DEBUG__; otherwise I remove the define to turn it off.

Looking back at my foolishness in the past is a shame. Inspired by Java's log4j, I wished I could have something similar in C++. Yup, there is one called log4cpp, and it has existed for a long time. It is very similar to log4j; there are just a few things to be careful about during the initial setup.

Since I'm using CMake in my project, and the installation guide on the site doesn't have much detail on that, it took some trial and error before I found I need the following in CMakeLists.txt:
...

include_directories(/usr/local/include/log4cpp)
link_directories(/usr/local/lib/)
find_package(Threads REQUIRED)

target_link_libraries(myprogram liblog4cpp.so ${Boost_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})

...
The first line tells CMake where log4cpp's header files are located, and the second line tells it where to load log4cpp's library from. I never thought log4cpp had a dependency on the threading library; it requires the -pthread option in the build, otherwise the following error is seen:
...

/usr/local/lib/liblog4cpp.so: undefined reference to `pthread_key_create'
/usr/local/lib/liblog4cpp.so: undefined reference to `pthread_getspecific'
/usr/local/lib/liblog4cpp.so: undefined reference to `pthread_key_delete'
/usr/local/lib/liblog4cpp.so: undefined reference to `pthread_setspecific'
Thus, ${CMAKE_THREAD_LIBS_INIT} will do the trick. The last line will wrap up the build and log4cpp is ready to serve.

Goodbye to the ugly code.

Friday, August 14, 2015

When I move to Mars...

This is so not good: the JBoss server stopped functioning after I upgraded my workspace to Eclipse Mars. I'm not even able to create a new JBoss server, as the server adapter has also gone missing.

There are 2 requirements to fulfill in order to get JBoss working under Eclipse Mars:

Requirement 1
Eclipse Mars must have JBoss Tools 4.3 installed. As of this writing, I'm using 4.3.0.Beta2 (the only version that works on Eclipse Mars). It can be downloaded from the Eclipse Marketplace or the JBoss Tools website.

Requirement 2
JBoss Tools 4.3 needs to be fed by JDK 8, as mentioned in this blog. I don't really want to disturb my global environment variable, JAVA_HOME, as it is serving other people too. Thus, what I'll do is configure it in eclipse.ini.
-vm
/home/user/tool/jdk1.8.0_51/bin/java
-vmargs
-Dosgi.requiredJavaVersion=1.7
-XX:MaxPermSize=256m
-Xms256m
-Xmx1024m
Do take note that -vm must come before -vmargs, otherwise -vm will become an argument of -vmargs.

Monday, August 10, 2015

JBoss Messaging doesn't have permission to create Queue?

I have a piece of JMS subscriber code that connects to JBoss Messaging. However, it throws an error when the subscriber tries to load up the topic:
Exception in thread "main" javax.jms.JMSSecurityException: HQ119032: User: huahsin68 doesnt have permission=CREATE_NON_DURABLE_QUEUE on address {2}
 at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:378)
 at org.hornetq.core.client.impl.ClientSessionImpl.internalCreateQueue(ClientSessionImpl.java:1987)
 at org.hornetq.core.client.impl.ClientSessionImpl.createTemporaryQueue(ClientSessionImpl.java:356)
 at org.hornetq.core.client.impl.DelegatingSession.createTemporaryQueue(DelegatingSession.java:304)
 at org.hornetq.jms.client.HornetQSession.createConsumer(HornetQSession.java:559)
 at org.hornetq.jms.client.HornetQSession.createConsumer(HornetQSession.java:378)
 at org.hornetq.jms.client.HornetQSession.createConsumer(HornetQSession.java:348)
 at org.hornetq.jms.client.HornetQSession.createSubscriber(HornetQSession.java:858)
 at org.huahsin.TopicSubscriber.subscribeTopic(TopicSubscriber.java:45)
 at org.huahsin.TopicSubscriber.main(TopicSubscriber.java:26)
Caused by: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119032: User: huahsin68 doesnt have permission=CREATE_NON_DURABLE_QUEUE on address {2}]
 ... 10 more
This is the piece that does the job:
  try {
   ctx = new InitialContext(prop);
   TopicConnectionFactory topicFac = (TopicConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
   topicConn = topicFac.createTopicConnection("huahsin68", "abcd");
   
   while(!receive) {
    TopicSession topicSess = topicConn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
    Topic topic = (Topic) ctx.lookup("jms/topic/test");
    topicConn.start();
    
    javax.jms.TopicSubscriber subscriber = topicSess.createSubscriber(topic);
    TestMessageListener listener = new TestMessageListener();
    subscriber.setMessageListener(listener);
    
    Thread.sleep(5000);
    subscriber.close();
    topicSess.close();
   }
  }
  finally {
   if( topicConn != null ) {
    topicConn.close();
   }
  }
This sounds like a security issue or a permission configuration error. I'm sure everything has been configured properly, but I wasn't sure whether there was anything wrong with the security settings:
                ...

                <security-settings>
                    <security-setting match="#">
                        <permission type="send" roles="guest"/>
                        <permission type="consume" roles="guest"/>
                        <permission type="createNonDurableQueue" roles="guest"/>
                        <permission type="deleteNonDurableQueue" roles="guest"/>
                    </security-setting>
                </security-settings>
Wait a minute! This reminds me that the role assignment was missed out during user creation. A quick check on application-roles.properties confirmed the role was not assigned to user huahsin68.

This is what application-roles.properties shows:
# The following illustrates how an admin user could be defined.
#
#admin=PowerUser,BillingAdmin,
#guest=guest
huahsin68=
Since creating the temporary queue for the topic requires the guest role, as configured in the security settings, assign the guest role to the user (huahsin68=guest), re-run the program, and everything will be fine.
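One piece not shown in this post is the TestMessageListener used by the subscriber code above. A minimal sketch of such a listener might look like this (the class body is my own assumption; only the name comes from the code above):
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical minimal listener: just prints whatever arrives on the topic.
public class TestMessageListener implements MessageListener {
 @Override
 public void onMessage(Message message) {
  try {
   if (message instanceof TextMessage) {
    System.out.println("Received: " + ((TextMessage) message).getText());
   } else {
    System.out.println("Received non-text message: " + message);
   }
  } catch (JMSException e) {
   e.printStackTrace();
  }
 }
}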

Thursday, August 6, 2015

Establishing remote connection to JBoss messaging service

Just did a test drive on establishing a remote connection to the JBoss messaging service. As of this writing, I'm working on JBoss EAP 6.3.0. Assume I have the connection factory configured in the following way:
   <jms-connection-factories>
      <connection-factory name="RemoteConnectionFactory">
         <connectors>
            <connector-ref connector-name="netty"/>
         </connectors>
         <entries>
            <entry name="RemoteConnectionFactory"/>
            <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
         </entries>
      </connection-factory>
   </jms-connection-factories>

   ...
   ...

   <jms-destinations>
      <jms-topic name="testTopic">
         <entry name="topic/test"/>
         <entry name="java:jboss/exported/jms/topic/test"/>
      </jms-topic>

      ...
   </jms-destinations>
Then the following piece is the code trying to achieve the mission:
   ...

   Properties prop = new Properties();
   prop.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
   prop.put(Context.PROVIDER_URL, "remote://localhost:4447");
   prop.put(Context.SECURITY_PRINCIPAL, "huahsin68");
   prop.put(Context.SECURITY_CREDENTIALS, "abcdef.1");

   ctx = new InitialContext(prop);
   TopicConnectionFactory topicFac = (TopicConnectionFactory) ctx.lookup("/RemoteConnectionFactory");
   TopicConnection topicConn = topicFac.createTopicConnection();
   TopicSession topicSess = topicConn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
   Topic topic = (Topic) ctx.lookup("/topic/test");
   topicConn.start();

   ...
There are a couple of mistakes here. First, the lookup(...) call fails on the connection factory initialization; RemoteConnectionFactory cannot be found. The error is seen as:

Exception in thread "main" javax.naming.NameNotFoundException: RemoteConnectionFactory -- service jboss.naming.context.java.jboss.exported.RemoteConnectionFactory

Second, authentication with SECURITY_PRINCIPAL and SECURITY_CREDENTIALS does not work during the context initialization. The authentication needs to be done when the connection is first created, which is the createTopicConnection() method call.

The third mistake is that /topic/test fails to initialize too, with the same error as the first mistake. The fix for both lookup(...) calls is to prefix the names with jms. The following piece is the complete set of fixes:
   ...

   Properties prop = new Properties();
   prop.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
   prop.put(Context.PROVIDER_URL, "remote://localhost:4447");

   ctx = new InitialContext(prop);
   TopicConnectionFactory topicFac = (TopicConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
   TopicConnection topicConn = topicFac.createTopicConnection("huahsin68", "abcdef.1");
   TopicSession topicSess = topicConn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
   Topic topic = (Topic) ctx.lookup("jms/topic/test");
   topicConn.start();

   ...
Wondering why jms should be put in the prefix? The JBoss documentation mentions this:
Keep in mind that any jms-queue or jms-topic which needs to be accessed by a remote client needs to have an entry in the "java:jboss/exported" namespace. As with connection factories, if a jms-queue or jms-topic has an entry bound in the "java:jboss/exported" namespace a remote client would look it up using the text after "java:jboss/exported".
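Pulling the fixes together, a complete standalone client might look like the sketch below. The class name, the getTopicName() printout, and the try/finally shape are my own additions; the JNDI names, provider URL, and credentials are the ones used above:
import java.util.Properties;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteTopicClient {
 public static void main(String[] args) throws Exception {
  Properties prop = new Properties();
  prop.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
  prop.put(Context.PROVIDER_URL, "remote://localhost:4447");

  InitialContext ctx = new InitialContext(prop);
  TopicConnection topicConn = null;
  try {
   TopicConnectionFactory topicFac = (TopicConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
   // Credentials go here, not into the InitialContext properties (the second mistake above).
   topicConn = topicFac.createTopicConnection("huahsin68", "abcdef.1");
   // The session would be used to create publishers/subscribers later on.
   TopicSession topicSess = topicConn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
   Topic topic = (Topic) ctx.lookup("jms/topic/test");
   topicConn.start();
   System.out.println("Connected to " + topic.getTopicName());
  } finally {
   if (topicConn != null) {
    topicConn.close();
   }
   ctx.close();
  }
 }
}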

Friday, July 31, 2015

SVN copy task causing duplicate path?

There is a problem with the <copy> task after it was deployed to the build server. The path becomes duplicated during execution, causing the path to be unretrievable:
   ...

   [svn] svn: File not found: revision 1768, path '/trunk/MessageFlow/trunk/MessageFlow'

   ...
Notice that the path has been duplicated. Here is the existing ANT source:
   ...
   <typedef resource="org/tigris/subversion/svnant/svnantlib.xml" classpathref="classpath"/>

   ...

   
      <copy srcUrl="${svnTrunkRoot}/MessageFlow" destUrl="${svnTagsRoot}/MessageFlow/${newtagname}" message="Tagged by Jenkins."/>
   
From my study, this is a known error for SVN version 1.7 and above. Luckily opticyclic has developed a new Ant task that gets rid of this problem. Grab the source and build it; the output will be svntask-1.1.1.jar. When deploying to the build server, sequence-library-1.0.2.jar, sqljet-1.1.10.jar, and svnkit-1.8.5.jar are also needed. They are a gang of four; no one gets left behind.

To avoid messing up the existing svn task, I create another task named svn2.
   <path id="svn2.classpath">
      <pathelement location="svntask-1.1.1.jar"/>
      <fileset dir="../lib">
         <include name="*.jar"/>
      </fileset>
   </path>

   <taskdef name="svn2" classname="com.googlecode.svntask.SvnTask" classpathref="svn2.classpath"/>
To use the new copy task command, do this:
   ...
   <svn2 username="admin" password="admin">
      <copy failOnDstExists="true" move="false"
            src="${svnTrunkRoot}/MessageFlow"
            dst="${svnTagsRoot}/MessageFlow/${newtagname}"
            commitMessage="Tagged by Jenkins."/>
   </svn2>
Do take note that the new svn2 doesn't support the refid attribute, thus username and password are required whenever the svn2 command is invoked.

Last note, a last-minute finding: antlr-runtime-3.4.jar may also be needed when deploying to the build server, otherwise a run-time error will be thrown.