
Friday, December 20, 2013

Binary Tree Traversals

A binary tree is a special case of a tree in which no node has more than two children. The children are called the left and right child based on whether they sit on the left or right side of the parent node. Binary trees are used extensively in software engineering because, in a balanced binary search tree, the common day-to-day operations – insert, update, delete and search – take only O(log n) time. Traversal algorithms are an important concept for binary trees: to perform any operation on a binary tree, the first step is to traverse the tree and find the nodes on which the desired operation needs to be performed. The three widely used binary tree traversal algorithms are

1) Preorder traversal
The processing order of the nodes is – parent, all its left descendants and then the right descendants. In other words, the parent is processed before the left sub-tree and the right sub-tree.

2) Inorder traversal
The processing order of the nodes is – all left descendants, the parent and then the right descendants. In other words, the left sub-tree is processed before the parent and the right sub-tree.

3) Postorder traversal
The processing order of the nodes is – all left descendants, all right descendants and then finally the parent. In other words, all the children are processed before the parent is processed.
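
For example, take a tree whose root is 1, with left child 2 and right child 3, and where node 2 in turn has children 4 and 5. Preorder processes the nodes as 1, 2, 4, 5, 3; inorder as 4, 2, 5, 1, 3; and postorder as 4, 5, 2, 3, 1.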

For the impatient code readers, the entire code base for this project can be accessed from my Github repository – Binary Tree Traversal algorithms.
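
All the code snippets below assume a simple Node class along these lines – a minimal sketch, so the actual class in the repository may carry more detail:

public class Node {
  private Integer data;
  private Node left;
  private Node right;

  public Node(Integer data) {
   this.data = data;
  }

  public Node getLeft() { return left; }
  public Node getRight() { return right; }
  public void setLeft(Node left) { this.left = left; }
  public void setRight(Node right) { this.right = right; }

  @Override
  public String toString() {
   return String.valueOf(data);
  }
 }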

Recursive Solution
A recursive solution best suits a situation where a bigger problem can be broken down into smaller, similar sub-problems, so that the same solution can be applied to each sub-problem. It is imperative to have a base case, after which the recursion starts winding up. Recursion is best understood by visualizing the whole process with a stack.

Iterative Solution
An iterative solution best suits a situation where a bigger problem can be broken down into smaller problems that are solved one after another. There is no connection between one iteration and the next. Unlike recursion, an iteration does not pause and wait until all its sub-iterations are complete, and that is why we need an additional tool (a data structure) to adapt an iterative solution to tree traversals. A stack is used to temporarily hold the nodes in the needed order; the node that comes out of the stack is then processed.

Preorder traversal
The recursive solution is simple and self-explanatory.

public void preOrderTraversalRecursive(Node node) {
  if (node == null) {
   return;
  }
  
  System.out.println(node);
  if (node.getLeft() != null) {
   preOrderTraversalRecursive(node.getLeft());
  }
  if (node.getRight() != null) {
   preOrderTraversalRecursive(node.getRight());
  }
 }

An iterative solution for any traversal needs at least a stack. Preorder traversal is the simplest of the iterative traversal algorithms. To start with, the root node is pushed onto the stack. From there on, we navigate deep into the left-most sub-tree. While doing so, we process each node we pass through and push its right child (if one exists) onto the stack. The left child is pushed too, but it is popped right back out in the next iteration – that is how we move down the tree. As we pop nodes off the stack, we go through the same process: process the node and push its children, right child first.

public void preOrderTraversalIterative(Node node) {
  if (node == null) {
   return;
  }

  Stack<Node> stack = new Stack<Node>();
  stack.push(node);
  
  while (!stack.isEmpty()) {
   Node poppedNode = stack.pop();
   System.out.println(poppedNode);
   // Push the right child first so that the left child is popped (and processed) first
   if (poppedNode.getRight() != null) {
    stack.push(poppedNode.getRight());
   }
   
   if (poppedNode.getLeft() != null) {
    stack.push(poppedNode.getLeft());
   }
  }
}


Inorder traversal
The recursive solution is pretty simple. Given a node, keep navigating to the left-most node until a leaf node is reached. Once reached, process the leaf node, then its parent, and then the right sub-tree (by going through the same process again).

public void inOrderTraversalRecursive(Node node) {
  if (node == null) {
   return;
  }
  
  if (node.getLeft() != null) {
   inOrderTraversalRecursive(node.getLeft());
  }
  System.out.println(node);
  if (node.getRight() != null) {
   inOrderTraversalRecursive(node.getRight());
  }
 }

As expected, the iterative solution uses a single stack. The inorder traversal processes the left sub-tree first, starting from the deepest node – which means navigating down to the left-most leaf. So, as a first step, we navigate deep into the left sub-tree, pushing every node we pass onto the stack; the first node popped off the stack is then the left-most leaf. For every popped node, we check whether it has a right child. If it does, we push that right child, followed by all of its left descendants, onto the stack. We then pop the next node and repeat the same process until the stack is empty.

public void inOrderTraversalIterative(Node node) {
  if (node == null) {
   return;
  }

  Stack<Node> stack = new Stack<Node>();
  // Navigate to the left-most node of the left sub-tree. During that process, push all the nodes passed along the way to the stack.
  while (node != null) {
   stack.push(node);
   node = node.getLeft();
  }
  
  while (!stack.isEmpty()) {
   Node currNode = stack.pop();
   System.out.println(currNode);
   
   /* If the current node has a right child, push it and then all of its left descendants to the stack.
   Remember we need to process the right sub-tree of any parent node in inorder as well, so keep
   navigating left from the right child till the leaf node */
   if (currNode.getRight() != null) {
    Node rightsLeftnode = currNode.getRight();
    while (rightsLeftnode != null) {
     stack.push(rightsLeftnode);
     rightsLeftnode = rightsLeftnode.getLeft();
    }
   }
  }
 }


Postorder traversal
As always, the recursive solution is simple. Given a node, the left descendants are processed first, followed by the right descendants, and then the node itself is processed – pretty simple to do with a recursive approach.

public void postOrderTraversalRecursive(Node node) {
  if (node == null) {
   return;
  }
  
  if (node.getLeft() != null) {
   postOrderTraversalRecursive(node.getLeft());
  }
  
  if (node.getRight() != null) {
   postOrderTraversalRecursive(node.getRight());
  }
  System.out.println(node);
 }

The iterative solution for postorder traversal is the most difficult of all. The solution below uses a single stack. This algorithm may not look the same as other solutions found on the internet, but it worked well for the use cases I tested, and hopefully it works for everyone. The tough part of this solution is determining whether we are currently navigating down the tree or up the tree; if we don't determine that, we might end up looping back and forth endlessly. How do we identify whether we are going up or down? If we remember the previous node that was processed or added to the stack, then we can tell the relationship of the current node to that previous node – it can be a child node or the parent node!! This determination helps us decide whether to move up or down.

On a broad view, there are 2 cases that need to be covered in this algorithm
1) If the current node is the parent of the previously processed node (the previously processed node could be either a left or a right child), then we are going up the tree. That in turn means we have already processed the child sub-trees of the current node, so it is time to process the parent node and move to the next element in the stack.
2) The other situations could be:
a) The current node is the left or right child of the previously processed node. In this case, we are going down the tree.
b) The current node is the root node.
c) The current node is the right sibling of the previously processed node.

For all the above cases (a), (b) and (c), we call the private helper method “navigateAndProcessNodesInPostOrder”. This method pushes the current node, its right child and then its left child (in that order) onto the stack, when they exist. After pushing the nodes, it navigates to the left child (if it exists), because precedence is given to the left node; otherwise it navigates to the right child. If there are no child nodes, the method just processes the node without pushing it to the stack, because it is a leaf node. A “NodeWrapper” object is used as a wrapper to carry the previously processed node, the current node and the stack.

/**
  * An iterative solution for postorder traversal of a binary tree.
  * 
  * On a broad view, there are 2 cases that need to be covered in this algorithm
  * 1) If the current node is the parent of the previously processed node (the previously processed node could be either a
  *  left or a right child), then we are going up the tree. That in turn means we have already processed the child
  *  sub-trees of the current node, so it is time to process the parent node and move to the next element in the stack.
  * 2) The other situations could be:
  *  a) The current node is the left or right child of the previously processed node. In this case, we are going down the tree.
  *  b) The current node is the root node.
  *  c) The current node is the right sibling of the previously processed node.
  *  For all the above cases, we call the private helper method “navigateAndProcessNodesInPostOrder”.
  *  This method pushes the current node, its right child and then its left child (in that order) onto the stack, when
  *  they exist. After pushing the nodes, it navigates to the left child (if it exists), because precedence is given to
  *  the left node; otherwise it navigates to the right child. If there are no child nodes, the method just processes
  *  the node without pushing it to the stack, because it is a leaf node. A “NodeWrapper” object is used as a wrapper
  *  to carry the previously processed node, the current node and the stack.
  * @param node Node Object
  * 
  */
 public void postOrderTraversalIterative(Node node) {
  if (node == null) {
   return;
  }
  
  Stack<Node> stack = new Stack<Node>();
  stack.push(node);
  Node prevNode = null;
  Node currNode = null;
  
  while (!stack.isEmpty()) {
   currNode = stack.pop();
   
   if (currNode.getLeft() == prevNode || currNode.getRight() == prevNode) {
    // The current node is the parent of the previously processed node - we are navigating up the tree, so it is enough to just process the node
    System.out.println(currNode);
    prevNode = currNode;
   } else {
     // The previously processed node was the parent of the current node, i.e. we are processing its left or right child (navigating down),
     // or the current node is the root node,
     // or the current node is a leaf node of a sub-tree,
     // or the current node is the right sibling of the previously processed node
    
    NodeWrapper nodeWrapper = new NodeWrapper(prevNode, currNode, stack);
    nodeWrapper = navigateAndProcessNodesInPostOrder(nodeWrapper);
    prevNode = nodeWrapper.getPrevNode();
    currNode = nodeWrapper.getCurrNode();
    stack = nodeWrapper.getStack();
   }
  }
 }
 
 /**
  * A helper method for post-order traversal - if given a node, it traverses and processes the sub-tree
  * NodeWrapper object is used to exchange prev, curr nodes and stack between the calling and called method
  * @param nodeWrapper
  * @return
  */
 private NodeWrapper navigateAndProcessNodesInPostOrder(NodeWrapper nodeWrapper) {
  Node currNode = nodeWrapper.getCurrNode();
  Node prevNode = nodeWrapper.getPrevNode();
  Stack<Node> stack = nodeWrapper.getStack();
  
  Node leftNode = currNode.getLeft();
  Node rightNode = currNode.getRight();
  if (leftNode != null  || rightNode != null) {
   // current node (if its a parent) is the last to be processed and so it has to be pushed to stack first
   stack.push(currNode);
   if (rightNode != null) {
    // If there is a right node, push it to stack
    stack.push(rightNode);
   }
   // If there is a left node, push it to stack
   if (leftNode != null) {
    stack.push(leftNode);
   }
   prevNode = currNode;
   
   // Left takes precedence. If there is a left node, go left else go right 
   if (leftNode != null) {
    currNode = currNode.getLeft();
   } else if (rightNode != null) {
    currNode = currNode.getRight();
   }
  } else { // its a leaf node, so process it
   System.out.println(currNode);
   prevNode = currNode;
  }
  nodeWrapper.setPrevNode(prevNode);
  nodeWrapper.setCurrNode(currNode);
  return nodeWrapper;
 }
 
  /**
  * An inner class NodeWrapper to encompass prevNode, currNode and stack for post-order traversal
  *
  */
 class NodeWrapper {
  private Node prevNode;
  private Node currNode;
  private Stack<Node> stack;
  
  NodeWrapper(Node prevNode, Node currNode, Stack<Node> stack) {
   this.prevNode = prevNode;
   this.currNode = currNode;
   this.stack = stack;
  }

  private Node getPrevNode() {
   return prevNode;
  }

  private Node getCurrNode() {
   return currNode;
  }

  private Stack<Node> getStack() {
   return stack;
  }

  private void setPrevNode(Node prevNode) {
   this.prevNode = prevNode;
  }

  private void setCurrNode(Node currNode) {
   this.currNode = currNode;
  }

  private void setStack(Stack<Node> stack) {
   this.stack = stack;
  }
  
 }


The entire code base for this project can be accessed from my Github repository – Binary Tree Traversal algorithms. I have added as many comments as possible in the code. Hope it will help everyone!!


Monday, December 16, 2013

Linked List

Linked lists are among the most fundamental data structures. A linked list contains a series of special elements – special because each one holds both the data and a pointer to the next element, usually called the next pointer. The head element is the starting point of the linked list and the tail element is the last one. There are no elements after the tail, so the tail element’s next pointer holds a null reference.

There are 3 types of linked lists
1) Single Linked List – there is one next reference per element
2) Double Linked List – there are two references per element, called prev and next. The prev reference points to the previous element and the next reference points to the next element as always
3) Circular Linked List – a special variant that can be built over a single or double linked list. The property that differentiates it is that there are no end elements – no head or tail. If you start from any element and traverse, you will keep going in circles and never reach an end.

This project illustrates the various operations that can be done on a Single Linked List. Additionally, it shows a stack implementation built on top of a Single Linked List.
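
The snippets below assume an Element class roughly like the following – a minimal sketch, so the repository version may differ. Note that delete-by-data relies on equals() comparing the data:

public class Element {
  private Integer data;
  private Element next;

  public Element(Integer data) {
   this.data = data;
  }

  public Element getNext() { return next; }
  public void setNext(Element next) { this.next = next; }

  @Override
  public boolean equals(Object other) {
   // Two elements are considered equal when they hold the same data
   if (this == other) return true;
   if (!(other instanceof Element)) return false;
   Element that = (Element) other;
   return data == null ? that.data == null : data.equals(that.data);
  }

  @Override
  public int hashCode() {
   return data == null ? 0 : data.hashCode();
  }

  @Override
  public String toString() {
   return String.valueOf(data);
  }
 }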

Traverse and Print All Elements
Traverses the entire linked list and prints all elements.
 /**
  * Prints all elements in the linked list to out stream
  * @param void
  * @return void
  */
 public void print() {
  Element currElement = head;
  while (currElement != null) {
   System.out.println(currElement);
   currElement = currElement.getNext();
  }
 }


Add an element to the end
This adds the given data as an element to the end of the linked list. Now the new element becomes the tail element.
 /**
  * Adds the provided Integer object data to the end of linked list
  * @param data Integer
  * @return boolean
  */
 public boolean add(Integer data) {
  Element newElement = new Element(data);
  if (head == null) {
   head = newElement;
  } else {
   // Walk to the tail element and link the new element after it
   Element currElement = head;
   while (currElement.getNext() != null) {
    currElement = currElement.getNext();
   }
   currElement.setNext(newElement);
  }
  return true;
 }


Add an element at the specified position
Given an integer data, this method creates an element out of it and adds at the specified position of the list.
 /**
  * Inserts the provided Integer object data at the requested position of linked list.
  * If insert was successful it returns true else returns false
  * 
  * 
  * Implementation Details:
  * The iteration variable tracks the position of the current element pointer. When the position at which
  * the new element needs to be inserted is found, the new element goes in between prevElement and currElement.
  * The idea is to visualize the requested position as sitting between prevElement and currElement.
  * 
  * Say the position is right after the last element: the insert then happens after prevElement (pointing at the
  * last element) but before currElement (pointing at a virtual, non-existent element beyond the last one). So it is
  * imperative that the while loop runs until prevElement points at the last element.
  * 
  * @param data Integer
  * @param position Long
  * @return boolean
  * 
  */
 public boolean add(Integer data, Long position) {
  
  if (position == null || position.intValue() < 1) {
   return false;
  }
  
  if (head != null) {
   Element newElement = new Element(data);
   if (position.intValue() == 1) {
    newElement.setNext(head);
    head = newElement;
   } else {
    Element prevElement = head;
    Element currElement = head.getNext();
    int iteration = 2;
    while (prevElement != null) {
     if (iteration == position.intValue()) {
      newElement.setNext(currElement);
      prevElement.setNext(newElement);
      return true;
     } else {
      prevElement = currElement;
      currElement = currElement.getNext();
      iteration++;
     }
    }
   }
  }
  
  return false;
 }



Delete all elements with a given data
Given an Integer data, this deletes every element in the linked list that holds that data. If no element holds the data, nothing is deleted.
 /**
  * This method deletes all elements in the linked list that have the provided Integer data. 
  * If one or many elements are deleted, this method returns true else it returns false
  * @param data Integer
  * @return boolean
  */
 public boolean delete(Integer data) {
  Element givenElement = new Element(data);
  boolean deleted = false;
  
  // Special case: keep deleting from the head while the head itself matches
  while (head != null && head.equals(givenElement)) {
   Element temp = head;
   head = head.getNext();
   temp.setNext(null);
   deleted = true; 
  }
  
  if (head != null) {
   Element prevElement = head;
   Element currElement = head.getNext();
   while (currElement != null) {
    if (currElement.equals(givenElement)) {
     // Unlink the matching element; prevElement stays put so that consecutive matches are also caught
     prevElement.setNext(currElement.getNext());
     currElement = currElement.getNext();
     deleted = true;
    } else {
     prevElement = currElement;
     currElement = currElement.getNext();
    }
   }
  }
  return deleted;
 }


Delete an element at a given position
Given a position in the linked list, this deletes the element at that position. If there is no such position in the list, nothing is deleted. Takes care of re-chaining the neighbouring elements after the deletion is complete.
 /**
  * This method deletes the element at requested position of the linked list. 
  * If deletion happens, this method returns true else it returns false
  * @param position Long
  * @return boolean
  */
 public boolean delete(Long position) {
  if (position == null || position.intValue() < 1) {
   return false;
  }

  if (head != null) {
   if (position.intValue() == 1) {
    Element temp = head;
    head = head.getNext();
    temp.setNext(null);
    return true;
   } else {
    int iteration = 2;
    Element prevElement = head;
    Element currElement = head.getNext();
    while (currElement != null) {
     if (iteration == position.intValue()) {
      prevElement.setNext(currElement.getNext());
      currElement.setNext(null);
      return true;
     }
     prevElement = currElement;
     currElement = currElement.getNext();
     iteration++;
    }
   }
  }

  return false;
 }


Clear all elements (delete all)
Deletes every element in the linked list, unlinking each element as it goes.
 /**
  * Deletes all elements in the linked list. 
  * If all elements are deleted, this method returns true else it returns false
  * @param void
  * @return boolean
  */
 public boolean clear() {
  if (head == null) {
   return false;
  }
  
  while (head != null) {
   Element element = head;
   head = head.getNext();
   element.setNext(null);
  }
  return true;
 }


Get Element At
Gets the data from the element at the specified position. If there are no elements at the given position, then this method returns null.
 /**
  * Returns the element at requested position of the linked list. 
  * The position is counted from the head element.
  * @param position Long
  * @return Element
  */
 public Element getElementAt(Long position) {
  
  if (position == null || head == null || position.intValue() < 1) {
   return null;
  }
  
  int iteration = 1;
  Element currElement = head;
  while (currElement != null) {
   if (position.intValue() == iteration) {
    return currElement;
   }
   currElement = currElement.getNext();
   iteration++;
  }

  return null;
 }


Get Nth Element From Last
Get the Nth element counting from the tail element of the linked list. The first thought that strikes everyone is – navigate to the tail element of the linked list and return the Nth previous element from there. But the point we all tend to forget is that a single linked list can only be traversed forward; we cannot traverse backwards, and that makes the solution pretty tricky. The idea is to keep two references into the linked list, say A and B. As a first step, separate them by moving B forward so that it is N-1 elements ahead of A. Thereafter, keep advancing both references together until B reaches the tail element. When B refers to the tail element, A is the Nth element counting back from the tail.
 /**
  * Returns the element at requested position of the linked list. 
  * The position is counted from the tail or last element of the linked list.
  * @param position Long
  * @return Element
  */
 public Element getNthFromLastElement(Long position) {
  
  if (position == null || head == null || position.intValue() < 1) {
   return null;
  }
  
  Element currElement = head;
  Element nthBehindElement = head;
  
  // Move the leading reference (position - 1) elements ahead of the trailing one
  for (int iteration = 1; iteration < position.intValue() && currElement != null; iteration++) {
   currElement = currElement.getNext();
  }
  
  if (currElement == null) {
   System.out.println("** Error: Current Element is past tail element");
   return null;
  }
  
  // Advance both references until the leading one reaches the tail;
  // the trailing one is then the Nth element from the last
  while (currElement.getNext() != null) {
   currElement = currElement.getNext();
   nthBehindElement = nthBehindElement.getNext();
  }
  
  return nthBehindElement;
 }


Stack Implementation
This implementation of a stack is done using a linked list. A stack is a simple LIFO (last in, first out) data structure. There are two major actions you can perform on a stack: push and pop. Additionally, we have a clear operation that deletes all elements in the stack, and lastly a print operation that prints all elements in the stack.
 /**
  * Returns true if stack is empty else returns false
  * @return boolean
  */
 public boolean isEmpty() {
  return this.head == null;
 }
 
 /**
  * Returns true if the method was able to push data to the stack. Else returns false.
  * @param data Integer
  * @return boolean
  */
 public boolean push(Integer data) {
  if (data == null) {
   return false;
  }
  
  // The new element becomes the new head and points to the old head
  Element newElement = new Element(data);
  newElement.setNext(head);
  head = newElement;
  return true;
 }
 
 
 /**
  * Pops and returns the last element that was pushed to the stack
  * @return stackElement
  */
 public Element pop() {
  Element element = null;
  if (head == null) {
   System.out.println("Stack is empty!! Cannot pop anything out of it!!");
   return element;
  }
  
  Element firstElement = head;
  Element secondElement = head.getNext();
  head = secondElement;
  firstElement.setNext(null);
  return firstElement;
 }
 
 
 /**
  * Returns true if stack was cleared or if it was empty already
  * @return boolean
  */
 public boolean clear() {
  while (head != null) {
   Element element = head;
   head = head.getNext();
   element.setNext(null);
  }
  return true;
 }
 
 
 /**
  * Iterates over the stack and prints all its elements
  */
 public void printAll() {
  if (head == null) {
   System.out.println("Stack is empty currently!!");
   return;
  }
  
  Element currElement = head;
  while (currElement != null) {
   System.out.println(currElement);
   currElement = currElement.getNext(); 
  }
 }


The entire code base for this project can be accessed from my Github repository – Linked List algorithms. I have added as many comments as possible in the code. Hope it helps!!


Friday, November 08, 2013

CRUD Services (REST/SOAP) using Java, Apache CXF, Spring, Hibernate, Maven and Log4J

This sample project is an extension of the previous project, CRUD using Java, Spring and Hibernate. It focuses on building a web application that exposes the already developed features from the previous project as REST and SOAP web services. To accomplish this, Apache CXF is used. CXF makes the development of RESTful and SOAP services simple through Java annotations. Internally it uses HTTP binding to map a Java method (service or operation) to a given URI and HTTP verb. CXF uses the Spring framework internally to create Spring beans and inject them into the web application to service the incoming requests.

What’s new in this project?
1) Creation of a web application
2) Integration and configuration of Apache CXF to expose web services
3) Annotations to configure REST services @Path, @Produces, @GET, @POST, @PUT and @DELETE
4) Annotations to configure SOAP services @WebService, @WebMethod and @WebParam
5) REST and SOAP endpoint declarations and web.xml configurations

pom.xml
As a first step, the pom.xml of previous project was taken and added with the following dependencies
 
<dependency>
 <groupId>org.apache.cxf</groupId>
 <artifactId>cxf-rt-frontend-jaxrs</artifactId>
 <version>${cxf.version}</version>
</dependency>
<dependency>
 <groupId>org.apache.cxf</groupId>
 <artifactId>cxf-rt-frontend-jaxws</artifactId>
 <version>${cxf.version}</version>
</dependency>
<dependency>
 <groupId>org.apache.cxf</groupId>
 <artifactId>cxf-rt-transports-http</artifactId>
 <version>${cxf.version}</version>
</dependency>
<dependency>
 <groupId>org.apache.cxf</groupId>
 <artifactId>cxf-rt-rs-extension-providers</artifactId>
 <version>${cxf.version}</version>
</dependency>
<dependency>
 <groupId>org.codehaus.jackson</groupId>
 <artifactId>jackson-jaxrs</artifactId>
 <version>${jackson.version}</version>
</dependency>
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-web</artifactId>
 <version>${spring.version}</version>
</dependency>

cxf-rt-transports-http is the transport layer of CXF, which abstracts the binding and transport details from the rest of the layers. cxf-rt-frontend-jaxws and cxf-rt-frontend-jaxrs are the CXF frontends used to expose SOAP and REST services respectively. cxf-rt-rs-extension-providers contains optional extensions that may or may not be needed. jackson-jaxrs is needed to add JSON functionality to the web services; these two artifacts are configured as extensions in CXF, which is covered later. spring-web is used internally by CXF, so it is needed as well. The complete version of pom.xml can be found in the project, which can be downloaded from the link provided at the bottom of this post.

Apart from the above dependencies, there is one more important addition to pom.xml: plugins. Two plugins are added. The first is the maven-war-plugin, used to build a war artifact out of the project; the relative path of web.xml needs to be mentioned in this plugin so that it is picked up properly. The second is the Tomcat plugin, which deploys the built webapp artifact (war file) into a lightweight Tomcat by executing a simple Maven goal. Without this option, it would be time consuming to install Tomcat separately and redeploy the built war file every time the code changes. Maven goals are essentially tasks (Mojos) that can be executed directly; they are configured as plugins for Maven.
<build>
 <pluginManagement>
  <plugins>
   <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.3</version>
    <configuration>
     <webXml>src/main/webapp/WEB-INF/web.xml</webXml>
    </configuration>
   </plugin>
   <plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.1</version>
    <configuration>
     <path>/</path>
     <port>8090</port>
    </configuration>
   </plugin>
  </plugins>
 </pluginManagement>
</build>

Once this configuration is done, running
1) mvn clean install will clean and then build the project to produce a war file.
2) mvn tomcat7:run will deploy the built war file and start the light-weight tomcat server.

web.xml
The next important configuration is the web application deployment descriptor – web.xml. A WEB-INF folder is created under src/main/webapp, where this file lives. This is the configuration that any application server parses as a first step to know what to do.

As expected, “org.springframework.web.context.ContextLoaderListener” is provided as the listener class. This creates the Spring context of our web application. When the application loads into memory, Spring builds the context by creating the beans mentioned in the configuration files. Those file names are given as the value of the “contextConfigLocation” parameter (here, applicationContext.xml and cxf-bean.xml).

The most important configuration is for CXF. CXF is a servlet – a web servlet waits and listens for requests at the configured URL, serves them as they arrive, and continues listening until it is brought down. That is exactly what CXFServlet does: it listens at the root URL (/) wherever this application is deployed, and once it gets a request, it invokes the corresponding Spring beans to serve it. Here is the complete web.xml file

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
 version="2.5">
 <display-name>Employee Service</display-name>
 <context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>
  classpath*:cxf-bean.xml,
  classpath*:applicationContext.xml
  </param-value>
 </context-param>

 <context-param>
  <param-name>initializeContextOnStartup</param-name>
  <param-value>true</param-value>
 </context-param>

 <listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
 </listener>

 <servlet>
  <servlet-name>CXFServlet</servlet-name>
  <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
 </servlet>

 <servlet-mapping>
  <servlet-name>CXFServlet</servlet-name>
  <url-pattern>/*</url-pattern>
 </servlet-mapping>
</web-app>

applicationContext.xml
This contains the configuration of all the Spring beans needed by our application. It is exactly the same configuration that was used in the previous project; if you want to, please refer to the explanation there.

cxf-bean.xml
This is the master configuration file for Apache CXF, and it is referenced from web.xml. That means web.xml is parsed by the application server before this file is, which in turn means all the Spring beans are instantiated and readily available in the Spring context by this step. This file configures the jaxrs (REST) and jaxws (SOAP) service endpoints and maps them to the corresponding Spring beans. Any providers or extensions that are needed are also configured here. If you have a look at the file, it is pretty much self-explanatory. We ignore the built-in CXF JSON feature and use Jackson JSON instead, so it is configured as a provider.

Employee Services (REST)
Apart from the above configurations, the rest is self-explanatory. I have created the EmployeeService interface (com.samples.service package) and exposed some HTTP operations; the corresponding EmployeeServiceImpl (com.samples.service.impl package) has the implementation details. The things to note are the annotations that map the HTTP methods of a specific URL to a specific method of a Spring service bean. These annotations (@Path, @PathParam, @GET, @POST etc.) live in the interface (EmployeeService.java). The service beans internally call the BO beans, which in turn call the DAO beans. The BO beans also depend on adapter beans to convert the Hibernate entity objects into the web response objects needed by the service beans.
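
As an illustration of how those annotations hang together, a resource interface might look like the following sketch – the paths, parameter names and return types here are illustrative, not copied from the project:

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public interface EmployeeService {

 // Maps GET /employees/{empId} to this method; the path segment binds to the parameter
 @GET
 @Path("/{empId}")
 Employee getEmployee(@PathParam("empId") Long empId);

 // Maps POST /employees with a JSON body to this method
 @POST
 @Consumes(MediaType.APPLICATION_JSON)
 Employee createEmployee(Employee employee);
}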

The REST services exposed (wadl) from this webapp can be seen by hitting the endpoint http://localhost:8090/?_wadl
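For example, assuming the impl maps a GET operation for a single employee to a path like /employees/{empId} (the exact paths are listed in the wadl above), hitting http://localhost:8090/employees/7369 in a browser or with curl would return that employee's JSON representation.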

Employee Services (SOAP)
A sample SOAP service is exposed in the EmployeeServiceWS interface. The corresponding implementation can be found in EmployeeServiceWSImpl. This differs from the REST services only in the annotations used (@WebService, @WebMethod, @WebParam). An additional configuration is provided in the cxf-bean.xml file to refer to the bean name (employeeServiceWS) of the SOAP service and the endpoint where it is served.
<jaxws:endpoint id="employeeServiceSOAP" implementor="#employeeServiceWS" address="/soapservices" />
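
For reference, the SOAP side of the contract might look like this sketch – the method and parameter names are illustrative, not copied from the project:

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService
public interface EmployeeServiceWS {

 // Exposed as a SOAP operation in the generated WSDL
 @WebMethod
 Employee getEmployee(@WebParam(name = "empId") Long empId);
}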

The SOAP services exposed (wsdl) from this webapp can be known by hitting the endpoint http://localhost:8090/soapservices?wsdl

The above two endpoints can be imported in SOAP UI and the services can be validated easily. Below is a screenshot of the imported wadl and wsdl in SOAP UI



This project can be downloaded from my Git repository EmployeeWebApp Project @ Github and feel free to play around with it.


Wednesday, April 03, 2013

CRUD using Java, Spring, Hibernate, Maven and Log4J

This post illustrates how to integrate Spring into a simple Java project and how that simplifies and paves the way for an effective coding approach. This project is similar to the previous one, CRUD with Java, Hibernate and Maven, for the most part. The one big difference is that this project adds Spring on top of the first one, to demonstrate how the application code can be further simplified, modularized and better organized and structured. This is possible because the Spring framework takes responsibility for instantiating the required beans, autowiring and injecting the needed dependencies, and managing the lifecycle of the instantiated beans – and it takes care of database sessions and transactions too. By using Spring, much of the redundant boilerplate code is removed. I tried to follow the same design patterns that were used in the previous project.
Software Used
1) Java 1.6
2) Spring Tool suite (STS) 3.1.0
3) Spring framework 3.2.2
4) Oracle 11g Express Edition
5) Apache maven 3.0.4
6) Hibernate 4.1.8 Final
7) Log4j 1.2.16 and Slf4j 1.6.1

What's new in this project from the previous one?
a) Configuration and integration of Spring
b) Spring annotations - @Repository, @Component, @Service, @Transactional, @Autowired and component scanning
c) Hibernate annotations - One to many and Many to one, Fetch Types, Fetch modes
d) New tables EMPLOYEE_EXPENSE (which has the list of expenses for every employee) and DEPARTMENT (list of departments) are introduced

Pre-installation and validation steps
Installation and setup are exactly the same as in the previous project. One additional thing to do is to create the new tables EMPLOYEE_EXPENSE and DEPARTMENT and insert data. The script below was used for it

CREATE TABLE EMPLOYEE_EXPENSE
  (
    "EMP_EXP_ID"    NUMBER,
    "EMP_ID"        NUMBER(5,0),
    "YEAR"          NUMBER(4,0),
    "MONTH"         NUMBER(2,0),
    "EXPENSE_CLAIM" NUMBER(7,2),
    "APPROVED_AMT"  NUMBER(7,2),
    "PAID_DATE" DATE,
    CONSTRAINT "EMP_EXP_PK" PRIMARY KEY ("EMP_EXP_ID"), 
    CONSTRAINT "FK_EMPLOYEE" FOREIGN KEY ("EMP_ID") REFERENCES "EMPLOYEE" ("EMP_ID") ENABLE
  );

REM INSERTING INTO EMPLOYEE_EXPENSE
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (1,7369,2002,2,3072.43,3072.43,to_timestamp('03-MAR-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (2,7369,2002,4,30,30,to_timestamp('01-JUN-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (3,7369,2002,5,235.03,35.03,to_timestamp('01-JUN-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (4,7369,2002,9,5095.98,5095.08,to_timestamp('31-OCT-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (5,7369,2002,12,1001.01,1001.01,to_timestamp('01-FEB-03','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (6,7782,2002,1,111.09,111.09,to_timestamp('01-FEB-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (7,7782,2002,3,9.85,9.85,to_timestamp('01-APR-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (8,7782,2002,7,3987.32,3987.32,to_timestamp('01-AUG-02','DD-MON-RR HH.MI.SSXFF AM'));
INSERT INTO EMPLOYEE_EXPENSE (EMP_EXP_ID,EMP_ID,YEAR,MONTH,EXPENSE_CLAIM,APPROVED_AMT,PAID_DATE) VALUES (9,7782,2002,9,1200,1200,to_timestamp('01-OCT-02','DD-MON-RR HH.MI.SSXFF AM'));


CREATE TABLE DEPARTMENT
  (
    "DEPT_ID"     NUMBER(5,0) NOT NULL ENABLE,
    "NAME"        VARCHAR2(20 BYTE),
    "LOCATION_ID" NUMBER(3,0),
    CONSTRAINT "DEPARTMENT_PK" PRIMARY KEY ("DEPT_ID")
  );

REM INSERTING into DEPARTMENT
INSERT INTO DEPARTMENT (DEPT_ID,NAME,LOCATION_ID) values (10,'ACCOUNTING',122);
INSERT INTO DEPARTMENT (DEPT_ID,NAME,LOCATION_ID) values (20,'RESEARCH',124);
INSERT INTO DEPARTMENT (DEPT_ID,NAME,LOCATION_ID) values (30,'SALES',null);
INSERT INTO DEPARTMENT (DEPT_ID,NAME,LOCATION_ID) values (40,'OPERATIONS',167);


Create maven project(pom.xml)
The one additional step that was done here is to add spring related configuration in pom.xml. This involves adding spring artifacts spring-orm and spring-context. Here is the complete pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.samples</groupId>
  <artifactId>associationMapper</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>associationMapper</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <spring.version>3.2.2.RELEASE</spring.version>
    <hibernate.version>4.1.9.Final</hibernate.version>
    <oracle.version>11.2.0</oracle.version>
    <junit.version>4.11</junit.version>
    <commons-logging.version>1.1.1</commons-logging.version>
    <log4j.version>1.2.16</log4j.version>
    <slf4j.version>1.6.1</slf4j.version>
  </properties>

  <dependencies>
  <dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-orm</artifactId>
  <version>${spring.version}</version>
 </dependency>
   <dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
 </dependency>
 <dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>${hibernate.version}</version>
 </dependency>
 <dependency>
  <groupId>com.oracle</groupId>
  <artifactId>ojdbc6</artifactId>
  <version>${oracle.version}</version>
 </dependency>
 <dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>${slf4j.version}</version>
 </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>${junit.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

applicationContext.xml (src/main/resources)
This is the master configuration file that Spring reads to instantiate all the needed beans. The file name can be anything.xml – it doesn't matter. I have provided the entire contents of this Spring bean config below. It contains the configuration for the beans that are instantiated by the Spring framework, along with the properties each bean needs to be instantiated successfully. For example, to create a datasource bean, we need all the database-related properties, so they are provided. But instead of providing the database connection properties inline, we read them through Spring's PropertyPlaceholderConfigurer class. That way, the property values can be used elsewhere in the config file with the ${variable} notation, and the environment-specific database properties can easily be swapped based on which environment (dev/qa/prod) the code is deployed to. In our case, the property placeholder reads its configuration from the file database.properties, which is present on the classpath (src/main/resources)

jdbc.driverClassName=oracle.jdbc.driver.OracleDriver
jdbc.url=jdbc:oracle:thin:@127.0.0.1:1521:XE
jdbc.username=metallicatony
jdbc.password=xxx

Following the datasource configuration comes the SessionFactory bean. To instantiate the SessionFactory, we use the new LocalSessionFactoryBean class instead of the old AnnotationSessionFactoryBean class. The SessionFactory bean needs the datasource, the Hibernate properties and the list of annotated classes as properties. With all of these, Spring can instantiate the Hibernate SessionFactory bean, which in turn is used by the application code. In addition to that, we have the configuration for the Hibernate transaction manager and the declaration that enables annotation-based transactions.
 <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
     <property name="sessionFactory" ref="sessionFactory"/>
 </bean>
 <tx:annotation-driven transaction-manager="transactionManager" />
And the last and most important configuration – the one that saves all of us programmers – is “component scan”!

Just imagine if we had to define the hundreds or thousands of application beans used in an enterprise application for Spring!! Argh!! So, instead of defining every single bean in this config file, we ask Spring to scan for implementation classes, autodetect them through their annotations, and instantiate beans from them. Classes declared with @Service (service-layer beans), @Component (generic beans) or @Repository (DAO-layer beans) are created as beans.
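
A class picked up by the scan might look like this sketch – the class and collaborator names are illustrative, not the exact ones in the project:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EmployeeServiceImpl implements EmployeeService {

 // Injected automatically because a matching bean was registered during the scan
 @Autowired
 private EmployeeBO employeeBO;

 public Employee getEmployee(Long empId) {
  return employeeBO.getEmployee(empId);
 }
}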

ApplicationMain.java 

In the ApplicationMain class, the previously discussed applicationContext.xml is used by Spring's ClassPathXmlApplicationContext class to create the application context. This context holds all the instantiated beans – both the beans from applicationContext.xml and the beans instantiated through annotations.
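
A minimal sketch of that bootstrap, assuming an EmployeeService bean as above:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ApplicationMain {
 public static void main(String[] args) {
  // Builds the context from the XML config on the classpath; component scanning
  // kicks in at this point and registers the annotated beans as well
  ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
  // Fetch a scanned bean by type and use it
  EmployeeService employeeService = context.getBean(EmployeeService.class);
 }
}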

Service Layer

From an application coding perspective, I have added one more layer – yes, the service layer. The ApplicationMain class calls the service layer, from which control passes down to the BO layer, the DAO layer and then the domain layer.

Here is the complete applicationContext.xml file

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
              http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
              http://www.springframework.org/schema/context 
              http://www.springframework.org/schema/context/spring-context-3.0.xsd
              http://www.springframework.org/schema/tx 
              http://www.springframework.org/schema/tx/spring-tx-3.1.xsd">
              
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
 <property name="location">
  <value>database.properties</value>
 </property>
</bean>
 
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
 <property name="driverClassName" value="${jdbc.driverClassName}" />
 <property name="url" value="${jdbc.url}" />
 <property name="username" value="${jdbc.username}" />
 <property name="password" value="${jdbc.password}" />
</bean>
 
 <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource">
      <ref bean="dataSource"/>
    </property>
 
    <property name="hibernateProperties">
       <props>
         <prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
         <prop key="default_schema">METALLICATONY</prop>
         <prop key="hibernate.show_sql">true</prop>
       </props>
    </property>
 
    <property name="annotatedClasses">
 <list>
  <value>com.samples.domain.Employee</value>
  <value>com.samples.domain.Department</value>
  <value>com.samples.domain.EmployeeExpense</value>
 </list>
    </property>
 </bean>
 
 <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
     <property name="sessionFactory" ref="sessionFactory"/>
 </bean>
 <tx:annotation-driven transaction-manager="transactionManager" />

 <context:component-scan base-package="com.samples"/>
</beans>

SessionFactory and Transactions
Please remember that as of Hibernate 3.0.1, there is no need to use Spring's HibernateTemplate. In the past, it was used to obtain a session from the session factory before every database transaction. That route is not taken these days; the SessionFactory can be used directly to get a session object (sessionFactory.getCurrentSession())

Every DAO class will now have SessionFactory bean auto-injected like this

@Autowired 
SessionFactory sessionFactory;
using which we get the current session to perform every database operation. Added to that, we now have the @Transactional declaration
@Transactional(propagation=Propagation.REQUIRED)
on every method in the service layer (com.samples.service.impl) that performs at least one database operation. By doing that, we declare that all database operations happening inside that method run within a single transaction (a basic unit of work on a database). The annotation takes care of creating a new transaction and ending it after the unit of work is complete. What really happens underneath is that the annotated service method is intercepted and proxied inside a transaction before the actual code that we have written runs!! Wheeee!! This kind of programming technique is called Aspect Oriented Programming (AOP). The transaction propagation (a parameter of the @Transactional annotation) describes the behaviour of the transaction: Propagation.REQUIRED says to continue an existing transaction if there is one, and to create a new transaction if there isn't.
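
Putting this together, a DAO method running under such a transactional service method might look like the following sketch – the class, method and entity identifiers are illustrative:

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class EmployeeDao {

 @Autowired
 private SessionFactory sessionFactory;

 public Employee findById(Long empId) {
  // getCurrentSession() returns the session bound to the transaction that the
  // @Transactional service method opened further up the call stack
  return (Employee) sessionFactory.getCurrentSession().get(Employee.class, empId);
 }
}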

One last thing that is new in this project is the Hibernate annotations for One-to-Many and Many-to-One. There is a lot of confusion on the internet about this, so let me clear it up. Say there is an EMPLOYEE table, and for every employee there are many expenses in the EMPLOYEE_EXPENSE table; then EMPLOYEE has a ONE TO MANY relation to EMPLOYEE_EXPENSE. If the EMPLOYEE table has a dept_id column and every employee belongs to one of the available departments in the DEPARTMENT table, then it is a MANY (employees) TO ONE (department) relation through the foreign key column dept_id in the EMPLOYEE table.


It’s important to have the @JoinColumn annotation on the property of the entity that holds the foreign key column. The entity on the other side of the relationship (the parent table) holds a Set property that carries the records from the foreign key (child) table. And that’s the reason we have the below in EmployeeExpense.java
@ManyToOne
@JoinColumn(name="EMP_ID")
private Employee employee;
and the below in Employee.java
@OneToMany(mappedBy="employee", fetch=FetchType.EAGER)
@Fetch(FetchMode.JOIN)
private Set<EmployeeExpense> employeeExpenses;
In the same way, the Many to one relation between Employee and department is configured as follows. In Employee.java
@ManyToOne(fetch=FetchType.EAGER)
@Fetch(FetchMode.JOIN)
@JoinColumn(name="DEPT_ID")
private Department department;
and in Department.java
@OneToMany(mappedBy="department")
private Set<Employee> employees;

Huh, did you notice the fetching strategy mentioned above? In Hibernate, there are 2 FetchTypes – LAZY and EAGER – and 3 FetchModes – SELECT, JOIN and SUBSELECT. LAZY and SELECT go together: when we get an Employee record, its related EmployeeExpense records are not fetched until they are accessed. EAGER and JOIN (or SUBSELECT) go together: when we get an Employee record, the related records are fetched at the same time using a JOIN or SUBSELECT SQL query. Here the associated data is fetched eagerly.
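
To make the effect concrete, here is a hedged snippet of what the mappings above imply (the session handling is simplified, and the getter name is assumed):

// With FetchType.EAGER and FetchMode.JOIN on employeeExpenses, loading the
// employee issues one SQL query with an outer join, so the expenses are
// already populated when the entity is returned.
Employee employee = (Employee) session.get(Employee.class, 7369L);
// No additional SELECT is fired here; the set was filled by the join above.
for (EmployeeExpense expense : employee.getEmployeeExpenses()) {
 System.out.println(expense);
}
// Had the mapping been FetchType.LAZY, the loop above would have triggered the
// fetch on first access - and failed with a LazyInitializationException if the
// session had already been closed.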


I hope this project gives anyone an overall idea about Spring, Hibernate, their integration, and the various coding and configuration details that need to be adopted to create a better coding world!! Feel free to browse through or download the source from my Git repository CRUD using Spring and Hibernate @ Github


Sunday, February 10, 2013

CRUD using Java, Hibernate, maven and Log4j

This post covers the very basics of Hibernate and how I integrated it with a simple Java project to perform CRUD operations. CRUD stands for Create, Read, Update and Delete – operations we end up doing in most of our day-to-day programming. I forced myself to follow the Business Object (BO) and Data Access Object (DAO) patterns in this project so that I become habituated to both of them. I agree that the business object pattern does not make much sense for this sample project, but the DAO pattern does. As a matter of fact, DAO makes a lot of sense in any project that deals with data.

Software Used
1)      Java 1.6
2)      Spring Tool suite (STS) 3.1.0
3)      Oracle 11g Express Edition
4)      Apache maven 3.0.4
5)      Hibernate 4.1.8 Final
6)      Log4j 1.2.16 and Slf4j 1.6.1

Pre-installation and validation steps
The below steps were done before the actual java project was created
1)      Installed Java 1.6 and added JAVA_HOME in System Path and System environment variables
2)      Installed Spring Tool Suite
3)      Installed Maven 3.0.4 and added M2_HOME in System Path and System environment variables. Configured STS to use this installed maven instead of its native one. This can be done in STS Window menu -> Preferences -> Maven -> Installations.
4)      Installed Oracle 11g Express Edition, created a new database user and ran the below script (as the new user) to create a table in his schema. This script will also insert data into the table

CREATE TABLE EMPLOYEE(
    EMP_ID            NUMBER(5)    NOT NULL,
    FNAME             VARCHAR2(20),
    LNAME             VARCHAR2(20),
    DEPT_ID           NUMBER(5)    NOT NULL,
    MANAGER_EMP_ID    NUMBER(5),
    SALARY            NUMBER(5),
    HIRE_DATE         DATE,
    JOB_ID            NUMBER(3),
    ACTIVE            CHAR(1)  DEFAULT 'Y' NOT NULL,
    CONSTRAINT employee_pk PRIMARY KEY (EMP_ID)
);

-- Insert Data into the tables.

insert into employee
(EMP_ID,FNAME,LNAME,DEPT_ID,MANAGER_EMP_ID,SALARY,HIRE_DATE,JOB_ID)
select e.emp_id, e.fname, e.lname, e.dept_id, e.manager_emp_id, e.salary, e.hire_date, e.job_id
from
(
select 7369 emp_id, 'JOHN' fname, 'SMITH' lname, 20 dept_id, 7902 manager_emp_id, 800 salary, '17-DEC-80' hire_date, 667 job_id from dual union all
select 7499 emp_id, 'KEVIN' fname, 'ALLEN' lname, 30 dept_id, 7698 manager_emp_id, 1600 salary, '20-FEB-81' hire_date, 670 job_id from dual union all
select 7521 emp_id, 'CYNTHIA' fname, 'WARD' lname, 30 dept_id, 7698 manager_emp_id, 1250 salary, '22-FEB-81' hire_date, null job_id from dual union all
select 7566 emp_id, 'TERRY' fname, 'JONES' lname, 20 dept_id, 7839 manager_emp_id, 2000 salary, '02-APR-81' hire_date, 671 job_id from dual union all
select 7654 emp_id, 'KENNETH' fname, 'MARTIN' lname, 30 dept_id, 7698 manager_emp_id, 1250 salary, '28-SEP-81' hire_date, 670 job_id from dual union all
select 7698 emp_id, 'MARION' fname, 'BLAKE' lname, 30 dept_id, 7839 manager_emp_id, 2850 salary, '01-MAY-80' hire_date, 671 job_id from dual union all
select 7782 emp_id, 'CAROL' fname, 'CLARK' lname, 10 dept_id, 7839 manager_emp_id, 2450 salary, '09-JUN-81' hire_date, 671 job_id from dual union all
select 7788 emp_id, 'DONALD' fname, 'SCOTT' lname, 20 dept_id, 7566 manager_emp_id, 3000 salary, '19-APR-87' hire_date, 669 job_id from dual union all
select 7839 emp_id, 'FRANCIS' fname, 'KING' lname, 10 dept_id, null manager_emp_id, 5000 salary, '17-NOV-81' hire_date, 672 job_id from dual union all
select 7844 emp_id, 'MARY' fname, 'TURNER' lname, 30 dept_id, 7698 manager_emp_id, 1500 salary, '08-SEP-81' hire_date, 670 job_id from dual union all
select 7876 emp_id, 'DIANE' fname, 'ADAMS' lname, 20 dept_id, 7788 manager_emp_id, 1100 salary, '23-MAY-87' hire_date, null job_id from dual union all
select 7900 emp_id, 'FRED' fname, 'JAMES' lname, 30 dept_id, 7698 manager_emp_id, 950 salary, '03-DEC-81' hire_date, 667 job_id from dual union all
select 7902 emp_id, 'JENNIFER' fname, 'FORD' lname, 20 dept_id, 7566 manager_emp_id, 3000 salary, '03-DEC-81' hire_date, 669 job_id from dual union all
select 7934 emp_id, 'BARBARA' fname, 'MILLER' lname, 10 dept_id, 7782 manager_emp_id, 1300 salary, '23-JAN-82' hire_date, 667 job_id from dual
) e;
commit;
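For completeness, here is a sketch of how the database user in step 4 might be created. The password is a placeholder, and the username matches the schema referenced later in this post:

-- run once as SYSTEM or another privileged account
CREATE USER metallicatony IDENTIFIED BY changeme;
GRANT CONNECT, RESOURCE TO metallicatony;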

Create maven project
The first and foremost task is to create a maven project in STS, which generates a default maven project structure that includes the pom.xml file. Use the below pom.xml instead of the default one that STS provides.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.samples</groupId>
  <artifactId>hibernateCrud</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>hibernateCrud</name>
  <url>http://maven.apache.org</url>
  
  <repositories>
  <repository>
   <id>JBoss repository</id>
   <url>http://repository.jboss.org/nexus/content/groups/public/</url>
  </repository>
 </repositories>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hibernate.version>4.1.8.Final</hibernate.version>
    <oracle.version>11.2.0</oracle.version>
    <junit.version>4.11</junit.version>
    <commons-logging.version>1.1.1</commons-logging.version>
    <log4j.version>1.2.16</log4j.version>
    <slf4j.version>1.6.1</slf4j.version>
  </properties>

  <dependencies>
 <dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>${hibernate.version}</version>
 </dependency>
 <dependency>
  <groupId>com.oracle</groupId>
  <artifactId>ojdbc6</artifactId>
  <version>${oracle.version}</version>
 </dependency>
 <dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>${slf4j.version}</version>
 </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>${junit.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
In this pom.xml file, the <groupId>, <artifactId>, <version>, <packaging> and <name> elements describe the type and name of the artifact that this project generates when built with maven. It is usually good practice to match the groupId value with the base package name of the java classes. <repositories> points to the JBoss nexus repository where the dependent artifact jars (like hibernate, log4j etc.) can be found. <properties> is a place to centralize the version numbers of all the artifacts, which can then be referred to elsewhere in the pom.xml file with the ${property-name} notation. <dependencies> lists the various artifacts this project depends on. Once pom.xml is configured, navigate to the project location on the command line and type mvn clean install to clean and build the project. This also automatically downloads the artifacts configured in pom.xml. The downloaded artifacts, as well as the jar or war artifact built out of this project, are placed in the ~/.m2 local repository folder.

Log4j configuration
SLF4J (Simple Logging Facade for Java) is a simple façade or abstraction over other logging frameworks like java.util.logging and Log4j. The desired logging framework is decided at deployment time.
<dependency>
 <groupId>org.slf4j</groupId>
 <artifactId>slf4j-log4j12</artifactId>
 <version>${slf4j.version}</version>
</dependency>
This downloads slf4j-log4j12-<version>.jar, slf4j-api-<version>.jar and log4j-<log4j-version>.jar to your local .m2 repository folder. By the way, <version> and <log4j-version> are the versions you have mentioned under the <properties> tag of pom.xml.
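With that binding in place, application classes code only against the SLF4J API. A minimal sketch (the class name here is just for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HibernateCrudExample {
    // SLF4J facade; Log4j does the actual logging via the slf4j-log4j12 binding
    private static final Logger log = LoggerFactory.getLogger(HibernateCrudExample.class);

    public static void main(String[] args) {
        log.info("application started");
    }
}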

Log4j Properties
This property file configures the root logger and how the output is formatted when you use a log statement. Below is the configuration I have used to send the logs to the console with the specified pattern
log4j.rootLogger=INFO,out
log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=[%t] [class: %-c{1}] %-5p - %m%n
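With this layout, a statement like log.info("building session factory") issued from a class named HibernateUtil on the main thread would render roughly as

[main] [class: HibernateUtil] INFO  - building session factory

since %t prints the thread name, %-c{1} the last component of the logger name, %-5p the log level padded to five characters, and %m%n the message followed by a newline.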

Database and Hibernate configuration in pom.xml
To connect to the Oracle database, the pom needs the Oracle JDBC driver (ojdbc6), and to use the Hibernate ORM it needs the hibernate-core jar, hence the below configuration
<dependency>
 <groupId>com.oracle</groupId>
 <artifactId>ojdbc6</artifactId>
 <version>${oracle.version}</version>
</dependency>
<dependency>
 <groupId>org.hibernate</groupId>
 <artifactId>hibernate-core</artifactId>
 <version>${hibernate.version}</version>
</dependency>
HibernateUtil.java contains the code to build the session factory from which sessions are instantiated. A new Configuration object is created to hold all the properties from the hibernate XML config file. If a file name other than the default is used, that name has to be passed to the Configuration object so that it can read the properties. These properties are used to create a service registry through the service registry builder, and the resulting service registry is used to create the immutable session factory object. The way the session factory object is created has changed from Hibernate version 4. A snippet from HibernateUtil's buildSessionFactory() method:
Configuration configuration = new Configuration();
// If no file name is passed to configure(), hibernate.cfg.xml is looked up by default
configuration.configure("hib.cfg.xml");
ServiceRegistry serviceRegistry = new ServiceRegistryBuilder()
        .applySettings(configuration.getProperties())
        .buildServiceRegistry();
return configuration.buildSessionFactory(serviceRegistry);
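Once built, the session factory hands out sessions to the rest of the application. A minimal usage sketch, assuming HibernateUtil caches the factory and exposes it through a getSessionFactory() method (the method name is an assumption):

// assumption: HibernateUtil exposes the cached factory via getSessionFactory()
Session session = HibernateUtil.getSessionFactory().openSession();
try {
    // read or write entities here
} finally {
    session.close();
}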

The only hibernate entity used is the "Employee" class, mapped to the EMPLOYEE table we already created with the above script. An important point to note is that this entity is referenced as the value of the <mapping> element in the hibernate XML config file.
  <mapping class="com.samples.domain.Employee"></mapping>
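For reference, a minimal sketch of what the rest of hib.cfg.xml could look like; the connection URL, username and password are placeholders for a local XE install:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">oracle.jdbc.OracleDriver</property>
    <property name="hibernate.connection.url">jdbc:oracle:thin:@localhost:1521:XE</property>
    <property name="hibernate.connection.username">metallicatony</property>
    <property name="hibernate.connection.password">changeme</property>
    <property name="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</property>
    <property name="hibernate.show_sql">true</property>
    <mapping class="com.samples.domain.Employee"></mapping>
  </session-factory>
</hibernate-configuration>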

BO, DAO and other layers
As I said, this project is all about doing CRUD operations using hibernate. In total, there are four different packages or layers in this project.
Application Layer (com.samples)
Contains the application code in HibernateCRUD.java and the code for creating the session factory object in HibernateUtil.java
Business object layer (com.samples.BO)
This layer contains the code for business validations, checking whether incoming and outgoing data conform to the business rules. It interfaces with the DAO layer, and so it converts every business object (or individual java properties) to its equivalent domain object.
Data access object layer (com.samples.DAO)
This layer encapsulates the code for all database operations along with session and transaction management. It is the layer that interacts with the underlying Hibernate ORM and the database; no data-related operation can bypass it.
Domain layer (com.samples.domain)
This holds all the domain or entity objects of the application. These are simple POJOs that map to the underlying database tables.

Following a layered approach helps us write uncluttered code and makes us think about which layer a new piece of code belongs to. It also lets us apply the business object and data access object patterns. Every read/write operation from the application code passes through the BO layer, then the DAO layer, and finally becomes an action on the database. Similarly, when the action is complete, the data flows back through the DAO layer to the BO layer and finally to the application layer. With this in mind, the code is easy to navigate. As a whole, this project shows how to perform add, get, list (get all), update and delete operations using hibernate.
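As an illustration of the DAO layer's responsibilities, here is a minimal sketch of what a save operation might look like; the class and method names are illustrative, not necessarily what the repository uses:

import org.hibernate.Session;
import org.hibernate.Transaction;
import com.samples.domain.Employee;

public class EmployeeDao {
    /** Persists an employee; the DAO layer owns the session and transaction. */
    public Integer save(Employee employee) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Integer id = (Integer) session.save(employee); // returns the generated EMP_ID
            tx.commit();
            return id;
        } catch (RuntimeException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}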

Before bringing this post to an end, let me talk about the hibernate entity mapping of the "Employee" POJO. The mapping is pretty straightforward, but the one thing to note is the id property mapped to the EMP_ID column of the table.
@Column(name="EMP_ID")
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "emp_id_seq")
@SequenceGenerator(name="emp_id_seq", sequenceName="emp_id_sequence", allocationSize=1)
private Integer empId;
Here we ask hibernate to use an Oracle sequence-generated value for the column "EMP_ID". By default the JPA allocation size is 50 and hibernate applies a hi/lo style optimizer, handing out 50 ids for every sequence fetch. To override this, we have used an allocation size of 1. This way there won't be big gaps between the ids of two successive rows of this table.

All this requires a database sequence called "emp_id_sequence" to exist. The sequence itself is created with a plain CREATE SEQUENCE statement, and a BEFORE INSERT trigger can additionally fill in EMP_ID for rows inserted outside hibernate.
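As a sketch (the starting value is an assumption; pick one above the existing EMP_IDs):

CREATE SEQUENCE emp_id_sequence START WITH 8000 INCREMENT BY 1;

And the trigger: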
CREATE OR REPLACE TRIGGER "METALLICATONY"."EMP_ID_TRIGGER"
BEFORE INSERT ON EMPLOYEE REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
  IF :NEW.emp_id IS NULL THEN
    SELECT emp_id_sequence.nextval INTO :NEW.emp_id FROM dual;
  END IF;
END;
/
ALTER TRIGGER "METALLICATONY"."EMP_ID_TRIGGER" ENABLE;
Since the ACTIVE column is a CHAR(1) in the database, we can map it to a Boolean type in the hibernate entity using the @Type annotation. The ACTIVE column takes 'Y' or 'N', which maps to true or false in the domain object.
@Column(name="ACTIVE")
@org.hibernate.annotations.Type(type="yes_no")
private Boolean active;
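With this mapping, a call like employee.setActive(Boolean.TRUE) is stored as 'Y' in the ACTIVE column, and a row holding 'N' is read back as false (the setter name assumes the usual JavaBean convention on the entity).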

This project is pretty straightforward and it helped me get comfortable with maven, hibernate and Log4j. Hope it helps you too. Feel free to browse through or download the source from my git repository  CRUD with Hibernate @ GitHub  and import it as a maven project into your favorite IDE.
