March 2013

Practical Liferay Series: Glassfish Clustering

The goal of the Practical Liferay Series is to discuss practical Liferay concerns, such as installations in environments beyond a standard bundled Tomcat installation.

This installment covers several specific issues that arise when installing Liferay in a clustered Glassfish environment, with examples and links to resources.

Developers and administrators of Liferay-based portals generally use a bundled Tomcat installation (CE or EE), which is the simplest and best-documented setup. In some environments, though, enterprise business concerns or other factors make a non-bundled application server necessary. One example is using the enterprise edition of Oracle’s Glassfish application server in a clustered environment.

Installing Glassfish and Clustering

Glassfish is installed by downloading a zip distribution from Oracle and extracting it into a target folder. Once Glassfish has been installed, the ‘asadmin’ tool is used, either directly from the command line with arguments or in an interactive shell mode, to configure Glassfish. Configuration tasks include creating data sources to databases, creating nodes in a cluster, enabling SSH between the nodes, etc. Note that these configuration tasks should be completed before deploying Liferay.

A typical configuration is a two-node cluster. With Glassfish, the nodes are created and configured via asadmin. Once created, they can be administered via the console or asadmin. With this Glassfish cluster, we assume that certain resources need to be shared between the Liferay instances – the database, the document library, Jackrabbit and Lucene.

In this example we will be using two Linux servers – the Glassfish domain controller and one remote node. The domain controller is on 10.64.36.19 and the remote node is on 10.64.36.20. We will be using Glassfish 3 and Liferay 6.1 EE GA2.

Unzip the Glassfish distribution into a folder such as /usr/local/liferay on the *.19 (master) server. This will create the folder /usr/local/liferay/glassfish3.
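For example, assuming the 3.1.2.2 distribution zip (adjust the file name to match your download):

 unzip glassfish-3.1.2.2.zip -d /usr/local/liferay 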

Start the asadmin shell from the bin subfolder (/usr/local/liferay/glassfish3/bin):

./asadmin

Set up SSH access to the remote server:

 asadmin> setup-ssh 10.64.36.20 

Install Glassfish on the remote server:

 asadmin> install-node --installdir /usr/local/liferay/glassfish3 10.64.36.20 

Start the Glassfish domain:

 asadmin> start-domain domain1 

Enable secure administration (this requires an admin password, which can be set with change-admin-password if not already done):

 asadmin> enable-secure-admin 

Restart the domain:

 asadmin> restart-domain domain1 

Create the cluster:

 asadmin> create-cluster mycluster 

Create an SSH node to the cluster:

 asadmin> create-node-ssh --nodehost 10.64.36.20 --installdir /usr/local/liferay/glassfish3 node01 

Create one instance of Glassfish on the remote node (node-i1) and one on the local node (node-i2):

asadmin> create-instance --node node01 --cluster mycluster node-i1
asadmin> create-local-instance --cluster mycluster node-i2

Start the cluster:

 asadmin> start-cluster mycluster 

Restart the local instance:

 asadmin> restart-instance node-i2 

Create a database connection pool:

 asadmin> create-jdbc-connection-pool --restype javax.sql.DataSource --datasourceclassname oracle.jdbc.pool.OracleDataSource --property "user=lportal:password=test1234:url=jdbc\\:oracle\\:thin\\:@10.64.32.30\\:1521\\:lrdb" LiferayPool

Verify the connection settings:

 asadmin> get resources.jdbc-connection-pool.LiferayPool.property 

Test the connection:

 asadmin> ping-connection-pool LiferayPool 

Create a JDBC resource that Liferay can use:

 asadmin> create-jdbc-resource --connectionpoolid LiferayPool --target mycluster jdbc/lportal 
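Liferay then needs to be told to use this resource. A minimal sketch for the portal-ext.properties file discussed below, using the JNDI name just created:

 jdbc.default.jndi.name=jdbc/lportal 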

Installing Liferay

Liferay is deployed into Glassfish either via the command line with the asadmin tool or via the console. Two artifacts are needed – the Liferay dependencies archive and the Liferay portal war file. Before installing Liferay, the jar files from the dependencies archive, which Glassfish needs to run Liferay, must be placed on Glassfish’s classpath.
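A hedged sketch of that step, assuming the 6.1 EE GA2 dependencies archive name (check it against your download) and the default domain1 lib folder on the domain controller:

 unzip liferay-portal-dependencies-6.1.20-ee-ga2.zip -d /tmp/deps 
 cp /tmp/deps/liferay-portal-dependencies-6.1.20-ee-ga2/*.jar /usr/local/liferay/glassfish3/glassfish/domains/domain1/lib/ 

The JDBC driver jar (e.g. ojdbc6.jar for the Oracle pool above) belongs in the same lib folder.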

To install Liferay, log in to the Glassfish domain controller console (the URL will be something like http://10.64.36.19:4848) as administrator.

Select Applications from the tree on the left.
Select Deploy.

Under “Packaged File to Be Uploaded to the Server,” click “Choose File” and browse to the location of the Liferay portal war file. Enter a Context Root (such as /cportal).

Enter an Application Name (such as cportal).
Select the cluster as the target.
Select Ok.
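The same deployment can also be scripted; a hedged sketch from asadmin, assuming the GA2 war file name:

 asadmin> deploy --target mycluster --contextroot cportal --name cportal liferay-portal-6.1.20-ee-ga2.war 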

Add your Liferay license. You need a cluster license that supports the two nodes (by IP address and MAC address). Install this license on each instance by copying it to the deploy folder on each node (/usr/local/liferay/glassfish3/deploy). Liferay will pick up the license automatically – you will see a message in the log file that the license was registered.
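A sketch of that copy, assuming a hypothetical license file name:

 cp liferay-cluster-license.xml /usr/local/liferay/glassfish3/deploy/ 
 scp liferay-cluster-license.xml 10.64.36.20:/usr/local/liferay/glassfish3/deploy/ 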

The document library should be installed on a shared drive that is accessible to each node. That is, the shared drive should be mounted on each server in the cluster. In the standard Liferay location (data folder) on each node, a symbolic link needs to be created to the shared drive to make the document library visible to Liferay. For example, in /usr/local/liferay/glassfish3/data, add this link:

 ln -s /mnt/share/data/document_library document_library 

where ‘/mnt/share/data/document_library’ is the document library on the shared drive.

Jackrabbit and Lucene folders should be configured in the same manner so that they are accessible to both Liferay nodes.

 ln -s /mnt/share/data/jackrabbit jackrabbit 
 ln -s /mnt/share/data/lucene lucene 
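These links assume /mnt/share is already mounted on both nodes. As one hedged illustration, an NFS export could be mounted via an /etc/fstab entry like the following (the server address and export path are hypothetical):

 10.64.32.40:/export/liferay /mnt/share nfs defaults 0 0 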

Deployment

Deploying a war file of Liferay content (a theme, hook, portlet, layout template, etc.) is more involved with Glassfish than with Tomcat.

In Liferay’s portal-ext.properties (which should be in /usr/local/liferay/glassfish3 on each node), set these properties:

auto.deploy.enabled=true
auto.deploy.deploy.dir=/usr/local/liferay/glassfish3/deploy
auto.deploy.glassfish.dest.dir=/mnt/share/deploywar

Restart the cluster for these changes to take effect.

Create the ‘deploywar’ folder, such as /mnt/share/deploywar, as shown below. This step is critical: the folder must exist on the shared drive between the cluster nodes. Otherwise, Glassfish will throw an error during deployment about the folder not existing on the remote node.
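A one-line sketch, run once on the shared mount:

 mkdir -p /mnt/share/deploywar 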

Put the desired war file on the ‘master’ server where the Glassfish domain controller is running, such as in a home directory, /tmp or another accessible folder.

Copy the war file into the deploy directory for Glassfish – the auto.deploy.deploy.dir specified in portal-ext.properties (/usr/local/liferay/glassfish3/deploy in this example). Example:

 cp /tmp/upload/custom-theme.war /usr/local/liferay/glassfish3/deploy 

Liferay’s auto deploy then generates the expanded folder for the portlet in the deploywar directory. The console log will state that it recognizes the file and that deployment will start momentarily, but nothing will happen after that – the expanded folder must be deployed manually through the Admin Console.

Log into the Admin Console.

Select “Local Packaged File or Directory That Is Accessible from GlassFish Server” and browse for the folder that is generated in the deploywar folder. Example:

 /mnt/share/deploywar/custom-theme 

Select the type as a Web Application and the appropriate target (mycluster in this example), then Save.
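Directory deployment should also work from the command line; a hedged sketch using this example’s paths:

 asadmin> deploy --target mycluster /mnt/share/deploywar/custom-theme 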

Admin Console

From the Glassfish admin console, you can adjust log levels and JVM parameters, start and stop the cluster, deploy applications, deploy Liferay artifacts, and so on.
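Many of the same tasks can be scripted with asadmin; a brief sketch, assuming GlassFish 3.1 or later subcommands:

 asadmin> set-log-levels com.liferay=FINE 
 asadmin> stop-cluster mycluster 
 asadmin> start-cluster mycluster 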

Conclusion

Out of the box, Liferay should work with any supported application server. However, using Liferay with some application servers – such as Glassfish – requires more steps than a bundled Liferay/Tomcat combination, and requires more thought about your configuration.

Additional Resources

Liferay on Glassfish Install Guide

Oracle Glassfish asadmin guide

Oracle Glassfish Documentation Library

Robert Hall applies his impressive research, implementation and support skills to customer engagements as a Senior Consultant for Isos Technology.

Robert Hall's development as a software engineer was built on a strong foundation in researching parallel and distributed systems over five years at Kent State and the University of Michigan.  He readily applied the skills garnered during this time to his career as a professional software engineer.  

Over the course of the following fifteen years, Robert Hall worked in fields including telecom, banking, insurance and aviation.  The profiles of the companies Robert has worked with range from small startups to large Fortune 500 companies.  Robert has been involved in all aspects of the software development lifecycle, with extensive experience in design, development, release and post-release support as both a team member and team lead.

With five years of software engineering research and fifteen years as a professional software engineer, Robert Hall has cultivated the versatility that is the hallmark of an accomplished software engineering consultant.
