The Practical Liferay Series provides discussions of practical Liferay concerns, such as installations in environments beyond a standard bundled Tomcat installation.
This installment covers several specific issues encountered when installing Liferay in a clustered Glassfish environment, with concrete examples and links to resources.
Developers and administrators of Liferay-based portals generally use a bundled Tomcat installation (CE or EE), which is the simplest and best-documented option to set up. In some environments, though, enterprise business concerns or other factors make a non-bundled application server necessary. One example is using the enterprise edition of Oracle's Glassfish application server in a clustered environment.
Installing Glassfish and Clustering
Glassfish is installed by downloading and unzipping a distribution archive from Oracle. Once Glassfish has been installed in a target folder, the 'asadmin' tool is used, either directly from the command line with arguments or in an interactive shell mode, to configure Glassfish. Configuration tasks include creating data sources to databases, creating nodes in a cluster, enabling SSH between the nodes, and so on. Note that these configuration tasks should be completed before deploying Liferay.
A typical configuration is a two-node cluster. With Glassfish, the nodes are created and configured via asadmin; once created, they can be administered via the console or asadmin. In this cluster, we assume that certain resources must be shared between the Liferay instances: the database, the document library, Jackrabbit, and Lucene.
In this example we will be using two Linux servers – the Glassfish domain controller and one remote node. The domain controller is on 10.64.36.19 and the remote node is on 10.64.36.20. We will be using Glassfish 3 and Liferay 6.1 EE GA2.
Unzip the Glassfish distribution into a folder such as /usr/local/liferay on the *.19 (master) server. This will create the folder /usr/local/liferay/glassfish3.
Start the asadmin shell from the bin subfolder:
cd /usr/local/liferay/glassfish3/bin
./asadmin
Set up SSH to log into the remote server:
asadmin> setup-ssh 10.64.36.20
Install Glassfish on the remote server:
asadmin> install-node --installdir /usr/local/liferay/glassfish3 10.64.36.20
Start the Glassfish domain:
asadmin> start-domain domain1
Set the admin password:
asadmin> change-admin-password
Restart the domain:
asadmin> restart-domain domain1
Create the cluster:
asadmin> create-cluster mycluster
Create an SSH node to the cluster:
asadmin> create-node-ssh --nodehost 10.64.36.20 --installdir /usr/local/liferay/glassfish3 node01
Create one instance of Glassfish on the remote node (node-i1) and one on the local node (node-i2):
asadmin> create-instance --node node01 --cluster mycluster node-i1
asadmin> create-local-instance --cluster mycluster node-i2
Start the cluster:
asadmin> start-cluster mycluster
Restart the local instance:
asadmin> restart-instance node-i2
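At this point it is worth confirming that both instances are up before configuring resources. The standard list-instances subcommand reports the status of each instance; both node-i1 and node-i2 should show as running:

```shell
# From the asadmin shell on the domain controller:
asadmin> list-instances
```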
Create a database connection pool:
asadmin> create-jdbc-connection-pool --restype javax.sql.DataSource --datasourceclassname oracle.jdbc.pool.OracleDataSource --property "user=lportal:password=test1234:url=jdbc\:oracle\:thin\:@10.64.32.30\:1521\:lrdb" LiferayPool
Verify the connection settings:
asadmin> get resources.jdbc-connection-pool.LiferayPool.property
Test the connection:
asadmin> ping-connection-pool LiferayPool
Create a JDBC resource that Liferay can use:
asadmin> create-jdbc-resource --connectionpoolid LiferayPool --target mycluster jdbc/lportal
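For Liferay to use this container-managed data source instead of its built-in connection pool, point portal-ext.properties at the JNDI name created above. A minimal fragment (jdbc/lportal matches the resource name from the create-jdbc-resource command):

```properties
# portal-ext.properties on each node:
# use the Glassfish-managed data source rather than Liferay's own pool.
jdbc.default.jndi.name=jdbc/lportal
```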
Liferay is deployed into Glassfish either via the command line with the asadmin tool or via the console. Two artifacts are needed: the Liferay dependencies archive and the Liferay portal war file. Before installing Liferay, the dependency jar files that Glassfish needs to run Liferay must be placed on the Glassfish classpath.
To install Liferay, log in to the Glassfish domain controller console (a URL such as http://10.64.36.19:4848) as administrator.
Select Applications from the tree on the left.
Under “Packaged File to Be Uploaded to the Server,” click “Choose File” and browse to the location of the Liferay Portal.war file. Enter Context Root (such as /cportal).
Enter Application Name (such as cportal).
Select the cluster as the target.
Add your Liferay license. You need a cluster license that covers both nodes (by IP address and MAC address). Install the license on each instance by copying it to the deploy folder on each node (/usr/local/liferay/glassfish3/deploy). Liferay picks up the license automatically; you will see a message in the log file that the license was registered.
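Copying the license out to both nodes can be done from the master server. A sketch, assuming the license file is named license.xml in the current directory (the actual file name comes from Liferay):

```shell
# Hypothetical file name; copy the cluster license into the deploy
# folder on the local node and on the remote node.
cp license.xml /usr/local/liferay/glassfish3/deploy/
scp license.xml 10.64.36.20:/usr/local/liferay/glassfish3/deploy/
```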
The document library should be installed on a shared drive accessible to each node; that is, the shared drive should be mounted on each server in the cluster. In the standard Liferay location (the data folder) on each node, create a symbolic link to the shared drive so the document library is visible to Liferay. For example, in /usr/local/liferay/glassfish3/data, add this link:
ln -s /mnt/share/data/document_library document_library
where ‘/mnt/share/data/document_library’ is the document library on the shared drive.
The Jackrabbit and Lucene folders should be configured in the same manner so that they are accessible to both Liferay nodes:
ln -s /mnt/share/data/jackrabbit jackrabbit
ln -s /mnt/share/data/lucene lucene
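Taken together, the shared-storage setup on each node looks like this. A sketch, assuming the shared drive is already mounted at /mnt/share on both servers:

```shell
# Run on each node: link the Liferay data folders to shared storage
# so both instances see the same document library, Jackrabbit
# repository, and Lucene index.
cd /usr/local/liferay/glassfish3/data
ln -s /mnt/share/data/document_library document_library
ln -s /mnt/share/data/jackrabbit jackrabbit
ln -s /mnt/share/data/lucene lucene
```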
Deploying a war file of Liferay content (a theme, hook, portlet, layout template, etc.) is more involved with Glassfish than with Tomcat.
In Liferay’s portal-ext.properties (which should be in /usr/local/liferay/glassfish3 on each node), set these properties:
auto.deploy.enabled=true
auto.deploy.deploy.dir=/usr/local/liferay/glassfish3/deploy
auto.deploy.glassfish.dest.dir=/mnt/share/deploywar
Restart the cluster for these changes to take effect.
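The restart can be done with the standard asadmin cluster commands, for example:

```shell
# From /usr/local/liferay/glassfish3/bin on the domain controller:
./asadmin stop-cluster mycluster
./asadmin start-cluster mycluster
```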
Create a ‘deploywar’ folder such as /mnt/share/deploywar. This step is critical: the folder must exist on the shared drive between the cluster nodes; otherwise, Glassfish will throw an error during deployment about the folder not existing on the remote node.
Put the desired war file on the ‘master’ server where the Glassfish domain controller is running, such as in a home directory, /tmp or another accessible folder.
Copy the war file into the deploy directory for Glassfish – the auto.deploy.deploy.dir specified in portal-ext.properties (/usr/local/liferay/glassfish3/deploy in this example).
cp /tmp/upload/custom-theme.war /usr/local/liferay/glassfish3/deploy

The console log will state that it recognizes the file and that deployment will start momentarily, but nothing further happens on its own. Glassfish generates the expanded folder for the portlet in the deploywar directory; the deployment is then completed from the Admin Console.
Log into the Admin Console.
Select “Local Packaged File or Directory That Is Accessible from GlassFish Server” and browse to the folder generated in the deploywar folder.
Select the type as a Web Application and the appropriate target (mycluster in this example), then Save.
From the Glassfish admin console, you can accomplish tasks such as adjusting log levels and JVM parameters, starting and stopping the cluster, deploying applications, and deploying Liferay artifacts.
Out of the box, Liferay should work with any application server. However, using Liferay with some application servers, such as Glassfish, requires more steps than a bundled Liferay/Tomcat combination and more thought about your configuration.