Call to undefined function mysql_pconnect()

Wow. This took ages.
A customer of mine developed a PHP application on his Windows machine (yeah, yeah, I know). I tried installing it on my Linux laptop – and all I got was a white page.
Well, loving a challenge, I started putting debug prints in all the PHP files. It turns out the customer uses CodeIgniter, so I started with core/CodeIgniter.php, moved on to core/Loader.php and core/db/DB.php, and ended up in mysql_driver.php.
Turns out I was crashing on this line:

return @mysql_pconnect($this->hostname, $this->username, $this->password);

The error message (after removing the @, of course) was:
Call to undefined function mysql_pconnect()
So lame. I had forgotten to run sudo apt-get install php5-mysql. Two hours of my life wasted 🙁
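Incidentally, the white page itself deserves a note: PHP was hiding the fatal error. On a development machine, turning error display on in php.ini would have put the message on screen right away, no debug-print hunt needed:

```ini
; php.ini - development settings only, never on production
display_errors = On
error_reporting = E_ALL
```

After changing this, restart the web server; a fatal "Call to undefined function" then shows up on the page instead of a blank screen.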

The dangers of high availability

About 15 years ago I heard someone give a lecture on a system he managed. He claimed they needed to provide five 9’s of availability, as it was an extremely critical system. Needless to say, he didn’t deliver five 9’s. Not even one. The system worked perhaps 79.999% of the time. Not very surprising. But the real surprise here is that this level of availability was good enough.
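For scale, here is what the nines actually permit. Each extra nine shrinks the allowed downtime tenfold: two 9’s still tolerates more than three and a half days of outage a year, while five 9’s leaves barely five minutes. Plain awk arithmetic shows the whole ladder:

```shell
# Permitted downtime per year for 1 through 5 nines of availability.
awk 'BEGIN {
    year = 365.25 * 24 * 60      # minutes in a year
    down = year                  # permitted downtime at the current level
    for (n = 1; n <= 5; n++) {
        down /= 10               # one more nine = 10x less allowed downtime
        printf "%d nine(s): %8.1f minutes of downtime per year\n", n, down
    }
}'
```

Keep those numbers in mind when someone says "critical system" in a meeting.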

People tend to think their applications require extremely high availability, for a wide variety of reasons. However, most don’t need it.

Now, since high availability is so costly to implement (hardware costs, architectural and development costs, etc.), I think you really need to ask yourself: does my application really need high availability, or am I making all this effort just to soothe my ego?

There are levels of high availability, so think carefully before you answer. Do all of the users need access to the application 24/7? What are their working hours? Can some users experience a failure while most users keep working? Can users tolerate partial data loss (restarting a wizard, or re-filling a form)?

I, for one, think that providing good-enough availability, backed by a solid monitoring solution and someone on call to handle crashes within 30 minutes, is probably sufficient – and much more cost effective.

Tunneling with JProfiler

I had some issues running JProfiler against a remote machine. I had JBoss running on a remote Linux server, and for some reason X forwarding just didn’t work. It turned out I had to tunnel the JProfiler connection, and luckily, that’s easy to do.
I typed this command on my own laptop (running Linux Mint):

ssh -f root@XXX.XXX.XXX.XXX -L 2000:localhost:8849 -N

The -L option forwards local port 2000 to port 8849 (JProfiler’s default agent port) on the remote machine’s loopback, -N tells ssh not to run a remote command, and -f sends it to the background. Now all I have to do is open a connection from JProfiler: I use “attach to remote process”, select localhost, port 2000, and that’s it – I can profile the remote server.


FileDescriptor.sync()

First and foremost – a week ago I didn’t even know this method existed in Java. Basically, it forces buffered file writes down to the physical disk. It turns out Arjuna (JBoss Transactions) uses it in its ShadowingStore class to ensure transaction data is persisted, which makes sense – they want to recover transactions after a server crash.
Now, if you read my last post, on the inflation of EJBs, you know that 200 EJBs working together is a mess. I reached a point where 15% of a single transaction’s CPU time was spent in this FileDescriptor.sync() method. Since I couldn’t refactor the whole code base, I had to think of another solution. Here goes.

I’ve written a new class that extends ShadowingStore:

public class TonaStore extends ShadowingStore {
    public TonaStore(ObjectStoreEnvironmentBean objectStoreEnvironmentBean) throws ObjectStoreException {
        super(objectStoreEnvironmentBean);
        // Skip the forced FileDescriptor.sync() on every transaction write
        syncWrites = false;
    }
}

I packaged it into a JAR file and placed it in the server/all/lib directory.

Then I opened the server/all/deploy/transactions-jboss-beans.xml file and changed the ActionStore section to the following:

    <bean name="ActionStoreObjectStoreEnvironmentBean" class="com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean">
        <annotation>@org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.jta:name=ActionStoreObjectStoreEnvironmentBean", exposedInterface=com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBeanMBean.class, registerDirectly=true)</annotation>
        <constructor factoryClass="com.arjuna.common.internal.util.propertyservice.BeanPopulator" factoryMethod="getNamedInstance"/>
        <property name="objectStoreDir">${}/tx-object-store</property>
        <property name="objectStoreType">com.tona.ts.common.TonaStore</property>
    </bean>

I got almost a 100% increase in hits/second. Sweet.

EJB inflation

Ever since the Java EE standard introduced local interfaces (and especially since EJB 3), I’ve seen people abusing EJBs. The logic is simple: if EJB calls are local, let’s make everything an EJB and enjoy dependency injection.
Recently I assisted a customer who had over 200 EJBs in a project of about 500 classes! I call it EJB inflation, and it’s bad. Really bad.
The reason: the EJB container does more than just proxy remote calls. It handles security, transactions, pooling and more. Paying that overhead for every single class is a huge performance price. Let’s just say that when I ran a profiler on the customer’s code, I saw that over 20% of the server time was spent in the application server’s EJB-related code (JTA, specifically).
I will post workarounds for this in future posts, but in the meantime, beware of abusing EJBs. Don’t fall into the “if you have a hammer, everything looks like a nail” trap.

Errors while performing remote deployment from Eclipse to WebLogic Server

Using a local WebLogic server with Eclipse is a piece of cake, especially when using OEPE (the Oracle Eclipse flavor, with all the relevant plugins installed). However, I ran into issues while deploying an application to a remote WebLogic server running on Linux (the machine I used ran Windows).
The error was:

weblogic.deploy.api.spi.exceptions.ServerConnectionException: [J2EE Deployment SPI:260041]Unable to upload 'C:\workspaces\labWorkspaceNew\.metadata\.plugins\org.eclipse.core.resources\.projects\HelloWorld\beadep\remote_weblogic\HelloWorld.war' to 't3://XXX.XXX.XXX.XXX:7041'

java.lang.Exception: Exception received from deployment driver. See Error Log view for more detail.
at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(
at org.eclipse.wst.server.core.internal.Server.publishImpl(
at org.eclipse.wst.server.core.internal.Server$
Caused by: weblogic.deploy.api.internal.utils.DeployerHelperException: The source 'C:\DOCUME~1\train\LOCALS~1\Temp\1\HelloWorld.war' for the application 'HelloWorld' could not be loaded to the server 'http://XXX.XXX.XXX.XXX:7041/bea_wls_deployment_internal/DeploymentService'.
Server returned HTTP response code: 403 for URL: http://XXX.XXX.XXX.XXX:7041/bea_wls_deployment_internal/DeploymentService
at weblogic.deploy.api.internal.utils.JMXDeployerHelper.uploadSource(
at weblogic.deploy.api.spi.deploy.internal.ServerConnectionImpl.upload(
at weblogic.deploy.api.spi.deploy.internal.BasicOperation.uploadFiles(
at weblogic.deploy.api.spi.deploy.internal.BasicOperation.execute(
at weblogic.deploy.api.spi.deploy.WebLogicDeploymentManagerImpl.deploy(
... 8 more

At first I thought this was caused by the size of the EAR file, but after creating a small HelloWorld.war and getting the same error, I traced it rather quickly.
The problem is network related. First, make sure your browser can connect to the admin console of the remote WebLogic server (http://XXX.XXX.XXX.XXX:7001/console). If it doesn’t work, configure the proxy or ask the sysadmin to open port 7001.
If it does work, make sure Eclipse has the proxy configured, under Window/Preferences/General/Network Connections.

MySQL replication on Windows

I feel like a real MySQL replication configuration wiz these days – I could probably do it with my eyes closed.
But lately, I had the joy of configuring MySQL replication on Windows. I don’t know why and I don’t know how, but nothing I put in the my.ini file worked. I constantly got the “server-id parameter not set” error on the slave database.
Finally, I gave up. I opened MySQL Workbench, configured its remote administration capabilities, and voilà – everything worked like a charm.
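For what it’s worth, the setting itself is trivial – the usual catch on Windows is that the server-id line sits under the wrong section header, or lives in a my.ini file the MySQL service doesn’t actually read (there are several candidate locations). It has to sit under [mysqld]; the id value below is just an example and must differ between master and slave:

```ini
[mysqld]
; any non-zero id, unique within the replication topology
server-id = 2
```

Restart the MySQL Windows service after editing – the option file is only read at startup.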