Apache Segmentation Fault (11) – related to memory

This is a problem we have on both a dedicated server and a VPS. It is not something I’ve seen before, and I cannot find any descriptions of others running into it. I’m logging it here in case a solution comes walking past; fingers crossed.

We are running:

  • CentOS 6.4
  • Apache/2.2.15
  • PHP 5.3.19

The problem is related to the use of memory. The maximum memory that a PHP process can use is set to 128MB:

memory_limit = 128M
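
As an aside, PHP can report the limit a script actually sees and how close a request gets to it. This is a minimal sketch using standard PHP functions, nothing specific to our setup:

<?php
// Report the limit this request runs under, and the peak memory
// allocated so far (both standard PHP functions).
echo ini_get('memory_limit'), "\n";            // e.g. "128M"
echo memory_get_peak_usage(true), " bytes\n";  // peak allocated so far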

When a PHP process attempts to use more memory than this, it halts with an error. This is the kind of thing that can happen all too easily in an application such as SugarCRM:

[Mon Dec 17 12:22:30 2012] [error] [client 1.2.3.4] PHP Fatal error:  
Allowed memory size of 134217728 bytes exhausted (tried to allocate 523800 bytes) 
in /httpdocs/data/SugarBean.php on line 76, 
referer: http://example.com/index.php?module=Administration&view=module&action=UpgradeWizard_commit
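
For what it’s worth, that fatal error is easy to reproduce deliberately. A throwaway sketch like this (my own, not SugarCRM code) hits the 128M ceiling within a second or two:

<?php
// Deliberately exhaust memory_limit: each iteration appends roughly
// 1MB, so with memory_limit = 128M this dies with the same
// "Allowed memory size ... exhausted" fatal error shown above.
$data = array();
while (true) {
    $data[] = str_repeat('x', 1024 * 1024);
}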

So, I tried raising the memory limit to 256M:

memory_limit = 256M
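
I changed this in php.ini, but for the record, with PHP running as an Apache module the same limit can be raised per-site in .htaccess (assuming mod_php and that AllowOverride permits it):

php_value memory_limit 256M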

No more errors were logged. In fact nothing was logged at all, and I was still getting a white screen. Why no memory errors? The core Apache log (/var/log/httpd/error_log) gave some clues here:

[Sun Dec 16 01:52:49 2012] [notice] child pid 12415 exit signal Segmentation fault (11)

So Apache was hitting a segmentation fault – signal 11 – instead of telling me it was running out of memory. Whether this happens when usage reaches 256M, or as soon as it exceeds 128M, I don’t know. All I do know is that I’m stuffed if I have a process that cannot be optimised to run in 128M of memory.

I’m convinced it was working a few weeks ago, so perhaps it is a bug in this version of Apache or PHP? We have Plesk 11 running on the server, and it does its own regular updates, which also patch PHP and Apache.

Any clues what could be going on here?

Updates: I have no bytecode cache running on the VPS, but do on the dedicated server; the behaviour is the same in both cases. A segmentation fault is something bad going wrong in memory – a process trying to access a block of memory that it cannot, or should not be able to, access. Assuming the problem is the use of any memory above 128M, and not the act of running out of memory above 128M, then it could be some kind of mismatch between block sizes in CentOS, Apache and PHP. That’s just a wild guess though.

—————–

I have raised this issue on Server Fault, and slowly some light is being shed on it:

http://serverfault.com/questions/473797/php-segmentation-fault-when-using-more-than-128m


This version of PHP (5.3.21) will segfault if a function is called recursively enough times. It will occasionally manage to write a “no more memory” error to the Apache log file, but it seldom gets that far.
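
It only takes a few lines to demonstrate. Run from the CLI, a sketch like the following (my own test, not SugarCRM code) dies with signal 11 rather than a PHP error, because each level of recursion eats process stack that memory_limit never sees:

<?php
// Unbounded recursion: PHP 5.3 has no recursion depth limit of its
// own, so this blows the process stack and segfaults (signal 11)
// long before memory_limit is reached.
function recurse($n)
{
    return recurse($n + 1); // no base case, on purpose
}
recurse(0);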

Looking through the SugarCRM code that is causing this fault, it seems to get into a recursive loop somewhere in the core where it writes out its cache files on first run. Just looking through the code, I believe the culprit is a lack of error handling when functions such as fopen() and fwrite() don’t do what they are supposed to do, but I haven’t narrowed it down to one specific line of code yet.
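
The kind of defensive handling I have in mind looks something like this; writeCache() and the path parameter are hypothetical illustrations of mine, not SugarCRM’s actual code:

<?php
// Hypothetical sketch: fail loudly when a cache write goes wrong,
// rather than carrying on (or recursing) as if it had succeeded.
function writeCache($path, $contents)
{
    $fp = @fopen($path, 'w');
    if ($fp === false) {
        error_log("Cannot open $path for writing");
        return false;
    }
    if (fwrite($fp, $contents) === false) {
        error_log("Write to $path failed");
        fclose($fp);
        return false;
    }
    fclose($fp);
    return true;
}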

However, SugarCRM does check whether it can write to a directory before it blindly goes ahead and does so. What it isn’t ready for is SELinux kicking in with an additional security rule that PHP functions such as is_writeable() simply don’t cater for. SugarCRM should be hardened to handle this, IMO, and the project could go some way towards that by using tried-and-trusted packages from Composer or other projects, rather than trying to reinvent every tiny process and script that it needs.
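
The only reliable check I know of is to attempt a real write rather than trust the permission bits. This helper is a hypothetical sketch of mine, not something SugarCRM ships:

<?php
// is_writeable() reflects the file permissions, but SELinux can still
// deny the actual write. A throwaway test file settles the question.
function isReallyWritable($dir)
{
    $probe = $dir . '/.write_test_' . uniqid();
    $ok = @file_put_contents($probe, 'x') !== false;
    if ($ok) {
        @unlink($probe);
    }
    return $ok;
}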

My next step is to try turning off SELinux for this site, to see whether that makes any difference.
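
For reference, these are the commands I expect to use on CentOS (the cache path is an assumption of mine – adjust it for your layout):

# Check the current mode, then switch to permissive until next reboot:
getenforce
setenforce 0

# Less drastic: relabel just the cache directory so Apache may write to it.
chcon -R -t httpd_sys_rw_content_t /httpdocs/cache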

4 Responses to Apache Segmentation Fault (11) – related to memory

  1. Rodrigo Nobrega 2013-10-14 at 20:15 #

Thank you for posting this! The segmentation fault happened to me, and I noticed memory_limit was set to 1024M by another worker here. Since I do not need more than 128M, just changing it to memory_limit = 128M solved the problem.

  2. Rodrigo Nobrega 2013-10-15 at 18:21 #

Ok… I was wrong. The error only stopped happening for a while; changing to 128M wasn’t sufficient. I must investigate more.

  3. Nick 2015-01-13 at 03:35 #

Did you consider trying to run PHP as [f]CGI? It may separate the processes enough to avoid the issue, or at the very least not bring Apache down.

    • Jason Judge 2015-01-13 at 11:09 #

      We do run it as FastCGI for some legacy applications, but in general just try to keep up with the latest PHP version these days.
