Bug 232373 - security/clamav shuts down - Can't allocate memory ERROR
Status: Closed FIXED
Alias: None
Product: Ports & Packages
Classification: Unclassified
Component: Individual Port(s)
Version: Latest
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: Yasuhiro KIMURA
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2018-10-18 07:19 UTC by Marko Cupać
Modified: 2019-03-18 11:02 UTC
CC: 2 users

See Also:
bugzilla: maintainer-feedback? (yasu)


Attachments

Description Marko Cupać 2018-10-18 07:19:02 UTC
Hi,

I am using clamav-clamd, called by amavisd-new, to scan emails on postfix mail relay server. Recently it started to shut down with the following message:

Thu Oct 18 07:53:21 2018 -> SelfCheck: Database status OK.
Thu Oct 18 07:56:02 2018 -> /var/amavis/tmp/amavis-20181018T074152-69671-oHlEltOR/parts/p009: reported CL_VIRUS but no virname returned!
Thu Oct 18 07:56:02 2018 -> /var/amavis/tmp/amavis-20181018T074152-69671-oHlEltOR/parts/p009: Can't allocate memory ERROR
Thu Oct 18 07:56:29 2018 -> WARNING: Write to syncpipe failed
Thu Oct 18 07:56:29 2018 -> Waiting for all threads to finish
Thu Oct 18 07:56:30 2018 -> Shutting down the main socket.
Thu Oct 18 07:56:30 2018 -> Pid file removed.
Thu Oct 18 07:56:30 2018 -> --- Stopped at Thu Oct 18 07:56:30 2018

This is the third occurrence in a week, and it causes me quite a headache, as all the emails wait in the queue until I manually restart it.
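Until the root cause is found, one workaround to keep mail from piling up would be a periodic check that restarts clamd when it is down. A hedged sketch as a root crontab entry; it assumes the rc script name `clamav-clamd` installed by the FreeBSD port:

```
# Hypothetical root crontab entry: every 5 minutes, restart clamd
# if the service status check fails. Adjust the interval to taste.
*/5 * * * * service clamav-clamd status > /dev/null 2>&1 || service clamav-clamd restart
```

This does not fix the underlying crash, but it bounds the time mail sits in the queue.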

The server it runs on has plenty of RAM during standard operation:

Mem: 2198M Active, 20G Inact, 2936M Wired, 1412M Buf, 22G Free
Swap: 4096M Total, 4096M Free

All the limits are at their defaults:

Thu Oct 18 09:04:18 2018 -> +++ Started at Thu Oct 18 09:04:18 2018
Thu Oct 18 09:04:18 2018 -> Received 0 file descriptor(s) from systemd.
Thu Oct 18 09:04:18 2018 -> clamd daemon 0.100.2 (OS: freebsd11.2, ARCH: amd64, CPU: amd64)
Thu Oct 18 09:04:18 2018 -> Log file size limited to 1048576 bytes.
Thu Oct 18 09:04:18 2018 -> Reading databases from /var/db/clamav
Thu Oct 18 09:04:18 2018 -> Bytecode: Security mode set to "TrustSigned".
Thu Oct 18 09:04:38 2018 -> Loaded 6691209 signatures.
Thu Oct 18 09:04:41 2018 -> LOCAL: Unix socket file /var/run/clamav/clamd.sock
Thu Oct 18 09:04:41 2018 -> LOCAL: Setting connection queue length to 200
Thu Oct 18 09:04:41 2018 -> Limits: Global size limit set to 104857600 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: File size limit set to 26214400 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: Recursion level limit set to 16.
Thu Oct 18 09:04:41 2018 -> Limits: Files limit set to 10000.
Thu Oct 18 09:04:41 2018 -> Limits: Core-dump limit is 9223372036854775807.
Thu Oct 18 09:04:41 2018 -> Limits: MaxEmbeddedPE limit set to 10485760 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: MaxHTMLNormalize limit set to 10485760 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: MaxHTMLNoTags limit set to 2097152 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: MaxScriptNormalize limit set to 5242880 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: MaxZipTypeRcg limit set to 1048576 bytes.
Thu Oct 18 09:04:41 2018 -> Limits: MaxPartitions limit set to 50.
Thu Oct 18 09:04:41 2018 -> Limits: MaxIconsPE limit set to 100.
Thu Oct 18 09:04:41 2018 -> Limits: MaxRecHWP3 limit set to 16.
Thu Oct 18 09:04:41 2018 -> Limits: PCREMatchLimit limit set to 100000.
Thu Oct 18 09:04:41 2018 -> Limits: PCRERecMatchLimit limit set to 5000.
Thu Oct 18 09:04:41 2018 -> Limits: PCREMaxFileSize limit set to 26214400.

Any help much appreciated.
Comment 1 Larry Rosenman freebsd_committer 2018-10-18 11:29:24 UTC
Is /var/amavis/tmp/amavis-20181018T074152-69671-oHlEltOR/parts/p009 on disk or in memory? How much space is allowed for that filesystem?
Comment 2 Marko Cupać 2018-10-18 12:01:44 UTC
(In reply to Larry Rosenman from comment #1)

It's on disk. No restrictions on size, plenty of space. All the daemon components of the mail server (postfix, amavisd-new, clamav-clamd) run in the same fat jail.

df -h in jail:

pacija@mx1:~ % df -h
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/da0p2    1.1T    421G    571G    42%    /
Comment 3 Larry Rosenman freebsd_committer 2018-10-18 12:07:33 UTC
Do we know how big the mail(s) that cause the issue are, and do we have a parallelism issue here (i.e. many large mails being scanned)?
Comment 4 Marko Cupać 2018-10-18 14:36:50 UTC
(In reply to Larry Rosenman from comment #3)

The email should not be bigger than the global postfix limit of ~25 MB:

message_size_limit = 25600000
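As a quick sanity check (values copied from the postfix setting above and from the clamd startup log earlier in this report), the postfix message size limit fits under clamd's per-file limit:

```python
# Values taken from the configs/logs in this report.
postfix_message_size_limit = 25_600_000  # bytes, postfix message_size_limit
clamd_max_filesize = 26_214_400          # bytes, "File size limit" in the clamd startup log

# The postfix limit leaves a little headroom under the clamd per-file limit,
# so a maximum-size message should not trip the file size limit by itself.
headroom = clamd_max_filesize - postfix_message_size_limit
print(headroom)  # 614400 bytes
```

So an oversized message alone does not obviously explain the allocation failure.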

Concurrency should be no more than 4, as I have 4 servers set both in postfix's master.cf for amavisfeed:

amavisfeed	unix	-	-	n	-	4	lmtp
	-o lmtp_data_done_timeout=1200
	-o lmtp_send_xforward_command=yes
	-o disable_dns_lookups=yes
	-o max_use=20

...and in amavisd.conf:

$max_servers = 4;

These are the last lines from amavis.log before clamav died (anonymized IP and email addresses):

Oct 18 07:55:23 mx1 amavis[15164]: (15164-06) Blocked BANNED (application/msword,.rtf,VLS - 30281.doc) {BouncedInbound,Quarantined}, [IP.AD.DR.ESS]:37981 [IP.AD.DR.ESS] ESMTP/LMTP <sender@example.org> -> <recipient1@otherexample.org>,<recipient2@otherexample.org>, (ESMTPS://[IP.AD.DR.ESS]:37981), quarantine: banned-dE_S3_XRtH9L, Queue-ID: 06126630C0E7, Message-ID: <06e5865c85dffeefee6cb6f3ff8ac4bb@example.org>, mail_id: dE_S3_XRtH9L, b: LkfGpXAsC, Hits: -, size: 2422394, Subject: "FW: Scan from a Samsung MFP", From: <sender@example.org>, User-Agent: Roundcube_Webmail/1.3.6, helo=server.example.org, b.key=(?^ix:.\\.(doc|docb|docm|dot|dotm|dotx|jar|pot|potm|ppam|pps|ppsm|ppt)$), b.com=(?^ix:.\\.(doc|docb|docm|dot|dotm|dotx|jar|pot|potm|ppam|pps|ppsm|ppt)$), b.rhs=1, b.parts=P=p004,L=1,M=multipart/mixed | P=p003,L=1/2,M=application/msword,T=rtf,N=VLS - 30281.doc, 2028 ms
Oct 18 07:56:02 mx1 amavis[69671]: (69671-11) (!)run_av (ClamAV-clamd) FAILED - unexpected , output="/var/amavis/tmp/amavis-20181018T074152-69671-oHlEltOR/parts/p009: Can't allocate memory ERROR\n"

So, if this is indeed the email that crashes clamav, a more precise answer to your first question is: the mail that causes the issue is 2422394 bytes.

Now that I look at it, why was this sent to clamav for scanning at all, when it was rejected based on the amavisd-new "$banned_filename_re" rule, which should come first?
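For reference, the b.key pattern from the log entry above does match the blocked attachment name (Perl's `(?^ix:...)` group means case-insensitive, extended mode). A small sketch, translated into Python for illustration:

```python
import re

# Translation of the amavis b.key pattern
#   (?^ix:.\.(doc|docb|docm|...)$)
# into Python: case-insensitive match on banned extensions.
banned_re = re.compile(
    r".\.(doc|docb|docm|dot|dotm|dotx|jar|pot|potm|ppam|pps|ppsm|ppt)$",
    re.IGNORECASE,
)

print(bool(banned_re.search("VLS - 30281.doc")))  # True: the blocked attachment
print(bool(banned_re.search("report.pdf")))       # False: not a banned extension
```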

Still, clamav shouldn't crash just like that.

Thank you for helping me with this.
Comment 5 Marko Cupać 2018-10-22 08:27:48 UTC
Hi,

I am sorry, I have to correct myself: clamd.conf wasn't at its defaults at the time of the unexpected shutdown.

I reverted it to the defaults and haven't experienced the shutdown since.

I can provide the clamd.conf with which I experienced the shutdown, if needed.

Regards,
Comment 6 Larry Rosenman freebsd_committer 2018-12-04 13:52:25 UTC
Sorry for the delay in responding. 

Yes, the clamd.conf that caused the error would be appreciated.  

Also, did you report this upstream? (And does it still happen with 0.101.0?)
Comment 7 Larry Rosenman freebsd_committer 2019-02-05 18:24:50 UTC
-> new maintainer
Comment 8 Yasuhiro KIMURA 2019-03-18 10:35:10 UTC
Hello Marko,

I have been assigned this bug report because I took over maintainership of security/clamav from Larry. Would you mind telling me the current status of the problem?
Comment 9 Marko Cupać 2019-03-18 10:50:26 UTC
(In reply to Yasuhiro KIMURA from comment #8)

Hi Yasuhiro,

I am calling clamav-clamd from amavisd-new to scan email. I have no problems with it right now. At one point I changed my standard configuration, which is as simple as:

LogTime yes
TemporaryDirectory /var/tmp
DatabaseDirectory /var/db/clamav
LocalSocket /var/run/clamav/clamd.sock
FixStaleSocket yes
User clamav

...to one where I had set a lot of other options. Unfortunately, I don't have the problematic config anymore, and since I reverted to the above config I haven't experienced any problems.

As far as I am concerned this bug report can be closed.
Comment 10 Yasuhiro KIMURA 2019-03-18 11:02:39 UTC
(In reply to Marko Cupać from comment #9)

Thank you for the quick reply. I will close this bug report now.