Bug 184215

Summary: New port: sysutils/slurm-hpc
Product:   Ports & Packages
Component: Individual Port(s)
Reporter:  Jason W. Bacon <jwb>
Assignee:  Boris Samorodov <bsam>
Status:    Closed FIXED
Severity:  Affects Only Me
Priority:  Normal
Version:   Latest
Hardware:  Any
OS:        Any
Attachments:
  file.shar (flags: none)

Description Jason W. Bacon 2013-11-24 15:40:01 UTC
SLURM is an open-source resource manager designed for *nix clusters of all
sizes. It provides three key functions. First, it allocates exclusive and/or
non-exclusive access to resources (compute nodes) to users for some duration
of time so they can perform work. Second, it provides a framework for starting,
executing, and monitoring work (typically a parallel job) on a set of allocated
nodes. Finally, it arbitrates contention for resources by managing a queue of
pending work.
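
For readers unfamiliar with SLURM, a minimal usage sketch of those three
functions from a user's point of view (the job script, node counts and job ID
are hypothetical; the commands themselves are installed by this port):

    # run a parallel job interactively on an allocation of 2 nodes / 4 tasks
    srun -N 2 -n 4 ./my_mpi_app

    # or submit the same work as a batch script, then monitor and manage it
    sbatch --nodes=2 --ntasks=4 job.sh
    squeue              # list pending and running jobs
    scancel <jobid>     # remove a job from the queue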

This PR deprecates PR #177753.

Fix: Patch attached with submission follows:
Comment 1 Boris Samorodov 2013-11-24 17:43:52 UTC
Responsible Changed
From-To: freebsd-ports-bugs->bsam

Take.
Comment 2 dfilter service 2013-11-24 20:08:22 UTC
Author: bsam
Date: Sun Nov 24 20:08:15 2013
New Revision: 334785
URL: http://svnweb.freebsd.org/changeset/ports/334785

Log:
  Reserve group and user IDs for the upcoming sysutils/slurm-hpc port.
  
  PR:		ports/184215
  Submitted by:	Jason Bacon <jwbacon@tds.net>

Modified:
  head/GIDs
  head/UIDs

Modified: head/GIDs
==============================================================================
--- head/GIDs	Sun Nov 24 20:03:26 2013	(r334784)
+++ head/GIDs	Sun Nov 24 20:08:15 2013	(r334785)
@@ -161,6 +161,7 @@ callweaver:*:444:
 courier:*:465:
 condor:*:466:
 netmon:*:467:
+slurm:*:468:
 _bbstored:*:505:
 radmind:*:506:
 skkserv:*:507:

Modified: head/UIDs
==============================================================================
--- head/UIDs	Sun Nov 24 20:03:26 2013	(r334784)
+++ head/UIDs	Sun Nov 24 20:08:15 2013	(r334785)
@@ -169,6 +169,7 @@ callweaver:*:444:444::0:0:Callweaver acc
 courier:*:465:465::0:0:Courier Mail Server:/nonexistent:/usr/sbin/nologin
 condor:*:466:466::0:0:& user:/home/condor:/usr/sbin/nologin
 netmon:*:467:467::0:0:Network monitor account:/var/netmon:/usr/sbin/nologin
+slurm:*:468:468::0:0:SLURM Daemon:/home/slurm:/usr/sbin/nologin
 _bbstored:*:505:505::0:0:Box Backup Store Daemon:/nonexistent:/usr/sbin/nologin
 radmind:*:506:506::0:0:radmind User:/var/radmind:/usr/sbin/nologin
 skkserv:*:507:507::0:0:skkserv User:/nonexistent:/usr/sbin/nologin
Comment 3 dfilter service 2013-11-24 20:23:13 UTC
Author: bsam
Date: Sun Nov 24 20:23:02 2013
New Revision: 334788
URL: http://svnweb.freebsd.org/changeset/ports/334788

Log:
  SLURM is an open-source resource manager designed for *nix clusters of all
  sizes. It provides three key functions. First it allocates exclusive and/or
  non-exclusive access to resources (computer nodes) to users for some duration
  of time so they can perform work. Second, it provides a framework for starting,
  executing, and monitoring work (typically a parallel job) on a set of allocated
  nodes. Finally, it arbitrates contention for resources by managing a queue of
  pending work.
  
  WWW: https://computing.llnl.gov/linux/slurm/
  
  PR:		ports/184215
  Submitted by:	Jason Bacon <jwbacon@tds.net>

Added:
  head/sysutils/slurm-hpc/
  head/sysutils/slurm-hpc/Makefile   (contents, props changed)
  head/sysutils/slurm-hpc/distinfo   (contents, props changed)
  head/sysutils/slurm-hpc/files/
  head/sysutils/slurm-hpc/files/patch-configure   (contents, props changed)
  head/sysutils/slurm-hpc/files/patch-src-plugins-acct_gather_filesystem-lustre-acct_gather_filesystem_lustre.c   (contents, props changed)
  head/sysutils/slurm-hpc/files/patch-src-plugins-select-cons_res-dist_tasks.c   (contents, props changed)
  head/sysutils/slurm-hpc/files/patch-src-plugins-task-cgroup-task_cgroup_cpuset.c   (contents, props changed)
  head/sysutils/slurm-hpc/files/slurm.conf.in   (contents, props changed)
  head/sysutils/slurm-hpc/files/slurmctld.in   (contents, props changed)
  head/sysutils/slurm-hpc/files/slurmd.in   (contents, props changed)
  head/sysutils/slurm-hpc/pkg-descr   (contents, props changed)
  head/sysutils/slurm-hpc/pkg-plist   (contents, props changed)
Modified:
  head/sysutils/Makefile

Modified: head/sysutils/Makefile
==============================================================================
--- head/sysutils/Makefile	Sun Nov 24 20:20:49 2013	(r334787)
+++ head/sysutils/Makefile	Sun Nov 24 20:23:02 2013	(r334788)
@@ -871,6 +871,7 @@
     SUBDIR += slmon
     SUBDIR += sloth
     SUBDIR += slst
+    SUBDIR += slurm-hpc
     SUBDIR += smartmontools
     SUBDIR += smp_utils
     SUBDIR += snap

Added: head/sysutils/slurm-hpc/Makefile
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/Makefile	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,87 @@
+# Created by: Jason Bacon <jwbacon@tds.net>
+# $FreeBSD$
+
+PORTNAME=	slurm
+PORTVERSION=	2.6.4
+CATEGORIES=	sysutils
+MASTER_SITES=	http://www.schedmd.com/download/archive/ \
+		http://www.schedmd.com/download/latest/ \
+		http://www.schedmd.com/download/development/
+
+MAINTAINER=	jwbacon@tds.net
+COMMENT=	Simple Linux Utility for Resource Management
+
+LICENSE=	GPLv1
+
+LIB_DEPENDS=	libsysinfo.so:${PORTSDIR}/devel/libsysinfo \
+		libhwloc.so:${PORTSDIR}/devel/hwloc \
+		libmunge.so:${PORTSDIR}/security/munge \
+		librrd.so:${PORTSDIR}/databases/rrdtool
+# Testing for hdf5.so is insufficient.  It will accept hdf5 1.6 and
+# slurm requires hdf5 1.8.  h5copy is present only in 1.8.
+BUILD_DEPENDS+=	${LOCALBASE}/bin/h5copy:${PORTSDIR}/science/hdf5-18
+RUN_DEPENDS+=	${BUILD_DEPENDS}
+
+USE_BZIP2=	yes
+USE_LDCONFIG=	yes
+GNU_CONFIGURE=	yes
+USE_PYTHON=	yes
+USES=		perl5 gmake
+
+OPTIONS_DEFINE=	DOCS MYSQL PGSQL GTK2
+
+USERS=		slurm
+GROUPS=		${USERS}
+
+USE_RC_SUBR=	slurmctld slurmd
+SUB_FILES+=	slurm.conf
+
+# This is a new and complex port.  Allow debugging.
+STRIP_CMD=	# NONE
+CFLAGS+=	-I${LOCALBASE}/include -g -O1
+LDFLAGS+=	-L${LOCALBASE}/lib -lsysinfo -lkvm
+
+post-install:
+	${INSTALL_DATA} ${WRKDIR}/slurm.conf ${STAGEDIR}${PREFIX}/etc/slurm.conf.sample
+
+.include <bsd.port.options.mk>
+
+.if ${PORT_OPTIONS:MMYSQL}
+USE_MYSQL=	yes	# Job accounting
+PLIST_FILES+=	lib/slurm/accounting_storage_mysql.a \
+		lib/slurm/accounting_storage_mysql.la \
+		lib/slurm/accounting_storage_mysql.so \
+		lib/slurm/jobcomp_mysql.a \
+		lib/slurm/jobcomp_mysql.la \
+		lib/slurm/jobcomp_mysql.so
+.else
+# Can't disable configure test, so make it fail
+CONFIGURE_ARGS+=--with-mysql_config=/nomysql
+.endif
+
+.if ${PORT_OPTIONS:MPGSQL}
+USE_PGSQL=	yes	# Job accounting
+PLIST_FILES+=	lib/slurm/accounting_storage_pgsql.a \
+		lib/slurm/accounting_storage_pgsql.la \
+		lib/slurm/accounting_storage_pgsql.so \
+		lib/slurm/jobcomp_pgsql.a \
+		lib/slurm/jobcomp_pgsql.la \
+		lib/slurm/jobcomp_pgsql.so
+.else
+# Can't disable configure test, so make it fail
+CONFIGURE_ARGS+=--with-pg_config=/nopostgres
+.endif
+
+.if ${PORT_OPTIONS:MGTK2}
+# Note: Configure could not find pcre when building with no ports
+# preinstalled on 9.1-RELEASE.  Worked fine on second try.
+USE_GNOME=	glib20 gtk20	# sview
+PLIST_FILES+=	bin/sview
+.else
+# Can't disable configure test, so make it fail
+post-patch:
+	${REINPLACE_CMD} -e 's|min_gtk_version=2.7.1|min_gtk_version=200.7.1|' \
+		${WRKSRC}/configure
+.endif
+
+.include <bsd.port.mk>
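
For reference, a minimal sketch of how the port would typically be built and
installed once committed (assuming the default ports tree under /usr/ports and
PREFIX=/usr/local; the option names are the ones defined in OPTIONS_DEFINE
above):

    cd /usr/ports/sysutils/slurm-hpc
    make config           # toggle DOCS, MYSQL, PGSQL and GTK2 as desired
    make install clean
    # the post-install target stages the generated configuration as
    # /usr/local/etc/slurm.conf.sample; copy it to slurm.conf and edit it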

Added: head/sysutils/slurm-hpc/distinfo
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/distinfo	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,2 @@
+SHA256 (slurm-2.6.4.tar.bz2) = f44a9a80c502dba9809127dc2a04069fd7c87d6b1e10824fe254b2077f9adac8
+SIZE (slurm-2.6.4.tar.bz2) = 5954130

Added: head/sysutils/slurm-hpc/files/patch-configure
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/patch-configure	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,41 @@
+--- configure.orig	2013-09-10 16:44:33.000000000 -0500
++++ configure	2013-11-14 10:23:02.000000000 -0600
+@@ -21594,12 +21594,9 @@
+ main ()
+ {
+ 
+-					int main()
+-					{
+ 						MYSQL mysql;
+ 						(void) mysql_init(&mysql);
+ 						(void) mysql_close(&mysql);
+-					}
+ 
+   ;
+   return 0;
+@@ -21636,12 +21633,9 @@
+ main ()
+ {
+ 
+-						int main()
+-						{
+ 							MYSQL mysql;
+ 							(void) mysql_init(&mysql);
+ 							(void) mysql_close(&mysql);
+-						}
+ 
+   ;
+   return 0;
+@@ -21803,12 +21797,9 @@
+ main ()
+ {
+ 
+-				int main()
+-       	  			{
+ 					PGconn     *conn;
+ 					conn = PQconnectdb("dbname = postgres");
+        					(void) PQfinish(conn);
+-				}
+ 
+   ;
+   return 0;

Added: head/sysutils/slurm-hpc/files/patch-src-plugins-acct_gather_filesystem-lustre-acct_gather_filesystem_lustre.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/patch-src-plugins-acct_gather_filesystem-lustre-acct_gather_filesystem_lustre.c	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,11 @@
+--- src/plugins/acct_gather_filesystem/lustre/acct_gather_filesystem_lustre.c.orig	2013-09-10 16:44:33.000000000 -0500
++++ src/plugins/acct_gather_filesystem/lustre/acct_gather_filesystem_lustre.c	2013-11-14 10:23:02.000000000 -0600
+@@ -49,6 +49,8 @@
+ #include <getopt.h>
+ #include <netinet/in.h>
+ 
++#include <limits.h>
++
+ 
+ #include "src/common/slurm_xlator.h"
+ #include "src/common/slurm_acct_gather_filesystem.h"

Added: head/sysutils/slurm-hpc/files/patch-src-plugins-select-cons_res-dist_tasks.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/patch-src-plugins-select-cons_res-dist_tasks.c	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,68 @@
+--- src/plugins/select/cons_res/dist_tasks.c.orig	2013-09-10 16:44:33.000000000 -0500
++++ src/plugins/select/cons_res/dist_tasks.c	2013-11-14 10:23:02.000000000 -0600
+@@ -271,6 +271,30 @@
+ 	return SLURM_SUCCESS;
+ }
+ 
++// These were nested below, which is not legal in standard C
++
++	/* qsort compare function for ascending int list */
++	int _cmp_int_ascend (const void *a, const void *b)
++	{
++		return (*(int*)a - *(int*)b);
++	}
++
++	/* qsort compare function for descending int list */
++	int _cmp_int_descend (const void *a, const void *b)
++	{
++		return (*(int*)b - *(int*)a);
++	}
++
++	int* sockets_cpu_cnt;
++
++	/* qsort compare function for board combination socket
++	 * list */
++	int _cmp_sock (const void *a, const void *b)
++	{
++		 return (sockets_cpu_cnt[*(int*)b] -
++				 sockets_cpu_cnt[*(int*)a]);
++	}
++
+ /* sync up core bitmap with new CPU count using a best-fit approach
+  * on the available resources on each node
+  *
+@@ -298,7 +322,6 @@
+ 	int elig_idx, comb_brd_idx, sock_list_idx, comb_min, board_num;
+ 	int* boards_cpu_cnt;
+ 	int* sort_brds_cpu_cnt;
+-	int* sockets_cpu_cnt;
+ 	int* board_combs;
+ 	int* socket_list;
+ 	int* elig_brd_combs;
+@@ -316,26 +339,6 @@
+ 	uint64_t ncomb_brd;
+ 	bool sufficient,best_fit_sufficient;
+ 
+-	/* qsort compare function for ascending int list */
+-	int _cmp_int_ascend (const void *a, const void *b)
+-	{
+-		return (*(int*)a - *(int*)b);
+-	}
+-
+-	/* qsort compare function for descending int list */
+-	int _cmp_int_descend (const void *a, const void *b)
+-	{
+-		return (*(int*)b - *(int*)a);
+-	}
+-
+-	/* qsort compare function for board combination socket
+-	 * list */
+-	int _cmp_sock (const void *a, const void *b)
+-	{
+-		 return (sockets_cpu_cnt[*(int*)b] -
+-				 sockets_cpu_cnt[*(int*)a]);
+-	}
+-
+ 	if (!job_res)
+ 		return;
+ 

Added: head/sysutils/slurm-hpc/files/patch-src-plugins-task-cgroup-task_cgroup_cpuset.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/patch-src-plugins-task-cgroup-task_cgroup_cpuset.c	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,33 @@
+--- src/plugins/task/cgroup/task_cgroup_cpuset.c.orig	2013-11-14 10:56:33.000000000 -0600
++++ src/plugins/task/cgroup/task_cgroup_cpuset.c	2013-11-14 11:10:51.000000000 -0600
+@@ -59,7 +59,12 @@
+ 
+ #ifdef HAVE_HWLOC
+ #include <hwloc.h>
++#if !defined(__FreeBSD__)
+ #include <hwloc/glibc-sched.h>
++#else
++// For cpuset
++#include <pthread_np.h>
++#endif
+ 
+ # if HWLOC_API_VERSION <= 0x00010000
+ /* After this version the cpuset structure and all it's functions
+@@ -714,7 +719,7 @@
+ 	hwloc_obj_type_t req_hwtype;
+ 
+ 	size_t tssize;
+-	cpu_set_t ts;
++	cpuset_t ts;
+ 
+ 	bind_type = job->cpu_bind_type ;
+ 	if (conf->task_plugin_param & CPU_BIND_VERBOSE ||
+@@ -900,7 +905,7 @@
+ 
+ 		hwloc_bitmap_asprintf(&str, cpuset);
+ 
+-		tssize = sizeof(cpu_set_t);
++		tssize = sizeof(cpuset_t);
+ 		if (hwloc_cpuset_to_glibc_sched_affinity(topology,cpuset,
+ 							 &ts,tssize) == 0) {
+ 			fstatus = SLURM_SUCCESS;

Added: head/sysutils/slurm-hpc/files/slurm.conf.in
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/slurm.conf.in	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,169 @@
+# slurm.conf file generated by configurator.html.
+# Put this file on all nodes of your cluster.
+# See the slurm.conf man page for more information.
+#
+ControlMachine=%%CONTROL_MACHINE%%
+#ControlAddr=
+#BackupController=%%BACKUP_CONTROL_MACHINE%%
+#BackupAddr=
+#
+AuthType=auth/munge
+CacheGroups=0
+#CheckpointType=checkpoint/none
+CryptoType=crypto/munge
+#DisableRootJobs=NO
+#EnforcePartLimits=NO
+#Epilog=
+#EpilogSlurmctld=
+#FirstJobId=1
+#MaxJobId=999999
+#GresTypes=
+#GroupUpdateForce=0
+#GroupUpdateTime=600
+#JobCheckpointDir=/var/slurm/checkpoint
+#JobCredentialPrivateKey=
+#JobCredentialPublicCertificate=
+#JobFileAppend=0
+#JobRequeue=1
+#JobSubmitPlugins=1
+#KillOnBadExit=0
+#LaunchType=launch/slurm
+#Licenses=foo*4,bar
+MailProg=/usr/bin/mail
+#MaxJobCount=5000
+#MaxStepCount=40000
+#MaxTasksPerNode=128
+MpiDefault=none
+#MpiParams=ports=#-#
+#PluginDir=
+#PlugStackConfig=
+#PrivateData=jobs
+ProctrackType=proctrack/pgid
+#Prolog=
+#PrologSlurmctld=
+#PropagatePrioProcess=0
+#PropagateResourceLimits=
+# Prevent head node limits from being applied to jobs!
+PropagateResourceLimitsExcept=ALL
+#RebootProgram=
+ReturnToService=1
+#SallocDefaultCommand=
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmctldPort=6817
+SlurmdPidFile=/var/run/slurmd.pid
+SlurmdPort=6818
+SlurmdSpoolDir=/var/spool/slurmd
+SlurmUser=slurm
+#SlurmdUser=root
+#SrunEpilog=
+#SrunProlog=
+StateSaveLocation=/home/slurm/slurmctld
+SwitchType=switch/none
+#TaskEpilog=
+TaskPlugin=task/none
+#TaskPluginParam=
+#TaskProlog=
+#TopologyPlugin=topology/tree
+#TmpFs=/tmp
+#TrackWCKey=no
+#TreeWidth=
+#UnkillableStepProgram=
+#UsePAM=0
+#
+#
+# TIMERS
+#BatchStartTimeout=10
+#CompleteWait=0
+#EpilogMsgTime=2000
+#GetEnvTimeout=2
+#HealthCheckInterval=0
+#HealthCheckProgram=
+InactiveLimit=0
+KillWait=30
+#MessageTimeout=10
+#ResvOverRun=0
+MinJobAge=300
+#OverTimeLimit=0
+SlurmctldTimeout=120
+SlurmdTimeout=300
+#UnkillableStepTimeout=60
+#VSizeFactor=0
+Waittime=0
+#
+#
+# SCHEDULING
+#DefMemPerCPU=0
+FastSchedule=1
+#MaxMemPerCPU=0
+#SchedulerRootFilter=1
+#SchedulerTimeSlice=30
+SchedulerType=sched/backfill
+SchedulerPort=7321
+SelectType=select/cons_res
+#SelectTypeParameters=
+#
+#
+# JOB PRIORITY
+#PriorityType=priority/basic
+#PriorityDecayHalfLife=
+#PriorityCalcPeriod=
+#PriorityFavorSmall=
+#PriorityMaxAge=
+#PriorityUsageResetPeriod=
+#PriorityWeightAge=
+#PriorityWeightFairshare=
+#PriorityWeightJobSize=
+#PriorityWeightPartition=
+#PriorityWeightQOS=
+#
+#
+# LOGGING AND ACCOUNTING
+#AccountingStorageEnforce=0
+#AccountingStorageHost=
+#AccountingStorageLoc=
+#AccountingStoragePass=
+#AccountingStoragePort=
+AccountingStorageType=accounting_storage/none
+#AccountingStorageUser=
+AccountingStoreJobComment=YES
+ClusterName=cluster
+#DebugFlags=
+#JobCompHost=
+#JobCompLoc=
+#JobCompPass=
+#JobCompPort=
+JobCompType=jobcomp/none
+#JobCompUser=
+JobAcctGatherFrequency=30
+JobAcctGatherType=jobacct_gather/none
+SlurmctldDebug=5
+SlurmctldLogFile=/var/log/slurmctld
+SlurmdDebug=5
+SlurmdLogFile=/var/log/slurmd
+#SlurmSchedLogFile=
+#SlurmSchedLogLevel=
+#
+#
+# POWER SAVE SUPPORT FOR IDLE NODES (optional)
+#SuspendProgram=
+#ResumeProgram=
+#SuspendTimeout=
+#ResumeTimeout=
+#ResumeRate=
+#SuspendExcNodes=
+#SuspendExcParts=
+#SuspendRate=
+#SuspendTime=
+#
+#
+# COMPUTE NODES
+
+#############################################################################
+# Note: Using CPUs=2 or Sockets=2 causes slurmctld to seg fault on FreeBSD.
+#       Use Sockets=1, CoresPerSocket=total-cores-in-node, and
+#       ThreadsPerCore=N, even if your motherboard has more than 1 socket.
+#	This issue is related to get_cpuinfo() and is being investigated.
+#############################################################################
+
+NodeName=compute-[001-002] Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
+PartitionName=default-partition Nodes=compute-[001-002] Default=YES MaxTime=INFINITE State=UP
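
As a concrete illustration of the Sockets=1 advice in the comment block above,
a hypothetical definition for an imaginary 8-core node named compute-001
(names and counts are placeholders; once installed the file lives at
${PREFIX}/etc/slurm.conf, typically /usr/local/etc/slurm.conf):

    # describe the node as one socket with 8 cores, even if the motherboard
    # actually has 2 sockets x 4 cores, to avoid the get_cpuinfo() issue
    cat >> /usr/local/etc/slurm.conf <<'EOF'
    NodeName=compute-001 Sockets=1 CoresPerSocket=8 ThreadsPerCore=1 State=UNKNOWN
    PartitionName=default-partition Nodes=compute-001 Default=YES MaxTime=INFINITE State=UP
    EOF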

Added: head/sysutils/slurm-hpc/files/slurmctld.in
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/slurmctld.in	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+# PROVIDE: slurmctld
+# REQUIRE: DAEMON munge
+# BEFORE: LOGIN
+# KEYWORD: shutdown
+#
+# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
+# to enable this service:
+#
+# slurmctld_enable (bool):   Set to NO by default.
+#               Set it to YES to enable slurmctld.
+#
+
+. /etc/rc.subr
+
+name="slurmctld"
+rcvar=slurmctld_enable
+
+pidfile=/var/run/$name.pid
+
+load_rc_config $name
+
+: ${slurmctld_enable="NO"}
+
+start_cmd=slurmctld_start
+stop_cmd=slurmctld_stop
+
+slurmctld_start() {
+    checkyesno slurmctld_enable && echo "Starting $name." && \
+	%%PREFIX%%/sbin/$name $slurmctld_flags
+}
+
+slurmctld_stop() {
+    if [ -e $pidfile ]; then
+        checkyesno slurmctld_enable && echo "Stopping $name." && \
+	    kill `cat $pidfile`
+    else
+	killall $name
+    fi
+}
+
+run_rc_command "$1"

Added: head/sysutils/slurm-hpc/files/slurmd.in
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/files/slurmd.in	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+# PROVIDE: slurmd
+# REQUIRE: DAEMON munge
+# BEFORE: LOGIN
+# KEYWORD: shutdown
+#
+# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
+# to enable this service:
+#
+# slurmd_enable (bool):   Set to NO by default.
+#               Set it to YES to enable slurmd.
+#
+
+. /etc/rc.subr
+
+name="slurmd"
+rcvar=slurmd_enable
+
+pidfile=/var/run/$name.pid
+
+load_rc_config $name
+
+: ${slurmd_enable="NO"}
+
+start_cmd=slurmd_start
+stop_cmd=slurmd_stop
+
+slurmd_start() {
+    checkyesno slurmd_enable && echo "Starting $name." && \
+	%%PREFIX%%/sbin/$name $slurmd_flags
+}
+
+slurmd_stop() {
+    if [ -e $pidfile ]; then
+        checkyesno slurmd_enable && echo "Stopping $name." && \
+	    kill `cat $pidfile`
+    else
+        killall $name
+    fi
+}
+
+run_rc_command "$1"
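
Taken together, the two rc.d scripts above would typically be enabled and
started along these lines (a sketch only; it assumes slurm.conf and munge are
already set up, with slurmctld on the controller node and slurmd on each
compute node):

    # enable the daemons as described in the script headers
    echo 'slurmctld_enable="YES"' >> /etc/rc.conf
    echo 'slurmd_enable="YES"'    >> /etc/rc.conf

    service slurmctld start
    service slurmd start

    # verify that the compute nodes registered with the controller
    sinfo
    scontrol show nodes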

Added: head/sysutils/slurm-hpc/pkg-descr
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/pkg-descr	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,9 @@
+SLURM is an open-source resource manager designed for *nix clusters of all
+sizes. It provides three key functions. First it allocates exclusive and/or
+non-exclusive access to resources (computer nodes) to users for some duration
+of time so they can perform work. Second, it provides a framework for starting,
+executing, and monitoring work (typically a parallel job) on a set of allocated
+nodes. Finally, it arbitrates contention for resources by managing a queue of
+pending work.
+
+WWW: https://computing.llnl.gov/linux/slurm/

Added: head/sysutils/slurm-hpc/pkg-plist
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/slurm-hpc/pkg-plist	Sun Nov 24 20:23:02 2013	(r334788)
@@ -0,0 +1,554 @@
+bin/sacct
+bin/sacctmgr
+bin/salloc
+bin/sattach
+bin/sbatch
+bin/sbcast
+bin/scancel
+bin/scontrol
+bin/sdiag
+bin/sh5util
+bin/sinfo
+bin/smap
+bin/sprio
+bin/squeue
+bin/sreport
+bin/srun
+bin/sshare
+bin/sstat
+bin/strigger
+@unexec if cmp -s %D/etc/slurm.conf.sample %D/etc/slurm.conf; then rm -f %D/etc/slurm.conf; fi
+etc/slurm.conf.sample
+@exec if [ ! -f %D/etc/slurm.conf ] ; then cp -p %D/%F %B/slurm.conf; fi
+include/slurm/pmi.h
+include/slurm/slurm.h
+include/slurm/slurm_errno.h
+include/slurm/slurmdb.h
+include/slurm/spank.h
+lib/libpmi.a
+lib/libpmi.la
+lib/libpmi.so
+lib/libpmi.so.0
+lib/libslurm.a
+lib/libslurm.la
+lib/libslurm.so
+lib/libslurm.so.26
+lib/libslurmdb.a
+lib/libslurmdb.la
+lib/libslurmdb.so
+lib/libslurmdb.so.26
+lib/slurm/accounting_storage_filetxt.a
+lib/slurm/accounting_storage_filetxt.la
+lib/slurm/accounting_storage_filetxt.so
+lib/slurm/accounting_storage_none.a
+lib/slurm/accounting_storage_none.la
+lib/slurm/accounting_storage_none.so
+lib/slurm/accounting_storage_slurmdbd.a
+lib/slurm/accounting_storage_slurmdbd.la
+lib/slurm/accounting_storage_slurmdbd.so
+lib/slurm/acct_gather_energy_none.a
+lib/slurm/acct_gather_energy_none.la
+lib/slurm/acct_gather_energy_none.so
+lib/slurm/acct_gather_energy_rapl.a
+lib/slurm/acct_gather_energy_rapl.la
+lib/slurm/acct_gather_energy_rapl.so
+lib/slurm/acct_gather_filesystem_lustre.a
+lib/slurm/acct_gather_filesystem_lustre.la
+lib/slurm/acct_gather_filesystem_lustre.so
+lib/slurm/acct_gather_filesystem_none.a
+lib/slurm/acct_gather_filesystem_none.la
+lib/slurm/acct_gather_filesystem_none.so
+lib/slurm/acct_gather_infiniband_none.a
+lib/slurm/acct_gather_infiniband_none.la
+lib/slurm/acct_gather_infiniband_none.so
+lib/slurm/acct_gather_profile_hdf5.a
+lib/slurm/acct_gather_profile_hdf5.la
+lib/slurm/acct_gather_profile_hdf5.so
+lib/slurm/acct_gather_profile_none.a
+lib/slurm/acct_gather_profile_none.la
+lib/slurm/acct_gather_profile_none.so
+lib/slurm/auth_munge.a
+lib/slurm/auth_munge.la
+lib/slurm/auth_munge.so
+lib/slurm/auth_none.a
+lib/slurm/auth_none.la
+lib/slurm/auth_none.so
+lib/slurm/checkpoint_none.a
+lib/slurm/checkpoint_none.la
+lib/slurm/checkpoint_none.so
+lib/slurm/checkpoint_ompi.a
+lib/slurm/checkpoint_ompi.la
+lib/slurm/checkpoint_ompi.so
+lib/slurm/crypto_munge.a
+lib/slurm/crypto_munge.la
+lib/slurm/crypto_munge.so
+lib/slurm/crypto_openssl.a
+lib/slurm/crypto_openssl.la
+lib/slurm/crypto_openssl.so
+lib/slurm/ext_sensors_none.a
+lib/slurm/ext_sensors_none.la
+lib/slurm/ext_sensors_none.so
+lib/slurm/ext_sensors_rrd.a
+lib/slurm/ext_sensors_rrd.la
+lib/slurm/ext_sensors_rrd.so
+lib/slurm/gres_gpu.a
+lib/slurm/gres_gpu.la
+lib/slurm/gres_gpu.so
+lib/slurm/gres_mic.a
+lib/slurm/gres_mic.la
+lib/slurm/gres_mic.so
+lib/slurm/gres_nic.a
+lib/slurm/gres_nic.la
+lib/slurm/gres_nic.so
+lib/slurm/job_submit_all_partitions.a
+lib/slurm/job_submit_all_partitions.la
+lib/slurm/job_submit_all_partitions.so
+lib/slurm/job_submit_cnode.a
+lib/slurm/job_submit_cnode.la
+lib/slurm/job_submit_cnode.so
+lib/slurm/job_submit_defaults.a
+lib/slurm/job_submit_defaults.la
+lib/slurm/job_submit_defaults.so
+lib/slurm/job_submit_logging.a
+lib/slurm/job_submit_logging.la
+lib/slurm/job_submit_logging.so
+lib/slurm/job_submit_partition.a
+lib/slurm/job_submit_partition.la
+lib/slurm/job_submit_partition.so
+lib/slurm/job_submit_pbs.a
+lib/slurm/job_submit_pbs.la
+lib/slurm/job_submit_pbs.so
+lib/slurm/job_submit_require_timelimit.a
+lib/slurm/job_submit_require_timelimit.la
+lib/slurm/job_submit_require_timelimit.so
+lib/slurm/jobacct_gather_aix.a
+lib/slurm/jobacct_gather_aix.la
+lib/slurm/jobacct_gather_aix.so
+lib/slurm/jobacct_gather_cgroup.a
+lib/slurm/jobacct_gather_cgroup.la
+lib/slurm/jobacct_gather_cgroup.so
+lib/slurm/jobacct_gather_linux.a
+lib/slurm/jobacct_gather_linux.la
+lib/slurm/jobacct_gather_linux.so
+lib/slurm/jobacct_gather_none.a
+lib/slurm/jobacct_gather_none.la
+lib/slurm/jobacct_gather_none.so
+lib/slurm/jobcomp_filetxt.a
+lib/slurm/jobcomp_filetxt.la
+lib/slurm/jobcomp_filetxt.so
+lib/slurm/jobcomp_none.a
+lib/slurm/jobcomp_none.la
+lib/slurm/jobcomp_none.so
+lib/slurm/jobcomp_script.a
+lib/slurm/jobcomp_script.la
+lib/slurm/jobcomp_script.so
+lib/slurm/launch_slurm.a
+lib/slurm/launch_slurm.la
+lib/slurm/launch_slurm.so
+lib/slurm/mpi_lam.a
+lib/slurm/mpi_lam.la
+lib/slurm/mpi_lam.so
+lib/slurm/mpi_mpich1_p4.a
+lib/slurm/mpi_mpich1_p4.la
+lib/slurm/mpi_mpich1_p4.so
+lib/slurm/mpi_mpich1_shmem.a
+lib/slurm/mpi_mpich1_shmem.la
+lib/slurm/mpi_mpich1_shmem.so
+lib/slurm/mpi_mpichgm.a
+lib/slurm/mpi_mpichgm.la
+lib/slurm/mpi_mpichgm.so
+lib/slurm/mpi_mpichmx.a
+lib/slurm/mpi_mpichmx.la
+lib/slurm/mpi_mpichmx.so
+lib/slurm/mpi_mvapich.a
+lib/slurm/mpi_mvapich.la
+lib/slurm/mpi_mvapich.so
+lib/slurm/mpi_none.a
+lib/slurm/mpi_none.la
+lib/slurm/mpi_none.so
+lib/slurm/mpi_openmpi.a
+lib/slurm/mpi_openmpi.la
+lib/slurm/mpi_openmpi.so
+lib/slurm/mpi_pmi2.a
+lib/slurm/mpi_pmi2.la
+lib/slurm/mpi_pmi2.so
+lib/slurm/preempt_none.a
+lib/slurm/preempt_none.la
+lib/slurm/preempt_none.so
+lib/slurm/preempt_partition_prio.a
+lib/slurm/preempt_partition_prio.la
+lib/slurm/preempt_partition_prio.so
+lib/slurm/preempt_qos.a
+lib/slurm/preempt_qos.la
+lib/slurm/preempt_qos.so
+lib/slurm/priority_basic.a
+lib/slurm/priority_basic.la
+lib/slurm/priority_basic.so
+lib/slurm/priority_multifactor.a
+lib/slurm/priority_multifactor.la
+lib/slurm/priority_multifactor.so
+lib/slurm/proctrack_cgroup.a
+lib/slurm/proctrack_cgroup.la
+lib/slurm/proctrack_cgroup.so
+lib/slurm/proctrack_linuxproc.a
+lib/slurm/proctrack_linuxproc.la
+lib/slurm/proctrack_linuxproc.so
+lib/slurm/proctrack_pgid.a
+lib/slurm/proctrack_pgid.la
+lib/slurm/proctrack_pgid.so
+lib/slurm/sched_backfill.a
+lib/slurm/sched_backfill.la
+lib/slurm/sched_backfill.so
+lib/slurm/sched_builtin.a
+lib/slurm/sched_builtin.la
+lib/slurm/sched_builtin.so
+lib/slurm/sched_hold.a
+lib/slurm/sched_hold.la
+lib/slurm/sched_hold.so
+lib/slurm/sched_wiki.a
+lib/slurm/sched_wiki.la
+lib/slurm/sched_wiki.so
+lib/slurm/sched_wiki2.a
+lib/slurm/sched_wiki2.la
+lib/slurm/sched_wiki2.so
+lib/slurm/select_cons_res.a
+lib/slurm/select_cons_res.la
+lib/slurm/select_cons_res.so
+lib/slurm/select_cray.a
+lib/slurm/select_cray.la
+lib/slurm/select_cray.so
+lib/slurm/select_linear.a
+lib/slurm/select_linear.la
+lib/slurm/select_linear.so
+lib/slurm/select_serial.a
+lib/slurm/select_serial.la
+lib/slurm/select_serial.so
+lib/slurm/spank_pbs.a
+lib/slurm/spank_pbs.la
+lib/slurm/spank_pbs.so
+lib/slurm/src/sattach/sattach.wrapper.c
+lib/slurm/src/srun/srun.wrapper.c
+lib/slurm/switch_none.a
+lib/slurm/switch_none.la
+lib/slurm/switch_none.so
+lib/slurm/task_cgroup.a
+lib/slurm/task_cgroup.la
+lib/slurm/task_cgroup.so
+lib/slurm/task_none.a
+lib/slurm/task_none.la
+lib/slurm/task_none.so
+lib/slurm/topology_3d_torus.a
+lib/slurm/topology_3d_torus.la
+lib/slurm/topology_3d_torus.so
+lib/slurm/topology_node_rank.a
+lib/slurm/topology_node_rank.la
+lib/slurm/topology_node_rank.so
+lib/slurm/topology_none.a
+lib/slurm/topology_none.la
+lib/slurm/topology_none.so
+lib/slurm/topology_tree.a
+lib/slurm/topology_tree.la
+lib/slurm/topology_tree.so
+man/man1/sacct.1.gz
+man/man1/sacctmgr.1.gz
+man/man1/salloc.1.gz
+man/man1/sattach.1.gz
+man/man1/sbatch.1.gz
+man/man1/sbcast.1.gz
+man/man1/scancel.1.gz
+man/man1/scontrol.1.gz
+man/man1/sdiag.1.gz
+man/man1/sh5util.1.gz
+man/man1/sinfo.1.gz
+man/man1/slurm.1.gz
+man/man1/smap.1.gz
+man/man1/sprio.1.gz
+man/man1/squeue.1.gz
+man/man1/sreport.1.gz
+man/man1/srun.1.gz
+man/man1/srun_cr.1.gz
+man/man1/sshare.1.gz
+man/man1/sstat.1.gz
+man/man1/strigger.1.gz
+man/man1/sview.1.gz
+man/man3/slurm_allocate_resources.3.gz
+man/man3/slurm_allocate_resources_blocking.3.gz
+man/man3/slurm_allocation_lookup.3.gz
+man/man3/slurm_allocation_lookup_lite.3.gz
+man/man3/slurm_allocation_msg_thr_create.3.gz
+man/man3/slurm_allocation_msg_thr_destroy.3.gz
+man/man3/slurm_api_version.3.gz
+man/man3/slurm_checkpoint.3.gz
+man/man3/slurm_checkpoint_able.3.gz
+man/man3/slurm_checkpoint_complete.3.gz
+man/man3/slurm_checkpoint_create.3.gz
+man/man3/slurm_checkpoint_disable.3.gz
+man/man3/slurm_checkpoint_enable.3.gz
+man/man3/slurm_checkpoint_error.3.gz
+man/man3/slurm_checkpoint_failed.3.gz
+man/man3/slurm_checkpoint_restart.3.gz
+man/man3/slurm_checkpoint_task_complete.3.gz
+man/man3/slurm_checkpoint_tasks.3.gz
+man/man3/slurm_checkpoint_vacate.3.gz
+man/man3/slurm_clear_trigger.3.gz
+man/man3/slurm_complete_job.3.gz
+man/man3/slurm_confirm_allocation.3.gz
+man/man3/slurm_create_partition.3.gz
+man/man3/slurm_create_reservation.3.gz
+man/man3/slurm_delete_partition.3.gz
+man/man3/slurm_delete_reservation.3.gz
+man/man3/slurm_free_ctl_conf.3.gz
+man/man3/slurm_free_front_end_info_msg.3.gz
+man/man3/slurm_free_job_alloc_info_response_msg.3.gz
+man/man3/slurm_free_job_info_msg.3.gz
+man/man3/slurm_free_job_step_create_response_msg.3.gz
+man/man3/slurm_free_job_step_info_response_msg.3.gz
+man/man3/slurm_free_node_info.3.gz
+man/man3/slurm_free_node_info_msg.3.gz
+man/man3/slurm_free_partition_info.3.gz
+man/man3/slurm_free_partition_info_msg.3.gz
+man/man3/slurm_free_reservation_info_msg.3.gz
+man/man3/slurm_free_resource_allocation_response_msg.3.gz
+man/man3/slurm_free_slurmd_status.3.gz
+man/man3/slurm_free_submit_response_response_msg.3.gz
+man/man3/slurm_free_trigger_msg.3.gz
+man/man3/slurm_get_end_time.3.gz
+man/man3/slurm_get_errno.3.gz
+man/man3/slurm_get_job_steps.3.gz
+man/man3/slurm_get_rem_time.3.gz
+man/man3/slurm_get_select_jobinfo.3.gz
+man/man3/slurm_get_triggers.3.gz
+man/man3/slurm_hostlist_create.3.gz
+man/man3/slurm_hostlist_destroy.3.gz
+man/man3/slurm_hostlist_shift.3.gz
+man/man3/slurm_init_job_desc_msg.3.gz
+man/man3/slurm_init_part_desc_msg.3.gz
+man/man3/slurm_init_resv_desc_msg.3.gz
+man/man3/slurm_init_trigger_msg.3.gz
+man/man3/slurm_init_update_front_end_msg.3.gz
+man/man3/slurm_init_update_node_msg.3.gz
+man/man3/slurm_init_update_step_msg.3.gz
+man/man3/slurm_job_cpus_allocated_on_node.3.gz
+man/man3/slurm_job_cpus_allocated_on_node_id.3.gz
+man/man3/slurm_job_step_create.3.gz
+man/man3/slurm_job_step_launch_t_init.3.gz
+man/man3/slurm_job_step_layout_free.3.gz
+man/man3/slurm_job_step_layout_get.3.gz
+man/man3/slurm_job_will_run.3.gz
+man/man3/slurm_jobinfo_ctx_get.3.gz
+man/man3/slurm_kill_job.3.gz
+man/man3/slurm_kill_job_step.3.gz
+man/man3/slurm_load_ctl_conf.3.gz
+man/man3/slurm_load_front_end.3.gz
+man/man3/slurm_load_job.3.gz
+man/man3/slurm_load_job_user.3.gz
+man/man3/slurm_load_jobs.3.gz
+man/man3/slurm_load_node.3.gz
+man/man3/slurm_load_node_single.3.gz
+man/man3/slurm_load_partitions.3.gz
+man/man3/slurm_load_reservations.3.gz
+man/man3/slurm_load_slurmd_status.3.gz
+man/man3/slurm_notify_job.3.gz
+man/man3/slurm_perror.3.gz
+man/man3/slurm_pid2jobid.3.gz
+man/man3/slurm_ping.3.gz
+man/man3/slurm_print_ctl_conf.3.gz
+man/man3/slurm_print_front_end_info_msg.3.gz
+man/man3/slurm_print_front_end_table.3.gz
+man/man3/slurm_print_job_info.3.gz
+man/man3/slurm_print_job_info_msg.3.gz
+man/man3/slurm_print_job_step_info.3.gz
+man/man3/slurm_print_job_step_info_msg.3.gz
+man/man3/slurm_print_node_info_msg.3.gz
+man/man3/slurm_print_node_table.3.gz
+man/man3/slurm_print_partition_info.3.gz
+man/man3/slurm_print_partition_info_msg.3.gz
+man/man3/slurm_print_reservation_info.3.gz
+man/man3/slurm_print_reservation_info_msg.3.gz
+man/man3/slurm_print_slurmd_status.3.gz
+man/man3/slurm_read_hostfile.3.gz
+man/man3/slurm_reconfigure.3.gz
+man/man3/slurm_requeue.3.gz
+man/man3/slurm_resume.3.gz
+man/man3/slurm_set_debug_level.3.gz
+man/man3/slurm_set_trigger.3.gz
+man/man3/slurm_shutdown.3.gz
+man/man3/slurm_signal_job.3.gz
+man/man3/slurm_signal_job_step.3.gz
+man/man3/slurm_slurmd_status.3.gz
+man/man3/slurm_sprint_front_end_table.3.gz
+man/man3/slurm_sprint_job_info.3.gz
+man/man3/slurm_sprint_job_step_info.3.gz
+man/man3/slurm_sprint_node_table.3.gz
+man/man3/slurm_sprint_partition_info.3.gz
+man/man3/slurm_sprint_reservation_info.3.gz
+man/man3/slurm_step_ctx_create.3.gz
+man/man3/slurm_step_ctx_create_no_alloc.3.gz
+man/man3/slurm_step_ctx_daemon_per_node_hack.3.gz
+man/man3/slurm_step_ctx_destroy.3.gz
+man/man3/slurm_step_ctx_get.3.gz
+man/man3/slurm_step_ctx_params_t_init.3.gz
+man/man3/slurm_step_launch.3.gz
+man/man3/slurm_step_launch_abort.3.gz
+man/man3/slurm_step_launch_fwd_signal.3.gz
+man/man3/slurm_step_launch_wait_finish.3.gz
+man/man3/slurm_step_launch_wait_start.3.gz
+man/man3/slurm_strerror.3.gz
+man/man3/slurm_submit_batch_job.3.gz
+man/man3/slurm_suspend.3.gz
+man/man3/slurm_takeover.3.gz
+man/man3/slurm_terminate_job.3.gz
+man/man3/slurm_terminate_job_step.3.gz
+man/man3/slurm_update_front_end.3.gz
+man/man3/slurm_update_job.3.gz
+man/man3/slurm_update_node.3.gz
+man/man3/slurm_update_partition.3.gz
+man/man3/slurm_update_reservation.3.gz
+man/man3/slurm_update_step.3.gz
+man/man5/acct_gather.conf.5.gz
+man/man5/bluegene.conf.5.gz
+man/man5/cgroup.conf.5.gz
+man/man5/cray.conf.5.gz
+man/man5/ext_sensors.conf.5.gz
+man/man5/gres.conf.5.gz
+man/man5/slurm.conf.5.gz
+man/man5/slurmdbd.conf.5.gz
+man/man5/topology.conf.5.gz
+man/man5/wiki.conf.5.gz
+man/man8/slurmctld.8.gz
+man/man8/slurmd.8.gz
+man/man8/slurmdbd.8.gz
+man/man8/slurmstepd.8.gz
+man/man8/spank.8.gz
+sbin/slurmctld
+sbin/slurmd
+sbin/slurmdbd
+sbin/slurmstepd
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/Slurm_Entity.pdf
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/Slurm_Individual.pdf
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/accounting.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/accounting_storageplugins.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/acct_gather_energy_plugins.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/acct_gather_profile_plugins.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/add.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/allocation_pies.gif
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/api.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/arch.gif
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/authplugins.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/big_sys.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/bluegene.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/bull.jpg
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/cgroups.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/checkpoint_blcr.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/checkpoint_plugins.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/coding_style.pdf
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/configurator.easy.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/configurator.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/cons_res.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/cons_res_share.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/contributor.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/cpu_management.html
+%%PORTDOCS%%%%DOCSDIR%%-2.6.4/html/cray.html

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
Comment 4 Boris Samorodov 2013-11-24 21:12:51 UTC
State Changed
From-To: open->closed

Thanks for the port! 

It was really good, but the ports infrastructure is moving rapidly and is
sometimes not easy to keep up with, so I've made some needed changes:
. use both the name and e-mail address in the "Created by" field of the header;
. use the new LIB_DEPENDS syntax;
. place all non-options macros before including bsd.port.options.mk;
. add DOCS to OPTIONS_DEFINE;
. remove standard option defines (MYSQL and PGSQL);
. rename the GUI option to the standard GTK2 (it seems more meaningful here);
. treat the configuration file as per the FreeBSD Porter's Handbook
(7.3. Configuration Files);
. remove the MANx macros from the Makefile (they are deprecated with staging);
. remove the check for PORT_OPTIONS:MDOCS -- it is handled automatically
with staging;
. do not use the EXAMPLES option (it was used to install only one configuration
file);
. remove the display of pkg-message at post-install (useless with staging);
. in the end the pkg-message itself was removed, since it referred to the
sample configuration file that was removed from EXAMPLES.