Bug 191510

Summary: [zfs] ZFS doesn't use all available memory
Product: Base System
Reporter: vsjcfm
Component: kern
Assignee: Steven Hartland <smh>
Status: Closed FIXED
Severity: Affects Some People
CC: hal, smh, vsasjason
Priority: Normal    
Version: 9.3-RELEASE   
Hardware: amd64   
OS: Any   
Attachments:
        Memory graph
        Memory graph #2
        arc reclaim refactor (against releng/9.3)
        arc reclaim refactor (against releng/9.3)
        arc reclaim refactor (against releng/9.3)

Description vsjcfm 2014-06-30 08:37:55 UTC
I have a machine that serves some tens of tebibytes of big files over HTTP, via AIO, from ZFS. The machine has 256 G of RAM, but the ARC uses only 170-190 G.
Stats below:

root@cs0:~# fgrep " memory " /var/run/dmesg.boot
real memory  = 274877906944 (262144 MB)
avail memory = 265899143168 (253581 MB)
root@cs0:~# zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jun 30 11:31:57 2014
------------------------------------------------------------------------

System Information:

        Kernel Version:                         902001 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 9.2-RELEASE-p8 #0 r267147: Fri Jun 6 10:22:17 EEST 2014 root
11:31  up 15 days, 21:12, 1 user, load averages: 1,15 1,56 1,74

------------------------------------------------------------------------

System Memory:

        5.42%   13.47   GiB Active,     0.12%   300.54  MiB Inact
        77.71%  193.03  GiB Wired,      0.00%   0 Cache
        16.74%  41.59   GiB Free,       0.00%   3.00    MiB Gap

        Real Installed:                         256.00  GiB
        Real Available:                 99.98%  255.96  GiB
        Real Managed:                   97.04%  248.38  GiB

        Logical Total:                          256.00  GiB
        Logical Used:                   83.64%  214.12  GiB
        Logical Free:                   16.36%  41.88   GiB

Kernel Memory:                                  183.36  GiB
        Data:                           99.99%  183.35  GiB
        Text:                           0.01%   10.88   MiB

Kernel Memory Map:                              242.40  GiB
        Size:                           74.89%  181.54  GiB
        Free:                           25.11%  60.85   GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.14b
        Recycle Misses:                         2.38m
        Mutex Misses:                           3.15m
        Evict Skips:                            229.53m

ARC Size:                               74.90%  185.28  GiB
        Target Size: (Adaptive)         74.90%  185.28  GiB
        Min Size (Hard Limit):          12.50%  30.92   GiB
        Max Size (High Water):          8:1     247.38  GiB

ARC Size Breakdown:
        Recently Used Cache Size:       88.40%  163.78  GiB
        Frequently Used Cache Size:     11.60%  21.50   GiB

ARC Hash Breakdown:
        Elements Max:                           18.24m
        Elements Current:               99.44%  18.14m
        Collisions:                             783.39m
        Chain Max:                              22
        Chains:                                 3.87m

------------------------------------------------------------------------

ARC Efficiency:                                 5.80b
        Cache Hit Ratio:                78.92%  4.58b
        Cache Miss Ratio:               21.08%  1.22b
        Actual Hit Ratio:               58.85%  3.41b

        Data Demand Efficiency:         99.60%  1.55b
        Data Prefetch Efficiency:       43.26%  2.13b

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             23.60%  1.08b
          Most Recently Used:           24.06%  1.10b
          Most Frequently Used:         50.52%  2.31b
          Most Recently Used Ghost:     0.17%   7.61m
          Most Frequently Used Ghost:   1.66%   75.89m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  33.82%  1.55b
          Prefetch Data:                20.16%  922.83m
          Demand Metadata:              34.87%  1.60b
          Prefetch Metadata:            11.14%  510.16m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  0.50%   6.17m
          Prefetch Data:                98.95%  1.21b
          Demand Metadata:              0.54%   6.61m
          Prefetch Metadata:            0.00%   58.71k

------------------------------------------------------------------------

L2 ARC Summary: (DEGRADED)
        Passed Headroom:                        83.52m
        Tried Lock Failures:                    267.02m
        IO In Progress:                         841
        Low Memory Aborts:                      28
        Free on Write:                          3.35m
        Writes While Full:                      1.40m
        R/W Clashes:                            51.46k
        Bad Checksums:                          16
        IO Errors:                              0
        SPA Mismatch:                           53.09b

L2 ARC Size: (Adaptive)                         1.67    TiB
        Header Size:                    0.18%   3.01    GiB

L2 ARC Evicts:
        Lock Retries:                           63.60k
        Upon Reading:                           173

L2 ARC Breakdown:                               1.22b
        Hit Ratio:                      31.64%  386.94m
        Miss Ratio:                     68.36%  836.05m
        Feeds:                                  2.81m

L2 ARC Buffer:
        Bytes Scanned:                          15.61   PiB
        Buffer Iterations:                      2.81m
        List Iterations:                        151.58m
        NULL List Iterations:                   17.35k

L2 ARC Writes:
        Writes Sent:                    100.00% 2.67m

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 4.69b
        Hit Ratio:                      81.41%  3.82b
        Miss Ratio:                     18.59%  871.72m

        Colinear:                               871.72m
          Hit Ratio:                    0.02%   164.66k
          Miss Ratio:                   99.98%  871.55m

        Stride:                                 2.61b
          Hit Ratio:                    99.90%  2.61b
          Miss Ratio:                   0.10%   2.62m

DMU Misc:
        Reclaim:                                871.55m
          Successes:                    0.91%   7.97m
          Failures:                     99.09%  863.59m

        Streams:                                1.21b
          +Resets:                      0.07%   871.59k
          -Resets:                      99.93%  1.21b
          Bogus:                                0

------------------------------------------------------------------------

VDEV Cache Summary:                             10.23m
        Hit Ratio:                      9.34%   955.87k
        Miss Ratio:                     90.47%  9.26m
        Delegations:                    0.19%   19.15k

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            266698448896
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.arc_max                         265624707072
        vfs.zfs.arc_min                         33203088384
        vfs.zfs.arc_meta_used                   14496156952
        vfs.zfs.arc_meta_limit                  66406176768
        vfs.zfs.l2arc_write_max                 41943040
        vfs.zfs.l2arc_write_boost               83886080
        vfs.zfs.l2arc_headroom                  4
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_noprefetch                0
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.anon_size                       180224
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.mru_size                        158295406592
        vfs.zfs.mru_metadata_lsize              842248704
        vfs.zfs.mru_data_lsize                  156589069824
        vfs.zfs.mru_ghost_size                  35747599360
        vfs.zfs.mru_ghost_metadata_lsize        1232837120
        vfs.zfs.mru_ghost_data_lsize            34514762240
        vfs.zfs.mfu_size                        34995384832
        vfs.zfs.mfu_metadata_lsize              6317844992
        vfs.zfs.mfu_data_lsize                  27855011840
        vfs.zfs.mfu_ghost_size                  162725010432
        vfs.zfs.mfu_ghost_metadata_lsize        24083810304
        vfs.zfs.mfu_ghost_data_lsize            138641200128
        vfs.zfs.l2c_only_size                   1708927019520
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.no_write_throttle               0
        vfs.zfs.write_limit_shift               3
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_max                 34353957888
        vfs.zfs.write_limit_inflated            824494989312
        vfs.zfs.write_limit_override            0
        vfs.zfs.prefetch_disable                0
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.resilver_delay                  2
        vfs.zfs.scrub_delay                     4
        vfs.zfs.scan_idle                       50
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.mg_alloc_failures               36
        vfs.zfs.write_to_degraded               0
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.deadman_synctime                1000
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     10
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.cache.size                 20971520
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.time_shift                 29
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.trim_max_bytes             2147483648
        vfs.zfs.vdev.trim_max_pending           64
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.super_owner                     0
        vfs.zfs.debug                           0
        vfs.zfs.version.ioctl                   3
        vfs.zfs.version.acl                     1
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.zpl                     5
        vfs.zfs.trim.enabled                    0
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.max_interval               1

------------------------------------------------------------------------

root@cs0:~# top -aSHz -d 1
last pid: 29833;  load averages:  0.60,  1.35,  1.66                                                              up 15+21:12:46  11:32:49
818 processes: 25 running, 733 sleeping, 60 waiting
CPU:     % user,     % nice,     % system,     % interrupt,     % idle
Mem: 13G Active, 300M Inact, 196G Wired, 39G Free
ARC: 188G Total, 35G MFU, 148G MRU, 304K Anon, 4279M Header, 1512M Other
Swap: 2048M Total, 2048M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
    4 root        -8    -     0K   176K l2arc_ 15  30.8H  6.79% [zfskern{l2arc_feed_threa}]
   12 root       -92    -     0K   960K WAIT   10 499:17  2.88% [intr{irq274: ix0:que }]
   12 root       -92    -     0K   960K WAIT    7 562:34  2.29% [intr{irq271: ix0:que }]
   12 root       -92    -     0K   960K WAIT    8 519:17  2.29% [intr{irq272: ix0:que }]
   12 root       -92    -     0K   960K WAIT    1 556:54  2.10% [intr{irq265: ix0:que }]
65513 root        21    -     0K    16K aiordy  4   0:06  1.76% [aiod5]
29832 root        21    -     0K    16K aiordy 11   0:00  1.76% [aiod1]
   12 root       -92    -     0K   960K WAIT    2 553:50  1.46% [intr{irq266: ix0:que }]
   12 root       -92    -     0K   960K WAIT    0 539:07  1.37% [intr{irq264: ix0:que }]
   12 root       -92    -     0K   960K WAIT    9 527:26  1.17% [intr{irq273: ix0:que }]
79590 www         20    0 25156K  8724K kqread 12  56:58  1.17% nginx: worker process (nginx)
   12 root       -92    -     0K   960K WAIT    5 550:28  1.07% [intr{irq269: ix0:que }]
   13 root        -8    -     0K    48K -      18 491:43  1.07% [geom{g_down}]
79585 www         20    0 25156K  8728K kqread 19  61:40  1.07% nginx: worker process (nginx)
65507 root        20    -     0K    16K aiordy  6   0:11  1.07% [aiod2]
79574 www         20    0 25156K  9260K kqread 17  63:53  0.98% nginx: worker process (nginx)
   12 root       -92    -     0K   960K WAIT    4 565:24  0.88% [intr{irq268: ix0:que }]
   12 root       -92    -     0K   960K WAIT    3 550:34  0.88% [intr{irq267: ix0:que }]
   13 root        -8    -     0K    48K -      18 442:35  0.88% [geom{g_up}]
79583 www         20    0 25156K  9300K kqread 19  60:26  0.88% nginx: worker process (nginx)
   12 root       -68    -     0K   960K WAIT   11 410:05  0.78% [intr{swi2: cambio}]
   12 root       -92    -     0K   960K WAIT    6 550:33  0.68% [intr{irq270: ix0:que }]
79578 www         20    0 25156K  8468K kqread 23  60:54  0.68% nginx: worker process (nginx)
79576 www         20    0 25156K  8792K kqread 11  63:21  0.49% nginx: worker process (nginx)
79572 www         20    0 25156K  8464K kqread  8  62:23  0.49% nginx: worker process (nginx)
   12 root       -92    -     0K   960K WAIT   11 512:14  0.39% [intr{irq275: ix0:que }]
26851 root        20    0 71240K 13868K select  1  64:08  0.39% /usr/local/sbin/snmpd -p /var/run/net_snmpd.pid -c /usr/local/e
79584 www         20    0 25156K  8204K kqread 17  56:42  0.39% nginx: worker process (nginx)
    0 root       -16    0     0K  9904K -      14 212:49  0.29% [kernel{zio_read_intr_12}]
    0 root       -16    0     0K  9904K -       9 212:47  0.29% [kernel{zio_read_intr_5}]
    0 root       -16    0     0K  9904K -       2 212:39  0.29% [kernel{zio_read_intr_1}]
79571 www         20    0 25156K  8460K kqread  1  60:31  0.29% nginx: worker process (nginx)
    0 root       -16    0     0K  9904K -      10 212:45  0.20% [kernel{zio_read_intr_14}]
    0 root       -16    0     0K  9904K -       6 212:45  0.20% [kernel{zio_read_intr_7}]
root@cs0:~# zpool list zdata
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zdata   162T   136T  26,4T    83%  1.00x  ONLINE  -
root@cs0:~# zpool status zdata
  pool: zdata
 state: ONLINE
  scan: resilvered 1,71T in 8h57m with 0 errors on Tue May 27 02:08:16 2014
config:

        NAME             STATE     READ WRITE CKSUM
        zdata            ONLINE       0     0     0
          raidz3-0       ONLINE       0     0     0
            label/zdr00  ONLINE       0     0     0
            label/zdr01  ONLINE       0     0     0
            label/zdr02  ONLINE       0     0     0
            label/zdr03  ONLINE       0     0     0
            label/zdr04  ONLINE       0     0     0
            label/zdr05  ONLINE       0     0     0
            label/zdr06  ONLINE       0     0     0
            label/zdr07  ONLINE       0     0     0
            label/zdr08  ONLINE       0     0     0
          raidz3-1       ONLINE       0     0     0
            label/zdr10  ONLINE       0     0     0
            label/zdr11  ONLINE       0     0     0
            label/zdr12  ONLINE       0     0     0
            label/zdr13  ONLINE       0     0     0
            label/zdr14  ONLINE       0     0     0
            label/zdr15  ONLINE       0     0     0
            label/zdr16  ONLINE       0     0     0
            label/zdr17  ONLINE       0     0     0
            label/zdr18  ONLINE       0     0     0
          raidz3-2       ONLINE       0     0     0
            label/zdr20  ONLINE       0     0     0
            label/zdr21  ONLINE       0     0     0
            label/zdr22  ONLINE       0     0     0
            label/zdr23  ONLINE       0     0     0
            label/zdr24  ONLINE       0     0     0
            label/zdr25  ONLINE       0     0     0
            label/zdr26  ONLINE       0     0     0
            label/zdr27  ONLINE       0     0     0
            label/zdr28  ONLINE       0     0     0
          raidz3-3       ONLINE       0     0     0
            label/zdr30  ONLINE       0     0     0
            label/zdr31  ONLINE       0     0     0
            label/zdr32  ONLINE       0     0     0
            label/zdr33  ONLINE       0     0     0
            label/zdr34  ONLINE       0     0     0
            label/zdr35  ONLINE       0     0     0
            label/zdr36  ONLINE       0     0     0
            label/zdr37  ONLINE       0     0     0
            label/zdr38  ONLINE       0     0     0
          raidz3-4       ONLINE       0     0     0
            label/zdr40  ONLINE       0     0     0
            label/zdr41  ONLINE       0     0     0
            label/zdr42  ONLINE       0     0     0
            label/zdr43  ONLINE       0     0     0
            label/zdr44  ONLINE       0     0     0
            label/zdr45  ONLINE       0     0     0
            label/zdr46  ONLINE       0     0     0
            label/zdr47  ONLINE       0     0     0
            label/zdr48  ONLINE       0     0     0
        cache
          gpt/l2arc0     ONLINE       0     0     0
          gpt/l2arc1     ONLINE       0     0     0
          gpt/l2arc2     ONLINE       0     0     0
          gpt/l2arc3     ONLINE       0     0     0
        spares
          label/spare0   AVAIL

errors: No known data errors
root@cs0:~#
Comment 1 Mark Linimon freebsd_committer freebsd_triage 2014-07-05 23:49:21 UTC
Over to maintainers.
Comment 2 Steven Hartland freebsd_committer freebsd_triage 2014-07-06 14:23:47 UTC
This looks like ZFS has backed off from max usage due to app usage on the machine, which is expected behaviour.
Comment 3 vsjcfm 2014-07-08 11:11:14 UTC
(In reply to Steven Hartland from comment #2)
> This looks like ZFS has backed off from max usage due to app usage on the
> machine, which is expected behaviour.

I don't think so because:

1. ARC memory usage never grows above 188 G.
2. I have no memory-hungry processes on this machine.

root@cs0:~# ps axu
USER    PID   %CPU %MEM    VSZ   RSS TT  STAT STARTED            TIME COMMAND
root     11 2292,8  0,0      0   384 ??  RL   14июн14 784228:58,16 [idle]
root      0   60,2  0,0      0  9904 ??  DLs  14июн14   7988:47,05 [kernel]
root     12   45,3  0,0      0   960 ??  WL   14июн14  10349:33,20 [intr]
root      4   10,6  0,0      0   176 ??  DL   14июн14   3197:31,63 [zfskern]
root     13    6,0  0,0      0    48 ??  DL   14июн14   1325:45,03 [geom]
jason 53722    4,7  0,0  52080  6124 ??  S    12:24           3:59,00 sshd: jason@notty (sshd)
root   1256    3,7  0,0  12008  1588 ??  Ss   14июн14     14:07,96 /usr/sbin/syslogd -ccss
www   79586    3,3  0,0  25156  9184 ??  S    25июн14    148:51,93 nginx: worker process (nginx)
www   79575    2,5  0,0  25156  9792 ??  S    25июн14    156:30,95 nginx: worker process (nginx)
www   79571    2,4  0,0  25156  8764 ??  S    25июн14    154:22,46 nginx: worker process (nginx)
root  33499    2,3  0,0      0    16 ??  DLs  13:57           0:11,76 [aiod3]
www   79588    2,3  0,0  25156  9076 ??  S    25июн14    154:41,30 nginx: worker process (nginx)
www   79591    2,2  0,0  25156  9564 ??  S    25июн14    164:38,45 nginx: worker process (nginx)
root  33496    1,8  0,0      0    16 ??  DLs  13:56           0:13,22 [aiod1]
root  97809    1,8  0,0      0    16 ??  DLs  14:02           0:04,57 [aiod4]
root  35718    1,7  0,0  75336 15312 ??  S    чт16        131:47,49 /usr/local/sbin/snmpd -p /var/run/net_snmpd.pid -c /usr/local/etc/snmpd.con
www   79576    1,6  0,0  25156  8796 ??  S    25июн14    154:48,13 nginx: worker process (nginx)
www   79590    1,6  0,0  25156  8788 ??  S    25июн14    152:07,78 nginx: worker process (nginx)
root  97812    1,4  0,0      0    16 ??  DLs  14:02           0:02,15 [aiod5]
www   79580    1,3  0,0  25156  9048 ??  S    25июн14    160:04,70 nginx: worker process (nginx)
www   79582    1,3  0,0  25156  8544 ??  S    25июн14    157:14,82 nginx: worker process (nginx)
root  55608    1,2  0,0      0    16 ??  DLs  14:00           0:06,40 [aiod6]
root  65838    1,1  0,0      0    16 ??  DLs  14:00           0:07,86 [aiod8]
www   79579    1,1  0,0  25156  9692 ??  S    25июн14    156:07,03 nginx: worker process (nginx)
www   79589    1,1  0,0  25156  8532 ??  S    25июн14    146:36,24 nginx: worker process (nginx)
root  97883    1,1  0,0      0    16 ??  DLs  14:05           0:00,27 [aiod2]
root  97815    1,0  0,0      0    16 ??  DLs  14:04           0:02,10 [aiod9]
www   79573    0,9  0,0  25156  9652 ??  S    25июн14    162:47,38 nginx: worker process (nginx)
www   79570    0,7  0,0  25156  8792 ??  S    25июн14    156:29,25 nginx: worker process (nginx)
www   79577    0,5  0,0  25156  9352 ??  S    25июн14    148:34,48 nginx: worker process (nginx)
www   79585    0,5  0,0  25156  9052 ??  S    25июн14    155:20,44 nginx: worker process (nginx)
www   79569    0,4  0,0  25156  9788 ??  S    25июн14    153:58,32 nginx: worker process (nginx)
www   79592    0,4  0,0  25156  8788 ??  S    25июн14    152:43,48 nginx: worker process (nginx)
www   79572    0,3  0,0  25156  9588 ??  S    25июн14    158:04,14 nginx: worker process (nginx)
root  53765    0,3  0,0  20620  5148  0  S+   12:25           0:32,40 top -aSHz
www   79583    0,2  0,0  25156  9316 ??  S    25июн14    154:24,06 nginx: worker process (nginx)
root      1    0,0  0,0   6280   560 ??  ILs  14июн14      0:01,57 /sbin/init --
root      2    0,0  0,0      0    16 ??  DL   14июн14      0:06,31 [mps_scan0]
root      3    0,0  0,0      0    16 ??  DL   14июн14      0:06,86 [mps_scan1]
root      5    0,0  0,0      0    16 ??  DL   14июн14      0:00,00 [xpt_thrd]
root      6    0,0  0,0      0    16 ??  DL   14июн14      0:00,00 [ipmi0: kcs]
root      7    0,0  0,0      0    16 ??  DL   14июн14      0:01,45 [pagedaemon]
root      8    0,0  0,0      0    16 ??  DL   14июн14      0:00,00 [vmdaemon]
root      9    0,0  0,0      0    16 ??  DL   14июн14      0:00,02 [pagezero]
root     10    0,0  0,0      0    16 ??  DL   14июн14      0:00,00 [audit]
root     14    0,0  0,0      0    16 ??  DL   14июн14     40:02,48 [yarrow]
root     15    0,0  0,0      0   128 ??  DL   14июн14      1:54,87 [usb]
root     16    0,0  0,0      0    16 ??  DL   14июн14      0:07,12 [bufdaemon]
root     17    0,0  0,0      0    16 ??  DL   14июн14    133:39,76 [syncer]
root     18    0,0  0,0      0    16 ??  DL   14июн14      0:07,71 [vnlru]
root     19    0,0  0,0      0    16 ??  DL   14июн14      0:00,04 [g_mirror swap0]
root     20    0,0  0,0      0    16 ??  DL   14июн14      0:00,04 [g_mirror swap1]
root    987    0,0  0,0  14184  1612 ??  Is   14июн14      0:00,00 /usr/sbin/moused -p /dev/ums0 -t auto -I /var/run/moused.ums0.pid
root   1027    0,0  0,0  10372  4456 ??  Is   14июн14      0:00,65 /sbin/devd
root   1267    0,0  0,0  14092  1932 ??  Ss   14июн14      0:02,50 /usr/sbin/rpcbind -h 10.0.8.30 -l
root   1298    0,0  0,0  12008  1852 ??  Is   14июн14      0:00,00 /usr/sbin/mountd -h 10.0.8.30 -l -S /etc/exports /etc/zfs/exports
root   1304    0,0  0,0   9868  1628 ??  Is   14июн14      0:00,26 nfsd: master (nfsd)
root   1305    0,0  0,0   9868  1780 ??  S    14июн14     37:23,54 nfsd: server (nfsd)
root   1308    0,0  0,0 274112  1744 ??  Is   14июн14      0:01,57 /usr/sbin/rpc.statd -h 10.0.8.30
root   1311    0,0  0,0  14092  1744 ??  Ss   14июн14      0:02,36 /usr/sbin/rpc.lockd -h 10.0.8.30
root   1385    0,0  0,0  22216  3508 ??  Ss   14июн14      1:17,27 /usr/sbin/ntpd -g -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.dri
root   1388    0,0  0,0  12004  1528 ??  Ss   14июн14     38:53,23 /usr/sbin/powerd
root   1402    0,0  0,0  28144  5016 ??  I    14июн14      0:12,78 /usr/local/sbin/smartd -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid
root   1448    0,0  0,0  28868  4168 ??  Is   14июн14      0:02,46 /usr/sbin/sshd
root   1451    0,0  0,0  20288  4592 ??  Ss   14июн14      0:27,04 sendmail: accepting connections (sendmail)
smmsp  1454    0,0  0,0  20288  4408 ??  Is   14июн14      0:00,66 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
root   1458    0,0  0,0  14096  1760 ??  Is   14июн14      0:06,77 /usr/sbin/cron -s
root  24965    0,0  0,0  52080  4772 ??  Is   12:05           0:00,02 sshd: jason [priv] (sshd)
jason 24967    0,0  0,0  52080  5080 ??  S    12:05           0:01,40 sshd: jason@pts/0 (sshd)
root  35701    0,0  0,0  19152  2292 ??  Is   чт16          0:00,67 /usr/local/bin/rsync --daemon
root  53717    0,0  0,0  52080  4836 ??  Is   12:24           0:00,05 sshd: jason [priv] (sshd)
jason 53723    0,0  0,0  17388  3240 ??  Is   12:24           0:00,01 tcsh -c dd of=/mnt/ztemp/jason/lvhdd.dd.gz bs=128k
jason 53725    0,0  0,0   9872  1916 ??  S    12:24           0:16,87 dd of=/mnt/ztemp/jason/lvhdd.dd.gz bs=128k
root  79568    0,0  0,0  21060  4352 ??  I    25июн14      0:00,02 nginx: master process /usr/local/sbin/nginx
www   79574    0,0  0,0  25156  9308 ??  S    25июн14    159:31,99 nginx: worker process (nginx)
www   79578    0,0  0,0  25156  9688 ??  S    25июн14    156:39,05 nginx: worker process (nginx)
www   79581    0,0  0,0  25156  9048 ??  S    25июн14    152:05,38 nginx: worker process (nginx)
www   79584    0,0  0,0  25156  8792 ??  S    25июн14    149:30,11 nginx: worker process (nginx)
www   79587    0,0  0,0  25156  8740 ??  S    25июн14    157:19,82 nginx: worker process (nginx)
root  97816    0,0  0,0  52080  4772 ??  Is   14:04           0:00,02 sshd: jason [priv] (sshd)
jason 97818    0,0  0,0  52080  5080 ??  S    14:04           0:00,03 sshd: jason@pts/1 (sshd)
root   1520    0,0  0,0  12008  1588 v1  Is+  14июн14      0:00,04 /usr/libexec/getty Pc ttyv1
root   1521    0,0  0,0  12008  1588 v2  Is+  14июн14      0:00,04 /usr/libexec/getty Pc ttyv2
root   1522    0,0  0,0  12008  1588 v3  Is+  14июн14      0:00,04 /usr/libexec/getty Pc ttyv3
jason 24968    0,0  0,0  17388  4148  0  Is   12:05           0:00,02 -tcsh (tcsh)
root  24970    0,0  0,0  46560  3548  0  I    12:05           0:00,02 sudo su -
root  24971    0,0  0,0  43180  2220  0  I    12:06           0:00,00 su -
root  24972    0,0  0,0  17388  3952  0  I    12:06           0:00,02 -su (tcsh)
jason 97819    0,0  0,0  17388  4148  1  Is   14:04           0:00,02 -tcsh (tcsh)
root  97821    0,0  0,0  46560  3548  1  I    14:04           0:00,01 sudo su -
root  97832    0,0  0,0  43180  2220  1  I    14:05           0:00,00 su -
root  97833    0,0  0,0  17388  3952  1  S    14:05           0:00,03 -su (tcsh)
root  97902    0,0  0,0  14144  2092  1  R+   14:06           0:00,00 ps axu
root@cs0:~#
Comment 4 Steven Hartland freebsd_committer freebsd_triage 2014-07-08 12:33:20 UTC
Are you sure it hasn't been reduced over time by some high-memory-usage processes that are no longer running?

To check this, reboot and then check the values.
Comment 5 Steven Hartland freebsd_committer freebsd_triage 2014-07-08 13:02:49 UTC
sysctl kstat.zfs.misc.arcstats would also be useful
Comment 6 vsjcfm 2014-07-08 13:31:48 UTC
(In reply to Steven Hartland from comment #4)
> Are you sure it hasn't been reduced over time by some high-memory-usage
> processes that are no longer running?
> 
> To check this, reboot and then check the values.

Absolutely. This machine is just an nginx file server, with no other tasks. I can collect an ARC graph if that information would be useful.


(In reply to Steven Hartland from comment #5)
> sysctl kstat.zfs.misc.arcstats would also be useful

kstat.zfs.misc.arcstats.hits: 6357561684
kstat.zfs.misc.arcstats.misses: 1754865888
kstat.zfs.misc.arcstats.demand_data_hits: 2247259288
kstat.zfs.misc.arcstats.demand_data_misses: 8782100
kstat.zfs.misc.arcstats.demand_metadata_hits: 2211824311
kstat.zfs.misc.arcstats.demand_metadata_misses: 8160603
kstat.zfs.misc.arcstats.prefetch_data_hits: 1283671663
kstat.zfs.misc.arcstats.prefetch_data_misses: 1737864288
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 614806422
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 58897
kstat.zfs.misc.arcstats.mru_hits: 1548194108
kstat.zfs.misc.arcstats.mru_ghost_hits: 10388582
kstat.zfs.misc.arcstats.mfu_hits: 3287889203
kstat.zfs.misc.arcstats.mfu_ghost_hits: 102318783
kstat.zfs.misc.arcstats.allocated: 1774678632
kstat.zfs.misc.arcstats.deleted: 1646058550
kstat.zfs.misc.arcstats.stolen: 1146219028
kstat.zfs.misc.arcstats.recycle_miss: 3101763
kstat.zfs.misc.arcstats.mutex_miss: 4427461
kstat.zfs.misc.arcstats.evict_skip: 284890938
kstat.zfs.misc.arcstats.evict_l2_cached: 188456702423040
kstat.zfs.misc.arcstats.evict_l2_eligible: 32054253667840
kstat.zfs.misc.arcstats.evict_l2_ineligible: 9169893205504
kstat.zfs.misc.arcstats.hash_elements: 18175960
kstat.zfs.misc.arcstats.hash_elements_max: 18488668
kstat.zfs.misc.arcstats.hash_collisions: 1127435975
kstat.zfs.misc.arcstats.hash_chains: 3877554
kstat.zfs.misc.arcstats.hash_chain_max: 24
kstat.zfs.misc.arcstats.p: 145378360667
kstat.zfs.misc.arcstats.c: 197771088957
kstat.zfs.misc.arcstats.c_min: 33203088384
kstat.zfs.misc.arcstats.c_max: 265624707072
kstat.zfs.misc.arcstats.size: 200751138272
kstat.zfs.misc.arcstats.hdr_size: 1329000768                                                                                                      
kstat.zfs.misc.arcstats.data_size: 194945960448                                                                                                   
kstat.zfs.misc.arcstats.other_size: 1720016232                                                                                                    
kstat.zfs.misc.arcstats.l2_hits: 558004136                                                                                                        
kstat.zfs.misc.arcstats.l2_misses: 1196861331                                                                                                     
kstat.zfs.misc.arcstats.l2_feeds: 4171054                                                                                                         
kstat.zfs.misc.arcstats.l2_rw_clash: 72326                                                                                                        
kstat.zfs.misc.arcstats.l2_read_bytes: 73002745243136                                                                                             
kstat.zfs.misc.arcstats.l2_write_bytes: 113011830327808                                                                                           
kstat.zfs.misc.arcstats.l2_writes_sent: 3913397                                                                                                   
kstat.zfs.misc.arcstats.l2_writes_done: 3913396                                                                                                   
kstat.zfs.misc.arcstats.l2_writes_error: 0                                                                                                        
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 117425
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 91025
kstat.zfs.misc.arcstats.l2_evict_reading: 219
kstat.zfs.misc.arcstats.l2_free_on_write: 4702368
kstat.zfs.misc.arcstats.l2_abort_lowmem: 56
kstat.zfs.misc.arcstats.l2_cksum_bad: 18
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 1838076781056
kstat.zfs.misc.arcstats.l2_asize: 1838076145664
kstat.zfs.misc.arcstats.l2_hdr_size: 3164330456
kstat.zfs.misc.arcstats.l2_compress_successes: 144015
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 387445125
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 134983793
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 59111238565
kstat.zfs.misc.arcstats.l2_write_in_l2: 850651818119
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 847
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 36478134670
kstat.zfs.misc.arcstats.l2_write_full: 2008530
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 4171054
kstat.zfs.misc.arcstats.l2_write_pios: 3913397
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 27587228945605632
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 225730988
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 19832
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.duplicate_buffers: 0
kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
kstat.zfs.misc.arcstats.duplicate_reads: 0
Comment 7 Steven Hartland freebsd_committer freebsd_triage 2014-07-08 14:28:05 UTC
Just because the processes are using small amounts of memory doesn't mean other parts of the kernel, such as mbufs, aren't spiking and demanding RAM. So the fact that the machine only runs nginx doesn't mean you haven't seen a memory spike elsewhere.


Are you seeing any movement at all over time in:
kstat.zfs.misc.arcstats.c
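
One way to watch kstat.zfs.misc.arcstats.c for movement is to sample it periodically. A minimal sketch (not part of the original exchange; the one-minute interval is an arbitrary choice) using FreeBSD's sysctlbyname(3):

/*
 * Minimal sketch, not from the report: sample the ARC target size
 * (kstat.zfs.misc.arcstats.c) once a minute so any movement over time
 * becomes visible.  The sysctl name and type are taken from the output
 * quoted in comment 6.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main(void)
{
        for (;;) {
                uint64_t arc_c = 0;
                size_t len = sizeof(arc_c);

                if (sysctlbyname("kstat.zfs.misc.arcstats.c", &arc_c, &len,
                    NULL, 0) == -1) {
                        perror("sysctlbyname");
                        return (1);
                }
                printf("%jd arc_c=%ju\n", (intmax_t)time(NULL),
                    (uintmax_t)arc_c);
                fflush(stdout);
                sleep(60);
        }
}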
Comment 8 vsjcfm 2014-07-11 11:25:25 UTC
(In reply to Steven Hartland from comment #7)
> Just because the processes are using small amounts of memory doesn't mean
> other parts of the kernel, such as mbufs, aren't spiking and demanding RAM.
> So the fact that the machine only runs nginx doesn't mean you haven't seen a
> memory spike elsewhere.
I'm using a static mbuf setting; mbufs are using ~500 M of RAM. I'm also wondering what part of the kernel could use ~50 G of RAM for a short period - that amount is always free (not inactive).

> Are you seeing any movement at all over time in:
> kstat.zfs.misc.arcstats.c
I will update the machine to 9.3-RELEASE and build some MRTG graphs of memory usage.
Comment 9 vsjcfm 2014-07-16 17:23:48 UTC
Created attachment 144730 [details]
Memory graph

Green: ARC size
Blue: Free memory
Comment 10 vsjcfm 2014-07-16 17:26:46 UTC
Comment on attachment 144730 [details]
Memory graph

So the ARC target size right after boot is 247 G. The ARC then grows to ~185 G, and immediately after that the target size drops.
BTW, why do you think I'm seeing this because of other memory consumers? As you can see, the ZFS memory throttle count is zero.
Comment 11 vsjcfm 2014-07-18 08:18:05 UTC
Created attachment 144771 [details]
Memory graph #2

A new graph. The server was rebooted because of a power problem.
Comment 12 Steven Hartland freebsd_committer freebsd_triage 2014-08-23 18:25:17 UTC
Please try the latest patch on:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
Comment 13 Steven Hartland freebsd_committer freebsd_triage 2014-08-28 20:07:28 UTC
Addressed by: http://svnweb.freebsd.org/changeset/base/270759
Comment 14 vsjcfm 2014-09-03 11:12:51 UTC
(In reply to Steven Hartland from comment #12)
> Please try the latest patch on:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

Sorry for the long silence - it's hard to work without electricity and under mortar fire.

So, which patch exactly should I try? The last two are made against -CURRENT and 10, so they won't apply to 9.
Comment 15 Steven Hartland freebsd_committer freebsd_triage 2014-09-03 11:18:37 UTC
They should apply to 9 quite easily, tbh; I don't believe the code in that area has changed much.
Comment 16 vsjcfm 2014-09-03 11:22:00 UTC
(In reply to Steven Hartland from comment #15)
> They should apply to 9 quite easily, tbh; I don't believe the code in that
> area has changed much.

root@jnb:/usr/src# svn info
Path: .
Working Copy Root Path: /usr/src
URL: https://svn0.us-west.freebsd.org/base/releng/9.3
Relative URL: ^/releng/9.3
Repository Root: https://svn0.us-west.freebsd.org/base
Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f
Revision: 271006
Node Kind: directory
Schedule: normal
Last Changed Author: gjb
Last Changed Rev: 268512
Last Changed Date: 2014-07-11 00:53:54 +0300 (Fri, 11 Jul 2014)

root@jnb:/usr/src# svn diff
root@jnb:/usr/src# wget 'https://bugs.freebsd.org/bugzilla/attachment.cgi?id=146456'
--2014-09-03 14:21:19--  https://bugs.freebsd.org/bugzilla/attachment.cgi?id=146456
Resolving bugs.freebsd.org (bugs.freebsd.org)... 8.8.178.110, 2001:1900:2254:206a::50:0
Connecting to bugs.freebsd.org (bugs.freebsd.org)|8.8.178.110|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://bz-attachments.freebsd.org/attachment.cgi?id=146456 [following]
--2014-09-03 14:21:21--  http://bz-attachments.freebsd.org/attachment.cgi?id=146456
Resolving bz-attachments.freebsd.org (bz-attachments.freebsd.org)... 8.8.178.110, 2001:1900:2254:206a::50:0
Connecting to bz-attachments.freebsd.org (bz-attachments.freebsd.org)|8.8.178.110|:80... connected.
HTTP request sent, awaiting response... 302 /attachment.cgi?id=146456
Location: https://bz-attachments.FreeBSD.org/attachment.cgi?id=146456 [following]
--2014-09-03 14:21:22--  https://bz-attachments.freebsd.org/attachment.cgi?id=146456
Connecting to bz-attachments.freebsd.org (bz-attachments.freebsd.org)|8.8.178.110|:443... connected.
HTTP request sent, awaiting response... 200 OK
Cookie coming from bz-attachments.freebsd.org attempted to set domain to bugs.freebsd.org
Length: unspecified [text/plain]
Saving to: 'arc-reclaim-stable10.patch'

    [ <=>                                                                                                     ] 7,665       --.-K/s   in 0.001s  

2014-09-03 14:21:23 (7.89 MB/s) - 'arc-reclaim-stable10.patch' saved [7665]

root@jnb:/usr/src# svn patch arc-reclaim-stable10.patch 
C         sys/cddl/compat/opensolaris/kern/opensolaris_kmem.c
>         rejected hunk @@ -126,18 +126,47 @@
U         sys/cddl/compat/opensolaris/sys/kmem.h
C         sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
>         rejected hunk @@ -204,10 +201,24 @@
>         rejected hunk @@ -217,7 +228,36 @@
>         applied hunk @@ -2421,9 +2461,12 @@ with offset -16
>         applied hunk @@ -2443,8 +2486,11 @@ with offset -16
>         applied hunk @@ -2455,15 +2501,25 @@ with offset -16
>         applied hunk @@ -2507,9 +2563,6 @@ with offset -16
>         applied hunk @@ -2516,6 +2569,8 @@ with offset -16
C         sys/vm/vm_pageout.c
>         rejected hunk @@ -115,10 +115,14 @@
>         applied hunk @@ -126,7 +130,7 @@ with offset -4
>         rejected hunk @@ -1637,15 +1641,11 @@
>         rejected hunk @@ -1691,7 +1691,18 @@
Summary of conflicts:
  Text conflicts: 3
root@jnb:/usr/src#
Comment 17 Steven Hartland freebsd_committer freebsd_triage 2014-09-04 18:25:07 UTC
Created attachment 146813 [details]
arc reclaim refactor (against releng/9.3)

Looks like there are more differences in 9.3 than I thought.

Try the attached patch; it's a WIP based on the latest review code at https://reviews.freebsd.org/D702.

I've only compile-tested the kernel, I'm afraid, as I don't have any 9 boxes.
Comment 18 vsjcfm 2014-09-05 10:12:56 UTC
(In reply to Steven Hartland from comment #17)
> Created attachment 146813 [details]
> arc reclaim refactor (against releng/9.3)
> 
> Looks like there are more differences in 9.3 than I thought.
> 
> Try the attached patch; it's a WIP based on the latest review code at
> https://reviews.freebsd.org/D702.
> 
> I've only compile-tested the kernel, I'm afraid, as I don't have any 9 boxes.

Hmm, this patch breaks the world build:

===> cddl/lib/libzpool (all)
cc  -O3 -pipe -fno-strict-aliasing -march=core2 -I/usr/src/cddl/lib/libzpool/../../../sys/cddl/compat/opensolaris -I/usr/src/cddl/lib/libzpool/../../compat/opensolaris/include -I/usr/src/cddl/lib/libzpool/../../compat/opensolaris/lib/libumem -I/usr/src/cddl/lib/libzpool/../../contrib/opensolaris/lib/libzpool/common -I/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/sys -I/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs -I/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/common/zfs -I/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common -I/usr/src/cddl/lib/libzpool/../../contrib/opensolaris/head -I/usr/src/cddl/lib/libzpool/../../lib/libumem -I/usr/src/cddl/lib/libzpool/../../contrib/opensolaris/lib/libnvpair -DWANTS_MUTEX_OWNED -I/usr/src/cddl/lib/libzpool/../../../lib/libpthread/thread -I/usr/src/cddl/lib/libzpool/../../../lib/libpthread/sys -I/usr/src/cddl/lib/libzpool/../../../lib/libthr/arch/amd64/include -DDEBUG=1 -DNEED_SOLARIS_BOOLEAN -DNDEBUG -std=iso9899:1999 -Qunused-arguments  -fstack-protector -Wno-pointer-sign -Wno-empty-body -Wno-string-plus-int -Wno-unused-const-variable -Wno-tautological-compare -Wno-unused-value -Wno-parentheses-equality -Wno-unused-function -Wno-enum-conversion -Wno-switch -Wno-switch-enum -Wno-knr-promoted-parameter -Wno-parentheses -Wno-unknown-pragmas -c /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c -o arc.o
/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:243:8: warning: implicit declaration of function
      'sysctl_handle_int' is invalid in C99 [-Wimplicit-function-declaration]
        err = sysctl_handle_int(oidp, &val, 0, req);
              ^
/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:247:12: error: use of undeclared identifier 'minfree'
        if (val < minfree)
                  ^
/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:249:12: error: use of undeclared identifier 'cnt'
        if (val > cnt.v_page_count)
                  ^
1 warning and 2 errors generated.
*** [arc.o] Error code 1

Stop in /usr/src/cddl/lib/libzpool.
*** [all] Error code 1

Stop in /usr/src/cddl/lib.
*** [cddl/lib__L] Error code 1

Stop in /usr/src.
*** [libraries] Error code 1

Stop in /usr/src.
*** [_libraries] Error code 1

Stop in /usr/src.
*** [buildworld] Error code 1

Stop in /usr/src.
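
For context, the errors come from the userland libzpool build, which compiles arc.c without the kernel VM headers, so kernel-only symbols such as minfree and cnt are undeclared there. A sketch of the kind of guard that avoids this - the handler name and body are inferred from the error output, not quoted from the revised patch:

/*
 * Sketch only -- inferred from the compile errors above, not copied from
 * the revised patch.  libzpool builds arc.c in userland, where minfree
 * and cnt.v_page_count do not exist, so the handler must be kernel-only.
 * zfs_arc_free_target, minfree and cnt are declared elsewhere (arc.c and
 * the kernel VM headers).
 */
#ifdef _KERNEL
static int
sysctl_vfs_zfs_arc_free_target(SYSCTL_HANDLER_ARGS)
{
        u_int val;
        int err;

        val = zfs_arc_free_target;
        err = sysctl_handle_int(oidp, &val, 0, req);
        if (err != 0 || req->newptr == NULL)
                return (err);

        if (val < minfree)
                return (EINVAL);
        if (val > cnt.v_page_count)
                return (EINVAL);

        zfs_arc_free_target = val;

        return (0);
}
#endif  /* _KERNEL */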
Comment 19 Steven Hartland freebsd_committer freebsd_triage 2014-09-05 11:25:22 UTC
Created attachment 146856 [details]
arc reclaim refactor (against releng/9.3)

Fixed world build
Comment 20 Steven Hartland freebsd_committer freebsd_triage 2014-09-05 11:44:11 UTC
Created attachment 146858 [details]
arc reclaim refactor (against releng/9.3)

Added the missing machine/vmparam.h include whose absence was causing all platforms to take the small-memory code paths.
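
The mechanism behind that note, stated as an assumption since the patch itself isn't quoted here: on amd64, UMA_MD_SMALL_ALLOC is defined in machine/vmparam.h, and the ARC reclaim code keys its conservative branch on that macro, so leaving the include out makes every platform look like a small-memory one. Roughly:

/*
 * Illustration of the missing-include problem; the marker macro below is
 * hypothetical, and whether arc.c uses exactly this condition is an
 * assumption.
 */
#include <machine/vmparam.h>    /* defines UMA_MD_SMALL_ALLOC on amd64 */

#if defined(__i386) || !defined(UMA_MD_SMALL_ALLOC)
#define ARC_USE_SMALL_MEMORY_PATHS      1       /* hypothetical marker */
#else
#define ARC_USE_SMALL_MEMORY_PATHS      0
#endif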
Comment 21 vsjcfm 2014-09-11 08:40:07 UTC
(In reply to Steven Hartland from comment #20)
> Created attachment 146858 [details]
> arc reclaim refactor (against releng/9.3)
> 
> Added the missing machine/vmparam.h include whose absence was causing all
> platforms to take the small-memory code paths.

Looks like things are much better now. I'll post some stats here later.
Comment 22 vsjcfm 2014-09-11 14:15:36 UTC
(In reply to vsjcfm from comment #21)
> (In reply to Steven Hartland from comment #20)
> > Created attachment 146858 [details]
> > arc reclaim refactor (against releng/9.3)
> > 
> > Added the missing machine/vmparam.h include whose absence was causing all
> > platforms to take the small-memory code paths.
> 
> Looks like things are much better now. I'll post some stats here later.

Yep, works as expected.

root@cs0:~# fgrep " memory " /var/run/dmesg.boot
real memory  = 274877906944 (262144 MB)
avail memory = 265898815488 (253580 MB)
root@cs0:~# zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report                            Thu Sep 11 11:23:57 2014
------------------------------------------------------------------------

System Information:

        Kernel Version:                         903000 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 9.3-RELEASE #0 r271159M: Fri Sep 5 16:38:52 EEST 2014 root
11:23  up  1:46, 1 user, load averages: 0,18 0,87 1,37

------------------------------------------------------------------------

System Memory:

        0.01%   20.14   MiB Active,     0.14%   355.82  MiB Inact
        94.12%  233.77  GiB Wired,      0.03%   66.97   MiB Cache
        5.71%   14.18   GiB Free,       0.00%   2.51    MiB Gap

        Real Installed:                         256.00  GiB
        Real Available:                 99.98%  255.96  GiB
        Real Managed:                   97.04%  248.38  GiB

        Logical Total:                          256.00  GiB
        Logical Used:                   94.30%  241.41  GiB
        Logical Free:                   5.70%   14.59   GiB                                                                                       
                                                                                                                                                  
Kernel Memory:                                  229.70  GiB                                                                                       
        Data:                           100.00% 229.69  GiB                                                                                       
        Text:                           0.00%   11.19   MiB                                                                                       
                                                                                                                                                  
Kernel Memory Map:                              240.93  GiB                                                                                       
        Size:                           95.17%  229.29  GiB                                                                                       
        Free:                           4.83%   11.65   GiB                                                                                       
                                                                                                                                                  
------------------------------------------------------------------------                                                                          
                                                                                                                                                  
ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                450.07k
        Recycle Misses:                         3.10k
        Mutex Misses:                           1.66k
        Evict Skips:                            2.93k

ARC Size:                               92.20%  228.09  GiB
        Target Size: (Adaptive)         92.20%  228.09  GiB
        Min Size (Hard Limit):          12.50%  30.92   GiB
        Max Size (High Water):          8:1     247.38  GiB

ARC Size Breakdown:
        Recently Used Cache Size:       78.46%  178.96  GiB
        Frequently Used Cache Size:     21.54%  49.13   GiB

ARC Hash Breakdown:
        Elements Max:                           2.71m
        Elements Current:               99.94%  2.71m
        Collisions:                             925.34k
        Chain Max:                              8
        Chains:                                 577.91k

------------------------------------------------------------------------

ARC Efficiency:                                 23.85m
        Cache Hit Ratio:                86.86%  20.71m
        Cache Miss Ratio:               13.14%  3.13m
        Actual Hit Ratio:               83.30%  19.87m

        Data Demand Efficiency:         99.39%  14.51m
        Data Prefetch Efficiency:       1.26%   2.95m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             4.10%   848.28k
          Most Recently Used:           67.23%  13.93m
          Most Frequently Used:         28.67%  5.94m
          Most Recently Used Ghost:     0.00%   18
          Most Frequently Used Ghost:   0.00%   283

        CACHE HITS BY DATA TYPE:
          Demand Data:                  69.61%  14.42m
          Prefetch Data:                0.18%   37.05k
          Demand Metadata:              26.29%  5.45m
          Prefetch Metadata:            3.92%   811.53k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  2.84%   88.93k
          Prefetch Data:                92.84%  2.91m
          Demand Metadata:              3.57%   111.97k
          Prefetch Metadata:            0.75%   23.47k

------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        26.15k
        Tried Lock Failures:                    650.09k
        IO In Progress:                         0
        Low Memory Aborts:                      3
        Free on Write:                          1
        Writes While Full:                      0
        R/W Clashes:                            0
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           171.98m

L2 ARC Size: (Adaptive)                         70.24   MiB
        Header Size:                    0.02%   11.87   KiB

L2 ARC Breakdown:                               3.13m
        Hit Ratio:                      0.01%   204
        Miss Ratio:                     99.99%  3.13m
        Feeds:                                  6.36k

L2 ARC Buffer:
        Bytes Scanned:                          8.77    TiB
        Buffer Iterations:                      6.36k
        List Iterations:                        406.98k
        NULL List Iterations:                   38

L2 ARC Writes:
        Writes Sent:                    100.00% 978

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 50.42m
        Hit Ratio:                      72.29%  36.45m
        Miss Ratio:                     27.71%  13.97m

        Colinear:                               13.97m
          Hit Ratio:                    0.00%   477
          Miss Ratio:                   100.00% 13.97m

        Stride:                                 32.79m
          Hit Ratio:                    100.00% 32.79m
          Miss Ratio:                   0.00%   35

DMU Misc:
        Reclaim:                                13.97m
          Successes:                    0.07%   9.19k
          Failures:                     99.93%  13.96m

        Streams:                                3.66m
          +Resets:                      0.00%   25
          -Resets:                      100.00% 3.66m
          Bogus:                                0

------------------------------------------------------------------------

VDEV Cache Summary:                             177.69k
        Hit Ratio:                      22.87%  40.64k
        Miss Ratio:                     69.06%  122.72k
        Delegations:                    8.07%   14.34k

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           16717
        vm.kmem_size                            266698121216
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.arc_max                         265624379392
        vfs.zfs.arc_min                         33203047424
        vfs.zfs.arc_free_target                 1811218
        vfs.zfs.arc_meta_used                   2187249752
        vfs.zfs.arc_meta_limit                  66406094848
        vfs.zfs.l2arc_write_max                 41943040
        vfs.zfs.l2arc_write_boost               83886080
        vfs.zfs.l2arc_headroom                  4
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_noprefetch                0
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.anon_size                       705024
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.mru_size                        192158378496
        vfs.zfs.mru_metadata_lsize              161177088
        vfs.zfs.mru_data_lsize                  191584529408
        vfs.zfs.mru_ghost_size                  52754266112
        vfs.zfs.mru_ghost_metadata_lsize        102661120
        vfs.zfs.mru_ghost_data_lsize            52651604992
        vfs.zfs.mfu_size                        51316666880
        vfs.zfs.mfu_metadata_lsize              18357248
        vfs.zfs.mfu_data_lsize                  51140888064
        vfs.zfs.mfu_ghost_size                  34173508096
        vfs.zfs.mfu_ghost_metadata_lsize        45357056
        vfs.zfs.mfu_ghost_data_lsize            34128151040
        vfs.zfs.l2c_only_size                   1554432
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.prefetch_disable                0
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.resilver_delay                  2
        vfs.zfs.scrub_delay                     4
        vfs.zfs.scan_idle                       50
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.metaslab.gang_bang              131073
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.min_alloc_size         10485760
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.weight_factor_enable   0
        vfs.zfs.condense_pct                    200
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.write_to_degraded               0
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.txg.timeout                     10
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.cache.size                 41943040
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.trim_max_bytes             2147483648
        vfs.zfs.vdev.trim_max_pending           64
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.super_owner                     0
        vfs.zfs.debug                           0
        vfs.zfs.version.ioctl                   3
        vfs.zfs.version.acl                     1
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.zpl                     5
        vfs.zfs.trim.enabled                    0
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.max_interval               1

------------------------------------------------------------------------

root@cs0:~# top -aSHz -d 1
root@cs0:~# sysctl vm. vfs.zfs. | grep "target:"
vm.v_free_target: 1726387
vm.v_inactive_target: 2589580
vm.stats.vm.v_free_target: 1726387
vm.stats.vm.v_inactive_target: 2589580
vfs.zfs.arc_free_target: 1811218
root@cs0:~# exit
Comment 23 vsjcfm 2014-09-15 06:46:38 UTC
BTW, why isn't vm.v_free_target (1726387) equal to vfs.zfs.arc_free_target (1811218)?
Comment 24 Steven Hartland freebsd_committer freebsd_triage 2014-09-15 08:16:05 UTC
(In reply to vsjcfm from comment #23)
> BTW, why isn't vm.v_free_target (1726387) equal to vfs.zfs.arc_free_target
> (1811218)?

In the version of the patch you have, it is set to cnt.v_free_reserved + cnt.v_cache_min.
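
A simplified sketch of the relationship, assembled from comments 23-27 rather than quoted from the patch: the tested patch version seeds zfs_arc_free_target from the VM reserve figures, which is why it need not equal vm.v_free_target, and the ARC backs off once free pages drop below that target.

/*
 * Simplified sketch, not the patch text.  The global "cnt" is the 9.x
 * struct vmmeter from <sys/vmmeter.h>, as seen in the build error
 * earlier in this report.
 */
static u_int zfs_arc_free_target;       /* in pages */

static void
arc_free_target_init(void)
{
        /* Default in the tested patch version, per comment 24. */
        zfs_arc_free_target = cnt.v_free_reserved + cnt.v_cache_min;
}

static int
arc_reclaim_needed(void)
{
        /* Back the ARC off once free pages fall below the target. */
        if (cnt.v_free_count < zfs_arc_free_target)
                return (1);
        return (0);
}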
Comment 25 vsjcfm 2014-09-15 08:26:38 UTC
I see the patch works without any issues. Can we assume it's ready? If so, will there be an MFS to releng/?
Comment 26 Steven Hartland freebsd_committer freebsd_triage 2014-09-15 08:32:24 UTC
At the moment no, there's still discussion going on about the value that should be used for zfs_arc_free_target.
Comment 27 vsjcfm 2014-09-15 16:49:49 UTC
(In reply to Steven Hartland from comment #26)
> At the moment no, there's still discussion going on about the value that
> should be used for zfs_arc_free_target.

I'm not a FreeBSD VM expert, but I think the value of vm.stats.vm.v_free_target is the best choice.
Comment 28 Anton Saietskii 2014-11-03 17:06:16 UTC
I see that the PR was closed. Can you tell me, please, in which revision the problem was fixed?
Comment 29 Steven Hartland freebsd_committer freebsd_triage 2014-11-03 17:15:04 UTC
The head revision was r272483 and stable/10 was r272875
Comment 30 Anton Saietskii 2014-11-03 17:16:59 UTC
(In reply to Steven Hartland from comment #29)
> The head revision was r272483 and stable/10 was r272875

But will there be an MFC to stable/9? I will upgrade to releng/10.1 as soon as possible, but others may stay on releng/9.
Comment 31 Steven Hartland freebsd_committer freebsd_triage 2014-11-03 17:19:51 UTC
I don't think these changes will apply cleanly to 9.

The changes didn't make it into 10.1, I'm afraid, so you'll need either 10.1 + that commit, stable/10, or 10.2 once that's done.