Perf examples

First, the available counters can be discovered and enumerated via 'perf list'. (For the tracepoint events you will need to have debugfs mounted first, e.g. via 'mount -t debugfs none /dbg'):

titan:~> perf list
 [...]
 kmem:kmalloc                             [Tracepoint event]
 kmem:kmem_cache_alloc                    [Tracepoint event]
 kmem:kmalloc_node                        [Tracepoint event]
 kmem:kmem_cache_alloc_node               [Tracepoint event]
 kmem:kfree                               [Tracepoint event]
 kmem:kmem_cache_free                     [Tracepoint event]
 kmem:mm_page_free_direct                 [Tracepoint event]
 kmem:mm_pagevec_free                     [Tracepoint event]
 kmem:mm_page_alloc                       [Tracepoint event]
 kmem:mm_page_alloc_zone_locked           [Tracepoint event]
 kmem:mm_page_pcpu_drain                  [Tracepoint event]
 kmem:mm_page_alloc_extfrag               [Tracepoint event]
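
The full list can be quite long; to narrow it down to a single subsystem you can filter the output through grep, just as the scheduler section further below does:

 titan:~> perf list 2>&1 | grep kmem: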

Then any (or all) of the above event sources can be activated and measured. For example, the page alloc/free properties of a 'hackbench' run are:

titan:~> perf stat -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc \
  -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct ./hackbench 10
Time: 0.575
Performance counter stats for './hackbench 10':
         13857  kmem:mm_page_pcpu_drain 
         27576  kmem:mm_page_alloc      
          6025  kmem:mm_pagevec_free    
         20934  kmem:mm_page_free_direct
   0.613972165  seconds time elapsed
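
Spelling out each event separately gets tedious for larger sets. Depending on your version of perf, a shell-quoted glob may also be accepted to select every tracepoint in a subsystem at once; a sketch, assuming your build supports event globs:

 # 'kmem:*' would expand to all the kmem tracepoints listed above
 titan:~> perf stat -e 'kmem:*' ./hackbench 10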

You can observe the statistical properties as well, by using the 'repeat the workload N times' feature of perf stat:

titan:~> perf stat --repeat 5 -e kmem:mm_page_pcpu_drain \
  -e kmem:mm_page_alloc -e kmem:mm_pagevec_free \
  -e kmem:mm_page_free_direct ./hackbench 10
Time: 0.627
Time: 0.644
Time: 0.564
Time: 0.559
Time: 0.626
Performance counter stats for './hackbench 10' (5 runs):
         12920  kmem:mm_page_pcpu_drain    ( +-   3.359% )
         25035  kmem:mm_page_alloc         ( +-   3.783% )
          6104  kmem:mm_pagevec_free       ( +-   0.934% )
         18376  kmem:mm_page_free_direct   ( +-   4.941% )
   0.643954516  seconds time elapsed   ( +-   2.363% )

Furthermore, these tracepoints can be used to sample the workload as well. For example, the page allocations done by a 'git gc' run can be captured in the following way:

titan:~/git> perf record -f -e kmem:mm_page_alloc -c 1 ./git gc
Counting objects: 1148, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (450/450), done.
Writing objects: 100% (1148/1148), done.
Total 1148 (delta 690), reused 1148 (delta 690)
[ perf record: Captured and wrote 0.267 MB perf.data (~11679 samples) ]
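
The -c 1 above sets the sampling period to 1, i.e. every single mm_page_alloc event gets recorded. On busier workloads you can shrink the overhead and the perf.data size by recording only every Nth event; a sketch with an assumed period of 100:

 # record every 100th page allocation instead of all of them
 titan:~/git> perf record -f -e kmem:mm_page_alloc -c 100 ./git gc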

To check which functions generated page allocations:

titan:~/git> perf report
# Samples: 10646
#
# Overhead          Command               Shared Object
# ........  ...............  ..........................
#
   23.57%       git-repack  /lib64/libc-2.5.so        
   21.81%              git  /lib64/libc-2.5.so        
   14.59%              git  ./git                     
   11.79%       git-repack  ./git                     
    7.12%              git  /lib64/ld-2.5.so          
    3.16%       git-repack  /lib64/libpthread-2.5.so  
    2.09%       git-repack  /bin/bash                 
    1.97%               rm  /lib64/libc-2.5.so        
    1.39%               mv  /lib64/ld-2.5.so          
    1.37%               mv  /lib64/libc-2.5.so        
    1.12%       git-repack  /lib64/ld-2.5.so          
    0.95%               rm  /lib64/ld-2.5.so          
    0.90%  git-update-serv  /lib64/libc-2.5.so        
    0.73%  git-update-serv  /lib64/ld-2.5.so          
    0.68%             perf  /lib64/libpthread-2.5.so  
    0.64%       git-repack  /usr/lib64/libz.so.1.2.3  

Or, to see it at a more fine-grained level:

titan:~/git> perf report --sort comm,dso,symbol

# Samples: 10646
#
# Overhead          Command               Shared Object  Symbol
# ........  ...............  ..........................  ......
#
    9.35%       git-repack  ./git                       [.] insert_obj_hash
    9.12%              git  ./git                       [.] insert_obj_hash
    7.31%              git  /lib64/libc-2.5.so          [.] memcpy
    6.34%       git-repack  /lib64/libc-2.5.so          [.] _int_malloc
    6.24%       git-repack  /lib64/libc-2.5.so          [.] memcpy
    5.82%       git-repack  /lib64/libc-2.5.so          [.] __GI___fork
    5.47%              git  /lib64/libc-2.5.so          [.] _int_malloc
    2.99%              git  /lib64/libc-2.5.so          [.] memset

Furthermore, call-graph sampling of the page allocations can be done as well, to see precisely which call paths the allocations come from:

titan:~/git> perf record -f -g -e kmem:mm_page_alloc -c 1 ./git gc
Counting objects: 1148, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (450/450), done.
Writing objects: 100% (1148/1148), done.
Total 1148 (delta 690), reused 1148 (delta 690)
[ perf record: Captured and wrote 0.963 MB perf.data (~42069 samples) ]
titan:~/git> perf report -g
# Samples: 10686
#
# Overhead          Command               Shared Object
# ........  ...............  ..........................
#
   23.25%       git-repack  /lib64/libc-2.5.so        
               |          
               |--50.00%-- _int_free
               |          
               |--37.50%-- __GI___fork
               |          make_child
               |          
               |--12.50%-- ptmalloc_unlock_all2
               |          make_child
               |          
                --6.25%-- __GI_strcpy
   21.61%              git  /lib64/libc-2.5.so        
               |          
               |--30.00%-- __GI_read
               |          |          
               |           --83.33%-- git_config_from_file
               |                     git_config
               |                     |          
  [...]
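
Call-graph output can get large. The -g option of perf report also accepts an output style and a minimum-percentage threshold, which prunes the branches below the cutoff (a sketch, assuming your version supports the style,threshold form):

 # fractal-style call-graphs, hiding branches below 5%
 titan:~/git> perf report -g fractal,5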

Or you can observe the whole system's page allocations for 10 seconds:

titan:~/git> perf stat -a -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct sleep 10

Performance counter stats for 'sleep 10':
        171585  kmem:mm_page_pcpu_drain 
        322114  kmem:mm_page_alloc      
         73623  kmem:mm_pagevec_free    
        254115  kmem:mm_page_free_direct
  10.000591410  seconds time elapsed
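
As a sanity check, note that the two free-side events roughly balance the allocations over the interval: 73623 + 254115 = 327738 pages freed versus 322114 pages allocated.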

Or observe how much the page allocations fluctuate, via statistical analysis done over ten 1-second intervals:

titan:~/git> perf stat --repeat 10 -a -e kmem:mm_page_pcpu_drain \
  -e kmem:mm_page_alloc -e kmem:mm_pagevec_free \
  -e kmem:mm_page_free_direct sleep 1
Performance counter stats for 'sleep 1' (10 runs):
         17254  kmem:mm_page_pcpu_drain    ( +-   3.709% )
         34394  kmem:mm_page_alloc         ( +-   4.617% )
          7509  kmem:mm_pagevec_free       ( +-   4.820% )
         25653  kmem:mm_page_free_direct   ( +-   3.672% )
   1.058135029  seconds time elapsed   ( +-   3.089% )

Or you can annotate the recorded 'git gc' run on a per-symbol basis and check which instructions/source code generated the page allocations:

titan:~/git> perf annotate __GI___fork
------------------------------------------------
 Percent |      Source code & Disassembly of libc-2.5.so
------------------------------------------------
         :
         :
         :      Disassembly of section .plt:
         :      Disassembly of section .text:
         :
         :      00000031a2e95560 <__fork>:
[...]
    0.00 :        31a2e95602:   b8 38 00 00 00          mov    $0x38,%eax
    0.00 :        31a2e95607:   0f 05                   syscall 
   83.42 :        31a2e95609:   48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
    0.00 :        31a2e9560f:   0f 87 4d 01 00 00       ja     31a2e95762 <__fork+0x202>
    0.00 :        31a2e95615:   85 c0                   test   %eax,%eax

(This shows that 83.42% of __GI___fork's page allocations come from the 0x38 system call it performs; 0x38 is __NR_clone on x86-64.)
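
If you want to double-check which system call a given number corresponds to, you can grep the kernel's unistd header (the exact header path varies by distribution and architecture; this assumes a common x86-64 layout):

 # 0x38 == 56, which is __NR_clone on x86-64
 titan:~/git> grep -w 56 /usr/include/asm/unistd_64.h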

Is it worth optimizing some particular function?

Suppose you want to know whether optimizing some particular function in your program is worth the effort. For example, on the git mailing list the idea of optimizing the SHA1 hashing routine came up, and 'perf' can be used to decide whether it is. See Linus' reply [1]:

"perf report --sort comm,dso,symbol" profiling shows the following for 
'git fsck --full' on the kernel repo, using the Mozilla SHA1:
   47.69%               git  /home/torvalds/git/git     [.] moz_SHA1_Update
   22.98%               git  /lib64/libz.so.1.2.3       [.] inflate_fast
    7.32%               git  /lib64/libc-2.10.1.so      [.] __GI_memcpy
    4.66%               git  /lib64/libz.so.1.2.3       [.] inflate
    3.76%               git  /lib64/libz.so.1.2.3       [.] adler32
    2.86%               git  /lib64/libz.so.1.2.3       [.] inflate_table
    2.41%               git  /home/torvalds/git/git     [.] lookup_object
    1.31%               git  /lib64/libc-2.10.1.so      [.] _int_malloc
    0.84%               git  /home/torvalds/git/git     [.] patch_delta
    0.78%               git  [kernel]                   [k] hpet_next_event
so yeah, SHA1 performance matters.

Measuring latency

(from http://marc.info/?l=linux-kernel&m=125236192700449&w=2)

By the way, if you run the -tip kernel tree and have these options enabled:

 CONFIG_PERF_COUNTER=y
 CONFIG_EVENT_TRACING=y

and build/install the perf tool from the kernel source:

 cd tools/perf/
 make -j install

... then you can use a couple of new perfcounters features to measure scheduler latencies. For example:

 perf stat -e sched:sched_stat_wait -e task-clock ./hackbench 20

This will tell you how many times the workload got delayed by waiting for CPU time.

You can repeat the workload as well and see the statistical properties of those metrics:

aldebaran:/home/mingo> perf stat --repeat 10 -e \
             sched:sched_stat_wait:r -e task-clock ./hackbench 20
Time: 0.251
Time: 0.214
Time: 0.254
Time: 0.278
Time: 0.245
Time: 0.308
Time: 0.242
Time: 0.222
Time: 0.268
Time: 0.244
Performance counter stats for './hackbench 20' (10 runs):
         59826  sched:sched_stat_wait    #      0.026 M/sec   ( +-   5.540% )
   2280.099643  task-clock-msecs         #      7.525 CPUs    ( +-   1.620% )
   0.303013390  seconds time elapsed   ( +-   3.189% )

To see which scheduling events are available, do:

# perf list 2>&1 | grep sched:
 sched:sched_kthread_stop                   [Tracepoint event]
 sched:sched_kthread_stop_ret               [Tracepoint event]
 sched:sched_wait_task                      [Tracepoint event]
 sched:sched_wakeup                         [Tracepoint event]
 sched:sched_wakeup_new                     [Tracepoint event]
 sched:sched_switch                         [Tracepoint event]
 sched:sched_migrate_task                   [Tracepoint event]
 sched:sched_process_free                   [Tracepoint event]
 sched:sched_process_exit                   [Tracepoint event]
 sched:sched_process_wait                   [Tracepoint event]
 sched:sched_process_fork                   [Tracepoint event]
 sched:sched_signal_send                    [Tracepoint event]
 sched:sched_stat_wait                      [Tracepoint event]
 sched:sched_stat_sleep                     [Tracepoint event]
 sched:sched_stat_iowait                    [Tracepoint event]

The stat_wait/sleep/iowait events are the interesting ones for latency analysis.
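
For example, all three can be measured over a workload in one run, following the perf stat usage above:

 # how many times the workload waited for CPU, slept, or waited on IO
 perf stat -e sched:sched_stat_wait -e sched:sched_stat_sleep \
   -e sched:sched_stat_iowait ./hackbench 20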

Or, if you want to see each individual delay, along with min/max/avg statistics over them, you can do:

 perf record -e sched:sched_stat_wait:r -f -R -c 1 ./hackbench 20
 perf trace