Perf examples


First, discovery/enumeration of available counters can be done via 'perf list':

titan:~> perf list

 [...]
 kmem:kmalloc                             [Tracepoint event]
 kmem:kmem_cache_alloc                    [Tracepoint event]
 kmem:kmalloc_node                        [Tracepoint event]
 kmem:kmem_cache_alloc_node               [Tracepoint event]
 kmem:kfree                               [Tracepoint event]
 kmem:kmem_cache_free                     [Tracepoint event]
 kmem:mm_page_free_direct                 [Tracepoint event]
 kmem:mm_pagevec_free                     [Tracepoint event]
 kmem:mm_page_alloc                       [Tracepoint event]
 kmem:mm_page_alloc_zone_locked           [Tracepoint event]
 kmem:mm_page_pcpu_drain                  [Tracepoint event]
 kmem:mm_page_alloc_extfrag               [Tracepoint event]
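
Since the full listing is long, it can be narrowed down with ordinary shell tools; newer perf versions also accept a glob pattern directly (treat the second form as an assumption if your perf is older):

 perf list | grep kmem
 perf list 'kmem:*'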

Then any (or all) of the above event sources can be activated and measured. For example, the page alloc/free properties of a 'hackbench' run are:

titan:~> perf stat -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct ./hackbench 10
Time: 0.575
Performance counter stats for './hackbench 10':
         13857  kmem:mm_page_pcpu_drain 
         27576  kmem:mm_page_alloc      
          6025  kmem:mm_pagevec_free    
         20934  kmem:mm_page_free_direct
   0.613972165  seconds time elapsed
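
Newer perf versions also accept wildcards in the event specification, so all of the kmem tracepoints can be counted in one go (a hedged sketch; older versions may require listing each event explicitly, as above):

 perf stat -e 'kmem:*' ./hackbench 10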

You can observe the statistical properties of these counts as well, by using the 'repeat the workload N times' feature of perf stat:

titan:~> perf stat --repeat 5 -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct ./hackbench 10
Time: 0.627
Time: 0.644
Time: 0.564
Time: 0.559
Time: 0.626
Performance counter stats for './hackbench 10' (5 runs):
         12920  kmem:mm_page_pcpu_drain    ( +-   3.359% )
         25035  kmem:mm_page_alloc         ( +-   3.783% )
          6104  kmem:mm_pagevec_free       ( +-   0.934% )
         18376  kmem:mm_page_free_direct   ( +-   4.941% )
   0.643954516  seconds time elapsed   ( +-   2.363% )
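
The --repeat option also has a short form, -r, so the same measurement can be written more compactly (assuming the short option is available in your perf version):

 perf stat -r 5 -e kmem:mm_page_alloc -e kmem:mm_page_free_direct ./hackbench 10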

Furthermore, these tracepoints can be used to sample the workload as well. For example, the page allocations done by a 'git gc' run can be captured the following way:

titan:~/git> perf record -f -e kmem:mm_page_alloc -c 1 ./git gc
Counting objects: 1148, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (450/450), done.
Writing objects: 100% (1148/1148), done.
Total 1148 (delta 690), reused 1148 (delta 690)
[ perf record: Captured and wrote 0.267 MB perf.data (~11679 samples) ]
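
By default the samples are written to ./perf.data, which the analysis tools pick up automatically. If you want to keep several recordings around, the data file can be named explicitly; a sketch, assuming the -o/-i options of your perf version and using an illustrative file name gc.data:

 perf record -o gc.data -e kmem:mm_page_alloc -c 1 ./git gc
 perf report -i gc.data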

To check which functions generated page allocations:

titan:~/git> perf report
# Samples: 10646
#
# Overhead          Command               Shared Object
# ........  ...............  ..........................
#
   23.57%       git-repack  /lib64/libc-2.5.so        
   21.81%              git  /lib64/libc-2.5.so        
   14.59%              git  ./git                     
   11.79%       git-repack  ./git                     
    7.12%              git  /lib64/ld-2.5.so          
    3.16%       git-repack  /lib64/libpthread-2.5.so  
    2.09%       git-repack  /bin/bash                 
    1.97%               rm  /lib64/libc-2.5.so        
    1.39%               mv  /lib64/ld-2.5.so          
    1.37%               mv  /lib64/libc-2.5.so        
    1.12%       git-repack  /lib64/ld-2.5.so          
    0.95%               rm  /lib64/ld-2.5.so          
    0.90%  git-update-serv  /lib64/libc-2.5.so        
    0.73%  git-update-serv  /lib64/ld-2.5.so          
    0.68%             perf  /lib64/libpthread-2.5.so  
    0.64%       git-repack  /usr/lib64/libz.so.1.2.3  

Or, to see it at a more fine-grained level:

titan:~/git> perf report --sort comm,dso,symbol

# Samples: 10646
#
# Overhead          Command               Shared Object  Symbol
# ........  ...............  ..........................  ......
#
    9.35%       git-repack  ./git                       [.] insert_obj_hash
    9.12%              git  ./git                       [.] insert_obj_hash
    7.31%              git  /lib64/libc-2.5.so          [.] memcpy
    6.34%       git-repack  /lib64/libc-2.5.so          [.] _int_malloc
    6.24%       git-repack  /lib64/libc-2.5.so          [.] memcpy
    5.82%       git-repack  /lib64/libc-2.5.so          [.] __GI___fork
    5.47%              git  /lib64/libc-2.5.so          [.] _int_malloc
    2.99%              git  /lib64/libc-2.5.so          [.] memset
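
The sort keys can be combined freely; for instance, sorting on the symbol alone aggregates the allocations across all commands and DSOs (a sketch, not taken from the session above):

 perf report --sort symbol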

Furthermore, call-graph sampling of page allocations can be done as well, to see precisely where the page allocations come from:

titan:~/git> perf record -f -g -e kmem:mm_page_alloc -c 1 ./git gc
Counting objects: 1148, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (450/450), done.
Writing objects: 100% (1148/1148), done.
Total 1148 (delta 690), reused 1148 (delta 690)
[ perf record: Captured and wrote 0.963 MB perf.data (~42069 samples) ]
titan:~/git> perf report -g
# Samples: 10686
#
# Overhead          Command               Shared Object
# ........  ...............  ..........................
#
   23.25%       git-repack  /lib64/libc-2.5.so        
               |          
               |--50.00%-- _int_free
               |          
               |--37.50%-- __GI___fork
               |          make_child
               |          
               |--12.50%-- ptmalloc_unlock_all2
               |          make_child
               |          
                --6.25%-- __GI_strcpy
   21.61%              git  /lib64/libc-2.5.so        
               |          
               |--30.00%-- __GI_read
               |          |          
               |           --83.33%-- git_config_from_file
               |                     git_config
               |                     |          
  [...]
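
The call-graph output can be combined with the finer-grained sort keys shown earlier, for example (again a sketch, not output from the session above):

 perf report -g --sort comm,dso,symbol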

Or you can observe the whole system's page allocations for 10 seconds:

titan:~/git> perf stat -a -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct sleep 10

Performance counter stats for 'sleep 10':
        171585  kmem:mm_page_pcpu_drain 
        322114  kmem:mm_page_alloc      
         73623  kmem:mm_pagevec_free    
        254115  kmem:mm_page_free_direct
  10.000591410  seconds time elapsed
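
System-wide counting can also be restricted to particular CPUs; assuming your perf version supports the -C option of perf stat, something like the following would count page allocations on CPU 0 only:

 perf stat -a -C 0 -e kmem:mm_page_alloc sleep 10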

Or observe how much the page allocations fluctuate, via statistical analysis done over ten 1-second intervals:

titan:~/git> perf stat --repeat 10 -a -e kmem:mm_page_pcpu_drain -e 
  kmem:mm_page_alloc -e kmem:mm_pagevec_free -e 
  kmem:mm_page_free_direct sleep 1
Performance counter stats for 'sleep 1' (10 runs):
         17254  kmem:mm_page_pcpu_drain    ( +-   3.709% )
         34394  kmem:mm_page_alloc         ( +-   4.617% )
          7509  kmem:mm_pagevec_free       ( +-   4.820% )
         25653  kmem:mm_page_free_direct   ( +-   3.672% )
   1.058135029  seconds time elapsed   ( +-   3.089% )

Or you can annotate the recorded 'git gc' run on a per-symbol basis and check which instructions/source code generated page allocations:

titan:~/git> perf annotate __GI___fork
------------------------------------------------
 Percent |      Source code & Disassembly of libc-2.5.so
------------------------------------------------
         :
         :
         :      Disassembly of section .plt:
         :      Disassembly of section .text:
         :
         :      00000031a2e95560 <__fork>:
[...]
    0.00 :        31a2e95602:   b8 38 00 00 00          mov    $0x38,%eax
    0.00 :        31a2e95607:   0f 05                   syscall 
   83.42 :        31a2e95609:   48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
    0.00 :        31a2e9560f:   0f 87 4d 01 00 00       ja     31a2e95762 <__fork+0x202>
    0.00 :        31a2e95615:   85 c0                   test   %eax,%eax

(This shows that 83.42% of __GI___fork's page allocations come from the system call it performs: syscall number 0x38, i.e. clone on x86-64.)
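
Source-level annotation is possible as well, provided the DSO was built with debug information; assuming the --print-line (-l) option of your perf version, matching source lines can be interleaved with the disassembly:

 perf annotate -l __GI___fork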
