
James Reinders. INTEL, United States


KEYNOTE: Parallel Programming for C and C++ done right.
Please note that the slides are ALL BLACK (a nod from the speaker to New Zealand's win in the 2011 Rugby World Cup)

Multicore World

July 18, 2012

Transcript

  1. © Intel 2012, All Rights Reserved

     Parallel Programming for C and C++ done right (a work in progress)
     James Reinders, Intel Corp.
  2. 1957 FORTRAN

           WRITE (6,7)
         7 FORMAT(13H HELLO, WORLD)
           STOP
           END

     •  Before microprocessors
  3. Fortran

     •  First compiler: 1957
        –  Some of the influences on design:
           •  Punch-cards
           •  Optimization (users were reluctant to switch from assembly language)
           •  Manipulation of sense switches and sense lights
           •  Mathematical exceptions (overflow, divide check)
           •  Tape operations (read, write, rewind, backspace)

     Photos: Wikimedia Commons (http://commons.wikimedia.org)
  4. Fortran

     •  Key adaptations that came later:
        –  Subroutines and Functions (Fortran II, 1958)
        –  File I/O, characters, strings (Fortran 77, 1978)
        –  Recursion (Fortran 90, 1991) [common non-standard extension available in many Fortran-77 compilers]
        –  Free-form input, not based on the 80-column punched card (Fortran 90, 1991)
        –  Variable names up to 31 characters instead of 6 (Fortran 90, 1991)
        –  Inline comments (Fortran 90, 1991)
        –  Array Notations (Fortran 90, 1991)
        –  Operator overloading (Fortran 90, 1991)
        –  Dynamic memory allocation (Fortran 90, 1991)
        –  FORALL (Fortran 95, 1995)
        –  OOP (Fortran 2003, 2003)
        –  DO CONCURRENT (Fortran 2008, 2010)
        –  Co-Array Fortran (Fortran 2008, 2010)
  5. Array Notation (Fortran 90 [1991])

     print *, a(:, 3)    ! third column
     print *, a(n, :)    ! last row
     print *, a(:3, :3)  ! leading 3-by-3 submatrix

     This is so important, I’ll come back to it later.
  6. Coarray Fortran (Fortran 2008 [2010])

     •  A standard, explicit notation for data decomposition
     •  Shared memory and distributed memory systems

     Sum in Fortran, using the co-array feature:

     REAL SUM[*]
     CALL SYNC_ALL( WAIT=1 )
     DO IMG= 2,NUM_IMAGES()
       IF (IMG==THIS_IMAGE()) THEN
         SUM = SUM + SUM[IMG-1]
       ENDIF
       CALL SYNC_ALL( WAIT=IMG )
     ENDDO
  7. OpenMP* (Open Multi-Processing)

     •  Standard used by many parallel applications
        –  Supported by every major compiler for Fortran and C
     •  OpenMP 4.0 in the works

     !$omp parallel do
     do i=1,10
       A(i) = B(i) * C(i)
     enddo
     !$omp end parallel do
  8. DO CONCURRENT (Fortran 2008 [2010])

     •  OpenMP* is a standard used by many parallel applications
        –  Supported by every major compiler for Fortran, C, and C++
     •  OpenMP 4.0 in the works

     do concurrent (i=1:m)
       a(k+i) = a(k+i) + factor*a(l+i)
     end do
  9. Fortran got “right”

     •  avoids evil pointers
        –  helps optimizations
     •  supports arrays directly
        –  helps vectorization
     •  straightforward usage (no templates, etc.)
        –  helps mask composability issues with OpenMP
     •  Still: C and C++ are needed in the universe, and they need help (more)
  10. but...

      I promised to talk about C and C++
  11. 1972 C

      #include <stdio.h>
      int main(int argc, char *argv[])
      {
          printf("Hello, World!\n");
          return 0;
      }

      •  Before microprocessors
  12. 1983 C++

      #include <iostream>
      using namespace std;
      int main(int argc, const char *argv[]) {
          cout << "Hello, World!" << endl;
          return 0;
      }

      •  80286 (16 bits)
  13. in order to fix

      •  We need to jointly understand the “problem”
  14. I will motivate more than show the solution

      •  I’m learning that SHOWING SOLUTIONS is useless if the PROBLEM is not FELT
      •  Our solutions are heavily adopted by those who were in pain already!
  15. •  C
         –  early key features: the “register” keyword, now out of use
         –  “volatile” fading in usage
         –  added: stronger typing (ANSI C, 1989)
         –  C11
         –  OpenMP* (1996)
         –  Cilk™ Plus (2010)
      •  C++
         –  Object oriented
         –  Intel® Threading Building Blocks (2006)
         –  C++11
         –  Cilk™ Plus (2010)

      Photo: Wikimedia Commons (http://commons.wikimedia.org)
  16. C++11 (some applies to C11 also)

      Core language runtime performance enhancements:
      •  Rvalue references and move constructors
      •  Generalized constant expressions
      •  Modification to the definition of plain old data

      Core language build time performance enhancements:
      •  Extern template

      Core language usability enhancements:
      •  Initializer lists
      •  Uniform initialization
      •  Type inference
      •  Range-based for-loop
      •  Lambda functions and expressions (anonymous functions)
      •  Alternative function syntax
      •  Object construction improvement
      •  Explicit overrides and final
      •  Null pointer constant
      •  Strongly typed enumerations
      •  Right angle bracket
      •  Explicit conversion operators
      •  Alias templates
      •  Unrestricted unions

      Core language functionality improvements:
      •  Variadic templates
      •  New string literals
      •  User-defined literals
      •  Multithreading memory model (defining visibility of stores)
      •  Thread-local storage
      •  Explicitly defaulted and deleted special member functions
      •  Type long long int
      •  Static assertions
      •  Allow sizeof to work on members of classes without an explicit object
      •  Control and query object alignment
      •  Allow garbage collected implementations

      C++ standard library changes:
      •  Upgrades to standard library components
      •  Threading facilities (futures & promises, async)
      •  Tuple types
      •  Hash tables
      •  Regular expressions
      •  General-purpose smart pointers
      •  Extensible random number facility
      •  Wrapper reference
      •  Polymorphic wrappers for function objects
      •  Type traits for metaprogramming
      •  Uniform method for computing the return type of function objects

      Some material adopted from wikipedia.org
  17. What about futures & promises?

      future: think of it as the consumer end of a 1-element producer/consumer queue
      •  A future can be created only from an existing promise object.
      •  Producer computes the value: calls set_value() on the promise.
      •  Consumer needs the future value: it calls get() on the future.
      •  Consumer blocks, waiting on the producer, if the producer has not yet called set_value().
      •  Futures can also be obtained via the async() function.

      double foo(double arg);    // consider a normal function

      // You can execute foo(x) asynchronously by calling
      std::future<double> result = std::async(foo, x);
      …
      double val = result.get();
  18. What about futures & promises?

      The problems with the future/async model are both linguistic and performance-related.

      The key flaw is that the whole notion of scalability with futures was soundly refuted in the seminal 1993 paper:
      Space-efficient scheduling of multithreaded computations, by Blumofe and Leiserson.

      This is the paper that motivated the development of Cilk in the first place.
  19. What about futures & promises?

      The linguistic problems are more subtle.

      The following two statements do roughly the same thing:

      std::future<double> result = std::async(foo, x);
      double result = cilk_spawn foo(x);

      The first statement looks like a call to async().
      The second statement looks like a call to foo().
  20. What about futures & promises?

      Semantically, consider the following:

      std::string s("hello");
      int bar(const std::string& s);

      std::future<int> result = std::async(bar, s + " world");

      The above statement is intended to pass "hello world" to bar and run it asynchronously.

      The problem is that s + " world" is a temporary object that gets destroyed as soon as the statement completes.
  22. What about futures & promises?

      std::string s("hello");
      int bar(const std::string& s);

      std::future<int> result = std::async(bar, s + " world");

      Boosters of std::async will counter that all you need is to add a lambda:

      std::future<int> result = std::async([&]{ return bar(s + " world"); });

      Without the lambda it is a race condition that should not exist in a linguistically sound parallel construct, but it is pretty much unavoidable in a library-only specification.
  23. Let’s fix C and C++

      and the solution is NOT OpenMP
      and the solution is NOT CUDA
      and the solution is NOT OpenCL
  24. Let’s fix C and C++

      and the solution is NOT OpenMP
      and the solution is NOT CUDA
      and the solution is NOT OpenCL

      for starters, none of them are “composable”
  25. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Specialization
  26. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Specialization

      Let’s think about HARDWARE TRENDS.
  27. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Specialization

      Scale: cores, execution units
  28. Transactional Synchronization Extensions

      •  a beautiful example of HARDWARE making life “simple again”, helping CONCURRENCY / PARALLEL PROGRAMMING
      •  HLE is a hint inserted in front of a LOCK operation to indicate a region is a candidate for lock elision
         –  XACQUIRE (0xF2) and XRELEASE (0xF3) prefixes
         –  Don’t actually acquire the lock, but execute the region speculatively
         –  Hardware buffers loads and stores, checkpoints registers
         –  Hardware attempts to commit atomically without locks
         –  If it cannot do so without locks: restart, execute non-speculatively
      •  RTM is three new instructions (XBEGIN, XEND, XABORT)
         –  Similar operation to HLE (except no locks, new ISA)
         –  If it cannot commit atomically, go to the handler indicated by XBEGIN
         –  Provides software additional capabilities over HLE
  29. Hardware Threads & Cores

      [diagram: hardware thread and core counts, including Knights Corner]
  30. Intel: Shattering Barriers. More than one sustained TeraFlop/sec

      1996: First system to sustain 1 TF/s (ASCI Red: 1 TeraFlop/sec, December 1996, with 2/3 of the system built; 7264 Intel® Pentium Pro processors; OS: Cougar; 72 cabinets). The full system reached 1.3 TeraFlop/sec, later upgraded to 3.1 TeraFlop/sec with 9298 Intel® Pentium II Xeon processors.

      2011: First chip to sustain 1 TF/s (Knights Corner: 1 TeraFlop/sec, November 2011; one 22nm chip; OS: Linux*; one PCI Express slot).

      Source and Photo: http://en.wikipedia.org/wiki/ASCI_Red
      *  Other names and brands may be claimed as the property of others.
  32. www.threadingbuildingblocks.org

      ü  Most popular C++ abstraction
      ü  Windows*   ü  Linux*   ü  Mac OS* X   ü  Xbox 360   ü  Solaris*   ü  FreeBSD*
      ü  Intel processors   ü  AMD processors   ü  SPARC processors   ü  IBM processors
      ü  open source
      ü  standards committee submissions

      The most used method to parallelize C++ programs
      *  Other names and brands may be claimed as the property of others.
  33. Intel® Threading Building Blocks

      Generic Parallel Algorithms: an efficient, scalable way to exploit the power of multi-core without having to start from scratch.
      Task scheduler: the engine that empowers parallel algorithms; employs task-stealing to maximize concurrency.
      Concurrent Containers: common idioms for concurrent access; a scalable alternative to a serial container with a lock around it.
      Synchronization Primitives: user-level and OS wrappers for mutual exclusion, ranging from atomic operations to several flavors of mutexes and condition variables.
      Memory Allocation: per-thread scalable memory manager and false-sharing-free allocators.
      Threads: OS API wrappers.
      Thread Local Storage: scalable implementation of thread-local data that supports an unlimited number of TLS slots.
      Miscellaneous: thread-safe timers.
  34. Scale Forward

      Intel® Threading Building Blocks 4.0 scales exceptionally well
  35. Intel® TBB Flow Graph: Components

      New feature as of the TBB 4.0 release (2011)
      •  Graph object (the graph handle)
         –  Contains a pointer to the root task
         –  Owns tasks created on behalf of the graph
         –  Users can wait for the completion of all tasks of the graph
      •  Graph nodes
         –  Implement sender and/or receiver interfaces
         –  Nodes manage messages and/or execute function objects
      •  Edges
         –  Connect predecessors to successors
  36. threadingbuildingblocks.org

      TBB for C++ scaling: the most popular solution for C++ parallel programming
  37. threadingbuildingblocks.org / cilkplus.org

      TBB has a “sister”, Cilk™ Plus:
      •  Help for C programmers
      •  Involves the compiler
      •  Vectorization support
  38. Scale Efficiently

      Intel® Cilk™ Plus: three keywords to go parallel

      cilk_for (int i=0; i<n; ++i) {
          Foo(a[i]);
      }

      Parallel loops made easy. Open specification at cilkplus.org
  39. Scale Efficiently

      Intel® Cilk™ Plus: three keywords to go parallel

      cilk_for (int i=0; i<n; ++i) {
          Foo(a[i]);
      }

      Turn serial code into parallel code:

      // serial
      int fib(int n)
      {
          if (n <= 2)
              return n;
          else {
              int x, y;
              x = fib(n-1);
              y = fib(n-2);
              return x+y;
          }
      }

      // parallel
      int fib(int n)
      {
          if (n <= 2)
              return n;
          else {
              int x, y;
              x = cilk_spawn fib(n-1);
              y = fib(n-2);
              cilk_sync;
              return x+y;
          }
      }

      Parallel loops made easy. Open specification at cilkplus.org
  40. www.cilkplus.org

      ü  Windows*   ü  Linux*   ü  Mac OS* X
      ü  gcc: experimental branch
      ü  open specification
      ü  other compiler vendors reviewing
      ü  standards committee submissions
      *  Other names and brands may be claimed as the property of others.
  41. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Tap specialization

      Scale: wider vector instructions, warps…
  42. Cray Supercomputers

      Photo: Wikimedia Commons (http://commons.wikimedia.org)
  43. Auto Vectorization: Useful, but limited by the language

      void v_add (float *c, float *a, float *b)
      {
          for (int i=0; i<= MAX; i++)
              c[i]=a[i]+b[i];
      }

      •  The C/C++ language implies that vectorizing this loop is “illegal”
      •  Some code can be re-written in a way that the compiler can vectorize
      •  Hard to learn
      •  Impossible to completely automate

      Consider a solution: allow the programmer to express operations without unintended serial execution, using a new syntax.
  44. What went wrong?

      •  Arrays are not really in the language
      •  Pointers are. Evil pointers!
  46. Cilk™ Plus solution: Array Notations → Vector Operations

      <array base> [<lower bound>:<length>[:<stride>]]+

      A[:]       // All of vector A
      B[2:6]     // Elements 2 to 7 of vector B
      C[:][5]    // Column 5 of matrix C
      D[0:3:2]   // Elements 0, 2, 4 of vector D

      if (a[:] > b[:]) {
          c[:] = d[:] * e[:];
      } else {
          c[:] = d[:] * 2;
      }

      A simple and elegant solution: a language construct for vector-level parallelism.
  47. Better than Intrinsics Code

      Intrinsics:
      for(j = 0; j < num_pnt-3; j+=4) {
          v_specularN = _mm_mul_ps(v_specularN, v_cosalpha);
          cmp = _mm_cmpgt_ps(v_cosalpha, v_zero);
          v_specularN = _mm_and_ps(v_specularN, cmp);
          v_intensityR = _mm_add_ps(v_intensityR, v_specularN);
          v_intensityG = _mm_add_ps(v_intensityG, v_specularN);
          v_intensityB = _mm_add_ps(v_intensityB, v_specularN);
      }
      // compute the leftovers (use scalar C code)
      for(; j < num_pnt; j++){
          if (cosalpha > 0.0){
              ... 7 more lines !!! ...

      Cilk™ Plus:
      v_specularN[0:num_pnt] *= pow(cosalpha[0:num_pnt], phongconst);
      if (cosalpha[0:num_pnt] > 0.0){
          specularN[0:num_pnt] = specular;
          intensityR[0:num_pnt] += specularN[0:num_pnt];
          intensityG[0:num_pnt] += specularN[0:num_pnt];
          intensityB[0:num_pnt] += specularN[0:num_pnt];
      }
  48. Additional Cilk Plus helpful feature: Elemental Functions

      The secret sauce: __declspec (vector)

      // Use a function to describe the operation on a single element.
      // The compiler generates vector version(s) of the function:
      // it can yield a vector of results as fast as a single result.
      __declspec (vector) double option_price_call_black_scholes(
                          double S,          // spot (underlying) price
                          double K,          // strike (exercise) price
                          double r,          // interest rate
                          double sigma,      // volatility
                          double time)       // time to maturity
      {
          double time_sqrt = sqrt(time);
          double d1 = (log(S/K)+r*time)/(sigma*time_sqrt)+0.5*sigma*time_sqrt;
          double d2 = d1-(sigma*time_sqrt);
          return S*N(d1) - K*exp(-r*time)*N(d2);
      }

      // Invoke the function in a data-parallel context:
      // calculations for call-options
      cilk_for (int i=0; i<NUM_OPTIONS; i++) {
          call[i] = option_price_call_black_scholes(S[i], K[i], r, sigma, time[i]);
      }
  49. Options for using Elemental Functions

      Construct: Standard for loop
        Example:   for (j = 0; j < N; j++) { a[j] = my_ef(b[j]); }
        Semantics: single thread, auto vectorization

      Construct: #pragma simd
        Example:   #pragma simd
                   for (j = 0; j < N; j++) { a[j] = my_ef(b[j]); }
        Semantics: single thread, guaranteed to use the vector version

      Construct: cilk_for loop
        Example:   cilk_for (j = 0; j < N; j++) { a[j] = my_ef(b[j]); }
        Semantics: both vectorization and concurrent execution

      Construct: Array notation
        Example:   a[:] = my_ef(b[:]);
        Semantics: vectorization; concurrency allowed (but not yet implemented in compilers)

      The execution of the elemental functions is serial with respect to the code that follows the invocation.
  50. #pragma simd: forcing “auto” vectorization

      // vectorizable outer loop
      #pragma simd
      for (i=0; i<n; i++) {
          complex<float> c = a[i];
          complex<float> z = c;
          int j = 0;
          while ((j < 255)
                 && (abs(z) < limit)) {
              z = z*z + c;
              j++;
          };
          color[i] = j;
      }

      •  Combines standard C/C++ syntax with vector semantics.
      •  This program results in good utilization of vector-level parallelism and provides measurable speedups.
      •  Arguably out of reach of auto vectorizers.
      •  Outlined, the loop body could be written as an elemental function; yet inline code is normally more efficient.
  51. TBB and Cilk Plus make a great combination

      •  Vector parallelism
         –  Cilk Plus has two syntaxes for vector parallelism:
            •  Array Notation
            •  #pragma simd
         –  TBB relies on things outside TBB for vector parallelism.
            •  TBB + #pragma simd is an attractive combination
      •  Thread parallelism
         –  Cilk Plus is a strict fork-join language
            •  The straitjacket enables strong guarantees about space.
         –  TBB permits arbitrary task graphs
            •  “Flexibility provides hanging rope.”
  52. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Specialization

      Scale: GPUs, A/D, cameras, co-processors…
  53. Problem Statement

      •  Parallel Hardware
         –  Scale
         –  Vectorize
         –  Specialization

      We know how to do “scale” and “vectorize”, so let’s do that.
      Tapping “specialization” is new, unproven, and needs years of pain before we standardize.
  54. Moral of the story

      •  Composability needs to be KING
      •  Simplicity is ESSENTIAL
      •  Harness parallelism in hardware by:
         –  Scaling
         –  Vectorization
         –  Specialization
  55. Resources to help in the quest to “Think Parallel”
  56. Structured Parallel Programming using TBB and Cilk™ Plus

      •  Intel Threading Building Blocks (TBB)
      •  Most popular C++ parallel programming abstraction
      •  Book available in American English

      www.parallelbook.com
  57. Structured Parallel Programming using TBB and Cilk™ Plus

      •  Teaching structured parallel programming
      •  Designed for programmers, not computer architects
      •  Teaches the best methods (known as patterns)

      Coming: July 2012
      www.parallelbook.com
  58. •  Use more Fortran

      and

      •  Let’s work together and fix up C and C++ for the 21st century

      intel.com/go/parallel
  59. Legal Disclaimer & Optimization Notice

      INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

      Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, reference www.intel.com/software/products.

      Copyright ©, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.

      Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804