PETSc

The library is available in several builds, compiled against HPE's MPT MPI library (2.22) with the Intel Fortran and GNU C/C++ compilers.

Several versions are available:
  • 3.11: an optimized build ('--with-debugging=0') and a debug build ('--with-debugging=1')
  • 3.13: an optimized build ('--with-debugging=0'), a debug build ('--with-debugging=1'), and a CUDA build
  • 3.16: an optimized build ('--with-debugging=0') and a debug build ('--with-debugging=1')

Several solvers and additional libraries are also available (MUMPS, SCOTCH, HDF5, ...).
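These external packages can be selected at run time through PETSc's options database; for example, to factor with MUMPS as the direct solver (a sketch only: './ex1' stands for a hypothetical PETSc executable):

```shell
# Ask PETSc's KSP/PC layer to use MUMPS for the LU factorization
mpiexec -np 4 ./ex1 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps
```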

[homer@vision build]$ module av lib/petsc
------------------------------------------------ /zfs/softs/modulefiles -------------------------------------------------
lib/petsc/3.11/dbg/hpmpt_intel20  lib/petsc/3.13/dbg/hpmpt_gcc10  lib/petsc/3.13/opt/hpmpt_intel20
lib/petsc/3.11/opt/hpmpt_gcc10    lib/petsc/3.13/opt/hpmpt_cuda   lib/petsc/3.16/opt/hpmpt_gcc10
lib/petsc/3.11/opt/hpmpt_intel20  lib/petsc/3.13/opt/hpmpt_gcc10  lib/petsc/3.16/dbg/hpmpt_gcc10

Usage

[homer@vision codes]$ module help lib/petsc/3.11/opt/hpmpt_intel20

----------- Module Specific Help for 'lib/petsc/3.11/opt/hpmpt_intel20' ---------------------------

loads the  petsc parallel environment
 optimized version, compiled with
 HPE mpt (ifort,icc,icpc) and intel mkl blacs/scalapack
  additionnal lib : HDF5, NETCDF, METIS, PARMETIS, SCOTCH, FFTW, MUMPS, ML, TETGEN

     you need to load compilers/intel/compilers_2020u2 mpi/hpempi-1.6/mpt/2.22 before!

     usage : module load compilers/intel/compilers_2020u2 mpi/hpempi-1.6/mpt/2.22 lib/petsc/3.11/opt/hpmpt_intel20

[homer@vision codes]$ module show lib/petsc/3.11/opt/hpmpt_intel20
-------------------------------------------------------------------
/zfs/softs/modulefiles/lib/petsc/3.11/opt/hpmpt_intel20:

[homer@vision ~]$ module add lib/petsc/3.11/opt/hpmpt_intel20

[homer@vision ~]$ module list
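With the module loaded, a program can be built with the MPI wrappers. A minimal sketch, assuming the module exports PETSC_DIR ('ex1.c' is a hypothetical source file):

```shell
# Compile and link against the PETSc installation pointed to by the module
mpicc -o ex1 ex1.c -I${PETSC_DIR}/include -L${PETSC_DIR}/lib -Wl,-rpath,${PETSC_DIR}/lib -lpetsc
mpiexec -np 4 ./ex1
```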

Version 3.16 (optimized)

Compilers:
  C Compiler:         /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3  -march=cascadelake
    Version: gcc (GCC) 10.2.0
  CUDA Compiler:         /usr/local/cuda/bin/nvcc  -Xcompiler -fPIC -O3 -ccbin /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicxx -std=c++17 -Wno-deprecated-gpu-targets
    Version: nvcc: NVIDIA (R) Cuda compiler driver
  C++ Compiler:         /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicxx  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3  -march=cascadelake  -fPIC -std=gnu++17
    Version: g++ (GCC) 10.2.0
  Fortran Compiler:         /opt/hpe/hpc/mpt/mpt-2.22/bin/mpif90  -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3
    Version: GNU Fortran (GCC) 10.2.0
Linkers:
  Shared linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3  -march=cascadelake
  Dynamic linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3  -march=cascadelake
  Libraries linked against:   -lrt -lquadmath -lstdc++ -ldl
Intel instruction sets utilizable by compiler:
  AVX2
BlasLapack:
  Intel MKL Version:  20200002
  Includes: -I/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64/../../include
  Library:  -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread
  uses 4 byte integers
MPI:
  Version:  3
  mpiexec: /opt/hpe/hpc/mpt/mpt-2.22/bin/mpiexec
X:
  Library:  -lX11
pthread:
zlib:
  Version:  1.2.11
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lz
hdf5:
  Version:  1.12.1
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lhdf5_hl -lhdf5
netcdf:
  Version:  4.5.0
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lnetcdf
pnetcdf:
  Version:  1.12.2
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lpnetcdf
cmake:
  Version:  3.18.3
  /zfs/softs/tools/cmake/3.18.3/bin/cmake
hypre:
  Version:  2.23.0
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lHYPRE
Chaco:
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lchaco
metis:
  Version:  5.1.0
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lmetis
SuiteSparse:
  Version:  5.10.1
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lspqr -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig
parmetis:
  Version:  4.0.3
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lparmetis
PTScotch:
  Version:  6.1.1
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr
regex:
MUMPS:
  Version:  5.4.1
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord
scalapack:
  Library:  -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -lmkl_scalapack_lp64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -Wl,--end-group -lmkl_blacs_sgimpt_lp64 -lpthread -lm
Triangle:
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -ltriangle
exodusii:
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lexoIIv2for32 -lexodus
fftw:
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lfftw3_mpi -lfftw3
mkl_sparse:
mkl_sparse_optimize:
sundials2:
  Version:  2.5.0
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -lsundials_cvode -lsundials_nvecserial -lsundials_nvecparallel
tetgen:
  Includes: -I/softs/lib/petsc/3.16.6/opt/mpt_gcc10/include
  Library:  -Wl,-rpath,/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -L/softs/lib/petsc/3.16.6/opt/mpt_gcc10/lib -ltet
  Language used to compile PETSc: C
PETSc:
  PETSC_ARCH: gcc10_mpt
  PETSC_DIR: /zfs/home/gueguenm/codes/petsc-3.16.6
  Prefix: /softs/lib/petsc/3.16.6/opt/mpt_gcc10
  Scalar type: real
  Precision: double
  Support for __float128
  Integer size: 4 bytes
  Single library: yes
  Shared libraries: yes
  Memory alignment from malloc(): 16 bytes
  Using GNU make: /usr/bin/gmake
xxx=========================================================================xxx

Version 3.11

Installation of PETSc 3.11 with many additional components, built with the Intel compilers (icpc/icc/ifort):

Two identical configurations (optimized and debug) with Intel, plus one optimized build with GCC:

[homer@vision scotch_6.0.3]$ ls /softs/lib/petsc/3.11/???/
/softs/lib/petsc/3.11/dbg/:
mpt_intel

/softs/lib/petsc/3.11/opt/:
mpt_gcc10  mpt_intel
  • Configuration for the optimized build (Intel):
  C Compiler:         /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -fPIC -wd1572
  C++ Compiler:       /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicxx  -wd1572  -fPIC
  Fortran Compiler:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpif90  -fPIC
Linkers:
  Shared linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -shared  -fPIC -wd1572
  Dynamic linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -shared  -fPIC -wd1572
make:
BLAS/LAPACK: -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_def -lpthread
MPI:
cmake:
zlib:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lz
hdf5:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5
netcdf:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lnetcdf
Chaco:
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lchaco
X:
  Library:  -lX11
pthread:
metis:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lmetis
parmetis:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lparmetis
PTScotch:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr
MUMPS:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord
scalapack:
  Library:  -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -Wl,--start-group -Wl,--end-group -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lmkl_blacs_sgimpt_lp64 -lpthread -lm
SuiteSparse:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig
ml:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lml
  Arch:
fftw:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lfftw3_mpi -lfftw3
mkl_sparse:
mkl_sparse_optimize:
sundials:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -lsundials_cvode -lsundials_nvecserial -lsundials_nvecparallel
tetgen:
  Includes: -I/zfs/softs/lib/petsc/3.11/opt/mpt_intel/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -L/zfs/softs/lib/petsc/3.11/opt/mpt_intel/lib -ltet
PETSc:
  PETSC_ARCH: opt_mpt
  PETSC_DIR: /zfs/home/homer/codes/petsc-3.11.3
  Scalar type: real
  Precision: double
  Clanguage: Cxx
  Integer size: 32
  shared libraries: enabled
  Memory alignment: 16

Version 3.13

Two optimized builds are available: a standard one, and a second with a CUDA interface:

CUDA configuration

[homer@vision petsc-3.13.5]$ module li
Currently Loaded Modulefiles:
  1) devel/cmake/3.8.2         3) cuda/10.1
  2) mpi/hpempi-1.6/mpt/2.22   4) lib/mkl/2020u2
[homer@vision petsc-3.13.5]$ export MPICXX_CXX=g++ && export MPICC_CC=gcc
#!/usr/bin/env python
import os
mkl_lib_dir = str(os.environ['MKLROOT']) + "/lib/intel64" 
mkl_inc_dir = str(os.environ['MKLROOT']) + "/include" 
os.environ["MPICC_CC"] = "gcc" 
os.environ["MPICXX_CXX"] = "g++" 
print(os.environ['MKLROOT'])
configure_options = [
  '--with-pic=1',
  '--prefix=/zfs/softs/lib/petsc/3.13/opt/mpt_cuda',
  '--COPTFLAGS="-O2"',
  '--CXXOPTFLAGS="-O2"',
  '--FOPTFLAGS="-O2"',
  '--with-cc=/opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc',
  '--with-fc=/opt/hpe/hpc/mpt/mpt-2.22/bin/mpif90',
  '--with-cxx=/opt/hpe/hpc/mpt/mpt-2.22/bin/mpicxx',
  '--with-blas-lapack-dir=' + mkl_lib_dir ,
  '--ldflags=-lirc',
  '--with-fortran=1',
  '--with-debugging=0',
  '--known-mpi-shared-libraries=1',
  '--with-papi=0',
  '--download-zlib=1',
  '--download-ctetgen=1',
  '--with-cuda=1',
  '--with-cudac=nvcc',
  '--with-openmp=1',
  '--download-fftw=/home/hom/codes/fftw-3.3.4.tar.gz',
  '--with-blacs-lib=-L' + mkl_lib_dir + ' -lmkl_blacs_sgimpt_lp64',
  '--with-blacs-include=' + mkl_inc_dir,
  '--with-scalapack-lib='+os.environ['MKLROOT']+ '/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group '+os.environ['MKLROOT']+ '/lib/intel64/libmkl_intel_lp64.a '+os.environ['MKLROOT']+ '/lib/intel64/libmkl_core.a '+os.environ['MKLROOT']+ '/lib/intel64/libmkl_sequential.a -Wl,--end-group '+os.environ['MKLROOT']+ '/lib/intel64/libmkl_blacs_sgimpt_lp64.a -lpthread -lm',
  '--with-scalapack-include=' + mkl_inc_dir,
  ]

if __name__ == '__main__':
  import sys,os
  sys.path.insert(0,os.path.abspath('config'))
  import configure
  configure.petsc_configure(configure_options)
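Assuming the script above is saved as 'configure_cuda.py' (a hypothetical name) at the top of the petsc-3.13.5 source tree, the usual PETSc build sequence follows; the configuration summary below is what configure and make report at the end:

```shell
# Run configure, build, and install into the --prefix given in the script
python configure_cuda.py
make PETSC_DIR=$PWD PETSC_ARCH=opt_mpt_gcc_cuda all
make PETSC_DIR=$PWD PETSC_ARCH=opt_mpt_gcc_cuda install
```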
Compilers:
  C Compiler:         /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden "-O2" -fopenmp
    Version: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
  CUDA Compiler:      nvcc  -Xcompiler -fPIC -I/opt/hpe/hpc/mpt/mpt-2.22/include -Wno-deprecated-gpu-targets
    Version: nvcc: NVIDIA (R) Cuda compiler driver
  C++ Compiler:       /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicxx  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden "-O2" -fopenmp  -fPIC  -std=gnu++11
    Version: g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
  Fortran Compiler:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpif90  -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument "-O2"  -fopenmp
    Version: GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
Linkers:
  Shared linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -fopenmp -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden "-O2" -fopenmp
  Dynamic linker:   /opt/hpe/hpc/mpt/mpt-2.22/bin/mpicc  -fopenmp -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden "-O2" -fopenmp
  Libraries linked against:   -lquadmath -lstdc++ -ldl
make:
  Version:  3.82
  /usr/bin/gmake
BlasLapack:
  Intel MKL Version:  20200002
  Includes: -I/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64/../../include
  Library:  -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_gnu_thread -lmkl_def -lpthread
  uses OpenMP; use export OMP_NUM_THREADS=<p> or -omp_num_threads <p> to control the number of threads
  uses 4 byte integers
MPI:
  Version:  3
  Mpiexec: /opt/hpe/hpc/mpt/mpt-2.22/bin/mpiexec
openmp:
  Version:  201107
pthread:
X:
  Library:  -lX11
zlib:
  Includes: -I/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -L/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -lz
cmake:
  Version:  3.8.2
  /zfs/softs/tools/cmake/3.8.2/bin/cmake
cuda:
  Version:  10.1
  Includes: -I/zfs/softs/cuda/10.1/include
  Library:  -Wl,-rpath,/zfs/softs/cuda/10.1/lib64 -L/zfs/softs/cuda/10.1/lib64 -lcufft -lcublas -lcudart -lcusparse -lcusolver -lcuda
regex:
scalapack:
  Library:  -Wl,-rpath,/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -L/zfs/softs/compilers/intel/2020u2/mkl/lib/intel64 -lmkl_scalapack_lp64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -Wl,--end-group -lmkl_blacs_sgimpt_lp64 -lpthread -lm
ctetgen:
  Includes: -I/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -L/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -lctetgen
fftw:
  Includes: -I/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/include
  Library:  -Wl,-rpath,/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -L/zfs/softs/lib/petsc/3.13/opt/mpt_cuda/lib -lfftw3_mpi -lfftw3
mkl_sparse:
  uses OpenMP; use export OMP_NUM_THREADS=<p> or -omp_num_threads <p> to control the number of threads
mkl_sparse_optimize:
  uses OpenMP; use export OMP_NUM_THREADS=<p> or -omp_num_threads <p> to control the number of threads
valgrind:
  Language used to compile PETSc: C
PETSc:
  PETSC_ARCH: opt_mpt_gcc_cuda
  PETSC_DIR: /zfs/home/homer/codes/petsc-3.13.5
  Scalar type: real
  Precision: double
  Support for __float128
  Integer size: 4 bytes
  shared libraries: enabled
  Memory alignment from malloc(): 16 bytes