The Dakota toolkit provides a flexible, extensible interface between simulation codes and its iterative systems analysis methods, which include:

  • optimization with gradient-based and nongradient-based methods;
  • uncertainty quantification with sampling, reliability, stochastic expansion, and epistemic methods;
  • parameter estimation using nonlinear least squares (deterministic) or Bayesian inference (stochastic); and
  • sensitivity/variance analysis with design of experiments and parameter study methods.

These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty.
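A Dakota study is driven by a single keyword-block input file (such as the `hybrid_strat_optpp.in` used later on this page). The sketch below is only illustrative: the block names follow Dakota's input grammar, but the bounds, the output file name, and the `driver.sh` analysis driver are made-up placeholders.

```
# Hypothetical minimal Dakota input: a 2-variable multidimensional
# parameter study evaluated through an external driver script.
environment
  tabular_data
    tabular_data_file = 'study.dat'

method
  multidim_parameter_study
    partitions = 4 4

variables
  continuous_design = 2
    lower_bounds  -2.0 -2.0
    upper_bounds   2.0  2.0
    descriptors   'x1' 'x2'

interface
  fork
    analysis_drivers = 'driver.sh'

responses
  response_functions = 1
  no_gradients
  no_hessians
```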

  • Provided module: codes/dakota/6.14
  • Show information about the dakota module:
    [homer@vision]$ module show codes/dakota/6.14
    module-whatis   {Dakota : a parallel code for optimization and uncertainty quantification.}
    prereq          mpi/hpempi-1.6/mpt/2.22
    setenv          DAKOTA_ROOT /softs/codes/dakota/6.14/mpt_gcc10
    prepend-path    INCLUDE /softs/codes/dakota/6.14/mpt_gcc10/include
    prepend-path    LD_LIBRARY_PATH /softs/codes/dakota/6.14/mpt_gcc10/lib
    prepend-path    CPATH /softs/codes/dakota/6.14/mpt_gcc10/include
    prepend-path    PATH /softs/codes/dakota/6.14/mpt_gcc10/bin
    prepend-path    PATH /softs/codes/dakota/6.14/mpt_gcc10/test
  • Basic module usage (show options and version):
    [homer@vision]$ module add  mpi/hpempi-1.6/mpt/ codes/dakota/6.14
    [homer@vision optim]$ dakota --help
    usage: dakota [options and <args>]
        -help (Print this summary)
        -version (Print DAKOTA version number)
        -input <$val> (REQUIRED DAKOTA input file $val)
        -preproc [$val] (Pre-process input file with pyprepro or tool $val)
        -output <$val> (Redirect DAKOTA standard output to file $val)
        -error <$val> (Redirect DAKOTA standard error to file $val)
        -parser <$val> (Parsing technology: nidr[strict][:dumpfile])
        -no_input_echo (Do not echo DAKOTA input file)
        -check (Perform input checks)
        -pre_run [$val] (Perform pre-run (variables generation) phase)
        -run [$val] (Perform run (model evaluation) phase)
        -post_run [$val] (Perform post-run (final results) phase)
        -read_restart [$val] (Read an existing DAKOTA restart file $val)
        -stop_restart <$val> (Stop restart file processing at evaluation $val)
        -write_restart [$val] (Write a new DAKOTA restart file $val)
    [homer@vision optim]$ dakota -version
    Dakota version 6.14 released May 17 2021.
    Repository revision 382229e53 (2021-05-12) built Nov 22 2021 14:02:22.
  • Script example: we request 6 chunks so that Dakota can distribute 6 concurrent evaluations to the solver. Dakota itself is launched with MPI to perform the distribution, and each solver evaluation uses 24 OpenMP threads.
    #!/bin/bash
    #PBS -N HS2LEvpFFT
    #PBS -l select=6:ncpus=24:mpiprocs=1:ompthreads=24
    #PBS -l walltime=48:00:00
    #PBS -j oe
    #PBS -q calcul_big
    module purge
    module add python/py3.8
    module add  mpi/hpempi-1.6/mpt/ codes/dakota/6.14
    cd ${PBS_O_WORKDIR}
    export MPIPROCS=$(wc -l < "$PBS_NODEFILE")
    export MPI_OPENMP_INTEROP=enable
    mpiexec_mpt -v  -n ${MPIPROCS} dakota -i hybrid_strat_optpp.in -write_restart hs_1.rst > hybrid_strat_optpp.log.${PBS_JOBID} 2>&1
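The chunk/thread arithmetic behind the `select` line can be sanity-checked in the shell (the variable names below are ours, not PBS keywords):

```shell
# One MPI rank per chunk carries one Dakota evaluation slot;
# each evaluation then runs the solver with 24 OpenMP threads.
chunks=6            # select=6
ranks_per_chunk=1   # mpiprocs=1
threads_per_rank=24 # ompthreads=24
echo "MPI ranks:   $((chunks * ranks_per_chunk))"   # 6, matches MPIPROCS
echo "Total cores: $((chunks * threads_per_rank))"  # 144 cores reserved
```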
  • The corresponding output will look like:
    [homer@vision optim]$ more hybrid_strat_optpp.log.18183.vision
    MPT: mpiexec: MPI_BATCH_CMD=/opt/pbs/bin/pbs_attach -j 18183.vision
    MPT: mpiexec: PARAMS
    MPT:     vision 6
    MPT:         dakota -i hybrid_strat_optpp.in -write_restart hs_1.rst
    MPT: libxmpi.so 'HPE MPT 2.22  03/31/20 16:13:52'
    MPT: libmpi_mt.so  'HPE MPT 2.22  03/31/20 16:12:29'
        MPT Environmental Settings
    MPT: MPI_OPENMP_INTEROP (default: disabled) : enabled
    MPT: MPI_VERBOSE (default: disabled) : enabled
    Dakota version 6.14 released May 17 2021.
    Repository revision 382229e53 (2021-05-12) built Nov 22 2021 14:02:22.
    Running MPI Dakota executable in parallel on 6 processors.
    Start time: Mon Nov 22 17:03:17 2021
    Begin DAKOTA input file
    DAKOTA parallel configuration:

    Level                      num_servers    procs_per_server    partition
    -----                      -----------    ----------------    ---------
    concurrent evaluations          3                 1           ded. master
    concurrent analyses             2                 1           peer
    multiprocessor analysis         1                N/A          N/A

    Total parallelism levels =   2 (2 dakota, 0 analysis)
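Once `hs_1.rst` has been written, a later job can resume the study instead of recomputing finished evaluations, using the `-read_restart`/`-write_restart` options listed in the help output above. This is only a sketch: `hs_2.rst` is an arbitrary name chosen for the continuation's new restart file.

```shell
# Resume the study: evaluations already recorded in hs_1.rst are
# replayed from the restart file rather than re-executed, and new
# evaluations are appended to hs_2.rst.
mpiexec_mpt -n ${MPIPROCS} dakota -i hybrid_strat_optpp.in \
    -read_restart hs_1.rst -write_restart hs_2.rst
```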