Dakota

Ref: https://dakota.sandia.gov/

Summary

The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and uncertainty quantification (UQ). Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models. The Dakota toolkit provides a flexible, extensible interface between simulation codes and its iterative systems analysis methods, which include:

  • optimization with gradient and nongradient-based methods;
  • uncertainty quantification with sampling, reliability, stochastic expansion, and epistemic methods;
  • parameter estimation using nonlinear least squares (deterministic) or Bayesian inference (stochastic); and
  • sensitivity/variance analysis with design of experiments and parameter study methods.

These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty.

Version 6.11

Version 6.11 was installed from source, compiled with the MPI interface (MPT) and the Intel 18 compiler (MKL libraries for BLAS/LAPACK, HDF5 support); it is located in /sw/codes/dakota/6.11/intel_mpt/.

Usage

[homer@thor]$ module help codes/dakota/6.11

----------- Module Specific Help for 'codes/dakota/6.11' ----------

    modules - loads the Dakota environment

    Compiled with MPT and with the Intel 18 compiler.

    This adds /sw/codes/dakota/6.11/intel_mpt/* to several of the
    environment variables.

    Version 6.11

    See https://forge.univ-poitiers.fr/projects/mesocentre-spin-git/wiki/Spin_dakota

[homer@thor]$ module add codes/dakota/6.11
[homer@thor]$ mpirun -n 1 dakota -v -h
usage: dakota [options and <args>]
    -help (Print this summary)
    -version (Print DAKOTA version number)
    -input <$val> (REQUIRED DAKOTA input file $val)
    -preproc [$val] (Pre-process input file with pyprepro or tool $val)
    -output <$val> (Redirect DAKOTA standard output to file $val)
    -error <$val> (Redirect DAKOTA standard error to file $val)
    -parser <$val> (Parsing technology: nidr[strict][:dumpfile])
    -no_input_echo (Do not echo DAKOTA input file)
    -check (Perform input checks)
    -pre_run [$val] (Perform pre-run (variables generation) phase)
    -run [$val] (Perform run (model evaluation) phase)
    -post_run [$val] (Perform post-run (final results) phase)
    -read_restart [$val] (Read an existing DAKOTA restart file $val)
    -stop_restart <$val> (Stop restart file processing at evaluation $val)
    -write_restart [$val] (Write a new DAKOTA restart file $val)

Dakota version 6.11 released Nov 15 2019.
Repository revision c3efb375 (2019-11-07) built Mar 26 2020 14:23:58.
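
As a quick end-to-end check, a study can be launched by pointing -input at a Dakota input file. The deck below is a minimal sketch built on Dakota's classic Rosenbrock example; the file names rosenbrock.in, rosenbrock.dat, and rosenbrock.out are placeholders, not site conventions.

    # rosenbrock.in -- vector parameter study on the built-in Rosenbrock driver
    environment
      tabular_data
        tabular_data_file = 'rosenbrock.dat'

    method
      vector_parameter_study
        final_point = 1.1  1.3
        num_steps = 10

    variables
      continuous_design = 2
        initial_point = -0.3  0.2
        descriptors   = 'x1' 'x2'

    interface
      direct
        analysis_drivers = 'rosenbrock'

    responses
      objective_functions = 1
      no_gradients
      no_hessians

[homer@thor]$ mpirun -n 1 dakota -input rosenbrock.in -output rosenbrock.out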

Version 6.6

Compilation

Version 6.6 was installed from source, compiled with the MPI interface and MKL libraries for BLAS/LAPACK; it is located in /sw/codes/dakota/6.6/gnu/.

[homer@thor rebuild]$ module li
Currently Loaded Modulefiles:
  1) mpt/2.12               2) lib/boost/1.55         3) intel-cmkl-16/16.0.1
[homer@thor rebuild]$ cmake \
    -DCMAKE_INSTALL_PREFIX=/sw/codes/dakota/6.6/gnu \
    -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx -DCMAKE_Fortran_COMPILER=mpif90 \
    -DDAKOTA_HAVE_MPI=ON \
    -DMKL_LIBRARY_DIRS="$MKLROOT/lib/intel64" -DMKL_INCLUDE_DIRS:FILEPATH="${MKLROOT}/include" \
    -DTPL_ENABLE_MKL=ON -DTPL_ENABLE_BLAS=ON -DTPL_ENABLE_LAPACK=ON \
    -DTPL_BLAS_LIBRARIES='-mkl' -DTPL_LAPACK_LIBRARIES='-mkl' -DTPL_MKL_LIBRARIES="-mkl" \
    -DBLAS_LIBS="$MKLROOT/lib/intel64/libmkl_rt.so" -DLAPACK_LIBS="$MKLROOT/lib/intel64/libmkl_rt.so" \
    -DHAVE_OPTPP=ON \
    ..
[homer@thor rebuild]$ make && make install
[homer@thor rebuild]$ PATH=/sw/codes/dakota/6.6/gnu/bin:/sw/codes/dakota/6.6/gnu/test:$PATH
[homer@thor rebuild]$ mpirun -np 1 dakota -v
Dakota version 6.6 released May 15 2017.
Repository revision dde6536 (2017-05-09) built Mar 28 2018 16:37:34.

Usage

[homer@thor]$ module help codes/dakota/6.6

----------- Module Specific Help for 'codes/dakota/6.6' -----------

    modules - loads the Dakota environment

    Compiled with MPT, and linked with Intel MKL.

    This adds /sw/codes/dakota/6.6/gnu/* to several of the
    environment variables.

    Version 6.6

    See https://forge.univ-poitiers.fr/projects/mesocentre-spin-git/wiki/Spin_dakota

[homer@thor]$ module add codes/dakota/6.6
[homer@thor]$ mpirun -np 1 dakota -v
Dakota version 6.6 released May 15 2017.
Repository revision dde6536 (2017-05-09) built Mar 28 2018 16:37:34.
[homer@thor]$ mpirun -np 1 dakota -h
usage: dakota [options and <args>]
    -help (Print this summary)
    -version (Print DAKOTA version number)
    -input <$val> (REQUIRED DAKOTA input file $val)
    -output <$val> (Redirect DAKOTA standard output to file $val)
    -error <$val> (Redirect DAKOTA standard error to file $val)
    -parser <$val> (Parsing technology: nidr[strict][:dumpfile])
    -no_input_echo (Do not echo DAKOTA input file)
    -check (Perform input checks)
    -pre_run [$val] (Perform pre-run (variables generation) phase)
    -run [$val] (Perform run (model evaluation) phase)
    -post_run [$val] (Perform post-run (final results) phase)
    -read_restart [$val] (Read an existing DAKOTA restart file $val)
    -stop_restart <$val> (Stop restart file processing at evaluation $val)
    -write_restart [$val] (Write a new DAKOTA restart file $val)
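
The restart options above can be chained across runs; the commands below are a sketch, with study.in and the .rst file names as placeholders. A first run records every completed evaluation in a restart file:

[homer@thor]$ mpirun -np 1 dakota -input study.in -write_restart study.rst

If the job is interrupted, the completed evaluations can be replayed from the restart file instead of recomputed, with new evaluations appended to a fresh one:

[homer@thor]$ mpirun -np 1 dakota -input study.in -read_restart study.rst -write_restart study_cont.rst

To discard evaluations past a known-good point (here the first 50), add -stop_restart:

[homer@thor]$ mpirun -np 1 dakota -input study.in -read_restart study.rst -stop_restart 50 -write_restart study_trim.rst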

The interface specification for parallelism

Specifying parallelism within an interface can involve the use of the asynchronous, evaluation_concurrency, and analysis_concurrency keywords to specify concurrency local to a processor (i.e., asynchronous local parallelism). This asynchronous specification has dual uses:
  • When running Dakota on a single processor, the asynchronous keyword specifies the use of asynchronous invocations local to the processor (these jobs then rely on external means to be allocated to other processors). The default behavior is to simultaneously launch all function evaluations available from the iterator as well as all available analyses within each function evaluation. In some cases, the default behavior can overload a machine or violate a usage policy, resulting in the need to limit the number of concurrent jobs using the evaluation_concurrency and analysis_concurrency specifications.
  • When executing Dakota across multiple processors and managing jobs with a message-passing scheduler, the asynchronous keyword specifies the use of asynchronous invocations local to each server processor, resulting in a hybrid parallelism approach (see User's manual Section 17.2.3). In this case, the default behavior is one job per server, which must be overridden with an evaluation_concurrency specification and/or an analysis_concurrency specification. When a hybrid parallelism approach is specified, the capacity of the servers (used in the automatic configuration logic) is defined as the number of servers times the number of asynchronous jobs per server.

In both cases, the scheduling of local evaluations is dynamic by default, but may be explicitly selected or overridden using local_evaluation_scheduling dynamic or static.
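
As an illustration of asynchronous local parallelism, the interface block below runs up to 4 evaluations concurrently on the local node and pins the local scheduling to static. It is a sketch: the driver name my_driver.sh and the parameters/results file names are placeholders.

    interface
      fork
        analysis_drivers = 'my_driver.sh'
        parameters_file  = 'params.in'
        results_file     = 'results.out'
      asynchronous
        evaluation_concurrency = 4
        local_evaluation_scheduling static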

In addition, evaluation_servers, processors_per_evaluation, and evaluation_scheduling keywords can be used to override the automatic parallel configuration for concurrent function evaluations. Evaluation scheduling may be selected to be master or peer, where the latter must be further specified to be dynamic or static.
To override the automatic parallelism configuration for concurrent analyses, the analysis_servers and analysis_scheduling keywords may be specified; similarly, the processors_per_analysis keyword overrides the size of multiprocessor analyses used in a direct function simulation interface. Scheduling options at this level are master or peer, where peer is static (no dynamic peer option is supported). Each of these keywords appears as part of the interface commands specification in the Dakota Reference Manual [3].
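
As a sketch of these overrides (the driver name is again a placeholder), the block below requests 4 evaluation servers of 4 processors each with dedicated-master scheduling, which would suit a run launched on 16 MPI processes plus the master:

    interface
      fork
        analysis_drivers = 'my_driver.sh'
      evaluation_servers = 4
      processors_per_evaluation = 4
      evaluation_scheduling master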

In Dakota, the following parallel algorithms, composed of iterators and meta-iterators, provide support for coarse-grained algorithmic parallelism. Note that, even if a particular algorithm is serial in terms of its data request concurrency, other concurrency sources (e.g., coarse-grained and fine-grained function evaluation parallelism) may still be available.

  1. Gradient-based optimizers: CONMIN, DOT, NLPQL, NPSOL, and OPT++ can all exploit parallelism through the use of Dakota's native finite differencing routine (selected with method_source dakota in the responses specification; see the sketch after this list), which will perform concurrent evaluations for each of the parameter offsets. For n variables, forward differences result in an n + 1 concurrency and central differences result in a 2n + 1 concurrency. In addition, CONMIN, DOT, and OPT++ can use speculative gradient techniques to obtain better parallel load balancing.
  2. Nongradient-based optimizers: HOPSPACK, JEGA methods, and most SCOLIB methods support parallelism. HOPSPACK and SCOLIB methods exploit parallelism through the use of Dakota’s concurrent function evaluations; however, there are some limitations on the levels of concurrency and asynchrony that can be exploited. These are detailed in the Dakota Reference Manual. Serial SCOLIB methods include Solis-Wets (coliny_solis_wets) and certain exploratory moves options (adaptive_pattern and multi_step) in pattern search (coliny_pattern_search). OPT++ PDS (optpp_pds) and NCSU DIRECT (ncsu_direct) are also currently serial due to incompatibilities in Dakota and OPT++/NCSU parallelism models. Finally, coliny_pattern_search and asynch_pattern_search support dynamic job queues managed with nonblocking synchronization.
  3. Least squares methods: in an identical manner to the gradient-based optimizers, NL2SOL, NLSSOL, and Gauss-Newton can exploit parallelism through the use of Dakota’s native finite differencing routine. In addition, NL2SOL and Gauss-Newton can use speculative gradient techniques to obtain better parallel load balancing. NLSSOL does not use speculative gradients since this approach is superseded by NLSSOL’s gradient-based line search in user-supplied derivative mode.
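
A hedged responses sketch for item 1: with Dakota-side numerical gradients and forward differences, each gradient request for a 2-variable problem yields n + 1 = 3 evaluations that can run concurrently (the step size shown is illustrative):

    responses
      objective_functions = 1
      numerical_gradients
        method_source dakota
        interval_type forward
        fd_step_size = 1.e-5
      no_hessians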

Examples

Case 1: Massively serial with the message-passing interface
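
This case has not been written up yet; a minimal sketch, assuming a serial driver wrapped by a fork interface and a study.in placeholder input, is to let MPI supply the concurrency. Launching Dakota itself on many processes makes it distribute one serial evaluation per process via message passing, with no asynchronous keyword needed in the interface block:

[homer@thor]$ mpirun -np 16 dakota -input study.in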

Case 2: Sequential parallel concurrency
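
Also still to be documented; a plausible sketch is the inverse situation, where Dakota runs evaluations one after the other (serial dakota process, no asynchronous keyword) and each evaluation is itself parallel because the driver script launches the solver under mpirun. The solver name and its flags below are placeholders; Dakota passes the parameters file as $1 and the results file as $2 to fork drivers:

    #!/bin/bash
    # my_driver.sh -- $1 = Dakota parameters file, $2 = Dakota results file
    mpirun -np 16 my_solver --params "$1" --results "$2"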

Usage of the Abaqus solver
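
A sketch of a fork-interface driver for Abaqus; the template, extraction script, and job names are placeholders, while dprepro is the template preprocessor shipped with Dakota:

    #!/bin/bash
    # abaqus_driver.sh -- $1 = Dakota parameters file, $2 = Dakota results file
    dprepro "$1" template.inp job.inp                 # substitute Dakota variables into the Abaqus deck
    abaqus job=job input=job.inp cpus=4 interactive   # run Abaqus in the foreground
    ./extract_results.py job.odb > "$2"               # write the response values Dakota expects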