Removal of Content Advisory - April 2024

Advisory to Gridpoint Statistical Interpolation (GSI) users: As of the beginning of April 2024, all GSI support assets will be removed from the DTC website. Users should download all reference materials of interest prior to April 2024.

Gridpoint Statistical Interpolation (GSI) | FAQs

Q: How do I reference the GSI User's Guide in publications?

A: Please refer to Citation.

Problems building with a PGI compiler before version 11

Q: I have an older version of the PGI compiler and I'm experiencing build errors.

A: Because of compiler-related issues, it is strongly recommended that you use the latest version of the PGI compiler available on your system. If that is not possible, compiler errors may prevent the code from building. Please check Compiler Support for the list of compilers the code has been tested with.

Q: I'm experiencing build errors with the Intel compiler.

A: Two types of errors tend to occur with the Intel compiler.

  1. Errors having to do with the MKL library
  2. Errors having to do with linking to MPI

The first of these will be addressed here, and the second will be discussed in the following section. 

Q: I'd like to build GSI on an IBM AIX machine using the XLF Fortran compiler.

A: Unfortunately, the DTC team no longer has access to an IBM AIX machine to test GSI and update the build system. As a result, the DTC only provides legacy support for IBM AIX platforms. Users have informed us of issues they encountered when building on the IBM AIX platform, and their solutions are provided here.

1. The following build system files need to be modified:

  • src/main/makefile_DTC
  • src/libs/gfsio/Makefile
  • src/libs/bufr/Makefile
  • src/libs/sp/Makefile

Change

.F90.o:
        $(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F90  > $*.fpp
        $(F90) $(FFLAGS) -c $*.fpp
        $(RM) $*.fpp

to

.F90.o:
        $(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F90  > $*.f90
        $(F90) $(FFLAGS) -c $*.f90
        $(RM) $*.f90

and

.F.o:
        $(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F  > $*.fpp
        $(SFC) -c $(FFLAGS_BUFR) $*.fpp
        $(RM) $*.fpp

to

.F.o:
        $(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F  > $*.f
        $(SFC) -c $(FFLAGS_BUFR) $*.f
        $(RM) $*.f

2. Modify the source code file src/libs/gsdcloud/hydro_mxr_thompson.f90

Change

tc0 = MIN(-0.1, tc)    ! the type of these two variables are single and double precision separately.

to

tc0 = MIN(-0.1_r_kind, tc)  ! keep these two variables as the same type.

change

qnr_3d(i,j,k) = max(1.0_r_kind, qnr_3d(i,j,k))    ! the type of these two variables are double and single precision separately.

to

qnr_3d(i,j,k) = max(1.0_r_single, qnr_3d(i,j,k))   ! keep these two variables as the same type.

3. Modify the file src/libs/w3/Makefile by deleting line 16:

$(CP) *.mod $(INCMOD)

as all three module files (args_mod.mod, GBLEVN_MODULE.mod, mersenne_twister.mod) already exist in the include/ directory (a command-line sketch follows).
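If you prefer to make this change from the command line, a minimal sketch using GNU sed's in-place option is shown below; verify the line first, since its position may differ in your copy of the Makefile.

# Remove the line that copies *.mod files into the include directory
sed -i '/\$(CP) \*\.mod \$(INCMOD)/d' src/libs/w3/Makefile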

4. When using LAPACK v5.2 rather than the ESSL mathematics libraries, some function names have changed, as described at http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS213-026
To resolve the compile errors "ld: 0711-317 ERROR: Undefined symbol: .dgeev" and "ld: 0711-317 ERROR: Undefined symbol: .dspev" (a search sketch to locate these calls follows the list):

  • Change the function name DGEEV to DGEEVX in the code files src/main/bicglanczos.F90 and src/main/lanczos.F90
  • Change the function name dspev to dspevx in the code file src/main/lanczos.F90
  • Modify the files src/main/makefile_DTC, src/main/Makefile.dependency, and src/main/Makefile by deleting the lines related to m_dgeevx.F90.

The last of these steps is optional, depending on the LAPACK version.
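To locate the calls that need to be renamed before editing, a generic search such as the following can be used (standard grep; file paths as listed above):

# Case-insensitive search for the LAPACK routine names in the affected sources
grep -inE "dgeev|dspev" src/main/bicglanczos.F90 src/main/lanczos.F90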

Q: I'm experiencing build issues related to MPI.

A: The community build system employed by GSI assumes that your computing system comes with a fairly vanilla installation of MPI. This means that it uses the traditional conventions for naming MPI wrapper scripts. On some newer platforms with vendor-supplied versions of MPI, the MPI Fortran wrappers can function differently and/or have different names. For instance, any of the following might be found on a Linux system for invoking the MPI wrapper script to build Fortran code.

  1. mpif90 -f90=pgf90
  2. mpif90
  3. mpiifort
  4. mpfort

The last two are specific to certain brand-name clusters with Intel compilers. The community build system assumes the first convention. If a build error states that it can't find mpif90, or that the arguments are unrecognized, chances are you need to revise the values of

DM_FC
DM_F90
DM_CC

in your configure.gsi file.
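For example, on a system whose MPI installation provides Intel-style wrappers, the relevant lines of configure.gsi might look like the sketch below. The wrapper names mpiifort and mpiicc are only illustrative; substitute whatever your installation actually provides.

# Illustrative only: MPI wrapper names vary by installation
DM_FC  = mpiifort
DM_F90 = mpiifort -free
DM_CC  = mpiicc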

If you experience any difficulty building the MPI components of the code, check these issues in the order listed.

  • Does the build complain that it doesn't recognize the "-f90=pgf90" argument to "mpif90"? If so, remove it from the "DM_FC" and "DM_F90" variables in the configure.gsi file, and try recompiling.
  • Does mpif90 exist on your system? Check this by typing "which mpif90". If the command responds with "Command not found", the standard Fortran wrapper for MPI is not being found (see the sketch below).
  • Is MPI even in your path? Type "env" or "echo $PATH". Is there a path containing the letters M-P-I? If it exists, check the contents of the "bin/" directory at that path location for one of the alternatives to "mpif90".
  • If all else fails, contact your system administrator for help.
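The checks above can be run directly from the command line; these are generic shell commands, not GSI-specific tools:

# Locate the standard MPI Fortran wrapper, if one is in your path
which mpif90

# Inspect the search path for an MPI installation
echo $PATH

# If a directory such as /usr/local/mpi appears (illustrative path only),
# look in its bin/ directory for an alternative wrapper name
ls /usr/local/mpi/bin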

Q: I am unable to compile GSI for use with the HWRF system.

A: The HWRF version of GSI differs slightly from the standard community version. Therefore, prior to building GSI for HWRF, you must set the environment variable HWRF to 1 (a build sketch follows the commands below).

  • For csh: "setenv HWRF 1"
  • For ksh/bash: "export HWRF=1"
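A minimal build sketch for bash/ksh is shown below. It assumes the community configure/compile steps referenced elsewhere in this FAQ; adjust the commands to match your own build procedure.

# Select the HWRF version of GSI before configuring and compiling
export HWRF=1          # for csh: setenv HWRF 1
./configure
./compile 2>&1 | tee build.log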

Building GSI on Mac OSX using the PGI compiler

Q: Why doesn't the community GSI support the Mac OSX platform with the PGI compiler?

A: The community GSI development team only supports platforms that it has continuous access to for porting and testing. On occasion, a user will provide the build information for a platform that the community GSI does not support. When this happens, we share this information with the user community. Based on a user contribution, GSI can be compiled on a Mac OSX platform using the PGI v11 compiler with these configure file settings.

# Darwin (MACOS), PGI compilers (pgf90 & pgcc) (dmpar,optimize)#
COREDIR = $(GSI)
INC_DIR = $(COREDIR)/include
BYTE_ORDER = LITTLE_ENDIAN
SFC = pgf90 -mp -tp=core2
SF90 = pgf90 -Mfree -mp -tp=core2
SCC = pgcc -tp=core2
INC_FLAGS = -I $(INC_DIR) -module $(INC_DIR) -I $(NETCDF)/include
FFLAGS_i4r4 = -i4 -r4
FFLAGS_i4r8 = -i4 -r8
FFLAGS_DEFAULT = -C
FFLAGS = $(FFLAGS_DEFAULT) $(INC_FLAGS) -DLINUX -DMACOS -DPGI
#
CPP = cpp
CPP_FLAGS = -C -P -D$(BYTE_ORDER) -D_REAL8_ -DWRF -DLINUX -DPGI
CPP_F90FLAGS =
DM_FC = mpif90 -tp=core2
DM_F90 = mpif90 -Mfree -tp=core2
DM_CC = mpicc -tp=core2
FC = $(DM_FC)
F90 = $(DM_F90)
CC = $(DM_CC)
CFLAGS = -O0 -DLINUX -DMACOS -DUNDERSCORE
CFLAGS2 = -DLINUX -DMACOS -Dfunder -DFortranByte=char -DFortranInt=int -DFortranLlong='long long'
MYLIBsys = -L$(PGI)/lib -llapack -lblas

Q: The run script fails with an MPI-related error.

A: The community GSI run script assumes that your computing system comes with a fairly vanilla installation of MPI. This means that it uses the traditional conventions for naming MPI run commands. On some newer platforms with vendor-supplied versions of MPI, the MPI run commands can function differently and/or have different names. For instance, either or both of these run commands might be found on a particular Linux system for running parallel code.

  1. mpirun
  2. mpiexec

The community run script assumes the first of these, along with some minor modifications for batch systems based on the value of the "ARCH" variable in the run script. If your system does not use "mpirun", you will need to modify the run script for your particular computing environment. This part of the script starts at line 87 and runs through line 132.

87 case $ARCH in
88    'IBM_LSF')
89       ###### IBM LSF (Load Sharing Facility)
90       BYTE_ORDER=Big_Endian
91       RUN_COMMAND="mpirun.lsf " ;;
92
93    'IBM_LoadLevel')
94       ###### IBM LoadLeve
95       BYTE_ORDER=Big_Endian
96       RUN_COMMAND="poe " ;;
97
98    'LINUX')
99       BYTE_ORDER=Little_Endian
100       if [ $GSIPROC = 1 ]; then
101          #### Linux workstation - single processor
102          RUN_COMMAND=""
103       else
104          ###### Linux workstation -  mpi run
105         RUN_COMMAND="mpirun -np ${GSIPROC} -machinefile ~/mach "
106       fi ;;

Note that on the IBM AIX platform (line 91), the run script calls "mpirun.lsf". On some Linux workstations (line 105), a machine file is required. Typically, on most large clusters, attempting to specify a machine file will result in an error. It is up to the user to make the necessary modifications for their particular computing system.
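As an illustration only, the 'LINUX' case could be adapted along the following lines to use mpiexec and to drop the machine file; the exact options depend on your MPI implementation and batch system.

   'LINUX')
      BYTE_ORDER=Little_Endian
      if [ $GSIPROC = 1 ]; then
         #### Linux workstation - single processor
         RUN_COMMAND=""
      else
         ###### Linux workstation - mpi run without a machine file
         RUN_COMMAND="mpiexec -np ${GSIPROC} "
      fi ;;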

Once again, if all else fails, contact your system administrator for help.

Out of Memory Error

Q: The run crashes and the stdout file, in the run directory, complains about not being able to allocate memory.

A: Increase the stack size by adding the appropriate command to your run script:
In bash/ksh: ulimit -s 524288
In tcsh/csh: limit stacksize 524288
If that fails, try increasing the number of processors used to run your analysis.
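For example, in a bash-based run script the limit is typically raised near the top, before the analysis executable is launched. The executable name and variables below are illustrative only.

#!/bin/bash
# Raise the stack limit before launching the analysis
ulimit -s 524288

# ... set up the run directory, namelist, and input files ...

# Launch GSI (RUN_COMMAND and the executable name follow your run script)
${RUN_COMMAND} ./gsi.exe > stdout 2>&1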

Q: When running GSI on Linux platforms, there is a problem reading prepbufr files obtained from the NCEP ftp server and/or the file gdas1.t12z.prepbufr.nr from the tutorial exercise.

A: This may be caused by what is known as the Endian problem. Different computer hardware platforms may use different byte orders to represent information. For details, see the Wikipedia article on Endianness. Typically this is only an issue on current systems when sharing binary I/O between an IBM ("Big Endian") and a Linux ("Little Endian") system.

The prepbufr format is such a binary I/O format. The prepbufr files from the NCEP ftp server, and the file gdas1.t12z.prepbufr.nr from the tutorial exercises, are Big Endian files. A byte-swapping C program, ssrc.c, is located in the ./util directory of the GSI distribution. It takes a prepbufr file generated on an IBM platform (Big Endian) and converts it to a prepbufr file that can be read on a Linux or Intel Mac platform (Little Endian).

Compile ssrc.c with any C compiler. To convert an IBM prepbufr file, take the executable (e.g., ssrc.exe) and run it as follows:

ssrc.exe < name of Big Endian prepbufr file > name of Little Endian prepbufr file
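A minimal end-to-end sketch, assuming gcc is available and using illustrative file names:

# Compile the byte-swapping utility shipped in ./util
gcc ./util/ssrc.c -o ssrc.exe

# Convert a Big Endian prepbufr file to Little Endian via stdin/stdout
./ssrc.exe < gdas1.t12z.prepbufr.nr > gdas1.t12z.prepbufr.nr.le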


Starting with release version 3.2, BUFRLIB can automatically identify the byte order and perform the conversion, so BUFR/PrepBUFR files in any byte order can be used by GSI directly.

Most build or run problems must be diagnosed by use of the log files.

For build errors, pipe the standard out and standard error into a log file with a command such as (for csh):

./compile |& tee build.log

Search the log file for any instance of the word "Error." Its presence indicates a build error. Be certain to use the exact spelling with a capital "E." If the build fails, but the word "Error" is not present in the log file, it typically indicates that the build failed during the link phase. Information on the failed linking phase will be present at the very end of the log file. Try searching there.
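For example, using standard command-line utilities on the log produced above:

# Case-sensitive search for build errors
grep "Error" build.log

# If nothing is found, check the end of the log for a failed link stage
tail -n 40 build.log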

For run errors, it is useful to examine the "stdout" file located in the run directory.