1   The original pingdata demo: CODAS Processing of Shipboard ADCP pingdata

This document is dated but not obsolete. It describes CODAS processing of a 1993 pingdata dataset thoroughly, with a dizzying level of detail, which makes it an excellent place to look up specifics or get step-by-step instructions for an older dataset. Many of the components described here are still relevant.

Some of the details are only useful if the data are pingdata, and many steps are unnecessary for newer or well-behaved datasets. You might want to read this file as an overview of quick_adcp.py, a python script designed to run most of the basic CODAS processing steps, and come back to this document for clarification about details.

The CODAS postscript manual is another source of detailed information about how the CODAS database works.

2   ADCP DEMO OVERVIEW


CODAS processing of shipboard ADCP data is a complicated task. Once you are familiar with it, you may find that you do not need all of its flexibility. However, since some data streams sporadically fail and some improve over time, CODAS necessarily has a lot of options. This demo provides a comprehensive tutorial covering almost all the steps one might possibly want to apply to a dataset. As such, it is our ultimate reference for how to do things. It is also daunting in its thoroughness.

Here are the basic highlights of processing a shipboard ADCP dataset. Use this as an overview as you go through the demo; your dataset may require additional steps.

  1. create the basic ashtech-corrected database
  • scanping scan.cnt - look at times, beam stats in pingdata.
  • loadping loadping - (skip some headers?) create database.
  • ubprint ubprint - get nav & ashtech corr from user buffer.
  • adcpsect as_nav - get measured u,v.
  • cal/rotate ashrot - apply ashtech heading correction, if available (in Matlab).
  2. render velocities absolute
  • adcpsect as_nav.cnt - Get navigation and ref layer velocity.

  • refabs refabs.cnt - combine the measured reference layer velocity (ocean + ship) with the ship velocity (from navigation) to get the ocean reference layer velocity (which still includes jitter from navigation).

  • smoothr smoothr.cnt - Smooth the ocean reference layer velocity and get smoothed positions (nav) from that process.

  • putnav putnav.cnt - Apply the new smoothed positions to the database.

  • callrefp - Plot the reference layer velocity (Matlab).

  3. Do Graphical (Matlab) Editing
  • Use setup.m in matlab and learn what the original interactive editing can show you.

  • Then use asetup.m and aflagit_setup.m and learn what gautoedit can show you.

  • Look especially for

    • CTD wire interference
    • unaccountable jitter rendering a profile bad
    • seeing the bottom when BT was off
  • Then apply the flags for bad profiles to the database:
    • badbin ../adcpdb/ah110 badbin.asc (if relevant)
    • dbupdate ../adcpdb/ah110 badprf.asc (if relevant)
    • dbupdate ../adcpdb/ah110 bottom.asc (if relevant)
    • set_lgb ../adcpdb/ah110 (required, penultimate)
    • setflags setflags.cnt (ultimate)
  4. calibrate: (phase and amp)
  • cal/rotate/botmtrk
  • cal/rotate/watertrk
  • cal/rotate: rotate rotate.cnt
Apply final rotation using the best guess of phase and amp from cal steps.
(make sure you don’t apply the ashtech correction twice!!)

NOTE:

Any time you alter the database you must run the nav steps again!!! (i.e. adcpsect; refabs; smoothr; putnav) because you’ve changed the data which go into calculating the reference layer.
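
As an aside, the core arithmetic behind the nav steps can be sketched in a few lines. This is a hypothetical Python illustration, not CODAS code: the function and variable names are invented, and smoothr's actual smoother is more sophisticated than a running mean. The point is only that measured velocities are relative to the ship, so adding the ship velocity from navigation yields the ocean reference layer velocity, which is then smoothed to suppress navigation jitter.

```python
# Hypothetical sketch (not CODAS code) of the reference-layer arithmetic
# behind refabs and smoothr.

def ocean_ref_velocity(u_measured, u_ship):
    """Combine ship-relative velocity with ship velocity over ground."""
    return [m + s for m, s in zip(u_measured, u_ship)]

def running_mean(values, half_width=1):
    """A simple stand-in for smoothr's smoothing of the reference layer."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half_width), min(len(values), i + half_width + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Ship steaming east at 5 m/s over a ~0.2 m/s current, with nav jitter:
u_ship = [5.0, 5.0, 5.0, 5.0]
u_measured = [-4.80, -4.75, -4.85, -4.80]   # relative to ship
u_ocean = ocean_ref_velocity(u_measured, u_ship)
u_smooth = running_mean(u_ocean)
```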

3   SETTING UP


Be sure to edit your default search path to include the appropriate paths to the binaries in codas3/bin/<lnx | mgw | osx | sgi | sol | ...> and scripts in adcp_py/< csh (for Unix users) | bat (for PC users) >. Scripts are growing in number and will all be stored in adcp_py, with their language determined by suffix.

We also need to establish the path to the Matlab routines. Codas3 matlab code no longer supports matlab4 or matlab5. The downloadable zip file contains many subdirectories, some of which should be added to your path. Set your MATLABPATH environment variable to the location that contains the following subdirectories:

matlab/codas3
matlab/misc
matlab/ctd
matlab/mex6
matlab/mex7

You can modify “adcppath.m” to add these paths to your own. You may add “adcppath” to your matlab/startup.m file, or type it on the command line. Use “radcppath” to add single-ping processing routines to your path.
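
For example, on a Unix system the setup might look like this (paths and the platform directory are illustrative; substitute your own installation locations):

```shell
# Illustrative only: point MATLABPATH at the directory containing the
# matlab/ subdirectories listed above, and add the CODAS binaries and
# scripts to PATH.  Adjust all paths for your installation.
export MATLABPATH=/home/noio/programs/matlab
export PATH="$PATH:/home/noio/programs/codas3/bin/lnx:/home/noio/programs/adcp_py/csh"
echo "$MATLABPATH"
```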

Now run the script adcptree to set up the processing directories and copy all the necessary control files, Matlab *.m files, etc. The first argument is the name to be used to identify the data set, usually a cruise ID. (Here we use ‘demo’). That will be the root of the processing tree. The second argument is the path to the CODAS software, from which the codas3/adcp/demo control and *.m files will be copied.

% adcptree demo /home/noio/programs

Done.

% cd demo
% ls
adcpdb/    contour/    grid/    nav/        ping/     scan/
cal/       edit/       load/    quality/    stick/    vector/
  • The ping/ directory is the repository of the raw pingdata files. It also contains a Matlab script for quickly plotting the raw data.
  • The scan/ directory will be used to scan the raw pingdata files.
  • The load/ directory will be used for loading the data into the database.
  • The adcpdb/ directory will contain the database and producer definition file.
  • The edit/ directory will be used during the editing stage.
  • The cal/ directory will be used for calibration calculations.
  • The nav/ directory will be used for navigation calculations.
  • The grid/ directory will be used to grid the data for plotting.
  • The quality/ directory contains Matlab scripts for plotting on-station and underway profile statistics.
  • The contour/ directory will be used to generate contour plots.
  • The vector/ directory will be used to generate vector plots.
  • The stick/ directory will be used to generate stick plots.

Now copy the raw pingdata files to the ping subdirectory.

% cd ping
% cp -p /home/noio/programs/codas3/adcp/demo/ping/pingdata.* .
% ls
loadrun.m    pingdata.000    pingdata.001

3.1   PRELIMINARY PLOT


The following step is not necessary for processing. It provides a quick look (plots) of the raw pingdata files, where absolute velocities are calculated crudely from the navigation. Actually, these scripts were written to obtain a quick view of the data during data acquisition.

It is a good idea to review the parameter settings in loadrun.m first before running the script. They allow you to pick the reference layer, smoothing parameters, plotting offsets, etc. Once that is done, you can proceed as follows.

% matlab

>> loadrun('pingdata.000', '93/4/9-0:0:0', '93/4/9-12:0:0')

  time range:  93/04/09  00:00:00 to 93/04/09  12:00:00
  148 profiles retrieved, max no. of bins = 60.

The three figures produced show the data looking good, except for a couple of noisy ensembles toward the beginning and middle sections. We will have to check these out during processing.

>> loadrun('pingdata.000', '93/4/9-12:0:0', '93/4/10-0:0:0')

  time range:  93/04/09  12:00:00 to 93/04/10  00:00:00
  end of file

  146 profiles retrieved, max no. of bins = 60

The error velocity plots show some kinks that are typical of CTD wire interference with the ADCP.

Again we see a noisy ensemble toward the end.

3.2   SCANNING


Prior to loading the database, we scan the pingdata files in order to:

  • check for readability

    • Any warnings about bad headers alert us to sections that should not be loaded into the database. These are probably due to data recording errors. We have no choice but to omit such profiles during loading.
  • check the profile times

    • Since acquisition PC clocks may not be too accurate due to a bad initial setting or drift, it is important to correct the profile times so they can be properly matched to the navigation data. The best time correction can be done during the loading stage. The scanning output will provide a list of profile times. If the user buffer provides satellite times, this step can also be used to pull out this information from the user buffer to help estimate the error and drift in the PC clock, if any. If the user buffer does not contain satellite times, one can also rely on other means (ship log, external navigation file) to come up with the time correction.
% cd ../scan
% ls
clkrate.m       ub_1020.def     ub_1281.def     ub_1920.def     ub_720.def
scanping.cnt    ub_1280.def     ub_1320.def     ub_2240.def
% ls ../ping/pingdata.* >> scanping.cnt
% vi scanping.cnt

We need to specify the OUTPUT_FILE: (any name will do; we suggest using a .scn extension as a convention). If SHORT_FORM: is set to “no”, the output file will display a list of variables recorded under each header. If you already know what these are, you may prefer to set it to “yes” for brevity’s sake.

The next 3 parameters deal with the user buffer. If the user buffer has been used to record satellite times, then the information can be used to establish accuracy of the ensemble times.

If the UH user-exit program was used, then the user buffer can be any one of the numeric types (1920 for version 4 ue4.exe with direct ASHTECH support; 720, 2240 for version 3 ue3.exe; 1281, etc. from the older version mag2.exe). If you don’t know what kind of user buffer type your data were collected with, use “SHORT_FORM: no” in scanping.cnt and look for “USER_BUFFER” in the .scn file.

For this demo, note that UB_OUTPUT_FILE: is set to “none”, because scanping will go ahead and calculate the PC-fix time and put this as the last column in the main OUTPUT_FILE:.

If another user-exit program was used, it may generate some ASCII user-buffer, which may have some form of the satellite times. That can also be used. In this case, specify a UB_OUTPUT_FILE: name and set USER_BUFFER_TYPE to “ascii”.

Non-ASCII user buffers, including the UH types, are parsed by providing a structure definition file. This is what the UB_DEFINITION: parameter specifies. Detailed instructions for constructing such a file are pointed to in the CODAS MANUAL. The example *.def files in the directory also provide some starting point. If you want scanping to attempt to parse a non-UH, non-ASCII user buffer, try to construct such a file, set the USER_BUFFER_TYPE to “other”, and specify a UB_OUTPUT_FILE: name.

% scanping scanping.cnt

Stripping comments from control file

 OUTPUT_FILE: ademo.scn
 SHORT_FORM: no
 UB_OUTPUT_FILE: none
 USER_BUFFER_TYPE: 720
 UB_DEFINITION: ub_720.def


 PINGDATA_FILES:
 DATA FILE NAME ../ping/pingdata.000

 Header 1
 Header 2
 .
 .
 .
 Header 294

 END OF FILE: ../ping/pingdata.000


 DATA FILE NAME ../ping/pingdata.001

 Header 1
 Header 2
 .
 .
 .
 Header 286

 END OF FILE: ../ping/pingdata.001

% ls
ademo.scn    clkrate.m    ub_1020.def    ub_1281.def    ub_2240.def
cleanscn*    scanping.cnt    ub_1280.def    ub_1320.def    ub_720.def
% view ademo.scn

We normally check for 4 things:

  • Any warnings about bad headers? Search for “WARNING:”.
    • None for our demo; if there were any, we would need to skip them during the loading stage.
  • For UH-type user buffers, the pc-fix time column shows how many seconds difference there was between the PC and satellite clocks. This does not detect errors in the date.

    • In version 3 ue3.exe and later, this should never exceed +/- n, where n is the max_dt_difference specified in the ue3.cnf file during data acquisition (assuming that the correct_clock option was enabled). This would indicate that the program did its job of resetting the PC clock to the satellite time whenever it drifted more than max_dt_difference seconds.
    • For our demo, it looks like the program did its job: the last column is never more than 2 seconds in either direction.
  • The interval (min:sec) column gives us the difference between profile times.

    • One would expect that it should be within a second of the sampling interval. When that is not the case, it indicates an ADCP recording gap–either instrument failure, disk full, or momentary interruptions to change some configuration setting (turning bottom tracking off/on, etc.).
    • For our demo, this column clues us in on the noisy ensembles we observed from the preliminary plots: they (headers 7, 8, 75, 76, 290, 291 under pingdata.000) are only 2 or 3 seconds long, instead of 5 minutes! This is caused by an early version of the clock resetting scheme in ue3; when it set the clock back at the start of an ensemble, the DAS promptly ended the ensemble. This is not a problem for current versions of ue3. We will not load those too-short ensembles.
  • Was bottom tracking used at any time? Search for “bottom” or “ON” and “OFF”.
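
The interval check lends itself to automation. Here is a hypothetical Python sketch (not part of CODAS) that flags profiles whose interval to the previous profile differs from the nominal sampling interval by more than a second:

```python
# Hypothetical sketch: flag profile intervals that deviate from the
# nominal sampling interval (300 s for the demo) by more than tol seconds.
def flag_bad_intervals(times, nominal=300.0, tol=1.0):
    """times: profile times in seconds; returns indices of profiles
    whose interval to the previous profile is abnormal."""
    bad = []
    for i in range(1, len(times)):
        if abs(times[i] - times[i - 1] - nominal) > tol:
            bad.append(i)
    return bad

# A 3-second ensemble (like headers 7-8 in the demo) shows up as two
# abnormal intervals: one short, one long.
times = [0, 300, 600, 603, 900, 1200]
print(flag_bad_intervals(times))   # -> [3, 4]
```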

INFORMATION:

For more information on types of time correction problems and clkrate.m, see the APPENDIX and the CODAS Manual.
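
Conceptually, estimating the PC clock drift rate amounts to fitting a line to the clock offset versus time; the slope is the drift rate. A hypothetical Python sketch of that idea (clkrate.m's actual method may differ):

```python
# Hypothetical sketch of estimating PC clock drift: least-squares fit
# of PC-minus-satellite offsets against time; the slope is the drift
# rate in seconds per second.
def drift_rate(times, offsets):
    n = len(times)
    tbar = sum(times) / n
    obar = sum(offsets) / n
    num = sum((t - tbar) * (o - obar) for t, o in zip(times, offsets))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# A clock losing about 1 second per hour:
times = [0.0, 3600.0, 7200.0, 10800.0]
offsets = [0.0, -1.0, -2.0, -3.0]
rate_per_hour = drift_rate(times, offsets) * 3600   # about -1 s/hour
```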

There are other useful pieces of information in this file:

  • Are the profile times reasonable?

    • Header breaks can signal interruptions for the purpose of changing configurations or resetting the ADCP time. (Note that for the demo, there is a header for every profile because the raw data did not come directly from the ADCP recording disk, but through a serial connection that allows the user-exit program to pass a copy of the ensemble data to another machine. This feature attaches the header to each ensemble).
  • For each input file the last entry in the scanping output file is a table giving a statistical summary of the “beam statistics”.

    • This gives a general indication of the depth range achieved in that file and, more importantly, shows whether all four beams were performing similarly. Differences can provide an early warning of beam failure.

3.3   LOADING


Preparatory to loading, we need to make the producer definition file. This file identifies the producer of the dataset by country, institution, platform, and instrument. It lists the variables to be recorded, their types, and frequency of storage, etc. Finally, it defines the structures for any STRUCT-type variables loaded. Details about this file are given elsewhere.

% cd ../adcpdb
% cp adcp720.def ademo.def
% vi ademo.def

The *.def files in the adcpdb/ directory are already set up for the R/V Moana Wave shipboard ADCP recording UH-type user buffers. Other producers may edit them to specify the proper PRODUCER_ID (not essential, as NODC no longer uses these codes) and to review the variables listed (mostly a matter of switching between UNUSED and PROFILE_VAR if a different set of recorded variables was selected, which is rarely the case), paying particular attention to the user buffer structure definition. If the user buffer is ASCII, change the value_type field from ‘STRUCT’ to ‘CHAR’ and delete the USER_BUFFER structure definition. If it is non-UH and non-ASCII, paste in the appropriate structure definition.

Next we prepare the loadping control file.

% cd ../load
% ls
loadping.cnt
% ls ../ping/pingdata.* >> loadping.cnt
% vi loadping.cnt

Aside from attaching the list of raw pingdata files, we need to specify the DATABASE_NAME (no more than 5 chars. to satisfy PC-DOS conventions), the DEFINITION_FILE: we set up from above, and an OUTPUT_FILE: name for logging.

The next 4 parameters define how to break the raw data up into CODAS block files. Each block file requires some (rather minimal) overhead disk space, so it is not a good idea to generate more than necessary. The loading program will ALWAYS start a new block file when it detects that the instrument configuration has changed, because instrument configuration is constant over a given block file. The 4 parameters in the control file allow the user to define further conditions that break the data into more block files:

  • MAX_BLOCK_PROFILES: can be used to limit block file size so that each one can be accommodated on a floppy disk, for example.
  • NEW_BLOCK_AT_FILE? set to “yes” makes it easier to associate a given block file with the pingdata file it came from. (An example of a case where “no” makes sense is where data are recorded onto 360K floppies, and the DAS records one big pingdata.000 file and one small pingdata.001 file on each.)
  • NEW_BLOCK_AT_HEADER? is necessarily “no” for the demo, where each ensemble has a header, and is generally “no”, since the foregoing criteria plus the time-gap criterion are sufficient for most purposes. A “yes” may be used occasionally for tracking down particularly troublesome datasets.
  • NEW_BLOCK_TIME_GAP(min): is also useful. A time interval of 30 minutes or more is often a sign of trouble (a time correction overlooked, for example) or could actually be a couple of days in port and a transition from one cruise to another. Breaking the database at those points makes it easier to track and correct such problems at a later stage.
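
The decision logic can be summarized as follows. This is a hypothetical Python sketch, not loadping's actual code; the parameter names mirror the control file and the behavior is simplified:

```python
# Hypothetical sketch of loadping's block-splitting decision (simplified).
# A configuration change ALWAYS starts a new block; the control-file
# parameters add the optional criteria below.
def start_new_block(ens, prev, profiles_in_block,
                    max_block_profiles=500,
                    new_block_at_file=True,
                    new_block_at_header=False,
                    new_block_time_gap_min=30.0):
    """ens/prev are dicts with 'config', 'file', 'header', 'time' (s)."""
    if prev is None:
        return True                        # very first profile
    if ens["config"] != prev["config"]:
        return True                        # always split on config change
    if profiles_in_block >= max_block_profiles:
        return True
    if new_block_at_file and ens["file"] != prev["file"]:
        return True
    if new_block_at_header and ens["header"] != prev["header"]:
        return True
    if (ens["time"] - prev["time"]) / 60.0 > new_block_time_gap_min:
        return True
    return False

a = {"config": "BT off", "file": "pingdata.000", "header": 1, "time": 0.0}
b = {"config": "BT on",  "file": "pingdata.000", "header": 2, "time": 300.0}
print(start_new_block(b, a, 1))   # configuration changed -> True
```

The configuration-change rule is why switching bottom tracking on or off starts a new block file regardless of the control-file settings.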

Following each PINGDATA_FILES: filename can be a list of options to either skip some sections during loading or apply a time correction as the data are loaded, as determined from the scanning stage. An ‘end’ is required to terminate this list, even where it is empty.

For our demo, we do not need any time corrections, just some skips over the too-short ensembles we have found from the .scn file.

Now we load the data:

% loadping loadping.cnt

Stripping comments from control file

-------------- start ----------------
pass 1: checking the control file.
.
.
.
pass 2: loading pingdata file into database.

Data file name: ../ping/pingdata.000
% OPTIONS:
% =======
% skip_header_range:
% skip_header_range:
% skip_header_range:
% =======

Header 1
Header 2
Header 3
Header 4
Header 5
Header 6
Header 7    not loaded.
Header 8    not loaded.
Header 9
Header 10
.
.
.
Header 294
end of file ../ping/pingdata.000

Data file name: ../ping/pingdata.001
% OPTIONS:
% =======
% =======

Header 1
Header 2
.
.
.
Header 286
end of file ../ping/pingdata.001

DATABASE CLOSED SUCCESSFULLY

3.4   CHECKING THE DATABASE


Following are a few examples of using the utility programs lstblock and showdb to check data directly from the database:

% cd ../adcpdb
% ls
adcp1281.def    adcp2240.def    ademo001.blk    ademo004.blk    lst_prof.cnt
adcp1320.def    adcp720.def    ademo002.blk    ademodir.blk    mkblkdir.cnt
adcp1920.def    ademo.def       ademo003.blk    lst_conf.cnt

Note how the 2 pingdata files (one per day) got loaded into 4 different block files. Given our loadping.cnt settings, the 2 files should have gone into 2 block files; the extra 2 were probably triggered when we switched bottom tracking from off to on and then from on to off (the instrument configuration changed). Let’s see if we’re right:

% lstblock ademo ademoblk.lst
% view ademoblk.lst

The file shows the time range covered by each block file. The first block file contains pingdata.000 entirely (checking the time range vs. our .scn file). The file pingdata.001 got split into the latter 3 block files right at the times when bottom tracking got turned on then off.

Now let’s examine the database interactively:

% showdb ademo

 DBOPEN DATABASE NAME = ademo

 BLK:     0   OPEN: 1    PRF:     0   OPEN: 1    IER:      0

 1 - Show BLK DIR HDR          10 - SRCH TIME
 2 - Show BLK DIR              11 - SRCH BLK/PRF
 3 - Show BLK DIR ENTRY        12 - MOVE
 4 - Show BLK HDR              13 - GET TIME
 5 - Show PRF DIR              14 - GET POS
 6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
 7 - Show DATA LIST            16 - GET DATA
 8 - Show DATA DIR             17 - SET SCREEN SIZE
 9 - Show STRUCT DEFN          99 - QUIT


 ENTER CHOICE ===> 1


 BLOCK DIRECTORY HEADER

db version: 0300
dataset id: ADCP-VM
time:  1993/04/09 00:02:00 --- to --- 1993/04/10 23:58:00
longitude: MAX            --- to --- MIN
latitude:  MAX            --- to --- MIN
depth: 21 --- to --- 493
directory type -----> 0
entry nbytes -------> 80
number of entries --> 4
next_seq_number ----> 5
block file template > ademo%03u.blk
data_mask ----------> 0000 0000  0000 0000  0000 0011  1000 0001
data_proc_mask -----> 0000 0000  0000 0000  0000 0000  0000 0000

 ---Press enter to continue.---

We note down the time range for a consistency check and future reference. The longitude and latitude ranges are not yet established, but will be after the navigation step below. The data_proc_mask (data processing mask) is all 0, meaning nothing has been done to the data yet. (If we had done some time corrections, the rightmost bit would be set.) As we perform the processing steps below and update the database, these bits will get set as appropriate. The file codas3/adcp/include/dpmask.h shows the meaning of each bit position. Other entries in the above are administrative in nature and not too interesting. All the options on the left side of the menu are administrative and, aside from choices 1 and 7, are rarely needed by users. Option 7 gives the variable names and numeric codes that can be used with option 16 (GET DATA).
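
Bit masks like data_proc_mask are easy to inspect programmatically. A hypothetical Python sketch (only bit 0, the rightmost time-correction bit identified above, is taken from this manual; the authoritative assignments are in codas3/adcp/include/dpmask.h):

```python
# Hypothetical sketch: inspecting bits of a CODAS-style processing mask.
# Bit 0 (time correction) is the only position taken from the text;
# see codas3/adcp/include/dpmask.h for the real list.
TIME_CORRECTION_BIT = 0

def bit_set(mask, bit):
    return (mask >> bit) & 1 == 1

mask = 0                                     # fresh database: all zeros
print(bit_set(mask, TIME_CORRECTION_BIT))    # -> False

mask |= 1 << TIME_CORRECTION_BIT             # after a time correction
print(bit_set(mask, TIME_CORRECTION_BIT))    # -> True
```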

BLK:     0   OPEN: 1    PRF:     0   OPEN: 1    IER:      0

1 - Show BLK DIR HDR          10 - SRCH TIME
2 - Show BLK DIR              11 - SRCH BLK/PRF
3 - Show BLK DIR ENTRY        12 - MOVE
4 - Show BLK HDR              13 - GET TIME
5 - Show PRF DIR              14 - GET POS
6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
7 - Show DATA LIST            16 - GET DATA
8 - Show DATA DIR             17 - SET SCREEN SIZE
9 - Show STRUCT DEFN          99 - QUIT


ENTER CHOICE ===> 16


ENTER VARIABLE NAME or CODE ===> u
BLK:     0           TIME: 1993/04/09 00:02:32           LON: MAX
PRF:     0                                               LAT: MAX
----------------------------------------------------------------------------
<Scale = 0.001>
                   U : in m/s (60 VALUES)

          3          21         141         179         177         178
        162         153         157         153          97          94
        118         117         115         130         137         163
        173         183         196         187         187         194
        180         186         173         164         152         164
        161         161         142         162         175         183
        192         182         192         183         200         192
        178         205         212         215         195         193
        186         164         139         125       32767         169
      32767       32767       32767       32767       32767       32767
----------------------------------------------------------------------------
ENTER # OF PROFILES TO MOVE <return to quit> ===> 100
BLK:     0           TIME: 1993/04/09 08:22:32           LON: MAX
PRF:   100                                               LAT: MAX
----------------------------------------------------------------------------
<Scale = 0.001>
                   U : in m/s (60 VALUES)

        124         127         146         214         193         203
        203         153         172         191         204         228
        202         210         214         213         215         238
        245         256         266         274         274         271
        284         293         294         308         315         312
        310         323         313         303         310         279
        306         290         328         344         302         343
        122       32767       32767       32767       32767       32767
      32767       32767       32767       32767       32767       32767
      32767       32767       32767       32767       32767       32767
----------------------------------------------------------------------------
ENTER # OF PROFILES TO MOVE <return to quit> ===>


BLK:     0   OPEN: 1    PRF:   100   OPEN: 1    IER:      0

1 - Show BLK DIR HDR          10 - SRCH TIME
2 - Show BLK DIR              11 - SRCH BLK/PRF
3 - Show BLK DIR ENTRY        12 - MOVE
4 - Show BLK HDR              13 - GET TIME
5 - Show PRF DIR              14 - GET POS
6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
7 - Show DATA LIST            16 - GET DATA
8 - Show DATA DIR             17 - SET SCREEN SIZE
9 - Show STRUCT DEFN          99 - QUIT


ENTER CHOICE ===> 16


ENTER VARIABLE NAME or CODE ===> 35
BLK:     0           TIME: 1993/04/09 08:22:32           LON: MAX
PRF:   100                                               LAT: MAX
---------------------------------------------------------------------------

     CONFIGURATION_1 : STRUCTURE

        avg_interval :          300 s
        compensation :            1
            num_bins :           60
            tr_depth :            5 m
          bin_length :            8 m
          pls_length :           16 m
        blank_length :            4 m
       ping_interval :        1e+38 s
           bot_track :            0
        pgs_ensemble :            1
       ens_threshold :           25
        ev_threshold :        32767 mm/s
           hd_offset :           38 deg
          pit_offset :            0 deg
          rol_offset :            0 deg
             unused1 :        1e+38
             unused2 :        1e+38
             unused3 :        1e+38
       freq_transmit :        65535 Hz
         top_ref_bin :            1
         bot_ref_bin :           15
             unused4 :        1e+38
        heading_bias :          159 deg
----------------------------------------------------------------------------
ENTER # OF PROFILES TO MOVE <return to quit> ===>


BLK:     0   OPEN: 1    PRF:   100   OPEN: 1    IER:      0

1 - Show BLK DIR HDR          10 - SRCH TIME
2 - Show BLK DIR              11 - SRCH BLK/PRF
3 - Show BLK DIR ENTRY        12 - MOVE
4 - Show BLK HDR              13 - GET TIME
5 - Show PRF DIR              14 - GET POS
6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
7 - Show DATA LIST            16 - GET DATA
8 - Show DATA DIR             17 - SET SCREEN SIZE
9 - Show STRUCT DEFN          99 - QUIT

ENTER CHOICE ===> 99

The first thing we usually want to know is the cruise track. We do this by using a Matlab routine cruistrk.m that reads in an ASCII file with the first 3 columns being time, longitude, and latitude. In the case of the demo, the fixes are stored in the user buffer of the database, so we need to extract them using the program ubprint.

% cd ../nav
% vi ubprint.cnt

We need to specify the database name, step_size (> 1 if we just want a subsample), and the time range (as determined from showdb above). Our demo data were acquired using the user-exit program ue3.exe, which stores two fixes per ensemble, one at the start and another at the end. To obtain a fix as close as possible to the ADCP profile time, we choose to average the fix at the end of the ensemble and the fix at the start of the subsequent ensemble. We do this by using the avg_GPS_summary option in the ubprint.cnt file.
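
The averaging itself is just a midpoint in time and position. A hypothetical Python sketch (real code must also handle longitude wraparound near the dateline and missing fixes):

```python
# Hypothetical sketch: average the fix at the end of one ensemble with
# the fix at the start of the next to approximate a fix at the profile
# boundary.  Ignores dateline wraparound and missing fixes.
def average_fixes(fix_end, fix_start):
    """Each fix is a (time_seconds, lon, lat) tuple."""
    return tuple((a + b) / 2.0 for a, b in zip(fix_end, fix_start))

end_of_ensemble = (300.0, -158.10, 21.30)
start_of_next   = (302.0, -158.09, 21.31)
print(average_fixes(end_of_ensemble, start_of_next))
```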

If the data were collected using ue4, which incorporates the correction to the gyrocompass using GPS heading data, we need to add options to output these data (see options: attitude, attitude_mat, position, avg_GPS2_summary).

There are 3 ways to specify the time_ranges in this and MOST other control files which require them:

  1. type in time range(s) desired in the form:

     93/04/09 00:02:00 to 93/04/10 23:58:00

  2. for the entire database:

     all

  3. indicate a file with the time range(s) to read in:

     @tr_filename

In the demo we are using “all”; the warning message is not a problem. The “all” time range is set to beginning and ending dates far removed from the present.

Now we run the program:

% ubprint ubprint.cnt

This gives us the file ademo.ags with columns time, longitude, latitude, no. of satellites, message type, dilution of precision, and height.

For old data sets that have Transit fixes or sparse/incomplete GPS coverage, see the CODAS Manual for information on edfix.m.

% vi cruistrk.m

Specify the parameters (fix_file, etc.).

% matlab

>> cruistrk

This plots the cruise track in the window and generates the PostScript file cruistrk.ps.

4   EDITING


Editing involves making a few routine checks through the data, plotting the data to determine whether some correction is necessary, and applying the correction to the database as needed.

4.1   THERMISTOR CHECK


We first check the thermistor performance. If the instrument was set to calculate soundspeed from the thermistor reading, then we just want to verify that the thermistor is indeed behaving properly. If the instrument was set to use a constant soundspeed and the cruise track is such that the constant soundspeed value is inappropriate, then we need to establish a better estimate of soundspeed from measurements of salinity and temperature (the case in demo).

In either case, we start by looking at the thermistor record.

% cd ../edit
% vi lst_temp.cnt
% lst_temp lst_temp.cnt

Stripping comments from control file

%% WARNING: Search before beginning

 Time range: 1970/01/01  01:01:01 to 2070/01/01  01:01:01

LST_TEMP completed.

We use a Matlab script ‘plottemp.m’ to plot the output from lst_temp:

% vi plottemp.m
% matlab
>> plottemp

infile =

ademo.tem


plot_title =

ADCP DEMO:  Mean (--) & Last (.) Transducer Temperature

At this stage, it would be good to be able to pull in independent measurements of temperature at transducer depth, from CTD data for example. Transform the data to a Matlab-loadable file and superimpose the plot on the ADCP record. Decide what kind of corrections are necessary, if any.

If a correction is needed, then:

  1. a correct temperature value, either:
    a. an offset to correct the ADCP record, or
    b. a file of ADCP profile times and correct transducer temperatures
  2. and also:
    a. a salinity value, or
    b. a file of ADCP profile times and salinity values

will be needed to calculate the correct soundspeed.

The CTD data accompanying a given adcp dataset can come in a variety of formats.

The next section is an example of how one might go about matching the ADCP thermistor temperatures to the CTD temperatures. For this example, a file with CTD data at 4db is located in the edit/ subdirectory. We will now compare it to the ADCP thermistor.

>> who

Your variables are:

infile         n              temp
last_temp      plot_title     xaxis_label
mn_temp        t              yaxis_label

>> load ctd_db4
>> who

Your variables are:

S_db4          mn_temp        t_ctd
T_db4          n              temp
infile         plot_title     xaxis_label
last_temp      t              yaxis_label

>> hold on
>> plot(t_ctd, T_db4, 'o')

Looks like the ADCP record is fairly clean but needs some offset. We match up the closest ADCP time to each CTD point to calculate the offset and then take the mean:

>> jmatch=[];
>> for i = 1 : length(t_ctd)
     [m j] = min(abs(t - t_ctd(i)));
     jmatch = [jmatch j];
   end
>> ofs = T_db4 - mn_temp(jmatch);
>> mean(ofs)

ans = -1.3512

Now plot the CTD salinity:

>> hold off
>> plot(t_ctd, S_db4, 'x')

Looks like there is too much variability to get away with using a single mean (although the soundspeed calculation is not as sensitive to salinity as it is to temperature). Since we do not have a profile-by-profile record of salinity to correct against, we will just interpolate what CTD salinity we have to the ADCP times and use that.

>> n = length(t);
>> sal = NaN * t;                                          % initialize array
>> iuse = find(t >= t_ctd(1) & t <= t_ctd(length(t_ctd))); % table1 refuses to extrapolate
>> nu = length(iuse);
>> sal(iuse) = table1([t_ctd S_db4], t(iuse));
>> sal(1:iuse(1)) = sal(iuse(1)) * ones(iuse(1),1);  % manually extrapolate ends to be equal
>> sal(iuse(nu):n) = sal(iuse(nu)) * ones(n-iuse(nu)+1,1);
>> plot(t_ctd, S_db4, 'o', t, sal, '.')              % checking...
>> ts = [t sal];
>> save corrsal.dat ts -ascii
>> quit
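For reference, the same nearest-time matching and end-padded interpolation can be written compactly with numpy; the data below are fabricated stand-ins for t, mn_temp, and the CTD variables used in the Matlab session above:

```python
import numpy as np

# Fabricated stand-ins: t and mn_temp play the role of the ADCP times and
# thermistor means; t_ctd, T_db4, S_db4 play the role of the CTD values.
t = np.arange(98.0, 100.0, 0.01)             # ADCP profile times (decimal days)
mn_temp = 29.0 + 0.1 * np.sin(t)             # fake thermistor record
t_ctd = np.array([98.2, 98.7, 99.1, 99.6])   # CTD cast times
T_db4 = 29.0 + 0.1 * np.sin(t_ctd) + 1.35    # fake CTD temps, 1.35 C warmer
S_db4 = np.array([34.9, 35.1, 35.0, 34.8])   # CTD salinities at 4 db

# Offset: match each CTD time to the nearest ADCP profile, then average.
jmatch = np.array([np.argmin(np.abs(t - tc)) for tc in t_ctd])
offset = np.mean(T_db4 - mn_temp[jmatch])

# Salinity: np.interp holds the end values constant outside the CTD time
# range, reproducing the manual extrapolation done with table1 above.
sal = np.interp(t, t_ctd, S_db4)
```

Note that np.interp's built-in end padding removes the need for the explicit "manually extrapolate ends" step that table1 required.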

% vi fix_temp.cnt

Specify the temperature correction as offset_temperature= -1.35 and true_salinity= corrsal.dat. For this cruise original_soundspeed is set to 1500, as this value was used by the DAS for calculations. See the CODAS Manual for more information on soundspeed use in the DAS.
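The correction itself is a linear rescaling: the ADCP's velocity scale is proportional to the soundspeed at the transducer, so fix_temp recomputes soundspeed from the corrected temperature and salinity and rescales velocities by the ratio of true to original soundspeed. A minimal sketch using Medwin's approximate soundspeed formula (illustrative only; the equation fix_temp itself uses may differ):

```python
def soundspeed(T, S, z=0.0):
    """Medwin's approximation: T in deg C, S in psu, z in meters."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# The thermistor read 30.37 C, but the true temperature is 1.35 C lower:
c_true = soundspeed(30.37 - 1.35, 35.0)

# Velocities were computed by the DAS with an assumed soundspeed of
# 1500 m/s, so each velocity is rescaled by c_true / 1500 (about 3%).
scale = c_true / 1500.0
```

This makes clear why a 1.35 C thermistor error matters: it translates directly into a percent-level bias in every velocity.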

NOTE:

Unfortunately, the DAS does not record in the pingdata file the actual soundspeed it used or the method used to calculate it. This information is available only in the start.cnf file, which ideally would be copied along with the pingdata files, as well as noted in a shipboard log.

% fix_temp fix_temp.cnt

Stripping comments from control file

 Pass 1: Checking control file...
% OPTIONS:
% =======
% dbname: ../adcpdb/ademo
% original_soundspeed= 1500.000000
% offset_temperature= -1.351200
% true_salinity= file: corrsal.dat
% =======

%% WARNING: Search before database beginning

%% WARNING: Search beyond database end

%% WARNING: Reached end of database

%% WARNING: Search before database beginning

%% WARNING: Reached end of database

%% WARNING: Search beyond database end

 Pass 2: Updating database...

%% WARNING: Search before beginning

 Time range: 1970/01/01  01:01:01 to 2070/01/01  01:01:01

 fix_temp completed.

% showdb ../adcpdb/ademo

 DBOPEN DATABASE NAME = ../adcpdb/ademo

 BLK:     0   OPEN: 1    PRF:     0   OPEN: 1    IER:      0

 1 - Show BLK DIR HDR          10 - SRCH TIME
 2 - Show BLK DIR              11 - SRCH BLK/PRF
 3 - Show BLK DIR ENTRY        12 - MOVE
 4 - Show BLK HDR              13 - GET TIME
 5 - Show PRF DIR              14 - GET POS
 6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
 7 - Show DATA LIST            16 - GET DATA
 8 - Show DATA DIR             17 - SET SCREEN SIZE
 9 - Show STRUCT DEFN          99 - QUIT


 ENTER CHOICE ===> 4


 BLOCK HEADER

db version: 0300
host:       SUN3
dataset id: ADCP-VM
producer id: 31516N0001
time:  1993/04/09 00:02:00 --- to --- 1993/04/09 23:58:00
longitude: MAX            --- to --- MIN
latitude:  MAX            --- to --- MIN
depth: 21 --- to --- 493
datalist offset ----> 160
dir      offset ----> 8232
directory type -----> 3
datalist nentries --> 77
dir entry nbytes ---> 24
dir nentries -------> 288
data dir nentries --> 26
data_mask ----------> 0000 0000  0000 0000  0000 0011  1000 0001
data_proc_mask -----> 0000 0000  0000 0000  0000 0000  0001 0000

 ---Press enter to continue.---

 BLK:     0   OPEN: 1    PRF:     0   OPEN: 1    IER:      0

 1 - Show BLK DIR HDR          10 - SRCH TIME
 2 - Show BLK DIR              11 - SRCH BLK/PRF
 3 - Show BLK DIR ENTRY        12 - MOVE
 4 - Show BLK HDR              13 - GET TIME
 5 - Show PRF DIR              14 - GET POS
 6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
 7 - Show DATA LIST            16 - GET DATA
 8 - Show DATA DIR             17 - SET SCREEN SIZE
 9 - Show STRUCT DEFN          99 - QUIT


 ENTER CHOICE ===> 16

 ENTER VARIABLE NAME or CODE ===> 37
 BLK:     0           TIME: 1993/04/09 00:02:32           LON: MAX
 PRF:     0                                               LAT: MAX
%--------------------------------------------------------------------

          ANCILLARY_1 : STRUCTURE

              tr_temp :      30.366 C
         snd_spd_used :      1544.88 m/s
         best_snd_spd :        1e+38 m/s
           mn_heading :      24.3873 deg
           pgs_sample :          245
          unassigned1 :        32767
          unassigned2 :        32767
          unassigned3 :        32767
          unassigned4 :        32767
          unassigned5 :        32767
%---------------------------------------------------------------------
 ENTER # OF PROFILES TO MOVE <return to quit> ===>

 BLK:     0   OPEN: 1    PRF:     0   OPEN: 1    IER:      0

 1 - Show BLK DIR HDR          10 - SRCH TIME
 2 - Show BLK DIR              11 - SRCH BLK/PRF
 3 - Show BLK DIR ENTRY        12 - MOVE
 4 - Show BLK HDR              13 - GET TIME
 5 - Show PRF DIR              14 - GET POS
 6 - Show PRF DIR ENTRY        15 - GET DEPTH RANGE
 7 - Show DATA LIST            16 - GET DATA
 8 - Show DATA DIR             17 - SET SCREEN SIZE
 9 - Show STRUCT DEFN          99 - QUIT

 ENTER CHOICE ===> 99

4.2   PERCENT GOOD CHECK

Return to TOP

In some cases the signal returns to the ADCP instrument are poor, primarily in rough weather or where scattering is weak. If these conditions are expected, we have found plots of percent good vs bin useful in editing.

The function *plotpg.m* creates color or greyscale contour plots of percent good vs depth for each ensemble (reduced resolution is an option). The input and editing options are detailed in the top portion of the file. Edit plotpg.m, then call it with the starting decimal day as its argument.

We make the plots here with full resolution, using greyscale rather than color.

% matlab
>> plotpg(98)
cruiseid =
ADCP DEMO

% DATABASE: ../adcpdb/ademo successfully opened

%% WARNING: Search before database beginning


 Searching CODAS database ==> ../adcpdb/ademo
 for block 0 profile 0
 Max no. of profiles ==> 101
 Max no. of bins ==> 128

%% WARNING: Search before database beginning

% WARNING: Reached end of database

 Retrieved 574 profiles, 60 bins

% DATABASE: Closed successfully

ns =
    98     1
position_contour =
    0.2000    0.7300    0.6500    0.1630
printing to pgood98.ps

>>quit

Percent good is excellent in this data. The plot does show where the signal is truncated due to bottom interference, and possible scattering layer effects from day 98.0-98.2 and 99.2-99.3. No editing based solely on percent good is needed for the data. For cruises where such editing is needed, see the APPENDIX and program mkbadprf.m.

4.3   BOTTOM, CTD WIRE, ETC.

Return to TOP

Here we need to check where bottom and CTD wire interference, random glitches, etc. have contaminated our profiles. This is done by plotting the profiles, using Matlab, and creating, as needed, the files bottom.asc, badbin.asc, and badprf.asc as we go along. These files will then be used as input to other programs that actually apply the changes to the database.

The Matlab plotting routines can be set to automatically apply some editing criteria and indicate where particular bins and profiles fail to meet them. In particular, the echo amplitude is scanned for increases large enough to signal a bottom return. The error velocity is compared against some threshold. CTD wire interference, for example, can be manifested by high error velocity in particular bins as the CTD rosette is lowered then raised. Another editing check for CTD contamination as well as other random glitches consists of examining the second derivative of the U, V, and W components. A high value for both W and either U or V at a given bin is deemed suspect and needs to be checked. The last check is for a high W variance over a profile. Again, anything in excess of a given threshold needs to be checked.

The first step, then, is to establish a reasonable starting point for these editing thresholds. In the case of the amplitude check, a value of 10 seems to work reasonably well for the majority of cruises. For the error velocity check, a relatively low value of 70 mm/s is a suggested starting point, and can be raised depending on the particular cruise. The second derivative and W variance checks tend to be cruise-dependent, so the user may want to calculate some profile statistics first to obtain a reasonable starting point:
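The second-derivative criterion described above is easy to state in code. A minimal sketch on fabricated data (the Matlab routines additionally handle missing and already-flagged bins):

```python
import numpy as np

# Fabricated single profile and thresholds (real thresholds come from
# threshld.m below).  Velocities in mm/s; a glitch sits at bin 3.
d2w_threshold, d2uv_threshold = 40.0, 70.0
u = np.array([100., 102., 101., 250., 103., 104.])
v = np.array([50., 51., 52., 53., 52., 51.])
w = np.array([5., 4., 6., 120., 5., 4.])

def d2(x):
    """Absolute second difference; element i is centered on bin i+1."""
    return np.abs(np.diff(x, n=2))

# Suspect: second difference of W AND of either U or V exceed thresholds.
suspect = (d2(w) > d2w_threshold) & ((d2(u) > d2uv_threshold) |
                                     (d2(v) > d2uv_threshold))
bad_bins = np.nonzero(suspect)[0] + 1  # shift back to bin numbering
```

Note that a single-bin spike trips the second-difference check at its neighbors as well, which is one reason flagged bins are reviewed on the plots rather than removed blindly.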

% vi profst00.cnt

Specify the dbname:, output: filename, step_size: (1 = use every profile within the specified time range; > 1 = subsample), ndepth (<= no. of depth bins to use in the calculation), and the list of time ranges at the end of the file. You may also want to specify reference: to use a different set of bins. We might want to exclude times where the ship was in shallow water (unless bottom tracking was on), as bottom interference will add much noise to the statistics and might make the suggested thresholds too large for deep water data collection.

Now we run profstat to calculate statistics for W, the variance of which is our focal point:

% profstat profst00.cnt

Next we calculate statistics for second difference of U, V, and W for use with the second derivative criterion. The parameters are similar to profst00.cnt, except here we request second difference statistics for U, V, and W.

% vi profst02.cnt
% profstat profst02.cnt

Each run of the profstat program has generated both an ASCII file *.prs (PRofile Statistics) and Matlab files with the same information. A Matlab script is used to calculate the editing thresholds from these output files:

% vi threshld.m

Specify the Wdiff0_file and UVWdiff2_file files as the profstat *.mat output files.

% matlab
>> threshld

w_var_threshold = 986.2228

d2w_threshold = 40.7185

d2uv_threshold = 71.8391

Now we put these values into the Matlab setup.m file which controls the edit plotting routines:

>> !vi setup.m

Review every entry. You can use any of the old values as starting points (except the DBNAME!). Initially, all we want to do is check how well our thresholds do the job, so we can keep EDIT_MODE as 0.

Use the percent good criterion that you intend to use during data retrieval at later stages (e.g., with adcpsect for getting plots). Data where percent good is less than the criterion will not be displayed, eliminating the need to look at or edit those data. You can override this effect by setting the percent good criterion to 0. We normally use 30% for shipboard ADCP.

Note that the start_bin for W variance calculation is normally set to 3; W tends to be noisy in the uppermost 2 bins when the ship is underway.

The BOTM_EDIT flag can be set to see the effect of bottom flagging. Set_lgb (set last good bin) is used to “spread” the bottom around. That is, where profiles have been flagged for bottom interference at particular bins, the set_lgb program looks at three such profiles in sequence and assigns the shallowest bin to the middle profile. In this way, gaps of 1 or 2 profiles in the bottom flagging eventually get caught, without having to manually add them to the bottom.asc file. In addition, the set_lgb program reduces the last good bin by another 15%, to remove further contamination from sidelobe effects.

>> setup

This initializes the Matlab workspace to the selected settings. Any time you edit this file, you will need to rerun it for the changes to take effect.

>> getp(0,0)

This retrieves DEFAULT_NPRFS profiles from the database, starting from block 0, profile 0. Now we can plot the variables of interest. This is done most easily by typing any of the following:

>> a             % for amplitude plots
>> p             % for percent good
>> e             % for error velocity
>> w             % for W component
>> u             % for U component
>> v             % for V component
>> uv            % for both U and V components on the same window
>> plotnav       % plot lon and lat, if stored in NAVIGATION variable
>> fore          % plot along track velocity component
>> port          % plot across track velocity component
>> fp            % plot both along and across track components in same window

Each of the above commands except “plotnav” can be called with an optional argument, an offset that overrides the default offset. For example, “u” plots U with offset UV_OFFSET, but “u(0.05)” uses a 5 cm/s offset instead. The default offset is not changed.

See the APPENDIX for a brief discussion of using fore, port, and fp to detect bias in along track velocity due to scattering layers.

The Y-axis is bin number, negated. The X-axis is the variable with the selected offset applied to stagger the profiles. When the variables are loaded into the workspace, the command window displays the results of applying the editing criteria:

Blk_Prf_Bin_EV =

  Columns 1 through 12

  0     0     0     0     0     0     0     0     0     0     0     0
  5     6    12    12    36    44    46    47    50    51    52    57
  7     9     7    25     8     4     4     4     3     2     3     3
-59    54    60    59   -53    56    66    51    57    53    50    53

The rows give the block nos., profile nos., bin nos., and error velocity values for bins that fail the error velocity check. At the same time, the plots will mark these bins with a yellow circle. If the U and V plots, in particular, do not show anything suspicious at these circled bins, and the error velocity values are only just above the threshold you set, you may want to raise the threshold a bit. (This is accomplished by editing setup.m, then typing ‘setup’ followed by ‘u’ or another plot command to replot.)

The command window also displays the variable

Blk_Prf_Bin_Z_Gap =

  Columns 1 through 12

  0     0     0     0     0     0     0     0     0     0     0     0
  0     1    10    11    12    13    14    15    17    30    31    32
 21    20    29    29    29    30    29    29    29    29    29    29
181   173   245   245   245   253   245   245   245   245   245   245
  0     0     8     0     0     0     0     0     1    12     0     0

the first 4 rows of which give the block no., profile no., bin no., and depth value at which the amplitude check failed. Often, this coincides with a strong scattering layer, rather than an actual bottom. However, we do not generally tweak the amplitude threshold to prevent false alarms; instead it is easier to deal with them as described later. The flagged bins are marked with cyan ‘*’ on the plots.

The last row is an indication of any gaps in the flagging, i.e., the number of “unflagged” profiles between the previous and current column entries. Since bottom interference normally contaminates a continuous sequence of profiles, it is important to detect where that may not be the case. Sometimes the water becomes so shallow that no good data are left, yet the amplitude signal does not exhibit the behavior the flagging algorithm searches for. In other cases, the “kink” in the amplitude is barely perceptible, or has such a peculiar shape that it escapes the automatic flagging. In both cases, the user may need to manually specify the bin at which those profiles’ amplitude “maxes out”.

The command window may also display the variable

Blk_Prf_WVar =

  0     0     0
142   143   195
596   589   528

which shows the block and profile nos. of profiles that fail the W variance check. The third row shows the calculated W variance so that the user can review whether the threshold needs further adjustments. These profiles are marked with magenta ‘*’ at every bin.

Finally the command window may also display the variable

Blk_Prf_Bin_D2U_V_W =

  Columns 1 through 12

   0     0     0     0     0     0     0     0     0     0     0     0
 111   114   132   133   136   137   138   139   194   194   194   195
   6     7     7     7     2     2     2     2     5     6     7     4
  11   -91  -109  -133   -61    -4    -5   -20   286  -119  -211   178
-119     5    54    41    85   151   121   103   135  -140  -115    89
 -43   -44   -49   -49   -53   -60   -53   -47   -50    60    62   -56

Again, the first 3 rows are block, profile, bin nos. of bins at which the second derivative of W and the second derivative of either U or V fail the editing check. The last 3 rows are the second derivatives of U, V, and W, again displayed to see how far they are from the preset thresholds. The flagged bins are marked with a white ‘o’ on the plots.

Here are the other useful commands for examining the profiles:

>> l          % grabs the next set of DEFAULT_NPRFS profiles to the left
>> r          % grabs the next set of DEFAULT_NPRFS profiles to the right

For zooming in and out:

>> zi         % lets you use the mouse to select an area to Zoom In on
>> zo         % restores the plot to full-scale (all profiles, all bins)

For finding then marking a particular profile with white ‘+’:

>> k          % lets you use the mouse to click on the profile to mark
>> k(blk,prf) % instead of using the mouse, specify the block & profile no.
>> k(n)       % where n is a signed no. indicating how many profiles
              % to move the key down (-) or up (+)
>> k([])      % to remove any key marks

When editing mode is enabled by setting EDIT_MODE = 1, the following commands become available. In general, one wants to either indicate where the flagging is unnecessary (a profile or bin is okay, even if it fails the editing criteria), or is insufficient (a profile or bin is bad but eludes the flagging criteria).

Commands to unflag bins/profiles:

>> ok         % lets you use the mouse to select a region where circled
              % bins (second derivative and error velocity failures)
              % will get unmarked
>> okprf      % lets you unmark entire profiles flagged as bad based
              % on the W variance check
>> sl         % lets you use the mouse to select a region where bottom
              % flagging should be unset ("Scattering Layer")

Commands to flag more bins/profiles:

>> zap        % lets you click on each bad bin to add to the flagging;
              % they get marked with a yellow '+'
>> rzap       % lets you select a region where all bins are to be
              % added to the flagging; they get marked with a yellow '+'
>> bad(bin, [blk prf], nprf)
              % another way of flagging bins, where bin is a vector of
              % bin numbers to zap, [blk prf] indicates the profile to
              % start zapping from, and nprf indicates how many other
              % profiles to zap the same bins from
>> badprf     % lets you click on each bad profile to add to the flagging;
              % all bins in these profiles get marked with a yellow '+'
>> bottom     % lets you click on each bin where you think the bottom
              % return "maxes out"

Miscellaneous commands:

>> pb         % toggles PLOTBAD variable;
              % PLOTBAD = 1 means all data are plotted and
              % flagged points are marked with special symbols;
              % this is useful when you are still deciding whether the
              % flagging is occurring at the right places.
              % PLOTBAD = 0 means all flagged data and data that do not
              % meet the access criteria (percent good/last good bin)
              % "disappear" from the plot but the marks remain;
              % this is useful to determine whether the current flagging
              % is satisfactory or needs to be supplemented

>> pbo        % toggles PLOTBAD_ORIGINAL variable;
              % 0, the default in setup.m, suppresses
              % plotting of data that have been flagged in the
              % database, or during the current editing
              % session, based on bin range and/or percent
              % good criteria.
              % PLOTBAD_ORIGINAL = 1 plots the flagged
              % points with a red x superimposed.

>> undo       % this can undo the last editing command
              % (ok, okprf, sl, zap, rzap, badprf, bottom)

>> list       % when you are satisfied with the editing shown on the
              % current plot, use 'list' to record them to the *.asc
              % files for updating the database later

In summary, the plots observe the following conventions:

  • yellow circles mark bins that fail the error velocity check
  • white circles mark bins that fail the second derivative check
  • cyan ‘*’ mark bins at which the amplitude check fails
  • magenta ‘*’ mark profiles that fail the W variance check
  • yellow ‘+’ mark bins/profiles that the user manually adds to the editing
  • white ‘x’ mark bins/profiles that have already been edited out of the database as glitches; the edit commands do not affect these
  • red ‘x’ mark bins/profiles that fail a percent good criterion, and/or fall outside a specified bin range.
  • white ‘+’ is used to “key” in on a profile, but doesn’t mean anything

See the diary file in codas3/adcp/demo/edit/diary for an annotated record of the editing session for the demo database.

IMPORTANT

When EDIT_MODE is set and there are marked bins in the current workspace, and the user has not issued the ‘list’ command before trying to load a new set of profiles into the workspace, the user is prompted on whether to list the previous edits or not. However, it is not possible to check for this oversight when the next command is ‘quit’. So remember to type ‘list’ to keep a record of the editing session. It does not hurt to list a given workspace more than once (e.g., you decided to throw in another bin for flagging, after doing a ‘list’). The main side effect is duplicate entries in the *.asc files, making them larger than necessary. To undo flagging after a ‘list’ has been issued, you will have to edit the *.asc files manually, before running the programs below.

At this stage, we assume that you have gone through the entire editing session and have the badbin.asc and bottom.asc files. To apply this editing to the database itself:

% badbin ../adcpdb/ademo badbin.asc

This simply sets the PROFILE_FLAGS variable in the database to indicate that particular bins have been tagged as bad. The original velocity values remain intact. During later access with ‘adcpsect’ or ‘profstat’, the user can specify whether to consider these tags during access by using the FLAG_MASK option. By default, this option is set to ALL_BITS, meaning data for bins at which PROFILE_FLAGS are nonzero will not be used. Use showdb, option 16 (GET DATA), variable ID 34 (PROFILE_FLAGS), to see the effects of running badbin on the database.
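The FLAG_MASK mechanism is an ordinary bitwise test; a sketch with illustrative bit names (not the actual CODAS bit assignments):

```python
# Illustrative bit positions only; see the CODAS manual for the real ones.
BAD_BIN, BELOW_PGOOD, OUT_OF_RANGE = 0x1, 0x2, 0x4
ALL_BITS = 0xFFFF   # the default FLAG_MASK: reject on any set bit

def bin_rejected(profile_flags, flag_mask=ALL_BITS):
    """A bin is skipped during access when any masked flag bit is set."""
    return (profile_flags & flag_mask) != 0

bin_rejected(0)                               # clean bin: used
bin_rejected(BELOW_PGOOD)                     # rejected under ALL_BITS
bin_rejected(BELOW_PGOOD, flag_mask=BAD_BIN)  # that bit masked out: used
```

Because the flags are only tags, loosening FLAG_MASK at access time recovers the original velocities without re-editing the database.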

If there are entire profiles that need to get thrown out (none in the case of our demo database), you need to run:

% dbupdate ../adcpdb/ademo badprf.asc

The file badprf.asc is created when entire profiles are flagged or marked bad. This sets the ACCESS_VARIABLES.last_good_bin to -1 to indicate that the entire profile is not to be used during subsequent access.

For bottom editing, run:

% dbupdate ../adcpdb/ademo bottom.asc

This sets the database variable ANCILLARY_2.max_amp_bin to the last_good_bin value indicated in the file. It also sets data processing mask bit 8 (bottom_sought) and clears bit 9 (bottom_max_good_bin), which allows further editing to be applied to the profiles.

Note that it does NOT set ACCESS_VARIABLES.last_good_bin, which access programs like adcpsect use during retrieval to determine the “good” portion of a profile (in addition to other editing criteria, of course, like percent good). The next program accomplishes this:

% set_lgb ../adcpdb/ademo

Again, use showdb option 16, variable IDs 38 and 39, to see the effect of set_lgb on the database. The program sets ANCILLARY_2.last_good_bin and ACCESS_VARIABLES.last_good_bin to 85% of the minimum max_amp_bin over the bottom-flagged profile and its two neighbors (the spreading seen with the BOTM_EDIT flag above), and updates the data processing mask (see the file edit_chgs in codas3/adcp/doc for more detail). This is done only for profiles where ACCESS_VARIABLES.last_good_bin is not set to -1 (i.e., those not flagged using badprf.asc).
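The spread-and-trim logic can be sketched in a few lines (fabricated bin numbers; the exact rounding set_lgb applies may differ):

```python
import math

# Fabricated max_amp_bin values for five consecutive bottom-flagged profiles.
max_amp_bin = [30, 29, 28, 29, 30]

last_good_bin = []
for i in range(len(max_amp_bin)):
    lo, hi = max(i - 1, 0), min(i + 2, len(max_amp_bin))
    shallowest = min(max_amp_bin[lo:hi])                 # profile + neighbors
    last_good_bin.append(math.floor(0.85 * shallowest))  # 15% sidelobe trim
```

Taking the minimum over each three-profile window is what lets a one- or two-profile gap in the flagging "inherit" the bottom from its neighbors.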

Finally, edit setflags.cnt to set the PERCENT_GOOD threshold and add the option set_range_bit. This will set bits in PROFILE_FLAGS wherever percent_good falls below the given minimum or the bin falls outside the range in ACCESS_VARIABLES: first_good_bin to last_good_bin.

% vi setflags.cnt

Now run setflags:

% setflags setflags.cnt

If you care to check all the effects of this editing at this point, you can run through the stagger plots one more time, this time setting all flagging criteria in setup.m to []. Set PLOTBAD = 0 and EDIT_MODE = 0. Just plot ‘uv’ from beginning to end to check that you haven’t missed anything.

Other useful commands for use in editing include delblk and set_top. See descriptions in the Appendix.

5   CALIBRATION

Return to TOP

In the ideal case, calibration is done in two steps. First, correct the velocity data for time-varying compass error, using the difference between the gyro heading and the heading determined by a GPS attitude sensor such as the Ashtech 3DF. Then use water track and/or bottom track methods to calculate the constant net transducer offset relative to the GPS heading array. If heading correction data are not available, proceed to the bottom and/or water track steps, calculating the net transducer offset relative to the gyrocompass.

5.1   GYRO CORRECTION

Return to TOP

One-second GPS heading data were acquired separately during this cruise, using an Ashtech 3DF receiver. The data (gyro heading minus GPS heading) were edited, then averaged to the ADCP ensemble times. Any remaining gaps at this point were filled in using a numerical gyro model. The result is a two-column ASCII file of ADCP times and correction angles. From this file, we extracted a subset corresponding to the two days of the demo, cal/rotate/gpscal.ang.

For data collected with user exit ue4, use the matlab program ashrot.m in cal/rotate to generate a heading correction angle file. Ashrot.m linearly interpolates over gaps, which might not be acceptable in every case. You can also use the program attplot.m in the nav/ subdirectory to look at all the attitude statistics.

For now we will apply the correction to the bottom track and navigation files, rather than to the database itself:

5.2   SETTING UP BOTTOM TRACK FILE

Return to TOP

% cd ../cal/botmtrk

For the demo database, we have one short piece where bottom tracking is available. We start by pulling this data out:

% vi lst_btrk.cnt

Specify the usual stuff. Then run the program:

% lst_btrk lst_btrk.cnt

Now we have a file with profile time, zonal and meridional ship velocity over the ground, and depth. We will apply the heading correction to this data in a few steps below.

5.3   SETTING UP WATERTRACK NAVIGATION FILE

Return to TOP

% cd ../watertrk

Here we extract a file of ship velocity relative to the reference layer from the database using the adcpsect program:

% vi adcpsect.cnt
% adcpsect adcpsect.cnt

If no heading correction data are available, skip the next step.

5.3.1   gyro correction to calibration input files

% cd ../rotate
% vi rotnav.cnt

First we rotate the navigation file ../watertrk/ademoraw.nav, specifying the INPUT_ANGLE_FILE: gpscal.ang.

% rotnav rotnav.cnt

Then we rotate the bottom track file ../botmtrk/ademo.btm:

% vi rotnav.cnt
% rotnav rotnav.cnt

Now we do the bottom track calibration calculations, and then the water track calibration calculations, using the rotnav output files ../watertrk/ademorot.nav and ../botmtrk/ademorot.btm.

5.4   BOTTOM TRACK METHOD

Return to TOP

%cd ../botmtrk

Given the rotated (or not) bottom track data file, and a navigation file (../nav/ademo.ags), the next step is to calculate zonal and meridional displacements from both bottom tracking and navigation data:

% vi refabsbt.cnt
% refabsbt refabsbt.cnt

The function btcaluv.m requires 3 inputs, with 2 more inputs optional. Required inputs are filenames of refabsbt and lst_btrk output and a title for the plots (entered as strings). Optional inputs are step and timerange. Step indicates the level of averaging of ensembles: 1 (default) to use individual ensembles (the usual choice when precision GPS and heading corrections are available); 2 to average adjacent ones, etc. Step sizes 1,2,3 are roughly equivalent to watertrack windows 5,7,9. Timerange can be used to run the calculation separately on segments of the bottom tracking, for checking for consistency from one part of the cruise to another, but is not necessary. Specify the timerange as a vector with starting and ending times as decimal days. Editing of outliers is done based on criteria we can set in the beginning of the file.

For the demo we make the calculation on the entire bottom tracking run, with step sizes 1,2,3 for comparison with the watertrack calibration values. Conclusions are appended to the file “btcaluv.out”, and plots to “btcaluv.ps”.

% matlab
>> [a, ph, gd_mask] = btcaluv('ademo.ref','ademorot.btm','ADCP DEMO step size 1',1);

The plot shows for each ensemble used the amplitude or scale, the phase angle, ship’s speed and heading, and the depth of the bottom. The vectors a and ph contain the individual values of the amplitude and phase estimates; gd_mask is a vector the length of a and ph, with a 1 for each value that passed the editing, and a 0 for each that did not.

Now we run it for steps 2 and 3:

>> [a, ph, gd_mask] = btcaluv('ademo.ref','ademorot.btm','ADCP DEMO step size 2',2);
>> [a, ph, gd_mask] = btcaluv('ademo.ref','ademorot.btm','ADCP DEMO step size 3',3);
>> quit

Summarizing the results:

                 step size 1       step size 2       step size 3
mean(std)
amplitude        1.0022(0.0138)    1.0019(0.0089)    1.0013(0.0088)
phase            1.8203(0.7633)    1.7374(0.4999)    1.7959(0.2805)
#pts used/total  16/24             14/23             13/22

We see the standard deviation decreasing as we increase the averaging, but also note that the values are very consistent. With step size 3, only every third estimate is independent, so the SEM is approximately 0.28/sqrt(13/3) = 0.13, versus SEM of 0.76/sqrt(16) = 0.19 for step size 1.
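The standard-error arithmetic above, written out explicitly:

```python
import math

# Step size 1: 16 estimates, all independent.
sem_step1 = 0.7633 / math.sqrt(16)

# Step size 3: 13 estimates, but only every third one is independent,
# so the effective sample size is 13/3.
sem_step3 = 0.2805 / math.sqrt(13 / 3)
```

Even after discounting the overlap, the averaged estimate is the more precise one, which is why the step-3 phase carries extra weight below.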

5.5   WATER TRACK METHOD

Return to TOP

This method requires the navigation file (../../nav/ademo.ags) and the file of ship’s velocity relative to the reference layer, rotated (or not) by gpscal.ang.

% cd ../watertrk

The next step uses both these files to identify and calculate calibration parameters for each ship acceleration and turn. An acceleration or turn is detected based on the thresholds set in the control file:

% vi timslip.cnt

The n_refs= parameter establishes a window of ensembles that bracket the jump or turn. We generally run this step using different window sizes to get a better picture, and then check for consistency across the results. For 5-minute ensembles, we normally set each run to:

nrefs=      5    7    9
i_ref_l0=   1    1    1
i_ref_l1=   1    2    3
i_ref_r0=   4    5    6
i_ref_r1=   4    6    8

always leaving out the 2 ensembles in the middle. The 5-reference window generally finds more points but also includes more noise. As the window gets bigger, fewer points become available but the averaging also tends to produce better results. Again, when we have both precision GPS and heading correction data, the 5 ensemble window usually gives the best results. Here we will look at all 3 window lengths.

The parameter use_shifted_times is set to ‘no’, meaning don’t apply any time correction when calculating amplitude and phase (the original purpose of timslip was to determine the time correction needed for the ADCP ensembles). We presume that we have done an adequate job of time correction during loading, unless timslip convinces us otherwise. If the ADCP times have been sufficiently corrected, then ‘nav - pc’ is normally under 5 seconds. If it becomes evident that there is some timing error, it is best to go back to the loadping stage and fix the times there.

For cruises with few stations or turns, and thus few water track points, an alternate watertrack method is available using portions of the ship’s track that are reciprocal (or roughly so). See the APPENDIX for a description of recip.m.

% timslip timslip.cnt

% vi adcpcal.m

Review the clipping parameters at the top of the file. The values provided are what we normally use so we don’t usually change anything.

% matlab
>> adcpcal('ademo_5r.cal', 'ADCP DEMO 5 ens')

This plots all the calibration results, then applies the clipping parameters and replots the remaining points. The amplitude and phase estimates, etc. are appended to a text file adcpcal.out, and the plots are saved to cal.ps.

Now redo all of the steps above starting from ‘vi timslip.cnt’, using the different window settings. We normally do the 7, and then the 9. Now we compare the results recorded in adcpcal.out.

Results:
-------------
nrefs            5                 7                 9
mean(std)
amplitude        1.01(0.0184)      0.99(0.0147)      1.00(0.0092)
phase            2.05(1.1302)      1.92(0.8186)      1.87(0.7607)
#pts used/total  15/19             16/17             13/17

The amplitude varies from 0.9959 to 1.0006, with lowest standard deviation (0.0092) for 1.00 (n_refs= 9). The phase varies from 1.87 to 2.05, with lowest standard deviation (0.7607) for 1.87 (n_refs= 9).

Looking at both bottom and water track results, we would normally pick 1.8 degrees as the angle, as this is roughly between the bottom track result for step=3 (1.79) and the water track result for nrefs=9 (1.87), which have the lowest standard deviations. We usually do not specify the angle to a precision beyond a tenth of a degree. The amplitude correction would be 1.00 (that is, none; amplitude is usually specified to the hundredths place). With good data sets, especially those with gps heading correction data, we use higher precision: 0.01 degree in phase and 0.001 in amplitude. If we have, say, 200 watertrack points and a phase standard deviation of 0.5, the SEM is 0.035; so we do have a hope of getting accuracy beyond 0.1 degrees and we specify one more place.
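The standard error of the mean behind that precision argument can be checked directly. A minimal sketch (the function name is illustrative; the 200-point count and 0.5-degree scatter are the example values from the text):

```python
import math

def phase_sem(std_deg, n_points):
    """Standard error of the mean phase: std / sqrt(n)."""
    return std_deg / math.sqrt(n_points)

# Example values from the text: 200 watertrack points, 0.5 degree scatter.
print(round(phase_sem(0.5, 200), 3))  # 0.035
```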

Fortunately, this is relatively consistent with the results derived when the entire cruise (48 days’ worth of data) was used. So even though our subset has severe limitations, this gives us confidence to proceed and apply the above corrections to the database. It is worth noting that it is reasonable to take into account a larger body of information when determining the calibration values, if it is available. Such information might include calibration values from previous cruises on the same ship AS LONG AS nothing has changed (i.e., the transducer hasn’t been removed and replaced; the GPS heading antenna array has not been moved). We have found it helpful to combine timslip output from multiple cruises and run the watertrack calculation on the combination.

Now, we will rotate the data with both the heading correction values and the transducer offset calibration values.

% cd ../rotate
% vi rotate.cnt

Specify DB_NAME, LOG_FILE (a running record of rotations performed on the database), TIME_RANGE: (or DAY_RANGE: or BLOCK_RANGE: or BLOCK_PROFILE_RANGE:; doesn’t matter in our case, we are rotating ‘ALL’). For the gyro correction, we use the time_angle_file: option, and the amplitude and phase options for the bottom/water track values.

% rotate rotate.cnt

actually rotates and rescales the U and V velocity components (both water track and bottom track, as we specified), adds the file angle parameter and the constant angle to the variables ANCILLARY_2.watrk_hd_misalign and botrk_hd_misalign, and multiplies ANCILLARY_2.*_scale_factor by the amplitude factor.

Running showdb confirms that the data processing mask has been set to indicate rotation (second bit from the right), and the ANCILLARY_2 variable displays the rotation values used.

% showdb ../../adcpdb/ademo

Use option 1 to check the data processing mask, and option 16, variable ID 38 to view the changes to the ANCILLARY_2 variable.

7   QUALITY PLOTS

Return to TOP

Quality plots can be done at any time during processing. They usually consist of deriving a single U, V, W, etc. average for “on-station” profiles and another for “underway” profiles within a given region.

The first step is to obtain a set of on-station time ranges, and another set of underway time ranges:

% cd ../quality
% vi arrdepos.cnt
% arrdep arrdepos.cnt
% vi arrdepuw.cnt
% arrdep arrdepuw.cnt

The next step is to obtain profile statistics for each one of these sets. Concatenate the on-station time ranges to profstos.cnt and run profstat.

This can be done explicitly using:

% cat ademo_os.arr >> profstos.cnt

or by using the include file option “@ademo_os.arr” at the bottom of the control file.

Do the same for the underway time ranges.

% vi profstos.cnt
% profstat profstos.cnt
% vi profstuw.cnt
% profstat profstuw.cnt

Now we use a Matlab script to generate the quality plots from the profstat *.mat output files:

% vi stn_udw.m
% matlab
>> stn_udw

Press return after each page of plots (4 pages in all, assuming all the usual variables are being plotted). This also generates a PostScript file. The plots provide a means of comparing ADCP performance when the ship is underway vs. on station.

8   RETRIEVAL AND PLOTTING

Return to TOP

Below we describe procedures for extracting data from the database to generate contour, vector and stick plots. Since the primary key in the database is profile time, we usually start by creating a grid of time ranges. This may either represent some longitude/latitude interval, or simply a time interval.

The program llgrid converts a user-defined longitude and/or latitude grid to time ranges.

% cd ../grid
% vi llgrid.cnt

The lon_increment is set to some decimal degree interval (or a very large value to grid purely on latitude). The same is true for lat_increment. The lon_origin and lat_origin are usually set to -0.5 times the increment value to center the data on the grid boundary.

% llgrid llgrid.cnt

creates an ASCII file of time ranges, which can be appended to the adcpsect.cnt file used to retrieve the data for plotting from the database. Review this file to see if the lon-lat increments are adequate to generate an acceptable averaging interval (the comments indicate how many minutes fall into each time range); normally, we want at least 2 ensembles for each average, but not too many either. It may sometimes be necessary/desirable to manually edit some of the time ranges to suit the purpose at hand.
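The centering arithmetic is simple: an origin offset of -0.5 times the increment puts the grid cell centers on whole-increment boundaries. A minimal sketch (the function name and values are illustrative, not part of llgrid):

```python
def grid_edges(origin, increment, n_cells):
    """Cell edges for a lon or lat grid: origin, origin + inc, ..."""
    return [origin + i * increment for i in range(n_cells + 1)]

# With a 0.5-degree increment and origin at -0.25 (i.e. -0.5 * increment),
# cell centers fall on 0.0, 0.5, 1.0, ... degrees.
edges = grid_edges(-0.25, 0.5, 4)
print(edges)    # [-0.25, 0.25, 0.75, 1.25, 1.75]
centers = [(a + b) / 2 for a, b in zip(edges, edges[1:])]
print(centers)  # [0.0, 0.5, 1.0, 1.5]
```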

The other program, timegrid, simply breaks up a time range into even time intervals.

NOTE: Do NOT use the “all” time option in timegrid.cnt; “all” uses default beginning and ending times which cover a 100 year period. Timegrid does not look at the database, but calculates even time intervals within the time range(s) specified.

% vi timegrid.cnt
% timegrid timegrid.cnt

Again, the output file is meant for appending to adcpsect.cnt for extracting time-based averages.

9   VECTOR

Return to TOP

Assuming we have run llgrid to convert our vector plot lon-lat grid to time ranges:

% cd ../vector

add timeranges with the include file option < @../grid/ademo.llg > or

% cat ../grid/ademo.llg >> adcpsect.cnt
% vi adcpsect.cnt
% adcpsect adcpsect.cnt

This creates a *.vec file, which we will use as input to the vector plotting program. It has columns lon, lat, (U, V) for the first depth layer, (U, V) for the second depth layer, etc. in m/s.
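Since a *.vec file is plain columnar text, it is easy to inspect outside the plotting program. A hedged parser sketch, assuming the column layout described above (the function and file names are illustrative):

```python
def parse_vec(lines, n_layers):
    """Parse *.vec rows: lon, lat, then (U, V) in m/s per depth layer."""
    profiles = []
    for line in lines:
        vals = [float(x) for x in line.split()]
        if not vals:
            continue  # skip blank lines
        lon, lat = vals[0], vals[1]
        uv = [(vals[2 + 2 * i], vals[3 + 2 * i]) for i in range(n_layers)]
        profiles.append((lon, lat, uv))
    return profiles

# Typical use (filename illustrative):
# with open("ademo.vec") as f:
#     profiles = parse_vec(f, n_layers=2)
```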

To create the vector plot, we need to specify the plot parameters in the file vector.cnt:

% vi vector.cnt
% vector vector.cnt

creates a PostScript file which can be viewed with the usual utility programs for viewing PostScript files (gs, xv, pageview, ralpage, etc.) or spooled to a PostScript printer. Note that circles on the plot indicate velocity vectors whose magnitude is less than the arrowhead length.

10   STICK

Return to TOP

A Matlab plotting routine allows plotting velocity vectors at each depth layer all on one plot, and performs some harmonic analysis of on-station data. Again we use either llgrid or timegrid to generate a set of time ranges and then append this to adcpsect.cnt.

Note that some parts of the analysis require on-station data for at least 3 days to get any kind of reasonable results. The demo does not contain a time interval for which harmonic analysis is truly appropriate, but we will proceed to illustrate how it works.

From the demo’s reference layer velocity plots, we can pick out the first 1-1/3 days as being “on-station” (this is not strictly true, since the ship actually covered 0.5 degree latitude over the period). To get a more exact breakoff point for the “on-station” time, use the smoothr *.sm file: columns 4 and 5 give ship U and V in m/s. Locate the decimal day (column 1) corresponding to where the ship starts steaming southeast the rest of the way: line 378, decimal day 99.316667.

% to_date 93  99.316667

converts from decimal day to yy/mm/dd hh:mm:ss format.
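The same conversion can be sketched in Python, assuming the convention (consistent with this demo) that 00:00 on January 1 corresponds to decimal day 0.0; verify against to_date itself:

```python
from datetime import datetime, timedelta

def decday_to_date(year, decimal_day):
    """Convert a decimal day-of-year (Jan 1 00:00 = day 0.0) to a datetime."""
    return datetime(year, 1, 1) + timedelta(days=decimal_day)

# The breakoff point from the demo text:
print(decday_to_date(1993, 99.316667))  # -> 1993-04-10 07:36 (plus fractional seconds)
```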

% cd ../grid
% vi timegrid.cnt
% timegrid timegrid.cnt
% cd ../stick
% vi adcpsect.cnt        #add time range or include file option
% adcpsect adcpsect.cnt

The output file has a .con extension, and columns x (time), y (depth), and absolute U and V in cm/s.

% vi runstick.m

We select all possible plot options, even though we do not have a long enough time series. The script will reset the harmonic value according to the number of days covered in the on-station file. We also need to specify axes and scaling parameters, as well as the inertial period.

% matlab
>> runstick

Note that Matlab version 4 is inordinately slow in writing the PostScript output file, so don’t be impatient. Press the return key each time you see ‘done’ on the screen.

11   CONTOUR PLOTS

Return to TOP

Here we’ll contour the southeast bound section, using longitude on the x-axis, depth on the y-axis.

NOTE: This section describes a functional but unsupported commandline contouring program. You may prefer to contour the data in Matlab (see below)

% cd ../grid
% cp llgrid.cnt llcont.cnt
% vi llcont.cnt
% llgrid llcont.cnt
% cd ../contour
% cat ../grid/ademo_se.lon >> adcpsect.cnt
% vi adcpsect.cnt
% adcpsect adcpsect.cnt
% cp contour.cpa se.cpa
% vi se.cpa

The contour program and documentation are provided separately from CODAS and are available from the same anonymous ftp site. Note that only Sun binaries are provided in the package, but the source code is written in standard FORTRAN-77 so should be portable to most sites.

% contour se.con se.cpa

produces the PostScript file se.ps.

Two changes are needed in the se.cpa file to contour the U component: the title, and the FORTRAN format statement.

Then rerun contour.

12   PROCESSING DOCUMENTATION

Return to TOP

One of the most important steps in ADCP processing is documenting what has been done to the data. Ideally we would leave an accurate and concise record of all things of note and all the steps that were performed, so that someone else could come along later and reproduce all the processing, knowing exactly how it was done.

Toward that goal, we have included an example doc file, ADCPdemo.doc, in one possible format for keeping track of the details.

13   APPENDIX

Return to TOP

13.2   QUICK_ADCP.PY

Return to TOP

This is a python program which is meant for batch processing of whole cruises. It does the usual steps (scan, load, ashtech correction, the nav steps: ubprint, adcpsect, refabs, smoothr, putnav), extracts and plots temperature, does watertrack and bottomtrack calibration, sets up files in the edit/ directory for a new graphical editing scheme (see below), and makes overview matlab files suitable for vector or contour plotting.

Type “quick_adcp.py” on the command line for more options.

13.3   GPLOTADCP

Return to TOP

A serviceable first-attempt plotting tool is called “gplotadcp”. Load files created by adcpsect (*_uv.mat) and use the drop-down menu to make rough vector and contour plots. This requires the m_map package. (Gplotadcp will be superseded by a new, snazzier tool, but that one is not quite done yet.)

Check out aREADME for more information about gautoedit. It is copied into the edit/ subdirectory of a new cruise when adcptree is run.

13.4   GAUTOEDIT

Return to TOP

A new editing package is now available: “gautoedit” provides a different means of editing large chunks of data (i.e. several days at a time, rather than 40 profiles at a time). This tool was designed to screen data for things like ringing, on-station wire interference, jittery navigation, and bottom interference when bottom tracking was not on. It cannot replace the small scale editing possible with the waterfall plots, but does help speed up the editing process. It has a crude interface to the old waterfall editing scheme. Again, installing m_map will help with the visualization. This is a useful tool for simply looking at a dataset, even if you don’t use it for editing.

13.5   TIME CORRECTION

(clkrate.m and other considerations)

Return to TOP

Clkrate.m is useful only for data collected with UH mag2 user exit programs or other user exit programs with clock corrections disabled. Date problems have to be sorted out by hand.

It plots the difference between the two times as a function of the satellite (true) time, and helps estimate PC clock drift or times at which the PC clock has been reset. Even when the user-exit program is set to do automatic clock correction, time corrections may still be needed: for instance, if the date on the PC clock is wrong or there is a drift in the PC clock.

The output file ademo.scn is used by clkrate.m. We edit the Matlab script clkrate.m to specify filenames and thresholds. Comments on top of this file explain how to set these. Then we start Matlab and run it:

% vi clkrate.m
% matlab
>> clkrate

message =

Fitting one trend line to ALL data:

correct_time: 93/04/09-00:02:31
PC_time: 93/04/09-00:02:32
clock rate: 1.000000

For the demo, we see in the plot that the distribution of the pc-fix times, after knocking out outliers, falls within max_dt_difference seconds or so (we still see a few 3s and 4s, where we expected to see no more than 2 seconds, but that’s not too bad). The clock rate is a perfect 1.0. So we are confident that we do not need to perform any time corrections.

Details of how the script works are described in clkrate.m. The main point is that it makes one or more linear fits to the (fix time, pc-fix time) data, detecting breaks in the trend (most probably due to the PC clock being reset at mid-cruise), fitting each detected trend separately. Clkrate.m uses only seconds in the day to determine fits, so it CANNOT detect date errors in the PC times. When applying a time correction during loading, the correction is made from the start_header_number specified through ALL subsequent data until another time_correction option is encountered.
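The core of each fit can be sketched as a least-squares line through the (fix time, pc-fix time) pairs within a single trend segment. A minimal illustration, not clkrate.m itself (segment detection is omitted, and the function name is ours):

```python
def fit_clock(fix_times, pc_minus_fix):
    """Least-squares line through (fix time, pc - fix) for one segment.

    Returns (rate, offset): pc - fix ~ rate * t + offset, so the PC
    clock rate relative to true time is 1 + rate.
    """
    n = len(fix_times)
    mt = sum(fix_times) / n
    md = sum(pc_minus_fix) / n
    num = sum((t - mt) * (d - md) for t, d in zip(fix_times, pc_minus_fix))
    den = sum((t - mt) ** 2 for t in fix_times)
    rate = num / den
    return rate, md - rate * mt

# A healthy PC clock: constant 1-second offset, no drift -> clock_rate 1.0.
rate, offset = fit_clock([0.0, 3600.0, 7200.0], [1.0, 1.0, 1.0])
print(1.0 + rate, offset)  # 1.0 1.0
```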

The script saves the output to a text file ademo.clk. These are estimates of the PC clock speed vs. the satellite clock (i.e., clock_rate), the PC time, and the correct time for each segment, in the format expected by loadping. It would be helpful at this point to look at the comments in the loadping.cnt file, so that the following is more meaningful.

Note that the times chosen as start of a segment are estimates; they must be compared with transitions (such as a change in header within a pingdata file or ensemble lengths longer than normal, which might indicate a reset of the PC clock) seen in the scan output file and adjusted as needed. You will also need to note the start_header_number from the scanping output.

The discussion of time problems in the CODAS Manual is not completely current. There can be, in general, 3 types of time problems, which can occur separately or all together:

  • wrong date in ensemble times recorded
  • wrong time within the same day in recorded times
  • a drift in the PC clock relative to the real times received with the satellite fixes.

When viewing the *scn output you should check for each of these. You will need to note what pingdata file they occur in, what header number, and the recorded PC time and compare these with the clkrate.m output.

The first requires knowing the cruise dates and looking for jumps in the date within the pingdata files. Sometimes the final ensembles of the previous cruise will be recorded in the first pingdata file; in this case just skip these when loading. If there is a log that tells of changes to the ADCP PC, this will be helpful.

The second shows up in the last column of the scanping output: pc-fix time. This records only the difference in seconds between the PC clock at the end of the ensemble and the real time from the GPS fix. If this is larger than ~5 seconds, it should be corrected during loadping by using this offset to get the “correct_time” of the ensemble (see *clk output).

The third (a drift in the PC clock) also shows up in the last column of the scanping output; here you see a change in the average pc-fix time over the length of the pingdata file. This drift can also be estimated by clkrate.m, but should be verified.

The method used to correct the ensemble times during loadping is:

         t(i) = t(ref) + R * (c(i) - c(ref))

where
       t   = true time
       c   = possibly erroneous clock reading
       R   = clock_rate
       i   = ith ensemble
           ref = beginning of the segment to correct.
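The correction above can be sketched directly (a minimal illustration of the formula, not the loadping implementation; names and values are ours):

```python
def correct_time(t_ref, c_ref, c_i, clock_rate):
    """Apply t(i) = t(ref) + R * (c(i) - c(ref)), all times in seconds."""
    return t_ref + clock_rate * (c_i - c_ref)

# A PC clock running fast by 1 part in 10000 (clock_rate slightly below 1):
# one hour of elapsed PC time maps to slightly less true elapsed time.
print(round(correct_time(0.0, 0.0, 3600.0, 0.9999), 2))  # 3599.64
```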

13.6   EDITING: SPECIAL CONSIDERATIONS

Return to TOP

13.6.1   LARGE REGIONS WITH LOW PG

(mkbadprf.m)

The matlab script mkbadprf.m can be used when large regions of profiles or many single profiles need to be flagged as bad due to low percent_good. These profiles are identified as those with no data in the reference layer bins that meet a minimum percent_good criterion.

The script requires as input the output of a navigation adcpsect run (*nav) where the reference layers bins are those which you will use in calculating the final navigation and the pg_min option is set to a reasonably high value. The output is an ascii file of profile times in the format of badprf.asc and is applied using dbupdate.

Input and output file names and the year must be edited by the user.

13.6.2   SCATTERING LAYERS

(bias in along track velocity component)

Except in extreme cases of vigorous swimming by scatterers, it has been thought that scattering layers have no effect on the doppler measurements, but there are sometimes effects in u or v. When the ship is underway, strong scattering layers can cause a bias in the along track velocity component. To help detect occurrences which warrant editing, use the interactive editing plot commands “fore”, “port”, and “fp” in conjunction with the amplitude (“a”) plot.

What to look for: subsurface amplitude maximum(s) or local maximum (not always triggering bottom flagging) visible in the amplitude plot when the ship is UNDERWAY. The effect can be a reversed “S” in along track velocity, with the middle of the “S” usually corresponding to the amplitude maximum. The plot command fp will plot both u and v rotated into ship’s coordinates. A real bias will appear in one of these but not both (except in extreme cases of rapid dispersion of a large layer). Normally such bias is seen in the along track velocity when the layer is stationary; however, we have seen such bias in the across track component only, which is attributed to the scattering layer moving in that direction. When the ship is on-station, the profiles in rotated coordinates will appear constant.

In the demo, use the interactive editing to display starting at block 2 and profile 25 (getp(2,25), then u). You will see amplitude maximum flags on profiles 32 and 34, at bins 24 and 18. Now look at the velocity in ship’s coordinates: fp. There is a marked change in character of the profiles from ~bins 14-25 for these 2 profiles and for about 10 more following in the fore (U rotated) direction. There is some change in the acrosstrack character also. Now look at the amplitude. The local maximums are visible in these and subsequent profiles, with some profiles having double maximums. Looking at the “maximum” for profile 34 at bin 19, it looks by eye to be shifted ~20 cm/s from the trend seen in bins 7-13.

The next group of 25 profiles also shows a scattering layer bias at 2 different levels. Use the fp command to see how clearly the “S” shows up in the U rotated velocity, with no effect in the V rotated component. The amplitude plot shows what appear to be 2 different layers, the first descending and the second stationary. Here the “S”s are not as large as in the previous case, but appear to be at least on the order of 10 cm/s.

How to handle these? We are working on routines to detect and flag such cases, but for now you will have to edit by hand, if you so choose. You can use either rzap or zap to flag the “S”s with badbin and then unmark the bottom flagging with the sl editing command; or just make a note that gets added to the doc file. Normally the bins below the scattering layer are still good, so we don’t want to truncate the profile.

13.6.3   DELBLK

The command delblk should be used ONLY when you want to permanently remove a range of data from the database. You may specify the range by block(s), by block and profile range, or by time range.

% delblk adcpdb/ademo

% DATABASE: adcpdb/ademo successfully opened

 Enter one of the following numbers :

  1.  Delete block(s)
  2.  Delete block-profile range
  3.  Delete time range
  4.  Exit program

 ==>
-----------
Enter your choice; you will be queried for the range specifications, with a
description of the format expected.

13.6.4   SET_TOP

The command set_top is used to indicate the topmost bin for which data are considered good for profiles taken while the ship was underway (see the input options to apply it to all data), within the specified time_range(s) in a CODAS ADCP database. Copy the control file set_top.cnt from codas3/cntfiles and edit it for your case.

% set_top set_top.cnt

13.7   WATER TRACK - reciprocal track method

Return to TOP

The matlab function recip.m can be used to calculate calibration phase and amplitude from approximately reciprocal ship tracks; we treat these as a closed loop and integrate around the circuit. The assumptions made are that the track is indeed reciprocal, or nearly so, and that the currents over the track are not changing over the time we are concerned with. In practice we have found results are consistent with the traditional watertrack method, even when the assumptions do not match the data very well.

Inputs required are the smoothr output file, label information, and sets of time ranges which define the beginning and end of the reciprocal tracks.

Output consists of estimates of amplitude and phase values with a “goodness” of the reciprocality in space assumption estimated by computing the sum of gap lengths over total length of circuit (as %) for each reciprocal specified. These are written to file recip.out.
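The “goodness” measure can be sketched as follows (illustrative only; recip.m computes this from the smoothed navigation, and the function name and values here are ours):

```python
def reciprocality_goodness(gap_lengths, total_length):
    """Sum of gap lengths over total circuit length, as a percent."""
    return 100.0 * sum(gap_lengths) / total_length

# Two small gaps on a 120 km circuit -> 2.5% of the loop unsampled.
print(reciprocality_goodness([1.0, 2.0], 120.0))  # 2.5
```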

13.9   MISCELLANEOUS COMMANDS

Return to TOP

  1. lst_conf lst_conf.cnt
  • Extract the history of the configuration settings from a CODAS database into a text file.
  2. lst_prof lst_prof.cnt
  • Produce a list of times and positions of selected profiles within the given time range(s) from a CODAS database.
  3. mkblkdir mkblkdir.cnt
  • Generate a new CODAS database from a set of CODAS block files. The new database will consist of a new block directory file and a copy of each input block file, converted if necessary to the host machine format.
  4. getnav getnav.cnt
  • Retrieve the navigation information for selected profiles within the given time range(s) from a CODAS ADCP database.

NOTE:

getnav gets positions from the NAVIGATION data structure where they are placed by the user exit program. UH user exits place an end_of_ensemble fix; RDI’s NAVSOFT places an ensemble-averaged fix. The NAVIGATION data structure is not updated by any CODAS processing steps.

  5. lst_hdg lst_hdg.cnt
  • Extract the ship’s mean heading and last heading from a CODAS database into a text file.
  6. nmea_gps nmea_gps.cnt
  • Convert NMEA-formatted GPS fix files to a columnar text format suitable for use with the codas3/adcp/nav programs, and/or to Mat-file format for Matlab.
  7. newflag newflag.cnt
  • Generate a list of “suspect” profiles in a CODAS database, i.e., profiles that do not conform to the specified thresholds. This is part of an older batch-mode alternative to the present interactive editing system.

13.10   BROAD BAND ADCP DATA

Return to TOP

13.10.1   SCAN (bbscan)

Scan the raw *r* files to see what the time ranges are for each file:

% scanbbs -oall.scn 9602*r*

NOTE

scanbbs is a modification of scanbb, a LADCP processing program. Its source is in programs/ladcp/src, and the binary is also in the ladcp/bin/{sun4,sol2} location. Execute it with no arguments to see how to use it.

Run navtime.prl to get time differences between the start and end ensemble times and the start and end GPGGA message times. This script compares times from *r* in all.scn with times in *n* (nav files), counting seconds in a day only, so multiples of 24 hours will not show up as unusual here:

% cp /home/noio/programs/codas3/adcp/tran_bbr/navtime.prl .
% chmod a+x navtime.prl

Edit the input file name from “test.scn” to “all.scn” near the top of navtime.prl, and execute it. Ignore the warnings about possible typos.

% navtime.prl | tee all.tim

Look at the output, all.tim, columns 2 and 3, to see if instrument and GGA times agree.

CHECKING TIMES

Extract columns 6 & 7 from all.scn into scn.times. These are the start and end ensemble times. Times can be loaded into matlab and plotted to identify jumps. Determine the seconds of time needed, if necessary, to correct ensembles during loading.
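That extraction can be sketched as follows (assuming whitespace-separated columns in all.scn; adjust the indices if your scan output differs, and note the helper name is ours):

```python
def select_columns(lines, cols):
    """Yield selected whitespace-separated columns (0-based), space-joined."""
    for line in lines:
        fields = line.split()
        if len(fields) > max(cols):
            yield " ".join(fields[c] for c in cols)

# Columns 6 and 7 in the text are 1-based, i.e. indices 5 and 6 here:
# with open("all.scn") as fin, open("scn.times", "w") as fout:
#     for row in select_columns(fin, (5, 6)):
#         fout.write(row + "\n")
```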

13.10.2   LOAD (bbload)

Edit loadtbbp.cnt; use user definition file loadtbbp.def.

See the documentation in loadtbbp.cnt for option specification, such as a date correction:

% ../transect/9602088p.000 add_seconds: 86400    end

% loadtbbp loadtbbp.cnt

13.10.3   GPS EXTRACTION (bbgps)

Use nmea_gps nmea_gps.cnt to extract position information from the *n* files.

Return to TOP