The Upper Ocean Thermal Programme operates through the collaboration of a number of agencies. The majority of the data derive from XBTs. Two sampling modes were used: certain ocean sections were chosen for high density sampling, while others were sampled in low density mode, largely from ships of many kinds, coordinated through the Ship of Opportunity Programme (SOOP).
Assembling the data, whether sent over the Global Telecommunications System (GTS) or received in delayed mode, is the task of Canada's MEDS, IFREMER of France, and the National Oceanographic Data Center (NODC) of the U.S. Three science centres, JAFOOS in Australia (jointly run by CSIRO and BOM, responsible for Indian Ocean data) and, in the U.S., AOML (responsible for Atlantic Ocean data) and Scripps (responsible for Pacific Ocean data), carry out scientific quality assessment of the data and return their results to the main archive at the NODC.
A large fraction of the data on this disk were collected using XBTs. In the early 1990s a group was organized to examine the fall rate of XBTs and to compare the results with the manufacturer's specifications. Results of this work are reported in UNESCO 1994 (Calculation of new depth equations for expendable bathythermographs using a temperature-error-free method (application to Sippican/TSK T-7, T-6 and T-4 XBTs), UNESCO Technical Papers in Marine Science, 67, 46pp), by Rual et al. in the WOCE Newsletter, October 1996, and in the scientific literature. Only a subset of the different classes of XBT probes was tested thoroughly, but in general it was found that depths calculated using the manufacturer's fall rate equations were shallower than the true depths indicated by coincident CTD data. The corrections are linear but vary from one probe type to another.
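As an illustration of why a simple multiplicative correction is possible (the revised coefficients for each probe type are given in the UNESCO 1994 report cited above), note that XBT depth is not measured directly but is computed from the elapsed time t since the probe entered the water using a quadratic fall rate equation of the form

    z(t) = a*t - b*t^2

where a is the initial fall speed and b describes the slight deceleration of the probe as it falls. Replacing the manufacturer's coefficients with the revised ones deepens every computed level, and because the two equations are nearly proportional over the depths sampled, the change can be well approximated by multiplying the original depths by a constant factor; this is the origin of the single 1.0336 factor discussed below.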
Until this problem was recognized, information about the type of XBT probe used rarely accompanied the temperature data sent to archive centres. Subsequently, the oceanographic archive centres were asked to request this information from data providers and advised to store it with the data. At the same time, WMO modified its BATHY code form (the code used to send XBT data in real-time on the GTS) to allow the inclusion of information about the probe type, the fall rate equation used and the recorder used. Use of this code form began in November 1995, but it took two years before more than 80% of the data sent this way included this information.
The NODC has preserved XBT data in the GTSPP Continuously Managed Database (CMD), from which the data for this disk came. The probe type and fall rate equation information is stored in the GTSPP CMD when it was provided. First, in the global attributes portion of the file (see the format description in the Data Files section), the data_type states what sort of data is being considered. If the value here is BATHY or XBT, then the question of the fall rate comes into play. Information about the fall rate and probe is associated with the parameter codes PEQ$, PFR$, and PRT$. These parameter codes are found in the surfacecodes_srfccode variable, and the corresponding values in the associated surfacecodes_cparm variable. The value stored is drawn from the code table entries of WMO tables 1770 and 4770. For example, where the PFR$ code is found, its value could be 04205, where 042 means a Sippican T-7 probe (table 1770) and 05 means a MK12 recorder (table 4770). Note that 041 is also a Sippican T-7, but with different (older) fall rate equation coefficients.
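As a minimal sketch of how these codes might be read with Python and the netCDF4 library (the file name here is hypothetical, and the exact character-array layout should be confirmed against the format description in the Data Files section):

    import netCDF4

    ds = netCDF4.Dataset("example_station.nc")   # hypothetical file name
    ds.set_auto_mask(False)                      # read character arrays as plain arrays

    # Only BATHY/XBT stations are subject to the fall rate question.
    if getattr(ds, "data_type", "").strip() in ("BATHY", "XBT"):
        # Surface codes are stored as parallel character-array variables:
        # surfacecodes_srfccode holds the parameter code (e.g. PFR$) and
        # surfacecodes_cparm holds the associated value (e.g. 04205).
        codes = netCDF4.chartostring(ds.variables["surfacecodes_srfccode"][:])
        values = netCDF4.chartostring(ds.variables["surfacecodes_cparm"][:])
        for code, value in zip(codes, values):
            if code.strip() == "PFR$":
                value = value.strip()
                print("probe type (WMO 1770):", value[:3])   # e.g. 042 = Sippican T-7
                print("recorder   (WMO 4770):", value[3:5])  # e.g. 05  = MK12
    ds.close()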
The NODC has worked with MEDS and CSIRO to develop the logic for correcting depths in archived XBT data. It was agreed that science centres may correct depths even without complete information about probe types, provided they are sufficiently sure that a correction is appropriate. If NODC or MEDS makes a correction, it will do so only after checking that no correction has already been applied by a science centre, and only if there is sufficient information to be sure that a correction is appropriate. It was also agreed that depth corrections would be applied only to the data placed on the WOCE DVDs; the NODC will not make depth corrections to the archived XBT data. However, where a data centre has made depth corrections, the NODC will preserve the old depths by moving them to a different table, load the corrected depths supplied by the science centres, and use the existing database ID to link the new depths to the old ones. In addition, two new codes will be created to retain depth correction information in the surface codes structure: "DPC$" indicates the status of the depth correction, and "FRA$" retains the conversion factor of 1.0336. The "DPC$" code will have the following states:
01 = Known probe type, needs correction
02 = Known probe type, no need to correct
03 = Unknown probe type, not enough information to decide what to do, leave alone
04 = Known XBT probe type, correction was done
05 = Unknown probe type, but a correction was done
Having determined which profiles are from XBTs by querying the data type, the XBT probe type and the fall rate equation stored in the XBT archives, the fall rate correction strategy is simply to multiply the existing depths by a factor of 1.0336. This is the technique employed, with the multiplication factor stored in the file structure as agreed by the GTSPP team members. A FORTRAN program, dephcorr.f, developed by CSIRO, and Perl modules, v3cd_dvd.pl, db.pm, and proc.pm, developed by the US NODC for the depth correction, are included on this disk for future reference.
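The sketch below illustrates this logic in Python (the dephcorr.f and Perl modules on the disk are the reference implementations; this fragment only assumes a list of depths and the two-character DPC$ status already read from a station file):

    # Fall rate factor agreed by GTSPP; also stored with the data under FRA$.
    FALL_RATE_FACTOR = 1.0336

    def correct_depths(depths, dpc_status):
        """Return fall-rate-corrected depths for one XBT profile.

        depths     -- sequence of original depths in metres
        dpc_status -- two-character DPC$ value from the surface codes
        """
        if dpc_status == "01":
            # Known probe type that still needs correction: deepen every level.
            return [d * FALL_RATE_FACTOR for d in depths]
        # 02: no correction needed; 03: insufficient information, leave alone;
        # 04 and 05: a correction has already been applied.
        return list(depths)

    # Example: a profile flagged 01 is deepened by about 3.4 percent.
    print(correct_depths([10.0, 100.0, 500.0], "01"))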
However, the international community has suggested that corrections to the global archives be carried out in cooperation with other data centres around the world to ensure consistent international standards.
The WOCE period is from 1990 to 1998. This disk contains a mixture of high and low vertical resolution data and data that have passed through different levels of quality control. Generally, all of the data from 1990 to 1998, whether delayed mode or real-time, high resolution or low resolution, have been examined by science centres. The remaining data, including data from after the WOCE period, have been included to provide additional coverage. The exact level of QC and the resolution of the data are documented with the data themselves.
The data on this disk are of the highest resolution and highest quality available at the time of production. Because of delays in receiving and processing high resolution data, there is a mixture of both low resolution and high resolution data. Once a year, all available data (both high and low resolution) from a particular period are passed to the science centres for scientific quality assessment. When this is completed they are returned to the NODC. Delays in submitting data to the NODC mean that some arrive too late to be sent to the science centres; in this case quality control has been performed by data centres only. A document in the Data section describing the format explains how to distinguish the data. Generally, all of the data from 1990-1998 have passed through scientific quality control. However, as data sometimes arrive years after they were collected, the only way to be certain that you are dealing with data that have passed scientific QC is to examine the data tracking information attached to each profile.
As data pass through QC at both data centres and science centres, the profiles receive data quality flags. In addition, the science centres place information in the file that describes why certain data are considered to be of less than good quality. This information is stored in the HISTORY section of the data format. To see how to extract this information, look at the document in the Data section describing the format. These codes are stored in the ACT_CODE field and are described in the code tables in the Data section.
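As a rough Python sketch of pulling these codes out of a station file (the HISTORY variable name used here is hypothetical; take the real names from the format document in the Data section):

    import netCDF4

    ds = netCDF4.Dataset("example_station.nc")   # hypothetical file name
    ds.set_auto_mask(False)

    # HISTORY records describe the QC actions applied to the profile; the
    # ACT_CODE values are defined in the code tables in the Data section.
    # NOTE: "histories_act_code" is a guessed variable name -- confirm it
    # against the format description before relying on it.
    act_codes = netCDF4.chartostring(ds.variables["histories_act_code"][:])
    for code in act_codes:
        print("QC action code:", code.strip())
    ds.close()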
All of the data on this disk are written in netCDF. Because of the structure of netCDF it was necessary to write one file for every station. However, when you download the data from this disk you will see that the individual files have been combined by ocean basin and quarter of the year into a single compressed file. An inventory file exists for each of these compressed files and contains a record for each netCDF file. See the format documents for more details.
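For users working with the combined files, a short Python sketch such as the one below can unpack one archive and walk through the per-station netCDF files; the archive name is hypothetical, and the actual packaging (zip versus gzipped tar) and naming convention are given in the format documents:

    import glob
    import os
    import tarfile
    import tempfile

    import netCDF4

    archive = "atlantic_1995_q1.tar.gz"   # hypothetical archive name

    with tempfile.TemporaryDirectory() as workdir:
        with tarfile.open(archive) as tar:
            tar.extractall(workdir)        # one netCDF file per station
        for path in sorted(glob.glob(os.path.join(workdir, "**", "*.nc"),
                                     recursive=True)):
            ds = netCDF4.Dataset(path)
            print(os.path.basename(path), getattr(ds, "data_type", "?"))
            ds.close()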
A complete description of the data flow from collectors to users is given in the Documents section. In brief, much of the data handled by the UOT programme come from XBTs dropped from ships. These profiles are transmitted in real-time (within 30 days) at low vertical resolution and are received by MEDS each day. In addition, there are now a substantial number of profiling floats operating in the oceans and also reporting in real-time. These provide varying degrees of higher vertical resolution and typically report every 10-14 days. After quality control and duplicates checking, the data are uploaded to the U.S. NODC three times a week. The NODC also manages the high resolution data received in delayed mode, passing these data through quality control and duplicates resolution processing and replacing the low resolution records with the high resolution ones as they arrive. Once a year, the data are forwarded to the science centres (AOML, JAFOOS and Scripps) for quality control and then returned to NODC for archival. In addition, NODC cooperates with the WOCE Subsurface Data Centre operated by IFREMER in Brest to ensure the completeness of the NODC holdings.
Details of the processing and quality control procedures used at data and science centres can be found in the Documents section.
When looking at the data, you will note that there are profiles collected from a variety of instruments. The type of instrument used is stored in the data file in the field called DATA_TYPE. To interpret these codes, look in the code tables described in the Documents section.
Please note that the data and information presented on this disk have been reviewed using techniques and standards consistent with good scientific and data management practices. It is, however, not possible to guarantee that all errors have been detected and either corrected or flagged. It is the responsibility of the user to review the data and information and to determine their acceptability for the intended application.