PS 82-112
Understanding and interpreting data quality of NEON’s terrestrial sensor measurements

Friday, August 9, 2013
Exhibit Hall B, Minneapolis Convention Center
Derek E. Smith, National Ecological Observatory Network (NEON, Inc.), Boulder, CO
Stefan Metzger, National Ecological Observatory Network (NEON, Inc.), Boulder, CO
Jeffrey Taylor, National Ecological Observatory Network (NEON, Inc.), Boulder, CO
Background/Question/Methods

We all assess the quality of data, both consciously and subconsciously, on a daily basis, whether in a scientific context, while poring over our own research or a journal article, or when we find ourselves wondering about the source of the latest statistic we heard. No matter the case, data are useless without the ability to assess their validity. A fundamental role of the National Ecological Observatory Network (NEON) is to offer transparent and valuable data to the community. Thus, it is essential that NEON implement a method for users to assess the quality of its data. Since tower-based sensors represent a significant portion of NEON’s measurements, it is critical that any ambiguity in sensor measurements is captured and quantified. Sensor data quality will be based on sensor tests, as well as a suite of quality assurance and quality control (QA/QC) analyses. As a result, “data products” (data produced by NEON) will be accompanied by a set of “quality flags” (results from the QA/QC analyses and sensor tests). To accommodate users with various backgrounds, a framework has been developed that presents a data product’s quality at varying levels of detail.

Results/Conclusions

For each data product, the results of the quality flags produced by the QA/QC analyses and sensor tests will be presented in two separate schemes: a quality report and a quality summary. The quality report will present the results of specific quality flags as they relate to individual observations. For example, the quality report for a thirty-minute temperature average, sampled at a rate of 1 Hz, allows the user to examine the outcome of each QA/QC analysis and sensor test for each of the 1,800 underlying observations. The quality summary will instead provide a “quality metric” for each quality flag. A quality metric summarizes, as a percentage, the number of failed QA/QC analyses and sensor tests relative to the number of observations used to calculate a data product. The quality summary will also include a final quality flag, which allows users to quickly assess whether a data product is valid. The final quality flag indicates whether the observations used to create a data product exceeded a threshold, set by NEON, for an acceptable number of failed QA/QC analyses and sensor tests. This straightforward framework retains several levels of detail on the data quality of NEON’s sensor measurements in order to facilitate data transparency and usability.
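To make the quality summary concrete, the sketch below shows one plausible way to collapse per-observation quality flags into a quality metric and a final quality flag, following the description above. The function names, the example flag names, and the 20% failure threshold are illustrative assumptions only, not NEON’s actual algorithms or thresholds.

```python
import numpy as np

def quality_metric(flags: np.ndarray) -> float:
    """Percentage of observations that failed a given QA/QC analysis or sensor test.

    `flags` is a boolean array with one entry per observation (True = failed).
    """
    return 100.0 * np.count_nonzero(flags) / flags.size

def final_quality_flag(flag_sets: dict, threshold_pct: float = 20.0) -> bool:
    """Flag the data product if any quality metric exceeds the (assumed) threshold."""
    return any(quality_metric(f) > threshold_pct for f in flag_sets.values())

# Example: a thirty-minute temperature average sampled at 1 Hz -> 1,800 observations.
rng = np.random.default_rng(0)
flags = {
    "range_test": rng.random(1800) < 0.02,   # ~2% of observations fail
    "spike_test": rng.random(1800) < 0.30,   # ~30% fail, exceeding the assumed threshold
}

quality_summary = {name: quality_metric(f) for name, f in flags.items()}
print(quality_summary)               # quality metric per quality flag, in percent
print(final_quality_flag(flags))     # True -> data product flagged as suspect
```

In this sketch the quality report would correspond to the full 1,800-element boolean arrays, while the quality summary keeps only the per-flag percentages and the single final quality flag.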