In a world of increasingly complex products and faster release cycles, the ability to accumulate and efficiently analyze test data has never been more important.
Driven by the conflicting trends of increasingly complex structures and systems and drastically shorter development times, test labs are under immense pressure to deliver test results faster and at lower cost, even while acquiring more data from more sensors. Test engineers are continuously looking for ways to reduce test time and risk. To work faster and more efficiently, they must be able to monitor and respond to test data in real time, regardless of the data volume.
Depending on the type of test, its duration, and the measurement frequency, an overwhelming avalanche of data is generated. The challenge is not only to acquire the data, but also to store and preserve large volumes of it and to access it for fast, continuous online analysis. Large volumes of both structured and unstructured data require increased processing power, storage, and a reliable data infrastructure. When these elements are combined into a scalable data backend, they can greatly improve time to market, reduce costs, and help build better products.
Adaptive and scalable data backend
An adaptive and scalable data backend provides a scalable storage and compute platform for acquiring data streams from instruments, storing configurations, and performing analyses.
To cope with constantly changing requirements, setup configurations, parameter extensions, and varying sample rates, separating hot and cold data is the best choice. Raw data that is accessed less frequently and needed only for auditing or post-test processing ('cold data') is stored in a distributed streaming platform that scales extremely efficiently. When hundreds of thousands of samples per second from hundreds of channels must be stored, processed, and used to calculate new variables at the same time, this distributed streaming architecture shows its strength.
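The hot/cold split can be sketched in a few lines. This is a minimal, hypothetical illustration only: the `HotColdRouter` class, its window size, and the channel name are invented for this sketch and do not refer to any specific product. In a real system the cold log would be a streaming-platform topic and the hot store a time series database, not in-memory Python containers.

```python
from collections import deque

class HotColdRouter:
    """Route incoming samples: every sample goes to the append-only
    cold log; only the most recent window stays in the hot store."""

    def __init__(self, hot_window=1000):
        self.cold_log = []                         # stand-in for a streaming-platform log
        self.hot_store = deque(maxlen=hot_window)  # stand-in for a time series database

    def ingest(self, timestamp, channel, value):
        sample = (timestamp, channel, value)
        self.cold_log.append(sample)   # full-rate raw data, kept for audit/post-processing
        self.hot_store.append(sample)  # recent data, immediately available for analysis

# Example: 5,000 samples from one (hypothetical) channel, hot window of 1,000
router = HotColdRouter(hot_window=1000)
for t in range(5000):
    router.ingest(t, "strain_ch1", 0.01 * t)

print(len(router.cold_log))   # 5000 - everything is preserved
print(len(router.hot_store))  # 1000 - only the most recent window
```

The design point is that the cold path is append-only and cheap to scale, while the hot path is bounded and therefore fast to query.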
So-called 'hot data', measurement data that must be accessed immediately for analysis, is provided in a NoSQL time series database. This database stores data securely in redundant, fault-tolerant clusters, and all measurement data is automatically backed up. Flexible data aggregation ensures that measurement data is continuously processed from the streaming platform into the database as predefined datasets, enabling easy processing of test metrics and KPIs such as mean value, standard deviation, and minimum/maximum. The same data can, however, be replayed and aggregated differently when a detailed analysis around a particular test event is required. This approach minimizes the investment and operational cost of IT and storage infrastructure in the test lab, whilst maintaining the computing performance needed for test-critical data analysis tasks.
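The KPI aggregation described above can be sketched as a windowed reduction over the raw stream. This is an illustrative sketch, not any vendor's API: the `aggregate` function, the window size, and the sample values are assumptions made for the example; only the KPIs themselves (mean, standard deviation, minimum/maximum) come from the text.

```python
import statistics

def aggregate(samples, window):
    """Reduce a raw sample stream to per-window KPIs (mean, standard
    deviation, minimum, maximum). Predefined datasets of this kind keep
    the hot store compact while the raw stream remains replayable."""
    kpis = []
    for i in range(0, len(samples) - window + 1, window):
        block = samples[i:i + window]
        kpis.append({
            "mean": statistics.fmean(block),
            "stdev": statistics.stdev(block),
            "min": min(block),
            "max": max(block),
        })
    return kpis

raw = [1.0, 2.0, 3.0, 4.0, 10.0, 12.0, 14.0, 16.0]
kpis = aggregate(raw, window=4)
print(kpis[0]["mean"], kpis[0]["min"], kpis[0]["max"])  # 2.5 1.0 4.0
print(kpis[1]["mean"])                                  # 13.0
```

Replaying the same raw data with a different window, or a different reduction, corresponds to the re-aggregation around a test event mentioned above.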
Aircraft engine testing is a typical use case where a scalable data backend offers major advantages. Engine testing generates a lot of data, especially when engine transient responses must be recorded. Data rates can vary from 10 samples/second up to 100,000 samples/second. The challenge is to store massive amounts of sensor data, keep it available on a 24/7 basis, and allow rapid data analysis. Another example where a scalable data backend proves its advantages is fatigue testing of large components or full-scale structures. A typical fatigue test program is divided into a number of flight blocks. At the end of each flight block the test is stopped and the test specimen is inspected for cracks. These manual inspections are time consuming and the interval between them is relatively large. Structural abnormalities may therefore be detected too late and may result in retrofitting of in-service aircraft.
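A back-of-envelope calculation gives a feel for the storage volumes these data rates imply. Only the 100,000 samples/second figure comes from the text; the channel count and the 4-byte sample size below are illustrative assumptions, not figures from any actual engine test.

```python
def daily_volume_gib(sample_rate_hz, channels, bytes_per_sample):
    """Raw data volume, in GiB, for one day of continuous acquisition."""
    seconds_per_day = 24 * 3600
    total_bytes = sample_rate_hz * channels * bytes_per_sample * seconds_per_day
    return total_bytes / 2**30

# Assumed scenario: 100,000 samples/s (the upper rate cited above),
# 100 channels, 4-byte samples - roughly 3.2 TiB of raw data per day
print(round(daily_volume_gib(100_000, 100, 4), 1))
```

Volumes of this order are why raw data lands in a cheaply scalable cold store while only aggregated hot data is kept immediately queryable.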