- Supports univariate and multivariate time series.
- Detection of single isolated anomaly points (e.g. outliers).
- Detection of changes in trends and dynamics.
- Detection of anomalous subsequences.
- One-step dynamic forecasts.
- Trend and seasonality handling.
- Linear models (i.e. the whole SARIMA family).
- Variance modelling (the GARCH family of models).
- Kalman filtering.
- Non-linear autoregressive models.
- Neural networks.
- Fuzzy logic systems.
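As a minimal illustration of the first detection task above (single isolated anomaly points), a z-score rule flags values that sit far from the series mean. This is a toy sketch for illustration only, not the framework's actual detector, and the threshold value is an assumption:

```python
import statistics

def zscore_outliers(series, threshold=3.0):
    """Return indices of points whose z-score exceeds `threshold` (assumed cutoff)."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    if std == 0:
        return []  # a constant series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mean) / std > threshold]

data = [10, 11, 9, 10, 12, 11, 10, 100, 10, 11]
print(zscore_outliers(data, threshold=2.0))  # [7] — the index of the spike
```

Real detectors account for trend and seasonality before scoring residuals; a plain z-score only works on roughly stationary data.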
How Our Methodology Works
First, we obtain the data from the client in whatever format they provide and use automatic routines to transform the raw data into a standardized format for easier processing. We then follow a typical data science workflow: cleaning the data (e.g. removing invalid values from a time series), fixing non-uniformly sampled segments (data is usually collected at regular time intervals, but collection can be interrupted by errors and maintenance), and normalizing value ranges. Series often differ enormously in scale: some vary between 0 and 1.0, others between 0 and 1,000,000,000.
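Two of the steps above, filling gaps in a uniformly sampled series and rescaling wildly different value ranges, can be sketched as follows. The helper names `fill_gaps` and `min_max_scale` are hypothetical, and a production pipeline would use more robust routines:

```python
def fill_gaps(series):
    """Linearly interpolate None gaps in a uniformly sampled series (sketch)."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1  # find the end of the gap
            left = out[i - 1] if i > 0 else out[j]
            right = out[j] if j < len(out) else left
            span = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                out[k] = left + (right - left) * t
            i = j
        else:
            i += 1
    return out

def min_max_scale(series):
    """Rescale to [0, 1] so series with very different ranges become comparable."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0] * len(series)
    return [(x - lo) / (hi - lo) for x in series]

print(fill_gaps([1, None, 3]))        # [1, 2.0, 3]
print(min_max_scale([0, 5, 10]))      # [0.0, 0.5, 1.0]
```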
Next, we begin the exploratory phase: we analyze the data, draw charts and histograms, extract task-related features, and compute various statistics. We then proceed to model selection and tuning. We have models for prediction, filtering, and anomaly/change detection; tuning is fully automatic, while our experts perform the model selection. After that, we evaluate the results against various metrics. Here the first iteration ends. From this point we analyze the results and, if they are not adequate, tweak the previous stages to improve overall performance; that is the next iteration.
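Evaluating results against metrics typically involves error measures such as mean absolute error and root mean squared error. A minimal sketch (these two metrics are common examples, not an exhaustive list of what a framework would report):

```python
import math

def mae(actual, predicted):
    """Mean absolute error: average magnitude of forecast errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error: penalizes large errors more heavily than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]
print(mae(actual, predicted))   # 0.75
print(rmse(actual, predicted))
```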
This process is nominally linear, with one stage following another. In practice, however, each stage runs with multiple sets of parameters, and the result of every run is passed on to the next stage. The process is therefore more like a graph or a tree than a sequence, similar to the way computing nodes are organized in a neural network. The method also stores all intermediate results, so if only the second-to-last stage of the pipeline is changed, only the last two stages are re-computed.
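The caching behavior described above can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions (the class name, chained-key scheme, and example stages are all inventions for this sketch, not the actual framework):

```python
class CachedPipeline:
    """Sketch of a stage-caching pipeline.

    Each stage's cache key chains the previous stage's key, so changing a
    late stage's parameters re-runs only that stage and the ones after it.
    """

    def __init__(self, stages):
        self.stages = stages  # list of (name, callable) pairs
        self.cache = {}       # chained key -> intermediate result
        self.runs = []        # names of stages actually executed

    def run(self, data, params):
        key = repr(data)      # a change in the input invalidates everything
        result = data
        for i, (name, fn) in enumerate(self.stages):
            stage_params = params.get(name, {})
            key = (i, tuple(sorted(stage_params.items())), key)
            if key in self.cache:
                result = self.cache[key]       # reuse the stored result
            else:
                result = fn(result, **stage_params)
                self.cache[key] = result
                self.runs.append(name)
        return result

pipe = CachedPipeline([
    ("clean", lambda xs: [x for x in xs if x is not None]),
    ("scale", lambda xs, factor=1.0: [x * factor for x in xs]),
])
pipe.run([1, None, 2], {"scale": {"factor": 2.0}})   # executes both stages
pipe.run([1, None, 2], {"scale": {"factor": 3.0}})   # "clean" comes from cache
print(pipe.runs)  # ['clean', 'scale', 'scale']
```

Chaining each key through the previous one ensures that an upstream parameter change also invalidates every downstream cache entry.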
It is important to present your team’s data scientists with a comprehensive and actionable set of results from a time series analysis. All reports in our framework are delivered as PDF documents and include a variety of information in the form of plots, charts, and tables. Our framework is interactive and can be used by data scientists as a tool to perform a variety of time series analysis tasks.
Our report helps your data scientists assess whether a time series analysis project is on track. If it is not, the report can guide your team in adjusting the process where necessary to produce more accurate results.
In addition to the final report, an exploratory report is generated at an early stage of the process to provide a quick glimpse into your dataset. It also allows a quick assessment of data quality, quantity, and core characteristics such as range and variance.
Finally, our framework delivers iterative reports while the data analysis and modeling pipeline is being built for your data. The iterative reports present the results of each algorithm's application, along with metrics showing how your data scientists’ models perform.
EXPLORATORY REPORT: TABLE OF CONTENTS
- Total Variation Denoising
- Anomaly Detection
- Basic Info
- Multivariate Anomaly Detection
To download a sample Exploratory Report generated by our Time Series Analysis Framework, please provide your contact details below. Your contact information will not be shared and will be used only to send you the report.