Recent advances in time series analysis focus on improving forecasting accuracy and interpretability through new modeling techniques. Methods such as the Time-Invariant Frequency Operator address distribution shifts in nonstationary data by emphasizing stationary frequency components, improving performance across a range of forecasting models. Multi-modal approaches are gaining traction, integrating textual information with time series data to refine predictions, while frameworks like FreqLens introduce interpretable frequency attribution that lets users understand the underlying drivers of a forecast. Systems combining visual reasoning with time series analysis are also emerging, enabling more intuitive interpretation of complex temporal dynamics. These developments not only enhance predictive capability but also address practical challenges in real-world applications, such as anomaly detection in environments with variable data structures. The field is clearly moving toward more robust, explainable, and adaptable solutions for diverse industrial contexts.
We propose the Deep Distance Measurement Method (DDMM) to improve retrieval accuracy in unsupervised multivariate time series similarity retrieval. DDMM enables learning of minute differences within s...
High-dimensional time series forecasting suffers from severe overfitting when the number of predictors exceeds available observations, making standard local projection methods unstable and unreliable....
Multivariate time series (MTS) modeling often implicitly imposes an artificial ordering over variables, violating the inherent exchangeability found in many real-world systems where no canonical varia...
We present a new method for generating plausible counterfactual explanations for time series classification problems. The approach performs gradient-based optimization directly in the input space. To ...
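The gradient-based search in input space can be sketched with a toy differentiable classifier; the logistic model, learning rate, and proximity weight `lam` below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.1, steps=300):
    """Gradient-based counterfactual search directly in input space.

    Perturbs the series x until a toy logistic classifier sigmoid(w @ x + b)
    predicts `target`, while an L2 penalty keeps the result close to x.
    """
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of cross-entropy w.r.t. the input, plus proximity penalty
        grad = (p - target) * w + lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

rng = np.random.default_rng(0)
w, b = rng.normal(size=24), 0.0      # toy linear classifier over 24 time steps
x = rng.normal(size=24)
if sigmoid(w @ x + b) > 0.5:         # ensure the original series is class 0
    x = -x
x_cf = counterfactual(x, w, b)
```

The proximity term is what keeps the counterfactual plausible: without it, the optimizer would push the series arbitrarily far from the original.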
Nonstationary time series forecasting suffers from distribution shift because the training and test data are produced by different distributions. Existing methods attempt to alleviate the de...
The signature is a canonical representation of a multidimensional path over an interval. However, it treats all historical information uniformly, offering no intrinsic mechanism for contextualising th...
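For intuition, the first two signature levels of a sampled (piecewise-linear) path can be computed directly via Chen's identity, accumulating segment by segment (a standard construction; the function name here is illustrative):

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path: (n, d) array of sampled points. Level 1 is the total increment;
    level 2 collects the iterated integrals S^(i,j) = ∫∫_{s<t} dX^i_s dX^j_t.
    """
    n, d = path.shape
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for k in range(n - 1):
        dx = path[k + 1] - path[k]
        # Chen's identity: cross terms with the running level-1 part,
        # plus the linear segment's own contribution dx ⊗ dx / 2
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

rng = np.random.default_rng(2)
path = rng.normal(size=(10, 2)).cumsum(axis=0)
S1, S2 = signature_level2(path)
```

Two exact identities make this easy to sanity-check: level 1 equals the endpoint increment, and the symmetric part of level 2 satisfies the shuffle relation S2 + S2ᵀ = S1 ⊗ S1.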
Autoregressive (AR) models remain widely used in time series analysis due to their interpretability, but conventional parameter estimation methods can be computationally expensive and prone to converg...
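As a concrete point of contrast to iterative estimation, AR coefficients can be recovered in closed form by ordinary least squares on a lagged design matrix (a standard textbook approach, sketched here on a simulated AR(2) series; the function name `fit_ar_ols` is illustrative):

```python
import numpy as np

def fit_ar_ols(x, p):
    """Estimate AR(p) coefficients by ordinary least squares.

    Builds the lagged design matrix (column k holds x_{t-k}) and solves
    the normal equations in closed form, with no iterative optimization.
    """
    n = len(x)
    X = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Simulate an AR(2) process x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + eps_t
rng = np.random.default_rng(1)
phi = np.array([0.6, -0.3])
x = np.zeros(5000)
eps = rng.normal(size=5000)
for t in range(2, 5000):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]

est = fit_ar_ols(x, 2)
```

With 5,000 observations the OLS estimates land close to the true coefficients, and the whole fit is a single linear solve.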
Time Series Event Detection (TSED) has long been an important task with critical applications across many high-stakes domains. Unlike statistical anomalies, events are defined by semantics with comple...
Persistent homology (PH) -- the conventional method in topological data analysis -- is computationally expensive, requires further vectorization of its signatures before machine learning (ML) can be a...
Time series analysis underpins many real-world applications, yet existing time-series-specific methods and pretrained large-model-based approaches remain limited in integrating intuitive visual reason...