Forecasting
Improve Model Performance with Forecast Combination
How combining forecasts can improve robustness when individual models are unstable, biased, or sensitive to sample choice.
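As a flavour of the idea, here is a minimal sketch of equal-weight forecast combination. The data and model names are illustrative assumptions, not taken from the article: two forecasters with opposite systematic biases are averaged, and their offsetting errors cancel.

```python
# Illustrative sketch of equal-weight forecast combination.
# Data and "models" below are made up for demonstration only.

def mae(actuals, forecasts):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def combine(*forecast_sets):
    """Equal-weight average of several forecast sequences, point by point."""
    return [sum(fs) / len(fs) for fs in zip(*forecast_sets)]

actuals  = [10.0, 12.0, 11.0, 13.0]
model_a  = [x + 2.0 for x in actuals]   # systematically too high
model_b  = [x - 2.0 for x in actuals]   # systematically too low
combined = combine(model_a, model_b)

print(mae(actuals, model_a))   # 2.0
print(mae(actuals, model_b))   # 2.0
print(mae(actuals, combined))  # 0.0 -- opposite biases offset under equal weights
```

Real forecast errors rarely cancel this cleanly, but the same mechanism is why simple averages are often hard to beat when individual models are unstable or biased in different directions.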
Articles
Research-led articles on forecasting, econometrics, machine learning, risk modelling, financial analytics, clustering, model validation, and automated analytics systems.
Research approach
Hurst Analytics draws on academic literature, applied quantitative research, and ongoing collaboration with academic researchers in econometrics, empirical finance, forecasting, statistical modelling, and machine learning, with the aim of evaluating methods and translating them into practical systems.
The emphasis is not theory for its own sake. The goal is to understand which methods are defensible, where they fail, and how they can be implemented in practical systems for forecasting, risk measurement, reporting, and decision support.
Consulting work is informed by up-to-date methods used in academic research and industry practice, with attention to validation, implementation limits, and commercial usability.
Articles
Short previews for a research-led article library. Full articles can be added as each piece is written and reviewed.
Forecasting
How combining forecasts can improve robustness when individual models are unstable, biased, or sensitive to sample choice.
Risk Modelling
Why modelling conditional quantiles can be more useful than focusing only on average outcomes in risk-sensitive settings.
Clustering
How clustering workflows can support more stable segmentation, monitoring, and review when group definitions matter.
Risk Modelling
How forecasting the full distribution can support clearer downside analysis, thresholds, and decision rules.
Financial Modelling
A practical view of volatility models, diagnostics, and the reporting outputs that make them useful.
Machine Learning
Where flexible models can complement econometric structure, and how to compare them without losing interpretability.
Model Validation
Common backtesting traps, including leakage, unstable benchmarks, short holdouts, and misleading evaluation windows.
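One common defence against leakage is walk-forward evaluation, where each test window only ever follows its training window. The helper below is a hypothetical sketch of that splitting scheme, not code from the article.

```python
# Hypothetical sketch of walk-forward splits with an expanding training
# window: test observations never precede the data used to fit the model.

def walk_forward_splits(n_obs, initial_train, test_size):
    """Yield (train_indices, test_indices) pairs in time order."""
    start = initial_train
    while start + test_size <= n_obs:
        train = list(range(0, start))                 # everything up to now
        test = list(range(start, start + test_size))  # strictly later points
        yield train, test
        start += test_size

for train, test in walk_forward_splits(n_obs=10, initial_train=6, test_size=2):
    print(len(train), test)
# 6 [6, 7]
# 8 [8, 9]
```

Randomly shuffled splits, by contrast, let future observations leak into training, which is one of the traps the article discusses.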
Model Validation
Why model performance, assumptions, data quality, and operational use all need review before forecasts become routine.
Econometrics
How asset-pricing signals can be evaluated with disciplined out-of-sample testing and combination methods.
Clustering
How ensemble approaches can reduce instability when grouping assets, customers, products, or operational units.
Machine Learning
A restrained look at where machine learning helps forecasting workflows, and where simpler benchmarks still win.
Reporting Automation
How recurring analysis can move from manual reports into repeatable systems with validation, scheduling, and support.
Share the analytical decision, reporting burden, or forecasting requirement you want to improve.