One of my academic goals is to establish a Bayesian risk-management workflow in operations research, especially for the defense industry, where I have strong project opportunities through my startup. We provide prediction and optimization tools aimed at wider adoption of analytics, and we have successfully completed projects with the Korean Navy and a logistics company. Rare-event inference and simulation are important in Financial Engineering (FE) and, I believe, would bring great benefits once applied to other domains[1]. Below are the specific topics and theories my research requires; bolded are the concepts especially relevant to Bayesian methods, about which I am highly enthusiastic and experienced, as they excel at uncertainty quantification.
1. information embedding for subsequent inference/optimization
Design a prior-belief embedding framework that equips the model with structure that can boost sampler and optimizer efficiency. Within the Bayesian community, there has been great emphasis on designing priors that improve sampling efficiency by giving the posterior a nicer geometry. This is especially necessary in situations with sparse data, where the prior helps interpolate the likelihood, resulting in an amenable posterior (a small sketch follows this list).
- prior information types and structure (shape, moment, etc.[2]); structured-problem sampling and optimization algorithms
- Monte Carlo sampling (manifold, importance, Hamiltonian)
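To illustrate the point about priors repairing posterior geometry under sparse data, here is a minimal sketch using a toy example of my own (a 1-D logistic regression with perfectly separated data and a standard normal prior); it is not tied to any particular project.

```python
# Minimal sketch of "the prior fixes posterior geometry under sparse data".
# Toy example (my assumption): 1-D logistic regression with 4 perfectly
# separated points, where the flat-prior likelihood has no interior mode
# but a N(0, 1) prior does.
import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])                    # perfectly separated -> MLE diverges
beta = np.linspace(0, 10, 201)

def loglik(b):
    p = 1 / (1 + np.exp(-b * x))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ll = np.array([loglik(b) for b in beta])
lp = ll - 0.5 * beta**2                       # add log N(0, 1) prior

print("flat-prior argmax beta:", beta[np.argmax(ll)])   # runs off to the grid boundary
print("with prior argmax beta:", beta[np.argmax(lp)])   # finite, well-identified mode
```

With a flat prior the log-likelihood keeps increasing and the "mode" escapes to the boundary, whereas the weakly informative prior produces a finite, well-curved mode that a sampler or optimizer can actually work with.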
2. automatic model calibration based on self-consistency
- online learning, uncertainty set
- simulation-based calibration (my R package and a potential research direction extending my internship research with Andrew Gelman)
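A minimal sketch of the simulation-based calibration logic, using a toy conjugate normal model of my own so the posterior is exact; in practice the closed-form posterior would be replaced by MCMC draws from the model being checked.

```python
# Minimal SBC sketch on a conjugate normal-normal model (my simplification):
# uniform rank statistics indicate self-consistency of model + inference.
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_draws, n_obs = 1000, 100, 5
prior_mu, prior_sd, obs_sd = 0.0, 1.0, 2.0
ranks = np.empty(n_sims, dtype=int)

for s in range(n_sims):
    theta = rng.normal(prior_mu, prior_sd)              # 1. draw from the prior
    y = rng.normal(theta, obs_sd, size=n_obs)           # 2. simulate data
    post_prec = 1 / prior_sd**2 + n_obs / obs_sd**2     # 3. exact conjugate posterior
    post_mu = (prior_mu / prior_sd**2 + y.sum() / obs_sd**2) / post_prec
    draws = rng.normal(post_mu, np.sqrt(1 / post_prec), size=n_draws)
    ranks[s] = np.sum(draws < theta)                    # 4. rank statistic

# Under a self-consistent model and sampler, ranks are uniform on {0, ..., n_draws}.
hist, _ = np.histogram(ranks, bins=10)
print(hist)  # roughly equal counts signal calibration
```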
3. decision calibration
- decision, risk, and vulnerability analysis; risk quantification; visual tools
- robust decisions based on simulated Bayes factors
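To make the last bullet concrete, here is a toy sketch of one way a simulated Bayes factor could drive a decision rule; the two nested normal models, the "attack" scenario, and the act-if-BF-exceeds-3 threshold are all illustrative assumptions of mine, not a fixed methodology.

```python
# Toy sketch: a decision rule driven by a simulated Bayes factor.
# M0: theta = 0 vs. M1: theta ~ N(0, 1); data y_i ~ N(theta, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def bayes_factor_10(y, n_prior_draws=5000):
    """Monte Carlo marginal likelihood of M1 divided by the exact M0 likelihood."""
    theta = rng.normal(0.0, 1.0, size=n_prior_draws)
    loglik = stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)
    m1 = np.exp(loglik).mean()                       # p(y | M1), averaged over the prior
    m0 = np.exp(stats.norm.logpdf(y, loc=0.0, scale=1.0).sum())
    return m1 / m0

# Simulate the Bayes factor under a hypothetical "attack" scenario (theta = 1)
# to see how often a threshold-based decision (act if BF > 3) would fire.
bfs = np.array([bayes_factor_10(rng.normal(1.0, 1.0, size=10)) for _ in range(200)])
print("median BF:", np.median(bfs), "| fraction exceeding 3:", np.mean(bfs > 3))
```

Repeating the Bayes factor computation over simulated datasets is what makes the resulting decision threshold robust: we see the distribution of the evidence, not a single number.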
As you can see, the three goals form a workflow in which the modeler's designs for both inference and decision can be tested and calibrated. This recent article is one example where the interaction between the model (likelihood and prior), computational tools, and decisions is analyzed.
For example, when faced with a cyberattack, my workflow would gather prior information on cyberattacks and transform it into distributional form. Even qualitative or categorical knowledge could contribute to this quantified output. The model would then generate several sets of simulated data with which the real data are compared; in other words, we observe how well our virtually created cyber world reproduces the specifics of historical attacks. Feedback from this comparison indicates how to change the model structure and parameter values. Lastly, on the calibration-completed virtual network, we compare the effects of our candidate decisions, for instance detecting and penalizing certain nodes.
All steps can and should be examined by the modeler, but algorithms that flag potential problems would bring great value as the system grows more complex and the data become high-dimensional. An interactive risk dashboard could be one physical form my research takes.
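To make the cyberattack walkthrough above concrete, here is a toy end-to-end sketch of the three steps: prior information in distributional form, a crude self-consistency check against the observed data, and a comparison of decision candidates on the calibrated simulator. Every number, the Gamma-Poisson model, and the "monitoring halves the attack rate" assumption are placeholders of my own.

```python
# Toy end-to-end sketch of the three-step workflow on a hypothetical
# cyberattack example. All priors, effects, and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_nodes = 20
observed = rng.poisson(lam=np.linspace(0.5, 3.0, n_nodes))  # stand-in for historical attack counts

# 1. Prior information -> distributional form: Gamma prior on each node's attack rate.
prior_shape, prior_rate = 2.0, 1.0

# 2. Calibration by self-consistency: simulate data from the model and compare
#    a summary statistic with the observed data (a crude posterior-predictive check).
post_shape = prior_shape + observed            # conjugate Gamma-Poisson update
post_rate = prior_rate + 1.0
sim_totals = np.array([
    rng.poisson(rng.gamma(post_shape, 1 / post_rate)).sum() for _ in range(1000)
])
print("observed total:", observed.sum(),
      "| simulated 90% interval:", np.percentile(sim_totals, [5, 95]))

# 3. Decision calibration: compare candidate interventions on the calibrated simulator.
#    Candidate A monitors the 5 highest-rate nodes; candidate B monitors 5 random nodes.
#    Assumed effect: monitoring halves a node's attack rate; loss = expected attacks.
def expected_attacks(monitored):
    rates = rng.gamma(post_shape, 1 / post_rate, size=(1000, n_nodes))
    rates[:, monitored] *= 0.5
    return rates.sum(axis=1).mean()

top5 = np.argsort(post_shape / post_rate)[-5:]
rand5 = rng.choice(n_nodes, size=5, replace=False)
print("candidate A (top-5):", expected_attacks(top5),
      "| candidate B (random-5):", expected_attacks(rand5))
```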
- [1] Of course, there are differences between the financial and nonfinancial domains, such as delays, physical constraints, and sources of variability (e.g., weather events); however, rare-event simulation and the associated risk estimation are crucial skills for both FE and the defense industry.
- [2] On Optimization over Tail Distributions introduces several types of prior belief with rich structure, including shape information such as monotonicity and convexity, and moment-type information.
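As a rough illustration of the kind of structure that footnote refers to, here is a discretized toy sketch of my own (not the paper's formulation): a worst-case tail probability maximized over all distributions on a grid that match a known mean and a monotone-nonincreasing shape constraint.

```python
# Toy sketch (my discretized simplification): maximize P(X >= t) over all
# distributions on a grid satisfying a mean constraint and a monotonically
# nonincreasing probability-mass constraint, as a linear program.
import numpy as np
from scipy.optimize import linprog

grid = np.linspace(0, 10, 101)      # support grid
t, mean_target = 6.0, 2.0           # tail threshold and known first moment
n = len(grid)

c = -(grid >= t).astype(float)      # maximize tail mass = minimize its negative
A_eq = np.vstack([np.ones(n), grid])            # total mass = 1, mean = target
b_eq = np.array([1.0, mean_target])

# Shape information: mass nonincreasing in x  ->  p[i+1] - p[i] <= 0
A_ub = np.zeros((n - 1, n))
A_ub[np.arange(n - 1), np.arange(1, n)] = 1.0
A_ub[np.arange(n - 1), np.arange(n - 1)] = -1.0
b_ub = np.zeros(n - 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
print("worst-case P(X >= 6):", -res.fun)
# Without the monotonicity constraint the bound is Markov-like (~1/3 here);
# the shape information tightens it considerably.
```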
Comments are the energy for a writer, thanks!