SE PhD Final Oral Defense of Jing Wang

  • Starts: 12:00 pm on Thursday, July 10, 2014
  • Ends: 2:00 pm on Thursday, July 10, 2014
TITLE: Anomaly Detection and Dynamic Decision Making for Stochastic Systems

ABSTRACT: This dissertation focuses on two types of problems, both related to systems with uncertainties. The first problem concerns network anomaly detection. We present several methods for anomaly detection in networks whose normal behavior is stationary; our methods cover Statistical Hypothesis Testing (SHT), Support Vector Machines (SVM), and clustering analysis. We evaluate all methods in a simulated network consisting of nominal data, three flow-level anomalies, and one packet-level attack. As a next step, we propose two robust stochastic anomaly detection methods for networks whose normal behavior is time-varying. We formulate the problem as a binary composite hypothesis test for evaluating whether a sequence of observations is drawn from a family of Probability Laws (PLs), and propose two robust methods, one model-free and one model-based, to solve it. We develop a procedure for learning the underlying family of PLs that characterizes a time-varying network. This procedure first estimates a large class of PLs from network data and then refines it to a representative subset; the refinement problem is formulated using ideas from set covering via integer programming. Simulation results show that the robust methods have significant advantages over the alternative stationary methods in time-varying networks. The final anomaly detection setting we consider targets the detection of botnets before they launch an attack. Our method analyzes the social graph of the nodes in a network and consists of two stages: (i) network anomaly detection based on large deviations theory and (ii) community detection based on a refined modularity measure. We apply our method to real-world botnet traffic and compare its performance with other methods. The second problem considered by this dissertation concerns sequential decision making under uncertainty.
Markov Decision Processes (MDPs) and Approximate Dynamic Programming (ADP) are general frameworks applicable to this problem. We focus on ADP methods with an actor-critic structure, where the critic estimates the gradient of the overall objective with respect to tunable policy parameters and the actor optimizes a Randomized Stationary Policy (RSP) with respect to those parameters. Our first contribution is an actor-critic method that uses Least Squares Temporal Difference (LSTD) learning, which is known to converge faster than the TD methods used by most existing actor-critic methods. Our second contribution is a new Hessian Actor-Critic (HAC) method, a Newton-like method that performs better, especially on ill-conditioned problems. We evaluate our methods on problems motivated by robot motion control.

COMMITTEE: Advisor: Ioannis Paschalidis, SE/ECE; Christos Cassandras, SE/ECE; David Starobinski, SE/ECE; Mark Crovella, SE/CAS CS
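To illustrate the set-covering refinement step mentioned in the abstract: the idea is to keep only a small subset of estimated probability laws (PLs) that still "covers" all observed network behavior. The sketch below is a generic greedy approximation to set covering, not the dissertation's integer-programming formulation; the names `greedy_pl_cover` and `covers` are illustrative assumptions.

```python
def greedy_pl_cover(candidate_pls, samples, covers):
    """Pick a small subset of candidate PLs such that every sample
    is covered (well explained) by at least one chosen PL.

    covers(pl, sample) -> bool decides whether `pl` explains `sample`.
    Greedy set cover: repeatedly take the PL covering the most
    still-uncovered samples.
    """
    uncovered = set(range(len(samples)))
    chosen = []
    while uncovered:
        best_pl, best_set = None, set()
        for pl in candidate_pls:
            covered = {i for i in uncovered if covers(pl, samples[i])}
            if len(covered) > len(best_set):
                best_pl, best_set = pl, covered
        if not best_set:  # some sample is explained by no candidate PL
            break
        chosen.append(best_pl)
        uncovered -= best_set
    return chosen
```

For example, with PLs modeled as intervals and `covers` as containment, the greedy pass selects a representative subset whose union still covers every sample. The greedy heuristic gives a logarithmic approximation guarantee, whereas an integer program (as in the dissertation) can solve the covering problem exactly.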
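The LSTD critic mentioned in the abstract can be sketched as follows. This is the standard textbook LSTD(0) policy-evaluation estimate, not the dissertation's actor-critic algorithm; the feature map, regularization term, and test chain are illustrative assumptions.

```python
import numpy as np

def lstd(transitions, phi, gamma=0.9, reg=1e-6):
    """Estimate linear value-function weights w from sampled transitions.

    transitions: list of (state, reward, next_state) tuples
    phi: feature map, state -> 1-D numpy array
    Solves (A + reg*I) w = b, where
      A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T
      b = sum_t phi(s_t) * r_t
    """
    k = len(phi(transitions[0][0]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += f * r
    return np.linalg.solve(A + reg * np.eye(k), b)
```

With one-hot features on a single absorbing state that yields reward 1 per step, the estimate recovers the discounted value 1/(1 - gamma) = 10 for gamma = 0.9. Because LSTD solves for the weights in closed form from all samples, it typically converges in far fewer samples than incremental TD updates, which is the advantage the abstract alludes to.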
15 Saint Mary's Street, Rm 105
