Research

I am committing the cardinal sin of an early-career economist/scientist and exploring several broad areas that are of interest to me.  Broadly, I am interested in game theory, computer network security, machine learning, and deep reinforcement learning.  Fortunately, these areas do have some overlap.  The descriptions below each paper are not abstracts; they are intended as high-level, non-technical synopses of my work.

 

Game Theory

  • Reasoning About ‘When’ Instead of ‘What’: Collusive Equilibria with Stochastic Timing in Repeated Oligopoly (Submitted, Download PDF)

Collusion is a well-studied phenomenon in economics, game theory, and industrial organization.  However, many models assume that firms receive information and act on it simultaneously and at pre-set time intervals.  In this paper, we relax these assumptions and allow firms to take actions and receive information at random times determined by an underlying stochastic process, while demand evolves according to yet another stochastic process.  As in traditional oligopoly models, we show that if firms are patient, they are willing to forgo the immediate profits from undercutting their competitors and instead enter a collusive cartel that artificially inflates prices.  However, we also find that the frequency with which firms are monitored affects their willingness to join a cartel.  If the monitoring frequency is relatively low, it may be in a firm's best interest to undercut the other cartel participants, because they might not discover the defection for a long time.
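The monitoring-frequency trade-off can be illustrated with a back-of-the-envelope sketch (a hypothetical toy calculation, not the paper's model): suppose monitoring times arrive as a Poisson process with rate `lam`, firms discount continuously at rate `r`, collusion yields flow profit `pi_c` forever, and a defector earns flow profit `pi_d` only until the first monitoring time, after which profits drop to zero. The expected discounted value of defecting then has the closed form `pi_d / (r + lam)`, so collusion survives only when monitoring is frequent enough.

```python
import numpy as np

def collusion_sustainable(pi_c, pi_d, r, lam):
    """Compare the discounted value of colluding forever (pi_c / r)
    with the expected discounted value of defecting until the first
    Poisson(lam) monitoring time, which works out to pi_d / (r + lam)."""
    return pi_c / r >= pi_d / (r + lam)

def defection_value_mc(pi_d, r, lam, n=200_000, seed=0):
    """Monte Carlo check of the closed form pi_d / (r + lam):
    integrate pi_d * e^{-r t} up to a random Exp(lam) detection time."""
    tau = np.random.default_rng(seed).exponential(1.0 / lam, size=n)
    return float(np.mean(pi_d * (1.0 - np.exp(-r * tau)) / r))

# With pi_c = 1, pi_d = 2, r = 0.1: collusion survives only if lam >= 0.1,
# i.e. only if firms are monitored often enough.
print(collusion_sustainable(1.0, 2.0, 0.1, lam=1.0))   # frequent monitoring
print(collusion_sustainable(1.0, 2.0, 0.1, lam=0.01))  # rare monitoring
print(defection_value_mc(2.0, 0.1, lam=1.0))           # ≈ 2 / (0.1 + 1.0)
```

Lowering `lam` raises the defection value toward `pi_d / r`, capturing the intuition that rarely monitored cartel members are tempted to undercut.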

  • How much would you pay to change a game before playing it? (Submitted, Download PDF)

What is the difference between lobbying and bribery?  One might say that lobbying is publicly observable while bribery is done in secret.  Of course, this is a gross simplification, and there are other concerns (such as legality) when considering bribery.  In both cases, however, a party is trying to enact a “change in the rules,” and such scenarios are not unique to bribery and lobbying.  In this paper, we provide an explicit framework for determining how much a player would be willing to pay to change the rules of a strategic scenario (a scenario in which there are multiple interacting decision makers).  Specifically, we focus on the difference between how much a player would be willing to pay when a) everybody observes that it paid to change the rules, versus b) it is able to pay secretly.  In addition, we find that in some scenarios a player's willingness to change the rules depends on anticipating the possible mistakes it might make after the rules are changed.  In economics language, a player anticipates its own future bounded rationality when deciding whether or not to change the rules.
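A stripped-down version of the question can be computed directly (a hypothetical illustration, not the paper's framework): take a player's equilibrium payoff in a 2x2 game before and after a rule change, and call the difference its willingness to pay. The matrices below are made-up numbers, a prisoner's dilemma versus a modified game in which the temptation to defect has been removed.

```python
def pure_nash_2x2(A, B):
    """Pure-strategy Nash equilibria of a 2x2 game.
    A[i][j] / B[i][j]: row / column player's payoff at action profile (i, j)."""
    return [(i, j) for i in range(2) for j in range(2)
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]]

def willingness_to_pay(A, B, A_new, B_new):
    """Row player's gain from changing the rules, assuming the equilibrium
    best for the row player is played in each game."""
    before = max(A[i][j] for i, j in pure_nash_2x2(A, B))
    after = max(A_new[i][j] for i, j in pure_nash_2x2(A_new, B_new))
    return after - before

# Prisoner's dilemma: the only equilibrium is mutual defection, paying 1.
A  = [[3, 0], [5, 1]]; B  = [[3, 5], [0, 1]]
# Modified game: the temptation payoff 5 is cut to 2, so mutual
# cooperation (payoff 3) becomes an equilibrium.
A2 = [[3, 0], [2, 1]]; B2 = [[3, 2], [0, 1]]
print(willingness_to_pay(A, B, A2, B2))  # 2: the row player would pay up to 2
```

The public-versus-secret distinction in the paper would change which post-change game the other players believe they are playing; this sketch only prices the observable case.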

Computer Network Security

  • A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks, Journal of Network and Computer Applications, Volume 66, May 2016, Pages 166-179 (Download PDF)

Much recent work in computer network security has shifted attention away from keeping malicious hackers out of a network.  Instead, a large amount of work has focused on using network logs to identify malicious users within a network.  A typical approach is anomaly detection, in which network activity is classified as either normal or malicious.  Unfortunately, computer networks often evolve, and what seems abnormal might actually be a benign change in the network's structure.  However, one known pattern of malicious activity is an attacker “hopping” from host to host, leaving a trace of increased traffic as it traverses the network.  In this paper, we develop a likelihood ratio detector that takes this type of attacker behavior into account.  We provide an algorithm and show in simulations that our likelihood ratio detector outperforms a traditional anomaly detector.
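The flavor of such a detector can be sketched in a few lines (a toy illustration with made-up numbers, not the paper's algorithm): suppose each hop on a candidate path normally carries Poisson(mu) traffic, while an attacker traversing the path adds delta extra events per hop. The log-likelihood ratio pools the evidence along the whole path, which is what lets it separate a coordinated hop pattern from isolated benign fluctuations.

```python
import math

def path_llr(counts, mu, delta):
    """Log-likelihood ratio for 'attacker traversed this path' (each count
    Poisson(mu + delta)) versus 'normal traffic' (each count Poisson(mu)).
    The per-count Poisson LLR reduces to x * log(1 + delta/mu) - delta."""
    return sum(x * math.log(1.0 + delta / mu) - delta for x in counts)

mu, delta = 5.0, 5.0
normal_path   = [5, 4, 6]    # hypothetical traffic counts on a 3-hop path
attacked_path = [9, 10, 8]   # same path with extra attacker traffic

print(path_llr(normal_path, mu, delta))    # negative: favors 'normal'
print(path_llr(attacked_path, mu, delta))  # positive: favors 'attack'
```

A per-host anomaly detector would score each count in isolation; summing the per-hop log-likelihood ratios instead accumulates weak evidence across the traversal, which is the basic reason a path-aware likelihood ratio test can win.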

This work combines game theory with computer network security.  A computer network attack can be viewed as a game between an attacker, which presumably wants to cause damage, and a defender, which wants to catch as many attackers as possible.  Much of the previous literature in this regard boils down to a simple rock-paper-scissors game.  The attacker thinks, “if the defender thinks I am going to play rock (attack server A, for example), then I am going to play scissors (attack server C),” while the defender thinks, “if the attacker is going to play paper (attack server B), then I am going to play scissors (defend server B).”  Of course, the equilibrium strategy for both the attacker and the defender is to randomize over their actions, so that the attacker sometimes attacks server A, sometimes attacks server B, and so on.  However, this modeling approach leaves out a crucial element of the scenario when considering anomaly detection.  A defender using an anomaly detector tries to classify (random) network activity as either normal or malicious, while an attacker tries to penetrate the network without being detected.  In other words, the attacker is trying to hide in the network, and the defender is trying to infer whether what it observes is generated by normal network activity or by a malicious attacker.  In my dissertation work, I develop a model of a defender that uses an anomaly detector and an attacker that wants to attack with the highest intensity possible without being detected.  I prove that there is a unique Nash equilibrium in pure strategies and show how the variance of the underlying network traffic affects detectability.  Crucially, this result is far broader than computer network security: other scenarios, such as tax evasion, insurance fraud, and copyright infringement, have the same underlying incentive structure.
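A toy version of the intensity-versus-detection trade-off (a hypothetical sketch, not the dissertation's model) makes the role of variance concrete: the defender observes attack intensity plus Gaussian network noise and raises an alarm above a threshold calibrated to a fixed false-alarm rate, while the attacker picks the intensity that maximizes intensity times the probability of slipping under the threshold.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_intensity(sigma, threshold_z=2.326, grid=2000):
    """Attacker's best response by grid search: maximize
    a * P(a + noise < t), where noise ~ N(0, sigma^2) and the defender's
    threshold t = threshold_z * sigma gives roughly a 1% false-alarm rate."""
    t = threshold_z * sigma
    _, a_star = max((a * norm_cdf((t - a) / sigma), a)
                    for a in (i * 5.0 * sigma / grid for i in range(grid + 1)))
    return a_star

# Noisier networks give the attacker more cover: the best attack
# intensity scales with the standard deviation of normal traffic.
print(best_intensity(sigma=1.0))
print(best_intensity(sigma=2.0))  # roughly twice as large
```

Substituting a = sigma * b shows the objective is sigma * b * Phi(threshold_z - b), so the optimal intensity is exactly proportional to sigma, a one-line version of the claim that the variance of normal activity governs how hard an attacker is to detect.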

Applied Machine Learning

(I call this part of my research “applied” machine learning because I am using machine-learning techniques to work with real data rather than developing new machine-learning algorithms.)

  • Mixed Frequency and Mixed Granularity Spatio-Temporal Employment Forecasting with Feed-Forward Neural Networks (Paper coming soon)

In this part-consulting, part-research project, I am developing a feed-forward neural network to forecast zip-code-level employment.  However, the data in this project is particularly messy in that it comes at mixed spatial granularity and mixed frequency.  For example, I have annual zip-code-level employment figures, but monthly Metropolitan Statistical Area (MSA) level figures and quarterly county-level figures.  As a result, the data is of mixed frequency (annual, monthly, quarterly) and mixed granularity (zip code, MSA, county).  While there may be parametric models that address these issues, their forecasting ability is likely limited by functional-form assumptions (for example, one might assume that zip-code-level employment follows an ARIMA process).  So, instead of fine-tuning a parametric model to improve zip-code-level forecasts, I implemented a multi-layer feed-forward neural network to learn (an approximation to) the optimal function that maps the available information to a reliable forecast.  I find that the neural network significantly outperforms a simple linear forecasting model in terms of mean squared error and mean relative forecast error.
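The basic setup can be sketched with synthetic data standing in for the real figures (everything below is made up for illustration; it is not the project's network or data): each training row stacks the mixed-frequency inputs, e.g. the last annual zip-level figure plus recent monthly MSA and quarterly county readings, into one feature vector, and a small one-hidden-layer network is fit by gradient descent on mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vector per zip code and year: one annual zip-level
# figure, three monthly MSA figures, two quarterly county figures.
X = rng.normal(size=(200, 6))
y = np.tanh(X @ rng.normal(size=6)) + 0.1 * rng.normal(size=200)  # synthetic target

# One-hidden-layer feed-forward network trained by full-batch gradient descent.
W1 = 0.5 * rng.normal(size=(6, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g = 2.0 * err[:, None] / len(y)           # dLoss / dpred
    g_h = (g @ W2.T) * (h > 0)                # backprop through the ReLU
    W2 -= lr * h.T @ g;   b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ g_h; b1 -= lr * g_h.sum(0)

# Linear least-squares baseline on the same stacked features, for comparison.
w_lin, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_lin = float(np.mean((X @ w_lin - y) ** 2))
print(losses[0], losses[-1], mse_lin)
```

The point of the sketch is the data handling, not the architecture: once the mixed-frequency, mixed-granularity series are aligned into a single feature vector per forecast target, the network is free to learn whatever nonlinear mapping the data supports instead of a pre-committed functional form.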

 

Deep Reinforcement Learning

  • Predicting Attacker Behavior in Large Scale Networks using Game Theory and Deep Reinforcement Learning (In Progress)

This work combines deep learning, game theory, and computer network security and is still in its nascent stages.  At the highest level, in order to predict what a “smart” attacker would do, one has to consider how a smart attacker would react to a “smart” defender.  Thus predicting attacker behavior involves applying the tools of game theory.  However, in real-world attack and detection scenarios, the optimal strategy for an attacker, even against a fixed defender strategy, is often very high-dimensional (possibly infinite-dimensional), making all but the simplest attacker strategies intractable.  With recent advances in deep reinforcement learning, however, it is possible to train a neural network to learn the optimal attacker strategy.  In this work, I am exploring the possibility of using deep LSTM reinforcement learning algorithms to figure out how a smart attacker would behave in a large-scale computer network where the defender is also smart, in that it anticipates attacker behavior.  The goal is then to use this information about attacker behavior to better detect attackers within a computer network.
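The attacker's decision problem can be illustrated at tabletop scale (a made-up tabular Q-learning toy, not the deep LSTM approach the project uses): the attacker sits on a chain of hosts and must repeatedly choose between cashing in on the current host or hopping deeper, where each hop risks detection but deeper hosts are more valuable.

```python
import numpy as np

# Toy attacker MDP: three hosts in a chain with values [1, 2, 10].
# Action 0 = attack the current host (collect its value, episode ends);
# action 1 = hop to the next host, detected with probability p
# (penalty -5, episode ends), otherwise the attacker moves one host deeper.
values, p, gamma = [1.0, 2.0, 10.0], 0.1, 0.95
rng = np.random.default_rng(0)
Q = np.zeros((3, 2))  # Q[state, action]

for _ in range(5000):
    s = 0
    while True:
        if s == 2:
            a = 0  # deepest host: attacking is the only move
        elif rng.random() < 0.2:
            a = int(rng.integers(2))             # epsilon-greedy exploration
        else:
            a = int(np.argmax(Q[s]))             # greedy action
        if a == 0:
            r, s2, done = values[s], None, True  # attack and cash in
        elif rng.random() < p:
            r, s2, done = -5.0, None, True       # hop detected
        else:
            r, s2, done = 0.0, s + 1, False      # hop succeeds
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += 0.1 * (target - Q[s, a])      # tabular Q-learning update
        if done:
            break
        s = s2

print(Q)  # hopping deeper should dominate attacking at hosts 0 and 1
```

In the project itself, the state the attacker conditions on is a long observation history over a large network, which is exactly why a table is replaced by an LSTM-based policy; the toy only shows the structure of the game being learned.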