I just came back from IFORS17, which was a fantastic conference in a great venue (and I'm glad that many people showed up for my early-morning talk on binary optimization and R&D portfolio management), but for today's post I wanted to write something quick about the new TSA queues at the airport. In academia we talk a lot about queues like M/M/1 but I don't think there is a name for the queues in the new TSA system. (Maybe TSA is a queueing pioneer! It creates queues we academics have never imagined in our wildest nightmares.) If you know the names for the queues described below or any literature that supports their use, I'd love to hear about it.
Now there are two tracks in front of the X-ray machine: one divided into 5 "stations" where people put their stuff in the new bins, and one parallel to it that actually feeds into the X-ray machine. The idea is that once your bin is ready, it goes onto the main track leading into the X-ray machine, but only because "someone" pushes it forward, and that someone isn't the traveler. The TSA agent insists on doing it, perhaps to avoid having people knock other people's bins while they try to push theirs, but as a result travelers who arrived long after you can have their bins move forward long before yours.
If you have to put your stuff in more than one bin (even your rollerboard has to go in a bin, so you'll likely need more than one, although the new bins are very large), your bins will get separated from each other, because your stuff only joins the line on the main track for the X-ray machine when the agent feels like letting it in. This is not a good thing.
Also, (1) the TSA agent at EWR kept telling people to put laptops in a bin by themselves, but the bins are enormous, and since your stuff goes through out of order it becomes really easy for someone else to take your laptop by mistake (this also went against the pictures I'd seen at DFW about how to properly pack the new bins, where the important point is that nothing should be above or below the laptops but there can be things around them, but I wasn't going to argue with TSA); (2) several lines of people at EWR had to merge into one line for the X-ray machines; at DFW people do the "alternate merge" like it's second nature, but at EWR you have people who just stare at the line and hope someone will be nice to them; and (3) it's not FIFO anymore, because you take the first available station by the X-ray, even if that station is closer to the X-ray machine than the station of the people who were in front of you in line.
I liked the old system a lot more. I particularly liked the FIFO part because it seemed fair. Sadly, I've seen the new approach both at EWR and DFW, so I'm assuming a nationwide rollout. I just hope whoever came up with the new way of doing things wasn't an engineer. It's suboptimal by a wide margin.
Ok, so here is the mathematical description. You have 1 X-ray machine, 2 lines of travelers that attempt to merge into 1 line to use the X-ray machine, 5 stations, and 1 parallel track onto which the bins are later pushed to get X-rayed. The traveler goes to the first available station. Then he has to wait until his stuff goes onto the X-ray track, and if he has multiple bins he has to wait until he finds a way to squeeze the last one in; then he queues for the metal detector; then he waits for his stuff to come out of the X-ray machine. Remember that the TSA agent, not the traveler, decides when the traveler's bins go onto the X-ray track (the agent creates the gaps between bins that let a new bin onto the track). Therefore, people who arrived at available stations after the traveler might have their stuff pushed into the X-ray machine first and might go through the metal detector before him. Compute the average time to go through security and compare with the average time in the old system. Also compute a new criterion called the Traveler's Sense of Shock and Disbelief at TSA's New Ideas, alternatively called "Did someone get paid to come up with this?", surely to be the focus of a case study either about the crazy things people come up with to justify their consulting gigs or about the flaws in the decision processes that green-lighted this innovation. Discuss.
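If you want to play with the math, here is a crude simulation sketch (all rates are made up, one server stands in for the whole checkpoint, and the new system is reduced to "waiting travelers get served in random order", which is of course a caricature):

```python
import random
import statistics

def simulate(num_travelers, seed, discipline="fifo"):
    """Crude single-server queue standing in for one X-ray machine.
    Both disciplines see the exact same arrivals and service times;
    only the order in which waiting travelers are served differs.
    'fifo' is the old system; 'random' loosely models the agent pushing
    bins onto the belt in no particular order."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(num_travelers):
        t += rng.expovariate(1.0)               # mean inter-arrival time 1
        arrivals.append(t)
    services = [rng.expovariate(1.25) for _ in range(num_travelers)]  # ~80% utilization

    waiting, next_arrival = [], 0               # indices of travelers in line
    server_free_at = 0.0
    finish = [0.0] * num_travelers
    done = 0
    while done < num_travelers:
        # everyone who arrived before the server frees up joins the line
        while next_arrival < num_travelers and arrivals[next_arrival] <= server_free_at:
            waiting.append(next_arrival)
            next_arrival += 1
        if not waiting:                         # server idles until the next arrival
            server_free_at = arrivals[next_arrival]
            continue
        if discipline == "fifo":
            i = waiting.pop(0)
        else:                                   # serve a waiting traveler at random
            i = waiting.pop(rng.randrange(len(waiting)))
        server_free_at += services[i]
        finish[i] = server_free_at
        done += 1
    times = [finish[i] - arrivals[i] for i in range(num_travelers)]
    return statistics.mean(times), statistics.pstdev(times)

mean_f, sd_f = simulate(20000, seed=7, discipline="fifo")
mean_r, sd_r = simulate(20000, seed=7, discipline="random")
print(f"old (FIFO)  : mean time {mean_f:.2f}, std dev {sd_f:.2f}")
print(f"new (random): mean time {mean_r:.2f}, std dev {sd_r:.2f}")
```

A known queueing result says that for a work-conserving single server, the service order doesn't change the mean waiting time, only its variability, so the sketch should show similar means but a larger standard deviation under random order. In other words, the new system may not be slower on average, but it is less predictable, and definitely less fair.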
I suppose whoever came up with this disliked having to wait behind, say, families with young children who had to take off their shoes, or elderly people who moved too slowly for their taste. It still seems there would be better, simpler ways to address this, starting with having more lines open so that people can move away from a slow line toward one that goes faster.
Maybe the lines should be organized by type of customer. I'm not sure travelers would comply if they're in a rush and another line goes faster, but having TSA lines match the boarding group numbers of the airlines, such as A/B/C for Southwest or 1/2/3/4/5 for United, would be aligned with travelers' frequency of travel and presumably their familiarity with the screening process. Or the lines could be based on the amount of luggage travelers have to screen (do they have a laptop, do they have a rollerboard), something similar to the express checkout lane at the supermarket.
Or there could be a system where travelers don't enter the line to the X-rays until their stuff is in bins. (If we allow ourselves to dream for a second, TSA could invent a cart that carries multiple bins to the X-rays. Heck, it could be the X-ray machine of the future: a cart robot that has all your things neatly arranged on the cart and drives itself to the X-ray machine, and then the X-ray machine would be able to scan your things one by one, and your cart would re-appear at the other end of the machine, and then you could happily push your cart to the gate or return it at the checkpoint.)
It'd also be interesting to have a system where people could see the average time to go through security at different times of day at their airport, as well as the current wait in real time, like what Google does for restaurants, although for airports we would also need a measure of the staffing level to be able to compare numbers.
If TSA was serious about decreasing waiting times at checkpoints, it would run a nationwide competition among universities (or at least industrial engineering departments) to suggest improvements that would be a bit more thought-out than this. I bet students could come up with a better alternative using IE/OR tools. In the end, just thinking I might have to go through the new screening system every time I go to the airport makes me very uneager to fly, and perhaps that's the point.
Congratulations to my PhD student Victoire Denoyel of ESSEC Business School in Paris, France (co-advised with Laurent Alfandari) for successfully defending her dissertation a few days ago! Her dissertation is entitled "Towards an efficient embedding of logit choice models into 0-1 selection problems". Our first paper was recently published online in EJOR, the European Journal of Operational Research.
I hadn't attended a French PhD defense since that of Nabila Remli in 2011 on robustness in linear programming, and I have to say that I like the French system a lot. It is much more formalized than the American way, and as a result I feel that the PhD candidate gets more out of it. For instance, France uses a formal system of reviewers and examiners in the jury. The reviewers must receive the dissertation about 8 weeks early to submit a written evaluation before the PhD defense, and after the presentation, reviewers and examiners ask the PhD candidate questions in the "debate" part of the defense. The questions can be quite specific and in-depth. In Victoire's case, her presentation was about one hour long and the debate lasted close to two hours. I was very impressed by the quality of the questions and by Victoire's poise in answering all of them. We also received great suggestions from the jury for finalizing our other two papers, for which I am very grateful. Victoire's committee consisted of Professors Ivana Ljubic (jury president) of ESSEC, Knut Haase from Hamburg University, Cecile Murat of Paris Dauphine University, Andre de Palma from ENS Cachan and Nathalie Picard from the University of Cergy, in addition to Laurent Alfandari and myself. Many thanks to the jury members!
I started working with Victoire in 2013 and it's great to see her complete this stage of her career. From a personal standpoint, she is the first French PhD student I have advised and I've greatly enjoyed our research discussions. I tend to prefer working with students who think on the spot about the ideas I suggest and won't hesitate to tell me if they don't understand or disagree, because it helps weed out the bad ideas faster and makes the viable ones stronger. While I haven't worked with many such PhD students (certainly there are cultural norms at play: we French people like to debate more than most, and in certain foreign countries it is viewed as disrespectful to debate ideas with one's professor), Victoire showed outstanding independent research skills from the start, great follow-through, and a superior ability to derive actionable insights for decision makers. She is going to make a great professor. She is currently a lecturer in Operations Management at Brooklyn College and will be on the job market for a tenure-track position this fall.
(This is an update to my previous post. Additions are in italics.)
Here is some quick news about alumni and current members of my research group:
- Dr. Mike Dziecichowicz PhD'10, now a manager at Ernst & Young in NYC, recently was an invited speaker at the INFORMS Analytics conference, where he provided an overview of retail credit loss forecasting. I am also very excited to announce that our paper on Robust Timing of Markdowns with Daniela Caro '08, G'09, who did research in my group before continuing on to MillerCoors as a Corporate Pricing Analyst and Industry Analyst, then to INSEAD for her MBA and now to Jefe de Planeamiento Comercial - Interbank in Peru, has just been published in Annals of Operations Research. Congratulations, Mike and Daniela!
- Dr. Tengjiao Xiao, who graduated last month from Lehigh with her PhD in Industrial Engineering, has started her new position as a Senior Consultant in the New York City office of Ernst & Young. We wish her good luck in this next chapter of her career.
- Doctoral candidate Shuyi Wang, who just passed her general exam, is interning at Humana in the Dallas/Fort Worth area this summer.
- Shuyi is supervised by my former doctoral student Dr. Gokhan Metan PhD'08, who recently joined Humana to oversee consumer marketing and advanced analytics after a distinguished tenure at American Airlines (Operations Research Analyst and Senior Analyst) and CapitalOne (Business Manager, Marketing Analysis and Credit Risk Management).
- Dr. Yang Dong PhD'14, who received honorable mention in the INFORMS Financial Services Section Paper Competition last year, has been promoted to Associate in the Quantitative Research and Analytics group at JP Morgan (after only 8 months on the job! Way to impress them, Yang!). Our paper on "Robust Investment Management with Uncertainty in Fund Managers' Allocation" has just been published online in RAIRO-Operations Research and will appear in the October-December 2015 issue in print.
- My paper with Dr. Ruken Duzgun PhD'12, "Robust Binary Optimization using a Safe Tractable Approximation", has just been published in Operations Research Letters. (It comes with an AudioSlides presentation, for those of you who don't have time to read the full paper.) Another paper of ours, that one on the theory of multi-range robust optimization, was published in the proceedings of the 2015 INFORMS Computing Society conference in January. Finally, a study on project selection using our multi-range robust optimization is forthcoming in the Winter Simulation Conference proceedings.
- My paper with Dr. Elcin Cetinkaya PhD'14 on "Data-Driven Portfolio Management with Quantile Constraints" has just been published in OR Spectrum. Our paper "New Product Launch Decisions with Robust Optimization" has just been accepted in Computational Management Science. We hope to have more paper-related news to report soon.
- ESSEC doctoral candidate Victoire Denoyel, whom I co-advise with Laurent Alfandari, will present our current work at the MSOM conference in Toronto tomorrow (come and see her presentation if you are attending the conference; she is speaking in an afternoon session on Tuesday). Also, we have finalized our paper on "Optimizing healthcare network design under reference pricing and parameter uncertainty", which is now submitted. This version contains important new results; in particular, the section on incorporating parameter uncertainty is completely new. We also provide insights into the "ideal" solution of the network design problem without quality or satisfaction requirements. Please take a look and let us know what you think!
Elsevier has a new service that allows authors to create a short AudioSlides presentation (slides + audio, the name says it all) to be viewed alongside the paper. I created one for the paper I co-authored with my former student Dr. Ruken Duzgun, "Robust Binary Optimization Using a Safe Tractable Approximation", which was just published in Operations Research Letters. Elsevier allows authors to embed the presentation on their website. Of course the goal is to get all of you to check out the paper. I hope that this presentation will help Ruken's great work get the visibility it deserves!
Here is the revised version of my paper "Robust Timing of Markdowns", written with Michael Dziecichowicz and Daniela Caro. It considers a product sold over a selling horizon at various decreasing prices by a retailer and applies robust optimization to the uncertain demand arrival rates. A key feature of this paper is that, because the time during which the product is put on sale at a given price is a decision variable, the argument of the (concave) budget of uncertainty is itself a decision variable, introducing non-convexities in the formulation. We show how to solve the problem in a tractable manner. We also show that, under a mild condition, there is one optimal sale time (two distinct selling prices) in the selling season when the budget of uncertainty is concave. We also implement a dynamic policy and demonstrate its potential on numerical experiments, and extend the setup to the case of multiple products.
Since my previous post on two recent papers of mine attracted quite a few viewers, maybe you'll find the following one interesting as well: "Robust Binary Optimization using a Safe Tractable Approximation", this time with former PhD student Ruken Duzgun, now at Marriott International.
Abstract: We present a robust optimization approach to 0-1 linear programming with uncertain objective coefficients, based on a safe tractable approximation of chance constraints, when only the first two moments and the support of the random parameters are known. We obtain nonlinear problems with only one additional (continuous) variable. The resulting robust optimization problems can be interpreted as nominal problems with modified coefficients. We compare our approach with Bertsimas and Sim (2003). In numerical experiments, we obtain solutions of similar quality in less time.
Let us know what you think!
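The "nominal problem with modified coefficients" idea is easy to illustrate. Below is a toy sketch, not the approximation from the paper: it uses a generic Cantelli-style safety margin with made-up numbers, replacing each uncertain profit by its mean minus a multiple of its standard deviation, after which the robust problem is an ordinary 0/1 knapsack.

```python
# Toy illustration of "a nominal problem with modified coefficients".
# NOT the paper's approximation: just a generic moment-based margin
# where each uncertain profit mu_i is replaced by mu_i - kappa * sigma_i.

def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP; returns (best value, set of chosen items)."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                cand = dp[i - 1][w - weights[i - 1]] + values[i - 1]
                if cand > dp[i][w]:
                    dp[i][w] = cand
    chosen, w = set(), capacity          # walk the table back to recover items
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.add(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], chosen

mu    = [10.0, 8.0, 6.0, 3.0]    # mean profits (made-up numbers)
sigma = [ 4.0, 1.0, 0.5, 0.2]    # profit standard deviations
wts   = [5, 4, 3, 2]             # item weights
cap   = 9                        # knapsack capacity
kappa = 2.0                      # safety factor on the standard deviation

nominal_val, nominal_set = knapsack(mu, wts, cap)
robust_val, robust_set = knapsack([m - kappa * s for m, s in zip(mu, sigma)], wts, cap)
print("nominal picks:", sorted(nominal_set))   # chases the risky high-mean item
print("robust picks :", sorted(robust_set))    # avoids it once variability is priced in
```

The point of the sketch is that the robust solve is no harder than the nominal one: the same knapsack routine is called twice, once with the means and once with the shifted coefficients.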
...in the 2014 INFORMS Financial Services Section student paper competition, for work I supervised as Yang's dissertation advisor. Other finalists were from Columbia University (two students), the University of Illinois at Urbana-Champaign and KAIST. I'm very proud of Yang for her great accomplishment. You can read more about her work in this previous post of mine.
Below is a picture of Yang with her certificate. Wishing her best of luck in the next phase of her career as a Senior Analyst at JP Morgan in New York City!
Here are two research papers I wrote with my then-PhD-student Dr. Elcin Cetinkaya that might be of interest to my operations research readers. One is on a data-driven approximation algorithm for portfolio management with quantile constraints. While the algorithm is very simple, it performs very well in practice. (Calafiore considers a similar problem from the perspective of finding the minimum sample size that will provide certain probabilistic guarantees to the manager, in a setting slightly more general than ours.) Our paper is the latest version of the report based on our ISMP presentation back in 2012. The other is on new product launch. The key contribution of that paper is to apply robust optimization to the parameters of the Bass diffusion process driving innovation adoption, in a context where the decision maker seeks to maximize his Net Present Value. For tractability purposes we enforce that the uncertain parameters must take either their nominal, best-case or worst-case values. While this requires the use of integer variables in the uncertainty set, which precludes an immediate use of strong duality as is traditionally done in classical robust optimization, we show that a total unimodularity property of a parametrized version of the problem allows us to obtain tractable reformulations of the master problem.
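To make the Bass part concrete, here is a minimal sketch of the diffusion model itself: the standard discrete-time Bass recursion, with made-up parameter values, profit margin and discount rate. The "pessimistic" values below simply stand in for a worst-case corner of an uncertainty set; they are not the paper's formulation.

```python
def bass_new_adopters(p, q, m, periods):
    """Discrete-time Bass recursion: new adopters per period,
    given innovation coefficient p, imitation coefficient q, market size m."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(new)
    return path

def npv(cashflows, rate):
    """Net present value of end-of-period cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

margin = 5.0                     # profit per adopter (made up)
nominal = bass_new_adopters(p=0.03, q=0.38, m=100_000, periods=10)
worst   = bass_new_adopters(p=0.02, q=0.30, m=100_000, periods=10)  # pessimistic corner

nominal_npv = npv([margin * a for a in nominal], rate=0.10)
worst_npv   = npv([margin * a for a in worst], rate=0.10)
print(f"nominal-parameter NPV   : {nominal_npv:,.0f}")
print(f"worst-case-parameter NPV: {worst_npv:,.0f}")
```

Slower diffusion pushes adoptions later, and discounting then compounds the damage, which is why the worst-case NPV drops by more than the drop in total adoptions alone would suggest.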
I am happy to report that my recently graduated PhD student Yang Dong (now Dr. Yang Dong!), who just joined JP Morgan in New York City as a senior quantitative analyst, has been named a finalist in the INFORMS Financial Services Section paper competition for work based on our paper, "Robust investment management with uncertainty in fund managers' asset allocation." Yang truly did outstanding work on that paper and I am thrilled to see her efforts rewarded at the national level.
Abstract: "We consider a problem where an investment manager must allocate an available budget among a set of fund managers, whose asset allocations are not precisely known to the investment manager. In this paper, we propose a robust framework that takes into account the uncertainty stemming from the fund managers' allocation, as well as the more traditional uncertainty due to uncertain asset returns, in the context of manager selection and portfolio management. We assume that only bounds on the fund managers' holdings (expressed as fractions of the portfolio) are available, and fractions must sum to 1 for each fund manager. We define worst-case risk as the largest variance attainable by the investment manager's portfolio over that uncertainty set. We propose two exact approaches (of different complexity) and a heuristic one to solve the problem efficiently. Numerical experiments suggest that our robust model provides better protection against risk than the nominal model when the fund managers' allocations are not known precisely."
Full paper: here.
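The worst-case risk definition in the abstract can be sketched in a few lines. The toy below uses two fund managers and two assets with a made-up covariance matrix and made-up bounds, and holds the budget split across managers fixed rather than optimizing it; it exploits the fact that the portfolio variance is convex in the weights, so its maximum over box bounds is attained at a vertex.

```python
from itertools import product

cov = [[0.04, 0.01],
       [0.01, 0.09]]                 # made-up 2-asset covariance matrix
budget = [0.6, 0.4]                  # fixed split of capital across the two fund managers
bounds = [(0.3, 0.7), (0.2, 0.8)]    # each manager's possible fraction in asset 1
                                     # (the rest goes to asset 2, so fractions sum to 1)

def portfolio_variance(w):
    return sum(w[i] * cov[i][j] * w[j] for i in range(2) for j in range(2))

# The overall weight on asset 1 is linear in the managers' fractions and the
# variance is convex in it, so the worst case sits at a vertex of the box.
worst = max(
    portfolio_variance([budget[0] * a1 + budget[1] * a2,
                        1.0 - (budget[0] * a1 + budget[1] * a2)])
    for a1, a2 in product(bounds[0], bounds[1])
)
print(f"worst-case portfolio variance: {worst:.4f}")
```

With more assets the worst case can no longer be found by checking four corners, which is where the exact and heuristic approaches developed in the paper come in.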
Congratulations to Yang on closing her time at Lehigh in such a distinguished manner and starting a new chapter in her life and career!