Author Archives: Susan Brown

Making a Strong First Impression and Impact in Your Work this Summer

by the Duke Engineering Master’s Career Services & Professional Development Team


As we transition to summer, many students are taking on new roles. Whether your summer includes work, internships, projects, lab time, classes, or any combination of these, it is a good time to refine your approach to making a strong first impression and a lasting impact.


Below are five tips with some reference material to help you dive deeper. We've posted this here so that you can access it over the next few months: what you need at the middle or end of the summer may be different from what you expect right now. In fact, don't assume that the advice expires at the end of the summer; you may find it helpful throughout your career.


  1. First things first.

Quickly build a reputation for being a top performer by getting the little things right and remaining focused on the tasks you’re asked to do. To paraphrase some direct advice from a Hollywood producer, “If I can’t trust you to show up on time, how can I trust you with a client or a contract?” Your access to more advanced responsibilities or relationships tends to be determined by your current performance.


  2. Remember the Seven Indicators for Search Success.

Before you start this new endeavor, take a design-based approach and consider the experiences you want to gain and the relationships you want to build. Use the next months to purposefully seek out opportunities to build your strengths and show your motivation.


  3. Meet alumni and other locals.

Take full advantage of in-person relationship-building while you are able. For example, Duke has regional alumni networks all around the world. As a student, you are welcome as part of the "Forever Duke" network. We encourage you to reach out to introduce yourself and learn the best ways to participate. Beyond Duke, you can also search Meetup, Eventbrite, Facebook, LinkedIn, and any relevant professional organizations for local opportunities that are open to the public. Use a common personal or professional interest as a natural way to meet others.


  4. Continue using Career Services.

We are available to help current Duke Engineering Master's students navigate career successes and challenges throughout the summer and for four years after graduation, whether the topic is search strategy, a difficult work or internship situation, or any other career-related question.


  5. Have Fun.

Take a breath, enjoy your new colleagues, explore your environment, notice how you’re growing. Be both spontaneous and planful. Right-size your life so that you try new things and follow through on your commitments.


Finally, if you want an extensive list of ideas for workplace success, this Business Insider piece has many good ones. Just don't get carried away and try to do them all on your first day!

2018 MEMPC PriSim Business War Games Competition

by Randolph Frank, Nebo Iwenofu, Marshall Ma, and Rushabh Shah


This spring semester, four Duke students from the Master of Engineering Management Program participated in the MEMPC PriSim Business War Games Competition. Teams from the Master of Engineering Management Programs Consortium (MEMPC), including Northwestern, Dartmouth, Johns Hopkins, Cornell, USC, and Purdue, were represented in this year's challenge. The teams competed head to head using the StratSimManagement software platform developed by Interpretive Solutions.

Over the course of four weeks, each team ran a fictional automotive company and was responsible for every aspect of business operations. Each round, the teams made strategic decisions about marketing, new product development, technology investment, financing, and manufacturing. At the end of each round, results were released, and teams had to readjust both short- and long-term strategies in a rapidly moving competitive landscape.

The Duke MEMP team, consisting of Randolph Frank, Nebo Iwenofu, Marshall Ma, and Rushabh Shah, performed well throughout the competition and came away with many valuable lessons about optimizing business strategy in a competitive industry. The competition's similarity to a realistic business arena offered ample opportunities for the Duke MEMP team to experiment, learn, and reflect. Flexibility was key in this fast-paced competition: the team led in the initial stage with a focus on the truck market, but unpredictable competition forced a shift in strategy and cost the team its position in that segment. Additionally, as the team navigated the competition, they realized that there were too many decisions and factors for any one member to handle. Each member was therefore assigned a different area of operations, which led to more efficient strategy meetings and the in-depth industry analysis required for success.

Bring us your App ideas!

By Ric Telford


Mobile software is all around us and has become an important part of our lives. According to StatCounter, internet usage from mobile devices now surpasses that from PCs. It is not surprising, therefore, that Stack Overflow listed iOS and Android as two of the top three development skills in high demand.


In the Fall of 2015, the Pratt School of Engineering introduced a graduate-level 590 class, "Mobile App Programming."  The course filled up on the first day of registration, and three sections later it is still that way.  The course was given a permanent class code for the Fall of 2017 and is now known as ECE 564.  In the first half of the semester, students get an accelerated education in Swift and iOS programming for Apple devices.  The second half of the semester is devoted to more advanced concepts and the development of a team project.  The class is project-based, so across the three sections taught thus far, 30 apps have been developed by the three-person student teams.


Project ideas are gathered from across the Duke community and sometimes outside of Duke.  Students can pick from a list of these solicited project ideas, or they can propose their own project (which needs to be approved by me).  This has resulted in quite a wide array of apps created in the class.  Here are a few examples:

  • Six proof-of-concept apps have been delivered to assist local start-up companies with their early app development, including DUhatch graduates FarmShots and Voyij.
  • Two apps were developed in conjunction with the Duke School of Medicine, including an app to assist in the treatment of obesity.
  • Two games have been developed, including Dodge the Potholes, which is still available for download on the colab app store.
  • At least ten apps were developed to provide services to different parts of the Duke community. One app provides walking directions to any room in the Fitzpatrick Center.  The Fava app allows students to trade favors.  Peer Konnect helps match student tutors with those needing help.  Finally, the Duke Sakai app provides an iOS-native implementation of the key Sakai functions.

Several of these apps are still available for download at appstore.colab.duke.edu if you want to check them out.

With the Fall semester upon us, it is time to start collecting project ideas for the Fall cohort.  We are open to any interesting idea, even one without a clear path to the App Store.  It is more about giving the students something challenging and unique to work on, and about delivering a viable app.


If you have an app you would like developed, please let me know!  The best way to start is with an email to ric.telford@duke.edu.  From there, I will follow up with you to see if your project is a good match for our class.


2017 MEMPC PriSim Business War Games Competition

by Chiraag Devani, Mark Henry, and Fajar Prihantoro

Students from graduate-level Engineering Management programs around the nation were offered the opportunity to participate in some friendly academic competition: the 2017 MEMPC PriSim Business War Games Competition. The simulation is designed to mimic the real world: an overwhelming amount of data is provided, decision combinations are seemingly endless, and success in the simulation depends on success as a team. Teams were expected to meet at least twice a week to submit decisions every four days or so, each set of decisions corresponding to a "period" in the simulation. Input decisions and inherent simulation factors were processed, and results were presented for review before the team made its next set of decisions. Duke University was represented by two teams of four Engineering Management students, who faced teams from other Engineering Management programs such as Cornell University, Northwestern University, and Dartmouth College.

The teams were formed based on student interest, and the simulation was kicked off with a lunch to meet team members and learn more about the simulation. After initial simulation familiarity was established, team members divided the major sections of the simulation for specialization – advertising, financing, manufacturing, and product development. Each specialization became a focus for one team member, who closely examined results in that specific section and presented recommendations during each meeting. These recommendations were then discussed with the rest of the team before locking in final decisions.

The winner of the simulation was determined by a scoring sheet containing various performance metrics: market share, final stock price, market value, cumulative profit, final-round profit, ROA, ROE, and customer preference. Teams decided which metrics they wished to be measured on, and the winning teams were those that achieved outstanding performance in the metrics selected to match their strategy. Establishing a strategy at the beginning of the simulation was critical. Some considerations for a winning strategy are: a) focusing on technology and product development in the early stages so that the team can launch the best cars on the market; b) understanding which car attributes the market wants and targeting the market segment or sub-segment that best fits the team's strategy; c) creating the right marketing and advertising strategy for current and new cars; and d) properly forecasting sales to plan production and inventory capacity.

Relating the simulation back to our classes within the Engineering Management Program was the biggest takeaway. In our marketing classes, we had completed a PharmaSim simulation set in the medical industry, and the strategies we developed there were directly applicable. The ability to analytically prepare and identify revenue models yielding the highest profit was a key advantage for the Duke teams. In addition, through our finance class and exposure to project management and technology commercialization, team members were able to apply their coursework and academic knowledge directly. In choosing the right metrics to be graded on, our team did extremely well in identifying return on equity and return on assets as our largest competitive advantage.

While the competition was a fantastic learning experience, not everyone could be a winner, and based on our two teams' strategies, we believe there was room for improvement. First, the majority of our focus was on our individual markets and strategy. There was excitement in developing a new strategy and changing the product lineup; ultimately, though, this was a competition, and that approach allowed teams that kept the original basic sedan lineup to own that market segment and maximize volume growth. Second, the simulation ran for a finite number of periods. Several of our competitors exaggerated their strategies in the final period to boost their selected performance metrics without regard for any next period. Both Duke teams acted as if this were an ongoing business and avoided actions that would have improved last-period performance while jeopardizing the long-term viability of the enterprise. That decision affected our performance in the last period and our ranking in the competition.

All in all, the PriSim Business War Games Competition was a great learning experience for the participating Master of Engineering Management students. It helped contextualize coursework and provided students with an opportunity to work as a team on complex business and engineering problems.

How Much Less Wrong was Nate Silver?

By Daniel Egger, Director – Center for Quantitative Modeling, Master of Engineering Management Program, Pratt School of Engineering


In the week leading up to Tuesday's presidential election, Nate Silver and his well-known political forecasting website www.fivethirtyeight.com received harsh public criticism for assigning to Donald Trump a much higher winning probability than other similar sites.


If I recall correctly, this contrast peaked around Thursday, November 3, when Silver gave Hillary Clinton "only" a 65% probability of winning, while other mainstream projections all assigned her winning probabilities greater than 90%. Silver defended himself by saying, in effect, that his expected vote percentages for Clinton were really not so different from those of other forecasters, but that he assumed higher variance around those numbers, so that her November 3 projected margin of victory (3-4%) was within the margin of error.


In hindsight, all forecasts based on third-party polling, including Silver's, were quite wrong about the vote percentages, although Clinton still "won" the popular vote. Even more unexpectedly for the forecasters, Trump won the election in an Electoral College rout.


It seems to me that Nate Silver should be given credit for being much less wrong than others (not to mention for sticking to his guns under withering, and in hindsight extremely foolish, criticism).


As a data scientist, I am interested in metrics that can quantify the relative effectiveness of probabilistic forecasts, in order to optimize forecasts in the future. What follows are two different methods that you may not have seen before that allow us to quantify exactly how much less wrong Nate Silver was than everyone else.


First I’ll use a standard Bayesian Inverse Probability approach. This approach is attractive in the present situation because, unlike statistical methods that require a large sample of outcomes in order to be reliable, it works perfectly fine for a sample size of one: the one election outcome that we have.


Under this method, I must first assume that one of the two probabilistic processes we are comparing is in fact responsible for generating any observed election outcomes. The first probability distribution, which I’ll call the “Consensus Process,” assigned probabilities of approximately 90% to a Clinton victory and 10% to a Trump victory. The second, “Silver Process,” assigned probabilities of approximately 65% to a Clinton victory and 35% to a Trump victory.  I assume further that before observing the present election outcome, we had no rational basis to believe one of these two processes more than the other. Therefore the probability of each process, before any results are observed, is 50/50.


Applying Bayes’ Theorem, if an election victory for Clinton were the observed outcome, then it would be reasonable to infer that the probability that the Consensus Process generated the outcome was 58%, and that the Silver Process generated the outcome was 42%. This metric gives the edge to the Consensus Process, but not by an overwhelming margin.


On the other hand, since an election victory for Trump was observed, it would be reasonable to infer that the probability that the Consensus Process generated the observed result is only 22%, while the probability that the Silver Process generated it is 78%.[i]
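As a quick check on these figures, the Bayes computation described above can be sketched in a few lines of Python. The probabilities are the ones stated in the text (Consensus 90/10, Silver 65/35, 50/50 prior over the two processes); the function name `posterior` is ours, for illustration:

```python
def posterior(likelihoods, priors):
    """Bayes' theorem: normalize likelihood x prior across the candidate processes."""
    joint = [lk * pr for lk, pr in zip(likelihoods, priors)]
    total = sum(joint)  # p(Observation), summed over both processes
    return [j / total for j in joint]

# Likelihood of the observed Trump win under each process, with a 50/50 prior:
# the Consensus Process assigned Trump 10%, the Silver Process 35%.
p_consensus, p_silver = posterior([0.10, 0.35], [0.5, 0.5])
print(f"Consensus: {p_consensus:.0%}, Silver: {p_silver:.0%}")  # Consensus: 22%, Silver: 78%
```

Running the same function with likelihoods [0.90, 0.65] reproduces the Clinton-win split of roughly 58% and 42%.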


This is a pretty dramatic win for Silver, and it would seem that those who criticized him owe him an apology. It also suggests that he himself thinks of probabilities from a Bayesian Inference point of view, and is trying to minimize his parameter error in that context.


The second method will be familiar to my data science graduate students at Duke, with whom I approach the subject from both a Bayesian and an Information Theory perspective.[ii] I first need to assume a “base rate” for election of Republican and Democratic Presidential candidates. I will use a base rate of 50/50 (because the last 10 elections have been split right down the middle: 5 for the Democrat, 5 for the Republican; or because I really have no idea). Next, I treat the Consensus and Silver probabilistic forecasts as “side information” that could potentially allow a gambler who relies on one of them to bet on the outcome more successfully – with less uncertainty – than a gambler who knew only the base rate.  The advantage of the second method is that rather than being a relative comparison of only the two processes, it is an absolute measure of forecast quality and any number of additional forecasts can also be compared using the same metric.


Based on a Clinton win, the Consensus probability would have reduced a gambler’s uncertainty about the outcome by 76.3%, while the Silver probability would have reduced their uncertainty by only 24.6%.


On the other hand, given a Trump win, both forecasts were worse than the base rate, so a gambler relying on either would have lost money, but the losses would have been worse for the gambler betting on the Consensus forecast. A gambler trusting the Consensus would have increased their uncertainty relative to the base rate by 23.2%, while a gambler trusting Silver would have increased it by only 18.0%.[iii]
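The four figures above can be reproduced from the per-outcome quantity b · log2(b / r), where b is a forecast's probability for the observed outcome and r is the base rate; since a 50/50 base rate has an entropy of 1 bit, bits convert directly into percent of uncertainty. This formula is our reconstruction from the footnote figures, not the author's stated definition, so treat it as a sketch:

```python
from math import log2

def side_information(b, r):
    # Per-outcome side-information term, in bits (reconstructed from the
    # footnote table): positive when the forecast beats the base rate for
    # the observed outcome, negative when it does worse.
    return b * log2(b / r)

base = 0.5  # 50/50 base rate; entropy = 1 bit, so bits read as percentages
for label, b in [("Consensus, Clinton win", 0.90),
                 ("Silver,    Clinton win", 0.65),
                 ("Consensus, Trump win  ", 0.10),
                 ("Silver,    Trump win  ", 0.35)]:
    print(f"{label}: {side_information(b, base):+.3f} bits")
```

The printed values (+0.763, +0.246, -0.232, -0.180) match the footnote table's 76.3%, 24.6%, -23.2%, and -18.0%.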


Final Scores:


Bayesian Inverse Probabilities:

Silver 78%, Consensus 22%


Kelly Side Information (both negative):

Silver -18.0%, Consensus -23.2%


For a detailed discussion and explanation of the first method, please see my Coursera course, Mastering Data Analysis with Excel.


For more details on the second method, you will need to enroll in my Data Mining course 590-05 here in the Duke MEM program.



[i]
p(Process | Observation) = p(Observation | Process) p(Process) / p(Observation)

P(Consensus | Clinton win) = (0.9)(0.5) / [(0.9)(0.5) + (0.65)(0.5)] ≈ 0.58
P(Silver | Clinton win) = (0.65)(0.5) / [(0.9)(0.5) + (0.65)(0.5)] ≈ 0.42

P(Consensus | Trump win) = (0.1)(0.5) / [(0.1)(0.5) + (0.35)(0.5)] ≈ 0.22
P(Silver | Trump win) = (0.35)(0.5) / [(0.1)(0.5) + (0.35)(0.5)] ≈ 0.78


[ii] This method is called “Kelly doubling-rate-of-wealth scoring for individual sequences” – I really need to work on a better name.

[iii] Side information assuming base rate r = (0.5, 0.5):

Outcome       Forecast    b(i)   r(i)   Side info (bits)   Base-rate entropy (bits)   Information gain
Clinton win   Consensus   0.90   0.5     0.763              1                           76.3%
Clinton win   Silver      0.65   0.5     0.246              1                           24.6%
Trump win     Consensus   0.10   0.5    -0.232              1                          -23.2%
Trump win     Silver      0.35   0.5    -0.180              1                          -18.0%