Hello, world




« On "Robust Fluid Processing Networks" by Bertsimas, Nasrabadi and Paschalidis (2015) | Main | MicroMasters »

March 20, 2017


Great post. You've put your finger on a number of problems (and perhaps signs of hypocrisy) in the current university system. Having spent years teaching in a (theoretically integrated) MBA core, I can point to one of the biggest reasons for a lack of true integration. It's difficult and, for the faculty, time-consuming. If you are being judged primarily, if not exclusively, by research productivity, all that time spent on integration has a negative return to you, irrespective of the possible positive return to the students.

A second problem with integration is that time spent on integrative content and tasks is typically time not spent on some niche topic closer to the instructor's heart. (Seriously, how can we graduate MBAs who have not seen multiobjective nonconvex programming under uncertainty??) Also, integrative content is often less familiar and less enjoyable to the instructor. (Me? Talk about marketing??)

Among many (most?) of my colleagues, there was a perception that teaching evaluations were more a popularity contest than a measure of actual teaching, and I tend to agree. I never heard of, let alone saw, any sort of pre/post competency testing of students. So my colleagues felt much more confident about "objectively" measured research productivity (articles, articles in "A" journals, citation counts, ...). These metrics are, to a large extent, as meaningless as teaching evaluations, but easier for faculty to believe. So we reward research but not teaching.
