The Truth About Globalization, Harvard Business Review

Adi Ignatius
FROM THE JULY–AUGUST 2017 ISSUE

Public sentiment about globalization has taken a sharp turn. The election of Donald Trump, Brexit, and the rise of ultra-right parties in Europe are all signs of growing popular displeasure with the free movement of trade, capital, people, and information. Even among business leaders, doubts about the benefits of global interconnectedness surfaced during the 2008 financial meltdown and haven’t fully receded.

In “Globalization in the Age of Trump,” Pankaj Ghemawat, a professor of global strategy at NYU’s Stern School and at IESE Business School, acknowledges these shifts. But he predicts that their impact will be limited, in large part because the world was never as “flat” as many thought.

“The contrast between the mixed-to-positive data on actual international flows and the sharply negative swing in the discourse about globalization may be rooted, ironically, in the tendency of even experienced executives to greatly overestimate the intensity of international business flows,” writes Ghemawat. Moreover, his research suggests that public policy leaders “tend to underestimate the potential gains from increased globalization and to overestimate its harmful consequences.”

The once-popular vision of a globally integrated enterprise operating in a virtually borderless world has lost its hold, weakened not just by politics but by the realities of doing business in very different markets with very different dynamics and rules. Now is the time for business and political leaders to find a balance—encouraging policies that generate global prosperity at a level that democratic societies can accept.


Overfitting in Statistics

Figure 1. The green line represents an overfitted model and the black line represents a regularized model. While the green line best follows the training data, it is too dependent on that data and is likely to have a higher error rate on new, unseen data than the black line.
Figure 2. Noisy (roughly linear) data is fitted to a linear function and a polynomial function. Although the polynomial function is a perfect fit, the linear function can be expected to generalize better: if the two functions were used to extrapolate beyond the fitted data, the linear function would make better predictions.

In statistics, overfitting is “the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably”.[1] An overfitted model is a statistical model that contains more parameters than can be justified by the data.[2] The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e. the noise) as if that variation represented underlying model structure.[3]:45

Underfitting occurs when a statistical model cannot adequately capture the underlying structure of the data. An underfitted model is a model where some parameters or terms that would appear in a correctly specified model are missing.[2] Underfitting would occur, for example, when fitting a linear model to non-linear data. Such a model will tend to have poor predictive performance.
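As a concrete illustration of underfitting, the short sketch below fits a straight line to data generated from a quadratic function; it assumes only NumPy, and the signal and noise level are arbitrary illustrative choices. Even on the training data itself the fit is poor, because a linear model cannot represent the curvature.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(scale=0.5, size=x.size)   # quadratic signal plus noise

# Underfit: a degree-1 polynomial (a straight line) cannot capture the curvature.
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

# R^2 on the training data; it stays far below 1 because the structure is missed.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("training R^2 of the linear fit:", 1 - ss_res / ss_tot)
```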

Overfitting and underfitting can occur in machine learning in particular, where the phenomena are sometimes called “overtraining” and “undertraining”.

The possibility of overfitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; then overfitting occurs when a model begins to “memorize” training data rather than “learning” to generalize from a trend.
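The gap between the selection criterion and the suitability criterion can be made visible with a held-out sample. The sketch below is only an illustration, assuming NumPy, with arbitrary polynomial degrees and sample sizes: a high-degree polynomial chosen for its low training error typically ends up with a larger error on data it has not seen than a simpler fit does.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(scale=0.3, size=n)    # roughly linear, noisy
    return x, y

x_train, y_train = sample(15)                    # data used to select/fit the model
x_test, y_test = sample(200)                     # stands in for unseen data

for degree in (1, 10):
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_train:.3f}, test MSE {mse_test:.3f}")
```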

As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions.
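The sketch below (NumPy only; the roughly linear data-generating process is an illustrative assumption) reproduces this extreme case: a degree n-1 polynomial through n noisy points matches the training data essentially exactly, yet its predictions just outside the fitted range typically stray far from the underlying trend.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
x = np.linspace(0.0, 1.0, n)
y = x + rng.normal(scale=0.1, size=n)        # roughly linear, noisy

# n parameters (degree n-1) for n observations: the training data is
# reproduced essentially exactly.
coeffs = np.polyfit(x, y, deg=n - 1)
print("max training error:", np.max(np.abs(np.polyval(coeffs, x) - y)))

# Just outside the fitted range, the interpolating polynomial typically
# strays far from the underlying (roughly linear) trend.
x_new = np.array([1.1, 1.2])
print("predictions at 1.1 and 1.2:", np.polyval(coeffs, x_new))
```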

The potential for overfitting depends not only on the number of parameters and the amount of data but also on the conformability of the model structure with the data shape, and on the magnitude of model error compared to the expected level of noise or error in the data. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new data set than on the data set used for fitting (a phenomenon sometimes known as shrinkage).[2] In particular, the value of the coefficient of determination will shrink relative to the original data.
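The shrinkage of the coefficient of determination can be seen in a small simulation, sketched below under illustrative assumptions (NumPy, a mildly over-parameterized polynomial fit, arbitrary noise level): R^2 computed on a fresh sample from the same process is typically lower than R^2 on the sample used for fitting.

```python
import numpy as np

rng = np.random.default_rng(3)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def sample(n):
    x = rng.uniform(0, 1, n)
    y = 1.5 * x + rng.normal(scale=0.5, size=n)   # linear signal, sizeable noise
    return x, y

x_fit, y_fit = sample(20)
coeffs = np.polyfit(x_fit, y_fit, deg=5)          # mildly over-parameterized fit

x_new, y_new = sample(20)                         # fresh sample from the same process
print("R^2 on the fitting data:", r_squared(y_fit, np.polyval(coeffs, x_fit)))
print("R^2 on the new data:    ", r_squared(y_new, np.polyval(coeffs, x_new)))
```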

To lessen the chance of, or amount of, overfitting, several techniques are available (e.g. model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is either (1) to explicitly penalize overly complex models or (2) to test the model’s ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
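As one hedged example of the first approach, the sketch below implements ridge regression with NumPy: least squares with an added L2 penalty that explicitly discourages large coefficients. The degree, penalty strength, and data are illustrative choices, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(4)

x = np.linspace(-1, 1, 20)
y = np.sin(2 * x) + rng.normal(scale=0.2, size=x.size)

degree = 9
X = np.vander(x, degree + 1)                  # polynomial feature matrix

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y.
    # lam = 0 recovers ordinary least squares.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_plain = ridge_fit(X, y, lam=0.0)
w_ridge = ridge_fit(X, y, lam=1e-2)

# The penalized fit keeps its coefficients much smaller, which usually
# translates into a smoother curve and better behavior on new inputs.
print("largest |coefficient|, no penalty:  ", np.max(np.abs(w_plain)))
print("largest |coefficient|, with penalty:", np.max(np.abs(w_ridge)))
```

In practice the penalty strength would itself be chosen by one of the other listed techniques, such as cross-validation on data not used for fitting.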
