A common request when analyzing large data sets is to evaluate the impact that exceptional (outlier) values have on results. Statistics addresses this need with two measures of central tendency: the “median” and the “average”.
The median is the middle value when the data points are sorted (for an even number of points, the average of the two middle values). Because it depends only on position in the sorted order, it is unaffected by exceptionally high or low values.
The average, also known as the “mean”, instead sums all of the data points and divides by the number of points. Unlike the median, it is pulled toward exceptionally high or low values.
For example, take 100 data points where 97 of them are 100 and the last three are 1,000, 10,000, and 100,000 — a “right-skewed” distribution. Compare the two measures:
● Median value = 100
● Average/Mean value = 1,207 (the sum, 120,700, divided by 100 points)
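The example above can be reproduced with Python's standard-library `statistics` module (the data set here is just the hypothetical one from the text):

```python
import statistics

# 97 points of 100, plus three large outliers (right-skewed data).
data = [100] * 97 + [1_000, 10_000, 100_000]

# Median: the middle of the sorted values; the outliers do not move it.
median = statistics.median(data)

# Mean: sum / count; the three outliers pull it well above the typical value.
mean = statistics.mean(data)

print(median)  # 100.0 (average of the two middle values, both 100)
print(mean)    # 1207
```

Note that with 100 (an even number of) points, `statistics.median` averages the two middle values; since both are 100, the result is still 100.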