In information technology (IT), we think we know a thing or two about scaling systems. When we say scaling, we mean something different from what physicists mean when they say it: we mean that we are trying to increase the performance or throughput of a machine by adding more components---or, by adding more horses, to paraphrase Henry Ford. In physics, scaling is about how system behaviours depend (in the most general way) on dimensionless variables---like the number of components, or ratios of scales---that measure the macroscopic state of the system. It is related to notions like self-similarity, which can sometimes be observed to span certain ranges of the parameters, and `universality', which tells us how the broad behaviours are somehow inevitable and don't depend on the low-level details. How these two viewpoints are related has long been of interest to me.
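As an illustrative sketch (my gloss, not a result from the article): in the physicists' sense, a quantity $f$ scales with a dimensionless variable $x$ when it satisfies a homogeneity relation, whose only solution is a power law:

```latex
f(\lambda x) = \lambda^{\beta} f(x) \quad \text{for all } \lambda > 0
\quad\Longrightarrow\quad f(x) = f(1)\, x^{\beta}.
```

The exponent $\beta$ characterizes the scaling regime; self-similarity means the relation holds across a range of $\lambda$, and universality means very different systems can share the same $\beta$ despite different microscopic details.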
This article introduces a paper on scaling, `On the scaling of functional spaces, from smart cities to cloud computing', which is inspired by work on scaling in cities.
The meaning of scale, in computer science and in physics
When I started studying computers in the 1990s, 100s of machines were a lot; then it was 1000s, then 10,000s, and so on. Every few years, we seem to add a power of ten. As demand increases, we have to think about how to scale not only size but also functionality to cope with growth. This is a special concern for IT: functionality, or the intent of an outcome, is a dimension separate from performance, and one that is not obviously related to size. What happens in practice is that we tend to start out by designing a functional system to serve a small scale, and later try to photographically enlarge the whole thing, assuming the outcome is inevitable, without necessarily rethinking the approach. But what if that is wrong?
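A minimal sketch of why photographic enlargement can disappoint. This is an illustration of my own choosing, not the article's model: it assumes (per Amdahl's law) that some fraction of the work is serial or coordination overhead, so the speedup from adding machines saturates rather than growing linearly.

```python
# Illustrative assumption: a fraction s of the workload is serial
# (coordination, consensus, shared state) and cannot be parallelized.
# Amdahl's law then bounds the speedup from n machines at 1/s.

def speedup(n, serial_fraction):
    """Amdahl's-law speedup for n identical machines."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n)

# Enlarging the system by powers of ten, as in the text:
for n in [1, 10, 100, 1000]:
    print(n, round(speedup(n, 0.05), 1))
# With s = 0.05, even 1000 machines yield less than a 20x speedup,
# because the serial fraction caps the gain at 1/0.05 = 20.
```

The point of the sketch: size alone is not the whole story; the functional structure of the system (here, the serial fraction) determines whether enlargement pays off.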