‘Economists do it with models.’
This is a quote from my economics professor, who explained the idea far more lucidly than I am about to. It means that we try to represent the phenomena we want to examine with a set of assumptions about how they work.
Why do this? We hear a lot of insane-sounding examples of models, which exclude a lot of things we would expect to be important. Surely, we should just look at things in their full complexity if we want to see how they work in the real world.
However, in economics there is an immense amount of ‘noise’ around our ‘signal’. A person’s choice to buy an apple will depend on a million things: their mood, how recently they have seen an advert telling them to be healthy, and so on. If we try to understand everything at once, there is no hope of understanding anything meaningfully, as any result can be attributed to a million causes and nothing can be definitively proven.
Economists resolve this by building a simpler set of rules - taking some things as given, some things as irrelevant - rules which are both logically plausible in their own right and which reproduce the relationships we can see in the real world.
IMPORTANTLY, a ‘model’ is not a description of how the world is, a great example being the utility-maximiser model. A common assumption is that an agent (a person, in effect) in a model has perfect information, derives a numerical value of ‘utility’ from consuming goods, and faces some kind of budget constraint, subject to which they try to maximise their utility.
This is not how I buy things, and I’m sure it is not how you do either. And it’s not meant to be! But we make this assumption because it is a much simpler way of seeing how decisions get made, and we can use it to understand the way real people actually decide.
Suppose you go to the shops and buy an apple, a chocolate bar and some other goods. You had a variety of complex and inscrutable reasons for this: feeling a craving, feeling guilt for the craving and wanting to be healthy, buying laundry detergent because it smells like the one you grew up with (which has been shown to be a real determinant of which laundry detergent people buy).
Now, imagine a ‘utility bot’ does the same shop with the same budget and has a set of arbitrarily assigned utilities associated with each product in the store, which leads it to make the exact same purchases as you do.
Now suppose the prices changed a little: maybe pears were a little cheaper, or your chocolate bar was a little more expensive. You might make different choices here, or you might not. You might decide that you just really want an apple, even if a pear would be cheaper, or you might switch.
The utility bot might make different choices as well. If the utility bot made the same changes you did, maybe we can say it models your preferences well, in the sense that if we want to know how you would probably react to a change in price, we could just plug it into the utility bot and have a pretty good guess.
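The ‘utility bot’ thought experiment can be sketched in a few lines of code. This is a minimal illustration, not anyone’s actual model: the goods, utility numbers, prices and budget below are all arbitrarily chosen, and the bot simply brute-forces the affordable bundle with the highest total utility, then repeats the shop after a price change.

```python
from itertools import product

# Hypothetical utilities, prices and budget - illustrative numbers only.
utility = {"apple": 5, "pear": 4, "chocolate": 6}
prices = {"apple": 0.50, "pear": 0.60, "chocolate": 1.00}
budget = 2.00

def best_bundle(prices, budget, max_qty=4):
    """Brute-force search for the affordable bundle with the highest total utility."""
    goods = list(utility)
    best, best_u = {}, -1.0
    for qtys in product(range(max_qty + 1), repeat=len(goods)):
        cost = sum(q * prices[g] for q, g in zip(qtys, goods))
        u = sum(q * utility[g] for q, g in zip(qtys, goods))
        if cost <= budget and u > best_u:
            best, best_u = dict(zip(goods, qtys)), u
    return best

print(best_bundle(prices, budget))
# Rerun the same shop with cheaper pears to see whether the bot switches:
print(best_bundle({**prices, "pear": 0.40}, budget))
```

If the bot’s switches match yours, we would say it models your preferences well in exactly the sense described above.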
This gets a lot easier when you consider that we are trying to estimate the decisions of the average person. Say that instead of just you, 100 people went to shop in the same store, all with their own unique foibles, interests and variances. Take the average of their purchases, and fit a utility function which produces the same results. If there were a slight price change, we would expect the results of the function and of the 100 people to be far more similar, as the variances and eccentricities would be more likely to cancel out, leaving only the response to price (which is exactly what the utility function calculates). If we talked about the average of a thousand people, or a million, then the utility function starts to look like a very reliable model of behaviour indeed.
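The cancelling-out of eccentricities is easy to simulate. In this sketch (the demand curve and noise level are made up for illustration), each shopper’s demand is a shared response to price plus personal noise; averaging over more shoppers leaves the shared response behind.

```python
import random

random.seed(0)

def average_demand(n_shoppers, price):
    """Each shopper's demand is a common response to price plus personal noise;
    averaging over many shoppers cancels the noise, leaving the price response."""
    base = 10 - 4 * price  # the shared, price-driven part of demand
    demands = [base + random.gauss(0, 3) for _ in range(n_shoppers)]
    return sum(demands) / n_shoppers

# At price 1.5 the 'true' average demand is 4; bigger crowds get closer to it.
for n in (1, 100, 10_000):
    print(n, round(average_demand(n, price=1.5), 2))
```

With one shopper the noise dominates; with ten thousand, the average sits very close to the underlying price response.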
This is a defence of the model of utility maximisation, but more generally I am trying to express that models are necessary and valuable in economics (and many other disciplines). There is so much going on, with so many different possible causal factors, that isolating the cause or consequence of anything in the real world is extremely difficult. But if we make a simple, comprehensible model with relationships we understand, and it resembles the real-world data, then we can learn something about what might actually matter within the complex reality.
OK, we should now (hopefully) be convinced that using models is a good way of learning things. So, what is a model?
And more importantly what should it do? A model is, at its simplest, a ‘set of assumptions’, assumptions being things we assert to be true. For example: ‘all cats are orange’ and ‘all orange things have one braincell’ is a model, and we can even draw conclusions from it (all cats have one braincell). Obviously, this isn’t a very good model, but it is a model.
Usually, in economics we deal with models mathematically. For example, we may say that the real interest rate is the nominal interest rate minus the inflation rate or write an equation to explain the impact of capital and labour on output. These are still assumptions and behave in the exact same way as the simpler assumptions above. We design a set of claims about the world and see what conclusions we would have to draw if these things were true.
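The two mathematical assumptions just mentioned can be written down directly. The subtraction rule is the Fisher approximation; for capital and labour, a Cobb-Douglas production function is one common choice (the parameters A and alpha below are illustrative, not from the essay).

```python
def real_rate(nominal, inflation):
    """Fisher approximation: real interest rate = nominal rate - inflation rate."""
    return nominal - inflation

def output(capital, labour, A=1.0, alpha=0.3):
    """A Cobb-Douglas production function - one common way of modelling
    the impact of capital and labour on output. A and alpha are illustrative."""
    return A * capital ** alpha * labour ** (1 - alpha)

print(real_rate(0.05, 0.02))           # 5% nominal, 2% inflation
print(output(capital=100, labour=50))
```

Both functions are assumptions in exactly the earlier sense: claims asserted to be true, from which conclusions can then be drawn.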
From here we can test the conclusions we draw from the model against what we see in the real world. If there is a strong resemblance (what we’d expect to see in the model is what we actually see in the real world), we may be able to do two things.
Firstly, we can gain insight about what actually is causative in the real world. For example, if a model which solely assumes economic growth is mathematically related to technological changes is mostly right about the trajectory of economic growth, then that tells us that technological change is probably an important factor which needs to be included in future models.
If the model is very close to being correct (or even just the closest we have to being correct) we can even use it to predict things about the real world (if there were a technological breakthrough tomorrow, what would it do to the economy?).
However, models run into two very big problems.
First, if you are trying to get a sense of which factors are important through a model, it isn’t sufficient to merely observe a strong correlation. If factors are associated, even very strongly so, this doesn’t mean you have shown causality.
For example, perhaps two factors which seem interrelated are both just consequences of a third factor, or the one which you expect to cause the other is in fact the caused factor, or causality does exist, but only under a specific set of circumstances which you don’t recognise. It is even possible for the whole connection to be purely random chance, as economics is so big, with so many factors, that you are virtually guaranteed to see some apparent correlation even by sheer chance.
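That last point - that apparent correlations appear by sheer chance - can be checked with a quick simulation using made-up data: generate many pairs of completely independent random series and look at the strongest correlation among them.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 1,000 pairs of completely independent 20-point random series:
series = [[random.random() for _ in range(20)] for _ in range(2000)]
strongest = max(abs(corr(series[2 * i], series[2 * i + 1])) for i in range(1000))
print(round(strongest, 2))  # some pair looks strongly correlated by pure chance
```

None of these series has anything to do with any other, yet the best-looking pair would pass for a genuine relationship - which is why resemblance to data is not enough.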
This means economic models can’t just be justified by resembling the world, they must also stand on their own logic. If we make a model which states that X is caused by Y, or even that X is correlated with Y, we cannot just point to past examples of X and Y being correlated, no matter how numerous they are. Instead, we must also provide an explanation for why this model makes sense, else we run the risk of just matching variables which appear correlated without actually considering the interconnection between them.
Suppose we have done that and have created a model which links concepts in a sensible and well-justified way, and now we want to use it to predict the future with confidence. Unfortunately, this runs into our second problem: there is a fundamental trade-off between the simplicity of a model and what it can explain.
We make models because we want to capture what we care about in the world simply, but as a model grows more complete, it grows more complex, with more things needing to be considered and included. A simple elegant model, which explains a concept directly and clearly (and so is explainable and actionable) likely doesn’t explain the full situation properly, and a model that includes enough to truly capture all that is actually happening is probably too complex to really aid understanding. Thus, model making in economics is a balancing act of explaining things simply enough to be clear, and complexly enough to be true.
The upshot of this is simply that economists think with models to understand and explain the world, and that models can represent complex topics simply and accurately. This means we can grasp concepts and solve problems by working out what is and is not significant to complex economic issues. However, a model is, by its nature, never a complete representation of the real world, and every model is worth looking at and asking questions about: what does it assume, and what does it leave out?
The final part of this little essay is ‘why should I care’, but I think this is an easy question. Models are used by people to solve real problems which affect you: the Bank of England consults models to set interest rates, the government consults models to promote growth. If these models are wrong, then the mistakes people make because of them will massively affect you. In addition, even if the models aren’t wrong, it is dangerous not to understand them. Economic models should not be an incomprehensible dogma; if they are, people will accept unnecessary and unfair hardship with the conviction that the model can’t be wrong, or fall into conspiracism with the conviction that economists are just making it up. To be a good citizen and to take full advantage of the democracy we live in, I believe it is necessary to understand the rules which govern us and to form opinions on them, and that applies to economics too.
I would like to thank my economics professor, James Forder, for explaining this to me, and my reader (if you have bothered to get this far), for trying to understand.
Ian Chakravarti
our braincell