Dr. Roee Sarel

Do interaction terms in non-linear models have an average marginal effect?

Updated: Oct 13, 2018

The short answer is "no". The long answer is: no, but nonetheless many researchers get confused because one can trick statistics software into producing a (wrong) statistic.

 

Disclaimer: This post relies on an excellent paper by Richard Williams (Williams, Richard. "Using the margins command to estimate and interpret adjusted predictions and marginal effects." Stata Journal 12.2 (2012): 308.), which I recommend to anyone working with Stata.


In the last few months, I have come across several researchers who were struggling with the analysis of interaction terms in non-linear models. As the coefficients of such models are indeed tricky to interpret, the straightforward solution is to calculate marginal effects, i.e. the change in the (predicted) outcome given a (discrete) change in the independent variable.


In linear models this is fairly simple. Suppose, for example, that one runs an experiment testing the binary treatment T on the continuous outcome Y. The researcher hypothesizes that T positively impacts Y, but that the effect is stronger for females than for males. Therefore, the researcher constructs the following regression model:


Y = B0 + B1*T + B2*Female + B3*(T x Female) + E


where Y is the outcome, B are coefficients (B0 is the intercept), T is a treatment dummy (taking 1 if the subject is treated and 0 otherwise), Female is a gender dummy (taking 1 if the subject is female and 0 otherwise), and E is the error term.


Side note: It is good practice to include not only the interaction term (in this case "T x Female"), but also the interacted variables themselves (in this case, T and Female) - otherwise interpretation is confounded.
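In Stata, the factor-variable operator "##" takes care of this automatically. A minimal sketch (the variable names y, T, and Female are assumptions, not taken from an actual dataset):

regress y i.T##i.Female   // expands to i.T, i.Female, and i.T#i.Female in a single step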


How do we interpret these four B's in the linear model?

- B0 is the intercept, i.e. it is the expected outcome when all the other included variables are equal to zero. In other words, B0 is the reference category - which, in this example, is the average outcome among those who are Male (Female=0) and Untreated (T=0).

- B1 is the addition to the outcome for Males (Female=0) who are treated (T=1).

- B2 is the addition to the outcome for Females (Female=1) who are untreated (T=0).

- B3 is the addition to the outcome for Females (Female=1) who are treated (T=1), above and beyond the other additions. Note: this addition occurs only if T=1 and Female=1, in which case all three additions (B1, B2, B3) are added - see the breakdown below.
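Putting the four cases together, the expected outcome in each group is:

E[Y | Male, Untreated] = B0

E[Y | Male, Treated] = B0 + B1

E[Y | Female, Untreated] = B0 + B2

E[Y | Female, Treated] = B0 + B1 + B2 + B3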


Hence, in linear models, looking at the coefficients already tells us the whole story - we don't really need marginal effects (at least when using binary variables; with continuous variables, I recommend looking at average marginal effects nonetheless).


How do we interpret these four B's in a non-linear model?


Suppose instead that Y is a binary outcome, and therefore a probit/logit model is used to test the (slightly adjusted) regression model:


P(Y=1 | T, Female) = G( B0 + B1*T + B2*Female + B3*(T x Female) )

where G is the link function - the standard normal CDF for a probit and the logistic CDF for a logit (note that, unlike the linear model, there is no additive error term here).


After we run the regression, ideally we would want to use the same kind of interpretation as above. This can be done using marginal effects. For simplicity, I avoid the discussion of how exactly these are calculated (there is also some debate about whether the method used in Williams (2012) above is appropriate, or whether the Ai & Norton (2003) method [see also the Stata-Journal version] should be used instead).


Generally speaking, calculating marginal effects requires predicting the outcome given different values assigned to each variable. For example, suppose we want to calculate the marginal effect of the treatment T - this can be done using one of three methods (see the Williams (2012) paper above; Stata commands for all three are sketched after the list):


- Marginal effects at the means (MEM): We set the other variables (in this case, Female) to their means. For example, if the sample is 55% female, we set Female to 0.55. Then we calculate:

MEM = Pr(Y=1 | T=1, Female=0.55) - Pr(Y=1 | T=0, Female=0.55).


- Average marginal effects (AME): For each observation, we contrast two predictions - one as-is and one with the variable of interest changed. For example, for an observation of an untreated female (T=0, Female=1), we artificially set T=1 and take the difference in predictions. We do this for each observation and then average the differences.


- Marginal effects at representative values (MER): We set the other variables to pre-defined values of interest and follow the same process as for the MEM. For example, suppose we want to know the effect of T for females - then we calculate:

MER = Pr(Y=1 | T=1, Female=1) - Pr(Y=1 | T=0, Female=1).

And if we want to do the same for males, we calculate:

MER = Pr(Y=1 | T=1, Female=0) - Pr(Y=1 | T=0, Female=0).
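In Stata, all three quantities can be obtained with the margins command after estimating the model with factor-variable notation. A minimal sketch (again assuming the variables are named y, T, and Female):

probit y i.T##i.Female
margins, dydx(T) atmeans             // MEM: Female held at its sample mean
margins, dydx(T)                     // AME: differences averaged over all observations
margins, dydx(T) at(Female=(0 1))    // MER: effect of T for males and for females separately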


How does this relate to the interaction term?

Now comes the confusing part - what do we do when there is an interaction term? The answer is: exactly the same, with one small difference - when we plug a value into a variable, we also plug that value into the interaction term. For example, if we calculate the MEM (with Female=0.55), the equation used for prediction is:


Pr(Y=1 | T, Female=0.55) = G( B0 + B1*T + B2*0.55 + B3*(T x 0.55) )
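Writing out the MEM of T explicitly using this prediction equation gives:

MEM = G( B0 + B1 + B2*0.55 + B3*0.55 ) - G( B0 + B2*0.55 )

where the first term is the prediction at T=1 and the second is the prediction at T=0.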


Which brings us to the question posed in the title: does an interaction term have a marginal effect? I already pointed out that the answer is no, and now we can see why: the interaction term is used for calculating the marginal effects (because we substitute values into that term as well) but does not, on its own, have a "marginal effect".


So why does Stata sometimes produce a marginal effect for the interaction nonetheless?

Again, the short answer is: it shouldn't, unless the software is tricked. The long answer is that the researcher should be careful with the model specification. To see this, let us compare the correct and the incorrect specification:


The correct specification:

probit y i.T##i.Female [or, alternatively: probit y i.T i.Female i.T#i.Female]

Note that I use the prefix i. before each variable and indicate the interaction explicitly with "#" in between (or "##", which includes both the variables separately and their interaction).

The incorrect specification:

Artificially creating an interaction variable, e.g. FemT = Female*T, and then running the command: probit y T Female FemT


If we run any margins command following the correct specification, we will not get a marginal effect for the interaction term (which is correct!). If we run the incorrect specification, we will get a number, but it will be wrong - it is only produced because the software does not know that "FemT" is an interaction term.
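A minimal sketch of this contrast (same assumed variable names as above):

* Correct: Stata knows that T#Female is an interaction
probit y i.T##i.Female
margins, dydx(*)    // reports marginal effects for T and Female only

* Incorrect: FemT looks like an ordinary regressor to Stata
gen FemT = Female*T
probit y T Female FemT
margins, dydx(*)    // also reports a (meaningless) number for FemT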


Summing up:


1. Interaction terms are used for calculating marginal effects, but do not have their own marginal effect. This makes sense, because we are usually interested in the effect of the interacted variables (e.g. how does the treatment affect the outcome, taking into account that the effect may differ between females and males) and not in the interaction itself (e.g. how being a treated female impacts the outcome).


2. Stata accordingly does not produce a marginal effect if the specification is correct.


3. Stata will produce some incorrect number if we trick the software using an incorrect specification.






