| Variable | F1 | F2 | F3 |
| --- | --- | --- | --- |
| X1: Convenient location | 0.954 | -0.234 | -0.236 |
| X2: Near home | 0.942 | 0.254 | 0.325 |
| X3: Value for money | 0.251 | 0.723 | -0.221 |
| X4: Attractive promotions | 0.124 | 0.884 | -0.251 |
| X5: Low prices | -0.132 | 0.952 | 0.122 |
| X6: Easy to locate items | 0.114 | 0.231 | 0.945 |
| X7: Good service | -0.122 | 0.341 | 0.789 |
| X8: Ease of parking | 0.181 | -0.332 | 0.678 |
| X9: Efficient checkouts | 0.238 | 0.102 | 0.988 |
Exhibit 17.31   Factor analysis summarizes variables into factors. Example — 
    supermarket shopping attributes.
Factor analysis is a generic term referring to a class of statistical methods 
    for investigating whether a number of variables of interest are linearly related to a smaller 
    number of unobservable factors. 
The prime objective of this interdependence technique in marketing models (e.g.,
    models for brand equity and customer satisfaction) is to simplify the data. Based on patterns
    in the data, the technique summarizes numerous variables into a few factors.
For example, the nine (n = 9) variables (attributes) in Exhibit 17.31
    are summarized as three (k = 3) factors. It is assumed that each variable (X1,
    X2 … Xn) is linearly related to the factors (F1,
    F2 … Fk) as shown below:
$$ X_1 = \beta_{10}+\beta_{11}F_1+\beta_{12}F_2+ \dots +\beta_{1k}F_k + e_1 $$
$$ X_2 = \beta_{20}+\beta_{21}F_1+\beta_{22}F_2+ \dots +\beta_{2k}F_k + e_2 $$
$$ \vdots $$
$$ X_n = \beta_{n0}+\beta_{n1}F_1+\beta_{n2}F_2+ \dots +\beta_{nk}F_k + e_n $$
The error terms e1, e2 … en indicate that
    these relationships are not exact.
The parameters βij are referred to as loadings; for example,
    β11 is the loading of variable X1 on factor
    F1.
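For instance, reading off the first row of Exhibit 17.31 (and leaving the intercept β10 unspecified, since the exhibit does not report it), the equation for 'convenient location' is:
$$ X_1 = \beta_{10} + 0.954\,F_1 - 0.234\,F_2 - 0.236\,F_3 + e_1 $$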
For mathematical convenience, it is assumed that the factors are in standardized
    form, i.e., E(Fj) = 0 and Var(Fj) = 1, and that the factors are uncorrelated with one another
    and with the error terms. With these assumptions, the variance of Xi may be computed as:
$$ Var(X_i) = \beta^2_{i1}Var(F_1)+ \beta^2_{i2}Var(F_2) + \dots + \beta^2_{ik}Var(F_k) + Var(e_i) $$
$$ Var(X_i) = \sum_{j=1}^k \beta^2_{ij} + Var(e_i) $$
The portion of the variance that is explained by the common factors,
    ∑βij² (summed over j = 1 … k), is called the communality of the variable.
    The greater the communality, the better the postulated model explains the
    variable.
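For instance, using the loadings of X3 ('value for money') in Exhibit 17.31, its communality works out to roughly:
$$ 0.251^2 + 0.723^2 + (-0.221)^2 \approx 0.063 + 0.523 + 0.049 \approx 0.63 $$
so about 63% of the variance in 'value for money' is accounted for by the three factors, with the remainder attributed to the error term.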
Factor analysis methods, such as principal component analysis, seek values of the
    loadings that bring the estimate of the total communality as close as possible to the total of
    the observed variances.
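As a rough illustration, the sketch below fits a three-factor model to simulated rating data using scikit-learn's FactorAnalysis. This is only an assumption about tooling; the analysis behind Exhibit 17.31 could equally have used principal component analysis or another package, and the simulated loadings here are illustrative, not the survey data.

```python
# A minimal sketch of estimating factor loadings, assuming Python with
# NumPy and scikit-learn; the data are simulated ratings, not the survey
# behind Exhibit 17.31.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate n = 300 respondents with k = 3 standardized latent factors, then
# generate 9 observed attributes (X1 ... X9) as linear combinations of the
# factors plus noise, mirroring X_i = sum_j beta_ij * F_j + e_i.
n_respondents, n_factors = 300, 3
F = rng.normal(size=(n_respondents, n_factors))   # standardized factors
true_loadings = np.array([
    [0.9, 0.1, 0.1],   # X1, X2 load mainly on F1
    [0.9, 0.2, 0.1],
    [0.2, 0.8, 0.1],   # X3-X5 load mainly on F2
    [0.1, 0.8, 0.1],
    [0.1, 0.9, 0.1],
    [0.1, 0.2, 0.9],   # X6-X9 load mainly on F3
    [0.1, 0.2, 0.8],
    [0.1, 0.1, 0.7],
    [0.1, 0.1, 0.9],
])
X = F @ true_loadings.T + rng.normal(scale=0.4, size=(n_respondents, 9))

# Fit a 3-factor model and inspect the estimated loadings; each row of
# fa.components_.T gives one observed variable's loadings on F1, F2, F3.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(X)
print(np.round(fa.components_.T, 2))
```

In practice, the estimated loadings would then be rotated and inspected to group variables under factors, as in Exhibit 17.32.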
Exhibit 17.32   Variables grouped according to the factors that they define.
Variables with high loadings help define the factor. For instance, as seen in
    Exhibit 17.31, the variables 'value for money', 'attractive
    promotions' and 'low prices' move in concert and are associated most strongly with
    F2. Variables that define the same factor are usually grouped under their
    respective factors, as shown in Exhibit 17.32.
Since the loadings can be interpreted like standardized regression coefficients, a
    factor loading is also the correlation between the variable and the factor (provided the
    factors are uncorrelated). The variable 'convenient
    location', for instance, has a correlation of 0.954 with factor F1.
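To see why, note that under the standardization assumptions above, and further assuming that Xi is itself standardized so that Var(Xi) = 1 (an assumption made here for illustration), the correlation reduces to the loading:
$$ Corr(X_i, F_j) = \frac{Cov(X_i, F_j)}{\sqrt{Var(X_i)\,Var(F_j)}} = \frac{\beta_{ij}\,Var(F_j)}{\sqrt{1 \times 1}} = \beta_{ij} $$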
There is often some common meaning among the variables that define a factor.
    Factor naming is a subjective process that combines an understanding of the market with an
    inspection of the variables that define the factor. For instance, in Exhibit 17.32, factor F1
    has been labelled 'location', since the variables that define it allude to the proximity of the store.