GLOSSARY

**Addition of matrices** (See Matrix)

If `A` and `B` are matrices with the same dimensions, and the entries of `A` are `a_(ij)` and the entries of `B` are `b_(ij)`, then the entries of `A + B` are `a_(ij) + b_(ij)`.

For example, if `A = [(2,1),(0,3),(1,4)]` and `B = [(5,1),(2,1),(1,6)]`, then `A + B = [(7,2),(2,4),(2,10)]`.
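The entrywise rule can be sketched in Python (the helper name `mat_add` is illustrative, with matrices represented as lists of rows):

```python
# Entrywise addition of two matrices with the same dimensions,
# represented as lists of rows.
def mat_add(A, B):
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[2, 1], [0, 3], [1, 4]]
B = [[5, 1], [2, 1], [1, 6]]
print(mat_add(A, B))  # [[7, 2], [2, 4], [2, 10]]
```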

**Angle sum and difference identities**

`sin(A + B) = sinAcosB + sinBcosA`

`sin(A - B) = sinAcosB - sinBcosA`

`cos(A + B) = cosAcosB - sinBsinA`

`cos(A - B) = cosAcosB + sinBsinA`

**Arithmetic sequence**

An arithmetic sequence is a sequence of numbers such that the difference of any two successive members of the sequence is a constant. For instance, the sequence `2, 5, 8, 11, 14, 17, ...` is an arithmetic sequence with common difference 3.

If the initial term of an arithmetic sequence is a and the common difference of successive members is `d`, then the `n^(th)` term, `t_n`, of the sequence is given by `t_n = a + (n - 1)d` where `n >= 1`.

A recursive definition is `U_n = U_(n - 1) + d` where `n >= 2` and `U_1 = a`.
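The closed form can be checked with a short Python sketch (the helper name `arith_term` is ours):

```python
# nth term of an arithmetic sequence: t_n = a + (n - 1)d, n >= 1.
def arith_term(a, d, n):
    return a + (n - 1) * d

# The sequence 2, 5, 8, 11, 14, 17, ... has a = 2 and d = 3.
terms = [arith_term(2, 3, n) for n in range(1, 7)]
print(terms)  # [2, 5, 8, 11, 14, 17]
```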

**Arithmetic series**

An arithmetic series is the sum of an arithmetic sequence `S_n = U_1 + U_2 + U_3 + ... + U_n = sum_(r=1)^nU_r`.

The infinite series is given by `S_(oo) = U_1 + U_2+U_3+ ... = sum_(r=1)^(oo)U_r`. This can be found by evaluating `lim_(n -> oo)S_n`.

**Argument (abbreviated Arg)**

If a complex number `z` is represented by a point `P` in the complex plane, then the argument of `z`, denoted `Arg(z)`, is the angle `theta` that `OP` makes with the positive real axis `O_x`, with the angle measured anticlockwise from `O_x`. The principal value of the argument is the one in the interval `(-pi, pi]`.

**Complex arithmetic**

If `z_1 = x_1 + y_1i` and `z_2 = x_2 + y_2i`, then:

- `z_1 + z_2 = (x_1 + x_2) + (y_1 + y_2)i`
- `z_1 - z_2 = (x_1 - x_2) + (y_1 - y_2)i`
- `z_1 × z_2 = (x_1x_2 - y_1y_2) + (x_1y_2 + x_2y_1)i`
- `z_1 × (0 + 0i) = 0`
*Note*: `0 + 0i` is usually written as `0`
- `z_1 × (1 + 0i) = z_1`
*Note*: `1 + 0i` is usually written as `1`
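Python's built-in `complex` type implements exactly these rules; a minimal check (small integer parts keep the arithmetic exact):

```python
z1, z2 = 2 + 3j, 1 - 1j

assert z1 + z2 == 3 + 2j   # (x1 + x2) + (y1 + y2)i
assert z1 - z2 == 1 + 4j   # (x1 - x2) + (y1 - y2)i

# Product rule: (x1*x2 - y1*y2) + (x1*y2 + x2*y1)i
x1, y1, x2, y2 = 2.0, 3.0, 1.0, -1.0
assert z1 * z2 == complex(x1*x2 - y1*y2, x1*y2 + x2*y1)  # 5 + 1j

assert z1 * 0j == 0          # z1 * (0 + 0i) = 0
assert z1 * (1 + 0j) == z1   # z1 * (1 + 0i) = z1
```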

**Complex conjugate**

For any complex number `z = x + yi`, its conjugate is `barz = x - yi`. The following properties hold:

- `bar(z_1.z_2) = bar(z_1). bar(z_2)`
- `bar((z_1"/"z_2)) = bar(z_1)"/"bar(z_2)`
- `z.barz = |z|^2`
- `z + barz` is real
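In Python, `conjugate()` gives `barz`; a quick check of these properties, including the fact that `z` times its conjugate equals `|z|^2`:

```python
z = 3 + 4j
zbar = z.conjugate()              # 3 - 4j

assert z * zbar == abs(z) ** 2    # z times conj(z) is |z|^2 = 25
assert (z + zbar).imag == 0       # z + conj(z) is real (here 6)

w = 1 - 2j
assert (z * w).conjugate() == zbar * w.conjugate()  # conj(z1 z2) = conj(z1) conj(z2)
```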

**Complex plane (Argand plane)**

The complex plane is a geometric representation of the complex numbers established by the real axis and the orthogonal imaginary axis. The complex plane is sometimes called the *Argand* plane.

**Convergent Sequences**

A sequence `{a_n}` converges to a finite number `L` if given `epsilon > 0`, however small, there exists `N` (dependent on `epsilon`) such that `|a_n - L| < epsilon`, provided that `n > N`.

`lim_(n->oo) a_n = L`.

**Cosine and Sine functions**

Since each angle `theta` measured anticlockwise from the positive x-axis determines a point `P` on the unit circle, we will define:

- the cosine of `theta` to be the x-coordinate of the point `P`
- the sine of `theta` to be the y-coordinate of the point `P`
- the tangent of `theta` to be the gradient of the line segment `OP`

**De Moivre’s Theorem**

For all integers `n`, `(costheta + isintheta)^n = cosntheta + isinntheta`
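A numerical spot-check of the theorem (an illustrative sketch, not a proof):

```python
import math

theta, n = 0.7, 5
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
assert abs(lhs - rhs) < 1e-12  # the two sides agree to floating-point accuracy
```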

**Determinant of a 2 x 2 matrix**

If `A = [(a,b),(c,d)]`, the determinant of `A` is denoted by `detA = ad - bc`.

If `detA != 0`,

- the matrix `A` has an inverse
- the simultaneous linear equations `ax + by = e` and `cx + dy = f` have a unique solution
- the linear transformation of the plane, defined by `A`, maps the unit square, `O (0, 0), B (0,1), C(1, 1), D (1,0)`, to a parallelogram `OB'C'D'` of area `|detA|`
- the sign of the determinant determines the orientation of the image of a figure under the transformation defined by the matrix.
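The determinant and the unique-solution property can be sketched in Python (the helper names `det2` and `solve2` are ours; `solve2` uses Cramer's rule):

```python
# Determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

# Unique solution of ax + by = e, cx + dy = f when det != 0
# (Cramer's rule).
def solve2(a, b, c, d, e, f):
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

assert det2([[2, 1], [1, 3]]) == 5
x, y = solve2(2, 1, 1, 3, 5, 10)
assert (x, y) == (1.0, 3.0)  # 2x + y = 5 and x + 3y = 10
```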

**Dimension (or Size) of a matrix**

Two matrices are said to have the same dimensions (or size) if they have the same number of rows and columns.

For example, the matrices `[(1,8,0),(2,5,7)]` and `[(3,4,5),(6,7,8)]` have the same dimensions. They are both `2×3` matrices.

An `m×n` matrix has `m` rows and `n` columns.

**Difference Method (or Method of Differences)**

The method of differences can be used to determine the sum of some ‘special series’ when we are not given a formula for the sum.

For example, since `r^2 - (r - 1)^2 = 2r - 1`, summing both sides from `r = 1` to `n` telescopes on the left to give `n^2 = sum_(r=1)^n (2r - 1) = 2 sum_(r=1)^n r - n`, and hence `sum_(r=1)^n r = (n(n + 1))/2`.
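The telescoping step can be verified directly with a small Python check:

```python
n = 10

# Each term r^2 - (r - 1)^2 cancels against its neighbour,
# so the sum collapses ("telescopes") to n^2.
assert sum(r*r - (r - 1)*(r - 1) for r in range(1, n + 1)) == n * n

# Since r^2 - (r - 1)^2 = 2r - 1, it follows that sum r = n(n + 1)/2.
assert sum(range(1, n + 1)) == n * (n + 1) // 2
```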

**Divergent Sequences**

A sequence `{a_n}` diverges to `oo`, if given `K > 0`, however great, there exists `N` (dependent on `K` ) such that `a_n > K`, provided `n > N`.

`lim_(n-> oo) a_n = oo`

**Double angle formulae**

`sin2A = 2sinAcosA`

`cos2A = cos^2A - sin^2A = 2cos^2A - 1 = 1 - 2sin^2A`

`tan2A= (2tanA)/(1 - tan^2A)`

**Entries (Elements) of a matrix**

The symbol `a_(ij)` represents the `(i, j)` entry which occurs in the `i^(th)` row and the `j^(th)` column.

For example, a general `3×2` matrix is `[(a_(11),a_(12)),(a_(21),a_(22)),(a_(31),a_(32))]`, and `a_(32)` is the entry in the third row and the second column.

**Geometric sequence**

A geometric sequence is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed number called the *common ratio*. For example, the sequence `6, 12, 24, ...` is a geometric sequence with common ratio `2`.

Similarly, the sequence `40, 20, 10, 5, 2.5, ...` is a geometric sequence with common ratio `1/2`.

If the initial term of a geometric sequence is `a`, and the common ratio of successive members is `r`, then the `n^(th)` term, `t_n`, of the sequence is given by `t_n = ar^(n - 1)` for `n >= 1`.

A recursive definition is `U_n = rU_(n-1)` where `n >= 2` and `U_1 = a`.
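The closed form can be checked in Python (the helper name `geom_term` is illustrative):

```python
# nth term of a geometric sequence: t_n = a * r^(n - 1), n >= 1.
def geom_term(a, r, n):
    return a * r ** (n - 1)

assert geom_term(6, 2, 3) == 24       # 6, 12, 24, ...
assert geom_term(40, 0.5, 5) == 2.5   # 40, 20, 10, 5, 2.5, ...
```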

**Geometric series**

A geometric series is the sum of a geometric sequence `S_n = U_1 + U_2 + U_3 + ... + U_n = sum_(r=1)^n U_r`

`S_n = (a(1 - r^n))/((1 - r))` where `r != 1`.

The infinite series `S_(oo) = U_1 + U_2 + U_3 + ... = sum_(r=1)^(oo) U_r` can be found by evaluating `lim_(n->oo)S_n`.

`S_(oo) = a/(1 - r)`, provided `|r| < 1`.
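Both formulas can be spot-checked numerically (the helper name `geom_sum` is ours):

```python
# Partial sum S_n = a(1 - r^n)/(1 - r), valid for r != 1.
def geom_sum(a, r, n):
    return a * (1 - r ** n) / (1 - r)

direct = 40 + 20 + 10 + 5 + 2.5
assert abs(geom_sum(40, 0.5, 5) - direct) < 1e-12

# As n -> oo with |r| < 1, r^n -> 0 and S_n -> a/(1 - r) = 80 here.
assert abs(geom_sum(40, 0.5, 60) - 40 / (1 - 0.5)) < 1e-9
```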

**Imaginary part of a complex number**

A complex number `z` may be written as `x + yi`, where `x` and `y` are real, and then `y` is the imaginary part of `z`. It is denoted by `Im(z)`.

**Implicit differentiation**

When variables `x` and `y` satisfy a single equation, this may define `y` as a function of `x` even though there is no explicit formula for `y` in terms of `x`. *Implicit differentiation* consists of differentiating each term of the equation as it stands and making use of the chain rule. This can lead to a formula for `dy/dx`. For example, if `x^2 + xy^3 - 2x + 3y = 0`, then `2x + x(3y^2)dy/dx + y^3 - 2 + 3dy/dx = 0`, and so `dy/dx = (2 - 2x - y^3)/(3xy^2 + 3)`.

**Inverse trigonometric functions**

The inverse sine function, `y = sin^(-1)x`

If the domain for the `"sine"` function is restricted to the interval `[-pi/2,pi/2]`, a one-to-one function is formed and so an inverse function exists.

The inverse of this restricted `"sine"` function is denoted by `sin^(-1)`, and is defined by `sin^(-1):[-1,1] -> R`, `sin^(-1)x = y` where`sin y = x`, `y in [-pi/2,pi/2]`.

`sin^(-1)` is also denoted by `"arcsin"`.

The inverse cosine function, `y = cos^(-1)x`

If the domain of the `"cosine"` function is restricted to `[0,pi]`, a one-to-one function is formed and so the inverse function exists.

`cos^(-1)x`, the inverse function of this restricted `"cosine"` function, is defined by `cos^(-1):[-1,1] -> R`, `cos^(-1)x = y` where `cosy = x`, `y in [0,pi]`.

`cos^(-1)` is also denoted by `"arccos"`.

The inverse tangent function, `y = tan^(-1)x`

If the domain of the `"tangent"` function is restricted to `(-pi/2,pi/2)`, a one-to-one function is formed and so the inverse function exists.

`tan^(-1):R -> R`, `tan^(-1)x = y`, where `tany = x`, `y in (-pi/2,pi/2)`

`tan^(-1)` is also denoted by `"arctan"`.

**Leading diagonal**

The leading diagonal of a square matrix is the diagonal which runs from the top left corner to the bottom right corner.

**Linear Transformation Defined by a 2x2 matrix**

The matrix multiplication `[(a,b),(c,d)][(x),(y)] = [(ax + by),(cx + dy)]` defines a transformation `T(x,y) = (ax + by, cx + dy)`.

**Linear Transformations in 2-dimensions**

A linear transformation in the plane is a mapping of the form `T(x,y) = (ax + by,cx + dy)`.

A transformation `T` is linear if and only if `T(alpha(x_1,y_1) + beta(x_2,y_2)) = alphaT(x_1,y_1) + betaT(x_2,y_2)` for all real numbers `alpha`, `beta` and all points `(x_1,y_1)`, `(x_2,y_2)`.

Linear transformations include:

- rotations around the origin
- reflections in lines through the origin
- dilations.

Translations are not linear transformations.

**Maclaurin series**

A Maclaurin series is a special power series expansion for `f(x)`.

`f(x) = f(0) + f'(0)x + (f''(0))/(2!)x^2 + (f^(3)(0))/(3!)x^3 + ... + (f^(n)(0))/(n!)x^n + ...`

It equals `f(x)` whenever the series converges. Some Maclaurin series converge for all real `x`, some converge only for a subset of values of `x`, and others converge only at `x = 0`.
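For example, the Maclaurin series for `e^x` converges for all real `x`; its partial sums can be computed directly (the helper name `maclaurin_exp` is illustrative):

```python
import math

# Partial sums of the Maclaurin series for e^x:
# 1 + x + x^2/2! + ... + x^(n-1)/(n-1)!
def maclaurin_exp(x, n_terms):
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

assert abs(maclaurin_exp(1.0, 20) - math.e) < 1e-12
assert abs(maclaurin_exp(-2.0, 40) - math.exp(-2.0)) < 1e-12
```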

**Matrix (matrices)**

A matrix is a rectangular array of elements or entries displayed in rows and columns. For example, `A = [(2,1),(0,3),(1,4)]` and `B = [(1,8,0),(2,5,7)]` are both matrices.

Matrix `A` is said to be a `3×2` matrix (three rows and two columns), while `B` is said to be a `2×3` matrix (two rows and three columns).

A *square matrix* has the same number of rows and columns.

A *column matrix* (or *vector*) has only one column.

A *row matrix* (or *vector*) has only one row.

**Matrix algebra of 2×2 matrices**

If `A`, `B` and `C` are `2×2` matrices, `I` the `2×2` (*multiplicative*) *identity matrix*, and `0` the `2×2` *zero matrix*, then:

`A + B = B + A` (commutative law for addition)

`(A + B) + C = A + (B + C)` (associative law for addition)

`A + 0 = A` (additive identity)

`A + (-A) = 0` (additive inverse)

`(AB)C = A(BC)` (associative law for multiplication)

`AI = A = IA` (multiplicative identity)

`A(B + C) = AB + AC` (left distributive law)

`(B + C)A = BA + CA` (right distributive law)

**Matrix multiplication**

*Matrix multiplication* is the process of multiplying a matrix by another matrix. The product `AB` of two matrices `A` and `B` with *dimensions* `m×n` and `p×q` is defined if and only if `n = p`. If it is defined, the product `AB` is an `m×q` matrix, and it is computed as shown in the following example.

`[(1,8,0),(2,5,7)][(6,10),(11,3),(12,4)] = [(94,34),(151,63)]`

The entries are computed as shown:

`1 × 6 + 8 × 11 + 0 × 12 = 94`

`1 × 10 + 8 × 3 + 0 × 4 = 34`

`2 × 6 + 5 × 11 + 7 × 12 = 151`

`2 × 10 + 5 × 3 + 7 × 4 = 63`

The entry in row `i` and column `j` of the product `AB` is computed by ‘multiplying’ row `i` of `A` by column `j` of `B` as shown.

If `A = [(a_(11),a_(12)),(a_(21),a_(22)),(a_(31),a_(32))]` and `B = [(b_(11),b_(12),b_(13)),(b_(21),b_(22),b_(23))]`, then

`AB = [(a_(11)b_(11) + a_(12)b_(21),a_(11)b_(12) + a_(12)b_(22),a_(11)b_(13) + a_(12)b_(23)),(a_(21)b_(11) + a_(22)b_(21),a_(21)b_(12) + a_(22)b_(22),a_(21)b_(13) + a_(22)b_(23)),(a_(31)b_(11) + a_(32)b_(21),a_(31)b_(12) + a_(32)b_(22),a_(31)b_(13) + a_(32)b_(23))]`
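The row-by-column rule can be sketched in Python (the helper name `mat_mul` is ours), reproducing the worked example above:

```python
# Product of an m x n and an n x q matrix: entry (i, j) is
# row i of A 'multiplied' by column j of B.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 8, 0], [2, 5, 7]]
B = [[6, 10], [11, 3], [12, 4]]
print(mat_mul(A, B))  # [[94, 34], [151, 63]]
```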

**Modulus (Absolute value) of a complex number**

If `z` is a complex number and `z = x + yi`, then the modulus of `z` is the distance of `z` from the origin in the Argand plane. The modulus of `z` is denoted by `|z| = sqrt(x^2 + y^2)`.

**Multiplication by a scalar**

Let `a` be a non-zero vector and `k` a positive real number (scalar), then the *scalar multiple* of `a` by `k` is the vector `ka` which has magnitude `|k||a|` and the same direction as `a`. If `k` is a negative real number, then `ka` has magnitude `|k||a|` but is directed in the opposite direction to `a`.

Some properties of scalar multiplication are:

`k(a + b) = ka + kb`

`h(k(a)) = (hk)a`

`1a = a`

**(Multiplicative) identity matrix**

A (*multiplicative*) *identity matrix* is a square matrix in which all the elements in the leading diagonal are 1s and the remaining elements are 0s. Identity matrices are designated by the letter `I`.

For example, `[(1,0),(0,1)]` and `[(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)]` are both identity matrices.

There is an identity matrix for each order of square matrix. When clarity is needed, the order is written as a subscript: `I_n`.

**Multiplicative inverse of a square matrix**

The inverse of a square matrix `A` is written as `A^(-1)` and has the property that `A A^(-1) = A^(-1)A = I`.

Not all square matrices have an inverse. A matrix that has an inverse is said to be *invertible*.

Multiplicative inverse of a `2×2` matrix:

The inverse of the matrix `A = [(a,b),(c,d)]` is `A^(-1) = 1/detA [(d,-b),(-c,a)]`, when `detA != 0`.
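The `2×2` formula can be sketched in Python (the helper name `inv2` is ours):

```python
# Inverse of [[a, b], [c, d]]: (1/det) * [[d, -b], [-c, a]],
# assuming det = ad - bc != 0.
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [1, 3]]  # det = 5
Ainv = inv2(A)
assert Ainv == [[0.6, -0.2], [-0.2, 0.4]]

# Check that A * A^(-1) = I.
prod = [[sum(x * y for x, y in zip(row, col)) for col in zip(*Ainv)]
        for row in A]
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```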

**Scalar multiplication (Matrices)**

*Scalar multiplication* is the process of multiplying a matrix by a scalar (number). For example, forming the product `10[(2,1),(0,3),(1,4)] = [(20,10),(0,30),(10,40)]` is an example of the process of scalar multiplication.

In general, for the matrix `A` with entries `a_(ij)`, the entries of `kA` are `ka_(ij)`.

**Polar form of a complex number**

For a complex number `z`, let `r = |z|` and `theta = arg(z)`. Then `z = r(costheta + isintheta)` is the polar form of `z`.
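Python's `cmath.polar` returns the modulus and principal argument; a quick check:

```python
import cmath, math

z = 1 + 1j
r, theta = cmath.polar(z)  # modulus and principal argument

assert abs(r - math.sqrt(2)) < 1e-12
assert abs(theta - math.pi / 4) < 1e-12
# Reassemble z = r(cos theta + i sin theta).
assert abs(r * (math.cos(theta) + 1j * math.sin(theta)) - z) < 1e-12
```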

**Power Series**

A *power series* is an *infinite series* `a_0 + a_1x + a_2x^2 + a_3x^3 + ...` where `a_0, a_1, a_2, a_3, ...` are real constants.

**Principle of mathematical induction**

Let there be associated with each positive integer `n`, a proposition `P(n)`.

If `P(1)` is true, and for all `k`, `P(k)` is true implies `P(k + 1)` is true, then `P(n)` is true for all positive integers `n`.

**Products as sums and differences**

`cosAcosB = 1/2[cos(A - B) + cos(A + B)]`

`sinAsinB = 1/2[cos(A - B) - cos(A + B)]`

`sinAcosB = 1/2[sin(A + B) + sin(A - B)]`

`cosAsinB = 1/2[sin(A + B) - sin(A - B)]`

**Pythagorean identities**

`cos^2A + sin^2A = 1`

`tan^2A + 1 = sec^2A`

`cot^2A + 1 = cosec^2A`

**Rational function**

A rational function is a function such that `f(x) = (g(x))/(h(x))`, where `g(x)` and `h(x)` are polynomials. Usually, `g(x)` and `h(x)` are chosen so as to have no common factor of degree greater than or equal to 1, and the domain of `f` is usually taken to be `{x in R: h(x) != 0}`.

**Real part of a complex number**

A complex number `z` may be written as `x + yi`, where `x` and `y` are real, and then `x` is the real part of `z`. It is denoted by `Re(z)`.

**Reciprocal trigonometric functions**

`secA = 1/(cosA)`, `cosA != 0`

`cosecA = 1/(sinA)`, `sinA != 0`

`cotA = cosA/sinA`, `sinA != 0`

**Root of unity**

The `n^(th)` roots of unity are the complex numbers `z` satisfying `z^n = 1`, namely `cos((2kpi)/n) + isin((2kpi)/n)`, where `k = 0, 1, 2, ..., n - 1`.

The points in the complex plane representing roots of unity lie on the unit circle.

The cube roots of unity are `z_1 = 1`, `z_2 = 1/2(-1 + isqrt3)`, `z_3 = 1/2(-1 - isqrt3)`.

*Note*: `z_3 = bar(z_2)`; `z_3 = 1/(z_2)`; and `z_2z_3 = 1`.
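The roots can be generated and checked numerically (the helper name `roots_of_unity` is ours, using `e^(i theta) = cos theta + i sin theta`):

```python
import cmath, math

# The n nth roots of unity: cos(2k*pi/n) + i sin(2k*pi/n), k = 0..n-1.
def roots_of_unity(n):
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

cube = roots_of_unity(3)
assert all(abs(z ** 3 - 1) < 1e-12 for z in cube)  # each satisfies z^3 = 1
assert all(abs(abs(z) - 1) < 1e-12 for z in cube)  # all lie on the unit circle
assert abs(cube[1] - complex(-0.5, math.sqrt(3) / 2)) < 1e-12  # z_2 = (1/2)(-1 + i sqrt 3)
```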

**Separation of variables**

Differential equations of the form `dy/dx = g(x)h(y)` can be rearranged, as long as `h(y) != 0`, to obtain `1/(h(y))dy/dx = g(x)`; integrating both sides with respect to `x` then solves the equation.

**Sigma Notation Rules**

`sum_(r=1)^n f(r) = f(1) + f(2) + f(3) + ... + f(n)`

`sum_(r=1)^n (f(r) + g(r)) = sum_(r=1)^n f(r) + sum_(r=1)^n g(r)`

`sum_(r=1)^n kf(r) = ksum_(r=1)^n f(r)`

**Singular matrix**

A square matrix `A` is singular if `detA = 0`. A singular matrix does not have a multiplicative inverse.

**Vector equation of a plane**

Let `a` be the position vector of a point `A` in the plane, and `n` a normal vector to the plane; then the plane consists of all points `P` whose position vector `p` satisfies `(p - a).n = 0`. This equation may also be written as `p.n = a.n`, where `a.n` is a constant.

(If the normal vector `n` is the vector `(l, m, n)` in ordered-triple notation and the scalar product `a.n = k`, this gives the Cartesian equation `lx + my + nz = k` for the plane.)

**Vector equation of a straight line**

Let `a` be the position vector of a point on a line `l`, and `u` any vector with direction along the line. The line consists of all points `P` whose position vector `p` is given by `p = a + tu` for some real number `t`.

(Given the position vectors `a` and `b` of two points on the line, the equation can be written as `p = a + t(b - a)` for some real number `t`.)

**Zero matrix**

A zero matrix is a matrix all of whose entries are zero. For example, `[(0,0,0),(0,0,0)]` and `[(0,0),(0,0)]` are zero matrices.

There is a zero matrix for each *size* of matrix.