Friday, June 25, 2010

Common Features in Economics and Finance

Giovanni Urga
Faculty of Finance, Centre for Econometric Analysis, Cass Business School, London EC1Y 8TZ, U.K.
This introductory article offers an overview of some developments in the common features literature since the publication of the seminal article by Engle and Kozicki in the Journal of Business & Economic Statistics in 1993, with the aim of highlighting the unifying theme of the contributions in this volume.

Thursday, June 24, 2010

Resampling methods in econometrics

Editors’ Introduction
Jean-Marie Dufour and Benoit Perron
CIRANO, CIREQ, and Département de sciences économiques, Université de Montréal, Canada
Available online 31 August 2005.

Thirty-five years of Journal of Econometrics

By Takeshi Amemiya
Department of Economics, Stanford University, Stanford, CA 94305-6072, USA
Available online 13 November 2008.

Monday, June 21, 2010

Matrix Algebra in R: Resources, Videos, Textbooks

A Brief History of Linear Algebra and Matrix Theory

Source: http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html
The introduction and development of the notion of a matrix and the subject of linear algebra followed the development of determinants, which arose from the study of coefficients of systems of linear equations. Leibniz, one of the two founders of calculus, used determinants in 1693, and Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer's Rule) in 1750. In contrast, the first implicit use of matrices occurred in Lagrange's work on bilinear forms in the late 1700s. Lagrange wanted to characterize the maxima and minima of multivariate functions; his method is now known as the method of Lagrange multipliers. To do this, he required the first-order partial derivatives to be 0 and, in addition, that a condition hold on the matrix of second-order partial derivatives; this condition is today called positive or negative definiteness, although Lagrange did not use matrices explicitly.
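As a quick illustration in R (in keeping with the matrix-algebra-in-R theme of this post), both ideas are easy to check numerically. The 2 x 2 system and the Hessian-like matrix below are made-up examples, not anything from the historical sources; base R's det(), solve() and eigen() do the work.

# Cramer's Rule on a made-up system A x = b
A <- matrix(c(2, 1,
              1, 3), nrow = 2, byrow = TRUE)
b <- c(5, 10)

x_cramer <- sapply(1:2, function(j) {
  Aj <- A
  Aj[, j] <- b               # replace column j by the right-hand side
  det(Aj) / det(A)           # Cramer's formula: x_j = det(A_j) / det(A)
})
x_solve <- solve(A, b)       # direct solution, for comparison
all.equal(x_cramer, as.vector(x_solve))   # TRUE

# Lagrange's second-order condition: a symmetric matrix is positive definite
# exactly when all of its eigenvalues are positive
H <- matrix(c(4, 1,
              1, 2), nrow = 2, byrow = TRUE)  # a made-up Hessian
all(eigen(H, symmetric = TRUE)$values > 0)    # TRUE, so a local minimum

Cramer's Rule is a useful way to state the solution, but elimination-based solvers are far more efficient once systems get beyond a handful of unknowns.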

Gauss developed Gaussian elimination around 1800 and used it to solve least squares problems in celestial computations and, later, in computations to measure the earth and its surface (the branch of applied mathematics concerned with measuring or determining the shape of the earth, or with locating points exactly on the earth's surface, is called geodesy). Even though Gauss' name is associated with this technique for successively eliminating variables from systems of linear equations, Chinese manuscripts from several centuries earlier have been found that explain how to solve a system of three equations in three unknowns by ''Gaussian'' elimination. For years Gaussian elimination was considered part of the development of geodesy, not mathematics. The first appearance of Gauss-Jordan elimination in print was in a handbook on geodesy written by Wilhelm Jordan. Many people incorrectly assume that the famous mathematician Camille Jordan is the Jordan in ''Gauss-Jordan'' elimination.
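To see what elimination actually does, here is a minimal R sketch on a made-up 3 x 3 system: forward elimination with no pivoting, followed by back-substitution. The helper gauss_solve is purely illustrative; in practice one would call base R's solve(), which hands the work to LAPACK.

# Naive Gaussian elimination (no pivoting, so nonzero pivots are assumed)
A <- matrix(c(2, 1, 1,
              4, 3, 3,
              8, 7, 9), nrow = 3, byrow = TRUE)
b <- c(4, 10, 24)

gauss_solve <- function(A, b) {
  n  <- nrow(A)
  Ab <- cbind(A, b)                    # augmented matrix [A | b]
  for (k in 1:(n - 1)) {               # forward elimination
    for (i in (k + 1):n) {
      m <- Ab[i, k] / Ab[k, k]         # multiplier for row i
      Ab[i, ] <- Ab[i, ] - m * Ab[k, ]
    }
  }
  x <- numeric(n)                      # back-substitution
  for (i in n:1) {
    x[i] <- (Ab[i, n + 1] - sum(Ab[i, i:n] * x[i:n])) / Ab[i, i]
  }
  x
}

gauss_solve(A, b)   # c(1, 1, 1)
solve(A, b)         # base R agrees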

For matrix algebra to develop fruitfully, one needed both proper notation and the proper definition of matrix multiplication. Both needs were met at about the same time and in the same place. In 1848 in England, J.J. Sylvester first introduced the term ''matrix,'' the Latin word for womb, as a name for an array of numbers. Matrix algebra was nurtured by the work of Arthur Cayley in 1855. Cayley studied compositions of linear transformations and was led to define matrix multiplication so that the matrix of coefficients for the composite transformation ST is the product of the matrix for S times the matrix for T. He went on to study the algebra of these compositions, including matrix inverses. The famous Cayley-Hamilton theorem, which asserts that a square matrix is a root of its characteristic polynomial, was given by Cayley in his 1858 Memoir on the Theory of Matrices. The use of a single letter A to represent a matrix was crucial to the development of matrix algebra. Early in this development, the formula det(AB) = det(A)det(B) provided a connection between matrix algebra and determinants. Cayley wrote, ''There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants.''
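These algebraic facts can be verified numerically in a few lines of R. The two 2 x 2 matrices below are made up; %*% plays the role of Cayley's matrix product, det() checks the determinant product rule, and the 2 x 2 case of the Cayley-Hamilton theorem, A^2 - tr(A) A + det(A) I = 0, is evaluated directly.

# Two made-up matrices standing in for linear transformations S and T
S  <- matrix(c(1, 2,
               0, 1), nrow = 2, byrow = TRUE)
T_ <- matrix(c(3, 0,
               1, 2), nrow = 2, byrow = TRUE)  # T_ because T is R's shorthand for TRUE

ST <- S %*% T_                    # matrix of the composite transformation

# det(AB) = det(A) det(B): the early bridge between matrices and determinants
all.equal(det(ST), det(S) * det(T_))            # TRUE

# Cayley-Hamilton for a 2 x 2 matrix: A^2 - tr(A) A + det(A) I = 0
A <- S
A %*% A - sum(diag(A)) * A + det(A) * diag(2)   # the 2 x 2 zero matrix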

Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra that involved a noncommutative vector product (that is, v x w need not equal w x v) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which results in what is now called a simple or rank-one matrix. In the late 19th century the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis. In that treatise Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which he called dyads. Later the physicist P. A. M. Dirac introduced the term ''bra-ket'' for what we now call the scalar product of a ''bra'' (row) vector times a ''ket'' (column) vector, and the term ''ket-bra'' for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. Our convention of identifying column matrices and vectors was introduced by physicists in the 20th century.
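In R terms, a dyad (or Dirac's ket-bra) is just the product of a column vector and a row vector, and the bra-ket is the ordinary scalar product. A small sketch with made-up vectors:

v <- c(1, 2, 3)
w <- c(4, 5, 6)

ketbra <- v %*% t(w)        # 3 x 3 rank-one ('simple') matrix, same as outer(v, w)
braket <- t(v) %*% w        # 1 x 1 matrix containing the scalar product

qr(ketbra)$rank             # 1: every dyad has rank one
drop(braket) == sum(v * w)  # TRUE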

Matrices continued to be closely associated with linear transformations. By 1900 they were just a finite-dimensional subcase of the emerging theory of linear transformations. The modern definition of a vector space was introduced by Peano in 1888. Abstract vector spaces whose elements were functions soon followed.

There was renewed interest in matrices, particularly in the numerical analysis of matrices, after World War II with the development of modern digital computers. John von Neumann and Herman Goldstine introduced condition numbers in analyzing round-off errors in 1947. Alan Turing and von Neumann were the 20th-century giants in the development of stored-program computers. Turing introduced the LU decomposition of a matrix in 1948: the L is a lower triangular matrix with 1's on the diagonal, and the U is an echelon matrix. It is common to use LU decompositions in the solution of a sequence of systems of linear equations, each having the same coefficient matrix. The benefits of the QR decomposition were realized a decade later: the Q is a matrix whose columns are orthonormal vectors, and R is a square upper triangular invertible matrix with positive entries on its diagonal. The QR factorization is used in computer algorithms for various computations, such as solving equations and finding eigenvalues.
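To make the two factorizations concrete, here is a short R sketch: a hand-rolled Doolittle LU with no pivoting (the helper lu_doolittle is purely illustrative; in practice one would use solve() or the Matrix package's lu()) alongside base R's qr(), both applied to a made-up system.

A <- matrix(c(4, 3, 2,
              2, 3, 1,
              6, 5, 7), nrow = 3, byrow = TRUE)
b <- c(1, 2, 3)

# LU: A = L U with 1's on the diagonal of L and an echelon (upper triangular) U
lu_doolittle <- function(A) {
  n <- nrow(A); L <- diag(n); U <- A
  for (k in 1:(n - 1)) {
    for (i in (k + 1):n) {
      L[i, k] <- U[i, k] / U[k, k]      # multiplier, stored in L
      U[i, ]  <- U[i, ] - L[i, k] * U[k, ]
    }
  }
  list(L = L, U = U)
}

f <- lu_doolittle(A)
y    <- forwardsolve(f$L, b)            # L y = b
x_lu <- backsolve(f$U, y)               # U x = y

# QR: A = Q R with orthonormal columns in Q and upper triangular R,
# so A x = b reduces to the triangular system R x = t(Q) %*% b
qrA  <- qr(A)
x_qr <- backsolve(qr.R(qrA), t(qr.Q(qrA)) %*% b)

all.equal(x_lu, as.vector(x_qr))        # both agree with solve(A, b)

Because the factors depend only on A, they can be reused for each new right-hand side, which is exactly why LU decompositions are convenient for solving a sequence of systems that share the same coefficient matrix.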
