diff --git a/books/bookvol10.5.pamphlet b/books/bookvol10.5.pamphlet
index 9955f2b..aa37dcc 100644
--- a/books/bookvol10.5.pamphlet
+++ b/books/bookvol10.5.pamphlet
@@ -257,6 +257,78 @@ November 10, 2003 ((iHy))
 \vfill
 \eject
 \pagenumbering{arabic}
+\chapter{Numerical Analysis \cite{4}}
+Each number $x$ we wish to compute with is represented in the machine by
+a number $x^{*}$, which in general differs from the value $x$ it is
+intended to represent. Following Householder, we write:
+\[x^{*}=\pm(x_1\beta^{-1} + x_2\beta^{-2}+\cdots+x_\lambda\beta^{-\lambda})
+\beta^\sigma\]
+where $\beta$ is the base, usually 2 or 10, $\lambda$ is a positive
+integer, and $\sigma$ is any integer, possibly zero. It may be that
+$\lambda$ is fixed throughout the course of the computation, or it may
+vary, but in any case it is limited by practical considerations. Such a
+number will be called a representation. It may be that $x^{*}$ is
+obtained by ``rounding off'' a number whose true value is $x$ (for example,
+$x=1/3$, $x^{*}=0.33$), or that $x^{*}$ is the result of measuring
+physically a quantity whose true value is $x$, or that $x^{*}$ is the
+result of a computation intended to give an approximation to the quantity $x$.
+
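+To see the notation at work on the example already mentioned ($x=1/3$,
+$x^{*}=0.33$), take $\beta=10$, $\lambda=2$, and $\sigma=0$; then
+\[x^{*} = +(3\cdot 10^{-1} + 3\cdot 10^{-2})\,10^{0} = 0.33\]
+so the digits of the representation are $x_1=x_2=3$.
+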
+Suppose one is interested in performing an operation $\omega$ upon a pair
+of numbers $x$ and $y$. That is to say, $x\omega{}y$ may represent a 
+product of $x$ and $y$, a quotient of $x$ by $y$, the $y$th power of $x$,
+$\ldots$. In the numerical computation, however, one has only $x^{*}$ and
+$y^{*}$ upon which to operate, not $x$ and $y$ (or at least these are the
+quantities upon which one does, in fact, operate). Not only this, but often
+one does not even perform the strict operation $\omega$, but rather a
+pseudo operation $\omega^{*}$, which yields a rounded-off product, quotient,
+power, etc. Hence, instead of obtaining the desired result $x\omega{}y$, 
+one obtains a result $x^{*}\omega^{*}y^{*}$.
+
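+As a concrete illustration (our own numbers, using the same two-digit
+rounding as above), let $\omega$ be multiplication and let the
+pseudo-operation $\omega^{*}$ round every product to two decimal places.
+Taking $x=1/3$ and $y=2/3$, so that $x^{*}=0.33$ and $y^{*}=0.67$, we have
+\[x\omega{}y = 2/9 = 0.2222\ldots,\qquad
+x^{*}\omega{}y^{*} = 0.2211,\qquad
+x^{*}\omega^{*}y^{*} = 0.22\]
+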
+The error in the result is therefore
+\[x\omega{}y - x^{*}\omega^{*}y^{*} = (x\omega{}y-x^{*}\omega{}y^{*}) +
+(x^{*}\omega{}y^{*} - x^{*}\omega^{*}y^{*})\]
+Since $x^{*}$ and $y^{*}$ are numbers, the operation $\omega$ can be 
+applied to them, and $x^{*}\omega{}y^{*}$ is well defined, except for special
+cases such as when $\omega$ represents division and $y^{*}=0$. The
+expression in the first parentheses on the right represents propagated
+error, and that in the second parentheses represents generated error, or
+round-off. Hence the total error in the result is the sum of the error
+propagated by the operation and that generated by the operation.
+
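+Continuing the two-digit multiplication example above, the decomposition
+of the error in the rounded product is
+\[
+\begin{array}{lclcl}
+x\omega{}y - x^{*}\omega{}y^{*} &=& 0.2222\ldots - 0.2211 &=& 0.0011\ldots\\
+x^{*}\omega{}y^{*} - x^{*}\omega^{*}y^{*} &=& 0.2211 - 0.22 &=& 0.0011
+\end{array}
+\]
+so the total error $x\omega{}y - x^{*}\omega^{*}y^{*} = 0.0022\ldots$ is
+the propagated error plus the generated error.
+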
+Householder notes that, given two operations $\omega$ and $\phi$, it may
+be true that the exact operations associate, e.g.
+\[(x^{*}\omega{}y^{*})\phi{}z^{*} = x^{*}\omega{}(y^{*}\phi{}z^{*})\]
+but if we expand each side in terms of the above definitions of propagated
+and generated error we get two different expressions:
+\[
+\begin{array}{lcccc}
+(x^{*}\omega{}y^{*})\phi{}z^{*} - (x^{*}\omega^{*}y^{*})\phi^{*}z^{*} &=&
+ [(x^{*}\omega{}y^{*})\phi{}z^{*}& - & (x^{*}\omega^{*}y^{*})\phi{}z^{*}]\\
+&+& [(x^{*}\omega^{*}y^{*})\phi{}z^{*}& -& (x^{*}\omega^{*}y^{*})\phi^{*}z^{*}]
+\end{array}
+\]
+ 
+\[
+\begin{array}{lcccc}
+x^{*}\omega{}(y^{*}\phi{}z^{*}) - x^{*}\omega^{*}(y^{*}\phi^{*}z^{*}) &=&
+ [x^{*}\omega{}(y^{*}\phi{}z^{*})& -& x^{*}\omega(y^{*}\phi^{*}z^{*})]\\
+&+&[x^{*}\omega(y^{*}\phi^{*}z^{*})& -& x^{*}\omega^{*}(y^{*}\phi^{*}z^{*})]
+\end{array}
+\]
+These two expressions are not always equal, which implies that the machine
+pseudo-operations are not necessarily associative even when the exact
+operations are.
+
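+A simple instance (our own example, taking both $\omega$ and $\phi$ to be
+addition with results rounded to two significant decimal digits) is
+$x^{*}=1.0$ and $y^{*}=z^{*}=0.044$:
+\[(x^{*}\omega^{*}y^{*})\,\omega^{*}\,z^{*} = 1.0\,\omega^{*}\,0.044 = 1.0
+\qquad\mbox{but}\qquad
+x^{*}\,\omega^{*}\,(y^{*}\omega^{*}z^{*}) = 1.0\,\omega^{*}\,0.088 = 1.1\]
+so the two groupings give different machine results.
+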
+Householder distinguishes a third class of error (besides propagated
+and generated error) called residual error. It occurs when a function
+is approximated by an infinite series: any finite computation must
+truncate the series, and the neglected remaining terms constitute the
+residual error.
+
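+For example (our illustration), if $e^{x}$ is computed from its Taylor
+series truncated after the term in $x^{n}$,
+\[e^{x} = \sum_{k=0}^{n}\frac{x^{k}}{k!} + R_{n}(x),\qquad
+R_{n}(x) = \sum_{k=n+1}^{\infty}\frac{x^{k}}{k!}\]
+the discarded tail $R_{n}(x)$ is the residual error, and it would be
+present even if every arithmetic operation were performed exactly.
+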
+We will analyze each of the routines in this library, for the given
+inputs, to identify the propagated and generated errors they introduce.
+Every effort will be made to minimize these errors. In particular, we
+will appeal to the machine-generated code to see what approximations
+actually occur.
+
 \chapter{Chapter Overview}
 Each routine in the Basic Linear Algebra Subroutine set (BLAS) has
 a prefix where:
@@ -399,7 +471,7 @@ dcabs1(t5)
 --R                                                            Type: DoubleFloat
 --E 10
 
-a:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4,0,5,0,6,0]]
+a:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4,0,5,0,6,0] ]
 dasum(3,a,-1) -- 0.0   neg incx
 dasum(3,a,0) --  0.0   zero incx
 dasum(-1,a,1) -- 0.0   neg elements
@@ -502,7 +574,7 @@ BlasLevelOne() : Exports == Implementation where
       ++ dasum(n,array,incx) computes the sum of n elements in array
       ++ using a stride of incx
       ++
-      ++X dx:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4.0,5.0,6.0]]
+      ++X dx:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4.0,5.0,6.0] ]
       ++X dasum(6,dx,1)
       ++X dasum(3,dx,2)
 
@@ -512,12 +584,12 @@ BlasLevelOne() : Exports == Implementation where
       ++ and a constant multiplier a
       ++ Note that the vector y is modified with the results.
       ++
-      ++X x:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4.0,5.0,6.0]]
-      ++X y:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4.0,5.0,6.0]]
+      ++X x:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4.0,5.0,6.0] ]
+      ++X y:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4.0,5.0,6.0] ]
       ++X daxpy(6,2.0,x,1,y,1)
       ++X y
-      ++X m:PRIMARR(DFLOAT):=[[1.0,2.0,3.0]]
-      ++X n:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4.0,5.0,6.0]]
+      ++X m:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0] ]
+      ++X n:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4.0,5.0,6.0] ]
       ++X daxpy(3,-2.0,m,1,n,2)
       ++X n
 
@@ -526,12 +598,12 @@ BlasLevelOne() : Exports == Implementation where
       ++ for each of the chosen elements of the vectors x and y
       ++ Note that the vector y is modified with the results.
       ++
-      ++X x:PRIMARR(DFLOAT):=[[1.0,2.0,3.0,4.0,5.0,6.0]]
-      ++X y:PRIMARR(DFLOAT):=[[0.0,0.0,0.0,0.0,0.0,0.0]]
+      ++X x:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0,4.0,5.0,6.0] ]
+      ++X y:PRIMARR(DFLOAT):=[ [0.0,0.0,0.0,0.0,0.0,0.0] ]
       ++X dcopy(6,x,1,y,1)
       ++X y
-      ++X m:PRIMARR(DFLOAT):=[[1.0,2.0,3.0]]
-      ++X n:PRIMARR(DFLOAT):=[[0.0,0.0,0.0,0.0,0.0,0.0]]
+      ++X m:PRIMARR(DFLOAT):=[ [1.0,2.0,3.0] ]
+      ++X n:PRIMARR(DFLOAT):=[ [0.0,0.0,0.0,0.0,0.0,0.0] ] 
       ++X dcopy(3,m,1,n,2)
       ++X n
 
@@ -764,7 +836,7 @@ function.
 )clear all
 
 --S 1 of 28
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ]
 --R 
 --R
 --R   (1)  [1.,2.,3.,4.,5.,6.]
@@ -995,7 +1067,7 @@ dasum(1,a,7) --  1.0   1.0
 dasum examples
 ====================================================================
 
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ]
     [1.,2.,3.,4.,5.,6.]
 
 dasum(3,a,-1) -- 0.0   neg incx
@@ -1209,7 +1281,7 @@ NOTES:
 )clear all
 
 --S 1 of 22
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (1)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1217,7 +1289,7 @@ a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
 --E 1
 
 --S 2 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (2)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1233,7 +1305,7 @@ daxpy(3,2.0,a,1,b,1)
 --E 3
 
 --S 4 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (4)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1249,7 +1321,7 @@ daxpy(7,2.0,a,1,b,1)
 --E 5
 
 --S 6 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (6)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1265,7 +1337,7 @@ daxpy(8,2.0,a,1,b,1)
 --E 7
 
 --S 8 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (8)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1281,7 +1353,7 @@ daxpy(3,2.0,a,3,b,3)
 --E 9
 
 --S 10 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (10)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1297,7 +1369,7 @@ daxpy(4,2.0,a,2,b,2)
 --E 11
 
 --S 12 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (12)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1313,7 +1385,7 @@ daxpy(5,2.0,a,2,b,2)
 --E 13
 
 --S 14 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (14)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1329,7 +1401,7 @@ daxpy(3,2.0,a,2,b,2)
 --E 15
 
 --S 16 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (16)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1345,7 +1417,7 @@ daxpy(3,-2.0,a,2,b,2)
 --E 17
 
 --S 18 of 22
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
 --R 
 --R
 --R   (18)  [1.,2.,3.]
@@ -1353,7 +1425,7 @@ a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
 --E 18
 
 --S 19 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 --R 
 --R
 --R   (19)  [1.,2.,3.,4.,5.]
@@ -1369,7 +1441,7 @@ daxpy(3,-2.0,a,1,b,2)
 --E 20
 
 --S 21 of 22
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (21)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1397,8 +1469,8 @@ the variables to the following values. Note that the daxpy function
 will modify the second array. Each example assumes we have reset
 the variables to these values.
 
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 
 then we compute the sum of the first 3 elements of each vector
 and we show the steps of the computation with trailing comments.
@@ -1466,8 +1538,8 @@ daxpy(3,-2.0,a,2,b,2) ==> [- 1.,2.,- 3.,4.,- 5.,6.,7.]
 or we change the lengths of the input vectors, making them unequal.
 So for the next two examples we assume the arrays look like:
 
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 
 We compute 3 elements, with a negative multiplier, and different increments
 using an index for the 'a' array having values 0, 1, 2
@@ -1595,7 +1667,7 @@ RETURN VALUES
 )clear all
 
 --S 1 of 23
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 --R 
 --R
 --R   (1)  [1.,2.,3.,4.,5.,6.,7.]
@@ -1603,7 +1675,7 @@ a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
 --E 1
 
 --S 2 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (2)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1619,7 +1691,7 @@ dcopy(3,a,1,b,1)
 --E 3
 
 --S 4 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (4)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1635,7 +1707,7 @@ dcopy(7,a,1,b,1)
 --E 5
 
 --S 6 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (6)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1651,7 +1723,7 @@ dcopy(8,a,1,b,1)
 --E 7
 
 --S 8 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (8)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1667,7 +1739,7 @@ dcopy(3,a,3,b,3)
 --E 9
 
 --S 10 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (10)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1683,7 +1755,7 @@ dcopy(4,a,2,b,2)
 --E 11
 
 --S 12 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (12)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1699,7 +1771,7 @@ dcopy(5,a,2,b,2)
 --E 13
 
 --S 14 of 23
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 --R 
 --R
 --R   (14)  [0.,0.,0.,0.,0.,0.,0.]
@@ -1715,7 +1787,7 @@ dcopy(3,a,2,b,2)
 --E 15
 
 --S 16 of 23
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
 --R 
 --R
 --R   (16)  [1.,2.,3.]
@@ -1723,7 +1795,7 @@ a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
 --E 16
 
 --S 17 of 23
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 --R 
 --R
 --R   (17)  [1.,2.,3.,4.,5.]
@@ -1739,7 +1811,7 @@ dcopy(3,a,1,b,1)
 --E 18
 
 --S 19 of 23
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 --R 
 --R
 --R   (19)  [1.,2.,3.,4.,5.]
@@ -1755,7 +1827,7 @@ dcopy(3,a,1,b,2)
 --E 20
 
 --S 21 of 23
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 --R 
 --R
 --R   (21)  [1.,2.,3.,4.,5.]
@@ -1763,7 +1835,7 @@ a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
 --E 21
 
 --S 22 of 23
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
 --R 
 --R
 --R   (22)  [1.,2.,3.]
@@ -1787,9 +1859,9 @@ dcopy examples
 ====================================================================
 
 Assume we have two arrays which we initialize to these values:
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] ]
 
-b:PRIMARR(DFLOAT):=[[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+b:PRIMARR(DFLOAT):=[ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]
 
 Note that after each call to dcopy the b array is modified.
 The changed value is shown. We reset it after each dcopy but we
@@ -1811,17 +1883,17 @@ dcopy(3,a,2,b,2) ==> [1.,0.,3.,0.,5.,0.,0.]
 
 The arrays can be of different lengths:
 
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
 
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 
 dcopy(3,a,1,b,1) ==>  [1.,2.,3.,4.,5.]
 
 dcopy(3,a,1,b,2) ==>  [1.,2.,2.,4.,3.]
 
-a:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0, 4.0, 5.0]]
+a:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0, 4.0, 5.0] ]
 
-b:PRIMARR(DFLOAT):=[[ 1.0, 2.0, 3.0]]
+b:PRIMARR(DFLOAT):=[ [ 1.0, 2.0, 3.0] ]
 
 dcopy(5,a,1,b,1) ==> [1.,2.,3.]
 
@@ -76036,6 +76108,15 @@ Jack Dongarra, Argonne National Lab.
 Jeremy Du Croz, Nag Central Office.  
 Sven Hammarling, Nag Central Office.
 Richard Hanson, Sandia National Labs.
+\bibitem{4} Householder, Alston S.
+``Principles of Numerical Analysis'' 
+Dover Publications, Mineola, NY ISBN 0-486-45312-X (1981)
+\bibitem{5} Golub, Gene H. and Van Loan, Charles F.
+``Matrix Computations''
+Johns Hopkins University Press ISBN 0-8018-3772-3 (1989)
+\bibitem{6} Higham, Nicholas J.
+``Accuracy and Stability of Numerical Algorithms''
+SIAM Philadelphia, PA ISBN 0-89871-521-0 (2002)
 \end{thebibliography}
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \chapter{Index}
diff --git a/books/bookvolbib.pamphlet b/books/bookvolbib.pamphlet
index 216b026..8c214ed 100644
--- a/books/bookvolbib.pamphlet
+++ b/books/bookvolbib.pamphlet
@@ -677,6 +677,10 @@ University, Aston Triangle, Birmingham B4 7 ET, U. K.
 Garcia, A. and Stichtenoth, H.
 ``A tower of Artin-Schreier extensions of function fields attaining the 
 Drinfeld-Vladut bound'' Invent. Math., vol. 121, 1995, pp. 211--222.
+\bibitem[GL89]{GL89}
+Golub, Gene H. and Van Loan, Charles F.
+``Matrix Computations''
+Johns Hopkins University Press ISBN 0-8018-3772-3 (1989)
 \bibitem[Ha1896]{Ha1896}
 Hathway, Arthur S., "A Primer Of Quaternions"  (1896)
 \bibitem[Ha95]{Ha95}
@@ -691,6 +695,10 @@ Septembre 1996.
 E. Hermite. Sur l'int\'{e}gration des fractions
 rationelles. {\sl Nouvelles Annales de Math\'{e}matiques}
 ($2^{eme}$ s\'{e}rie), 11:145-148, 1872
+\bibitem[Hig02]{Hig02}
+Higham, Nicholas J.
+``Accuracy and Stability of Numerical Algorithms''
+SIAM Philadelphia, PA ISBN 0-89871-521-0 (2002)
 \bibitem[HI96]{HI96}
 Huang, M.D. and Ierardi, D.
 ``Efficient algorithms for Riemann-Roch problem and for addition in the 
@@ -702,6 +710,9 @@ Hach\'e, G. and Le Brigand, D.
 ``Effective construction of algebraic geometry codes'' 
 IEEE Transaction on Information Theory, vol. 41, n27 6, 
 November 1995, pp. 1615--1628.
+\bibitem[Hou81]{Hou81}
+Householder, Alston S. ``Principles of Numerical Analysis''
+Dover Publications, Mineola, NY ISBN 0-486-45312-X (1981)
 \bibitem[Knu84]{Knu84}
 Knuth, Donald, {\it The \TeX{}book} \\
 Reading, Massachusetts, Addison-Wesley Publishing Company, Inc., 
diff --git a/changelog b/changelog
index f20e014..7b25e3f 100644
--- a/changelog
+++ b/changelog
@@ -1,3 +1,6 @@
+20100610 tpd src/axiom-website/patches.html 20100610.01.tpd.patch
+20100610 tpd books/bookvolbib Golub [GL89], Higham [Hig02], Householder [Hou81]
+20100610 tpd books/bookvol10.5 start of numeric analysis of BLAS/LAPACK
 20100609 tpd src/axiom-website/patches.html 20100609.03.tpd.patch
 20100609 tpd src/input/Makefile rule-based trig integration
 20100609 tpd src/input/richtrig300-399.input rule-based trig integration
diff --git a/src/axiom-website/patches.html b/src/axiom-website/patches.html
index 26213d9..a275cca 100644
--- a/src/axiom-website/patches.html
+++ b/src/axiom-website/patches.html
@@ -2872,6 +2872,8 @@ src/input/richtrig100-199.input rule-based trig integration<br/>
 src/input/richtrig200-299.input rule-based trig integration<br/>
 <a href="patches/20100609.03.tpd.patch">20100609.03.tpd.patch</a>
 src/input/richtrig300-399.input rule-based trig integration<br/>
+<a href="patches/20100610.01.tpd.patch">20100610.01.tpd.patch</a>
+books/bookvol10.5 start of numeric analysis of BLAS/LAPACK<br/>
  </body>
 </html>
 
