An edition of Coding the Matrix (2013)

Coding the Matrix

Linear Algebra through Computer Science Applications

  • 5.0 (1 rating)
  • 4 Want to read
  • 1 Currently reading
  • 1 Have read


Publish Date: 2013
Publisher: Newtonian Press
Pages: 528


Edition Availability

  • Coding the Matrix: Linear Algebra through Applications to Computer Science. Sep 03, 2013, Newtonian Press. Paperback.
  • Coding the Matrix: Linear Algebra through Computer Science Applications. Jul 26, 2013, Newtonian Press. Paperback.


Book Details


Table of Contents

0. The Function (and Other Mathematical and Computational Preliminaries) (p. 1)
0.1. Set Terminology and Notation (p. 1)
0.2. Cartesian Product (p. 1)
0.3. The Function (p. 2)
0.3.1. Functions Versus Procedures, Versus Computational Problems (p. 3)
0.3.2. The Two Computational Problems Related to a Function (p. 4)
0.3.3. Notation for the Set of Functions with Given Domain and Co-domain (p. 5)
0.3.4. Identity Function (p. 5)
0.3.5. Composition of Functions (p. 5)
0.3.6. Associativity of Function Composition (p. 6)
0.3.7. Functional Inverse (p. 6)
0.4. Probability (p. 9)
0.4.1. Probability Distributions (p. 9)
0.4.2. Events, and Adding Probabilities (p. 10)
0.4.3. Applying a Function to a Random Input (p. 11)
0.4.4. Perfect Secrecy (p. 12)
0.4.5. Perfect Secrecy and Invertible Functions (p. 13)
0.5. Lab: Introduction to Python—Sets, Lists, Dictionaries, and Comprehensions (p. 14)
0.5.1. Simple Expressions (p. 15)
0.5.2. Assignment Statements (p. 16)
0.5.3. Conditional Expressions (p. 17)
0.5.4. Sets (p. 17)
0.5.5. Lists (p. 20)
0.5.6. Tuples (p. 24)
0.5.7. Other Things to Iterate Over (p. 25)
0.5.8. Dictionaries (p. 26)
0.5.9. Defining One-Line Procedures (p. 30)
0.6. Lab: Python Modules and Control Structures—and Inverse Index (p. 31)
0.6.1. Using Existing Modules (p. 31)
0.6.2. Creating Your Own Modules (p. 32)
0.6.3. Loops and Conditional Statements (p. 33)
0.6.4. Grouping in Python Using Indentation (p. 33)
0.6.5. Breaking Out of a Loop (p. 34)
0.6.6. Reading from a File (p. 34)
0.6.7. Mini-Search Engine (p. 34)
0.7. Review Questions (p. 35)
0.8. Problems (p. 36)
1. The Field (p. 39)
1.1. Introduction to Complex Numbers (p. 39)
1.2. Complex Numbers in Python (p. 40)
1.3. Abstracting over Fields (p. 41)
1.4. Playing with C (p. 42)
1.4.1. The Absolute Value of a Complex Number (p. 43)
1.4.2. Adding Complex Numbers (p. 44)
1.4.3. Multiplying Complex Numbers by a Positive Real Number (p. 46)
1.4.4. Multiplying Complex Numbers by a Negative Number: Rotation by 180 Degrees (p. 46)
1.4.5. Multiplying by i: Rotation by 90 Degrees (p. 47)
1.4.6. The Unit Circle in the Complex Plane: Argument and Angle (p. 49)
1.4.7. Euler's Formula (p. 50)
1.4.8. Polar Representation for Complex Numbers (p. 51)
1.4.9. The First Law of Exponentiation (p. 51)
1.4.10. Rotation by θ Radians (p. 52)
1.4.11. Combining Operations (p. 53)
1.4.12. Beyond Two Dimensions (p. 54)
1.5. Playing with GF(2) (p. 54)
1.5.1. Perfect Secrecy Revisited (p. 54)
1.5.2. Network Coding (p. 56)
1.6. Review Questions (p. 57)
1.7. Problems (p. 58)
2. The Vector (p. 61)
2.1. What Is a Vector? (p. 63)
2.2. Vectors Are Functions (p. 64)
2.2.1. Representation of Vectors Using Python Dictionaries (p. 65)
2.2.2. Sparsity (p. 65)
2.3. What Can We Represent with Vectors? (p. 65)
2.4. Vector Addition (p. 67)
2.4.1. Translation and Vector Addition (p. 67)
2.4.2. Vector Addition Is Associative and Commutative (p. 68)
2.4.3. Vectors as Arrows (p. 69)
2.5. Scalar-Vector Multiplication (p. 70)
2.5.1. Scaling Arrows (p. 71)
2.5.2. Associativity of Scalar-Vector Multiplication (p. 72)
2.5.3. Line Segments through the Origin (p. 72)
2.5.4. Lines through the Origin (p. 73)
2.6. Combining Vector Addition and Scalar Multiplication (p. 74)
2.6.1. Line Segments and Lines That Don't Go through the Origin (p. 74)
2.6.2. Distributive Laws for Scalar-Vector Multiplication and Vector Addition (p. 75)
2.6.3. First Look at Convex Combinations (p. 76)
2.6.4. First Look at Affine Combinations (p. 77)
2.7. Dictionary-Based Representations of Vectors (p. 78)
2.7.1. Setter and Getter (p. 79)
2.7.2. Scalar-Vector Multiplication (p. 79)
2.7.3. Addition (p. 80)
2.7.4. Vector Negative, Invertibility of Vector Addition, and Vector Subtraction (p. 80)
2.8. Vectors over GF(2) (p. 81)
2.8.1. Perfect Secrecy Re-Revisited (p. 82)
2.8.2. All-or-Nothing Secret-Sharing Using GF(2) (p. 82)
2.8.3. Lights Out (p. 83)
2.9. Dot-Product (p. 87)
2.9.1. Total Cost or Benefit (p. 88)
2.9.2. Linear Equations (p. 89)
2.9.3. Measuring Similarity (p. 91)
2.9.4. Dot-Product of Vectors over GF(2) (p. 94)
2.9.5. Parity Bit (p. 95)
2.9.6. Simple Authentication Scheme (p. 95)
2.9.7. Attacking the Simple Authentication Scheme (p. 97)
2.9.8. Algebraic Properties of the Dot-Product (p. 98)
2.9.9. Attacking the Simple Authentication Scheme, Revisited (p. 99)
2.10. Our Implementation of Vec (p. 100)
2.10.1. Syntax for Manipulating Vecs (p. 100)
2.10.2. The Implementation (p. 101)
2.10.3. Using Vecs (p. 101)
2.10.4. Printing Vecs (p. 101)
2.10.5. Copying Vecs (p. 101)
2.10.6. From List to Vec (p. 102)
2.11. Solving a Triangular System of Linear Equations (p. 102)
2.11.1. Upper-Triangular Systems (p. 102)
2.11.2. Backward Substitution (p. 103)
2.11.3. First Implementation of Backward Substitution (p. 104)
2.11.4. When Does the Algorithm Work? (p. 105)
2.11.5. Backward Substitution with Arbitrary-Domain Vectors (p. 105)
2.12. Lab: Comparing Voting Records Using Dot-Product (p. 106)
2.12.1. Motivation (p. 106)
2.12.2. Reading in the File (p. 106)
2.12.3. Two Ways to Use Dot-Product to Compare Vectors (p. 107)
2.12.4. Policy Comparison (p. 107)
2.12.5. Not Your Average Democrat (p. 108)
2.12.6. Bitter Rivals (p. 108)
2.12.7. Open-Ended Study (p. 108)
2.13. Review Questions (p. 108)
2.14. Problems (p. 109)
3. The Vector Space (p. 113)
3.1. Linear Combination (p. 113)
3.1.1. Definition of Linear Combination (p. 113)
3.1.2. Uses of Linear Combinations (p. 113)
3.1.3. From Coefficients to Linear Combination (p. 115)
3.1.4. From Linear Combination to Coefficients (p. 116)
3.2. Span (p. 117)
3.2.1. Definition of Span (p. 117)
3.2.2. A System of Linear Equations Implies Other Equations (p. 118)
3.2.3. Generators (p. 119)
3.2.4. Linear Combinations of Linear Combinations (p. 120)
3.2.5. Standard Generators (p. 121)
3.3. The Geometry of Sets of Vectors (p. 122)
3.3.1. The Geometry of the Span of Vectors over R (p. 122)
3.3.2. The Geometry of Solution Sets of Homogeneous Linear Systems (p. 124)
3.3.3. The Two Representations of Flats Containing the Origin (p. 125)
3.4. Vector Spaces (p. 126)
3.4.1. What's Common to the Two Representations? (p. 126)
3.4.2. Definition and Examples of Vector Space (p. 127)
3.4.3. Subspaces (p. 128)
3.4.4. Abstract Vector Spaces (p. 130)
3.5. Affine Spaces (p. 130)
3.5.1. Flats That Don't Go through the Origin (p. 130)
3.5.2. Affine Combinations (p. 131)
3.5.3. Affine Spaces (p. 133)
3.5.4. Representing an Affine Space as the Solution Set of a Linear System (p. 134)
3.5.5. The Two Representations, Revisited (p. 135)
3.6. Linear Systems, Homogeneous and Otherwise (p. 139)
3.6.1. The Homogeneous Linear System Corresponding to a General Linear System (p. 139)
3.6.2. Number of Solutions Revisited (p. 140)
3.6.3. Towards Intersecting a Plane and a Line (p. 141)
3.6.4. Checksum Functions (p. 141)
3.7. Review Questions (p. 142)
3.8. Problems (p. 143)
4. The Matrix (p. 147)
4.1. What Is a Matrix? (p. 147)
4.1.1. Traditional Matrices (p. 147)
4.1.2. The Matrix Revealed (p. 148)
4.1.3. Rows, Columns, and Entries (p. 149)
4.1.4. Our Python Implementation of Matrices (p. 150)
4.1.5. Identity Matrix (p. 151)
4.1.6. Converting between Matrix Representations (p. 151)
4.1.7. matutil.py (p. 152)
4.2. Column Space and Row Space (p. 152)
4.3. Matrices as Vectors (p. 153)
4.4. Transpose (p. 153)
4.5. Matrix-Vector and Vector-Matrix Multiplication in Terms of Linear Combinations (p. 154)
4.5.1. Matrix-Vector Multiplication in Terms of Linear Combinations (p. 154)
4.5.2. Vector-Matrix Multiplication in Terms of Linear Combinations (p. 155)
4.5.3. Formulating Expressing a Given Vector as a Linear Combination as a Matrix-Vector Equation (p. 156)
4.5.4. Solving a Matrix-Vector Equation (p. 157)
4.6. Matrix-Vector Multiplication in Terms of Dot-Products (p. 159)
4.6.1. Definitions (p. 159)
4.6.2. Example Applications (p. 160)
4.6.3. Formulating a System of Linear Equations as a Matrix-Vector Equation (p. 162)
4.6.4. Triangular Systems and Triangular Matrices (p. 163)
4.6.5. Algebraic Properties of Matrix-Vector Multiplication (p. 165)
4.7. Null Space (p. 165)
4.7.1. Homogeneous Linear Systems and Matrix Equations (p. 165)
4.7.2. The Solution Space of a Matrix-Vector Equation (p. 166)
4.7.3. Introduction to Error-Correcting Codes (p. 167)
4.7.4. Linear Codes (p. 168)
4.7.5. The Hamming Code (p. 168)
4.8. Computing Sparse Matrix-Vector Product (p. 169)
4.9. The Matrix Meets the Function (p. 170)
4.9.1. From Matrix to Function (p. 170)
4.9.2. From Function to Matrix (p. 170)
4.9.3. Examples of Deriving the Matrix (p. 171)
4.10. Linear Functions (p. 173)
4.10.1. Which Functions Can Be Expressed as a Matrix-Vector Product (p. 173)
4.10.2. Definition and Simple Examples (p. 173)
4.10.3. Linear Functions and Zero Vectors (p. 175)
4.10.4. What Do Linear Functions Have to Do with Lines? (p. 176)
4.10.5. Linear Functions That Are One-to-One (p. 176)
4.10.6. Linear Functions That Are Onto (p. 177)
4.10.7. A Linear Function from F^C to F^R Can Be Represented by a Matrix (p. 178)
4.10.8. Diagonal Matrices (p. 178)
4.11. Matrix-Matrix Multiplication (p. 179)
4.11.1. Matrix-Matrix Multiplication in Terms of Matrix-Vector and Vector-Matrix Multiplication (p. 179)
4.11.2. Matrix-Matrix Multiplication and Function Composition (p. 182)
4.11.3. Transpose of Matrix-Matrix Product (p. 184)
4.11.4. Column Vector and Row Vector (p. 185)
4.11.5. Every Vector Is Interpreted as a Column Vector (p. 185)
4.11.6. Linear Combinations of Linear Combinations Revisited (p. 186)
4.12. Inner Product and Outer Product (p. 187)
4.12.1. Inner Product (p. 187)
4.12.2. Outer Product (p. 187)
4.13. From Function Inverse to Matrix Inverse (p. 187)
4.13.1. The Inverse of a Linear Function Is Linear (p. 187)
4.13.2. The Matrix Inverse (p. 188)
4.13.3. Uses of Matrix Inverse (p. 189)
4.13.4. More about Matrix Inverse (p. 191)
4.14. Lab: Error-Correcting Codes (p. 192)
4.14.1. The Check Matrix (p. 192)
4.14.2. The Generator Matrix (p. 192)
4.14.3. Hamming's Code (p. 192)
4.14.4. Decoding (p. 192)
4.14.5. Error Syndrome (p. 193)
4.14.6. Finding the Error (p. 193)
4.14.7. Putting It All Together (p. 194)
4.15. Lab: Transformations in 2D Geometry (p. 196)
4.15.1. Our Representation for Points in the Plane (p. 196)
4.15.2. Transformations (p. 196)
4.15.3. Image Representation (p. 197)
4.15.4. Loading and Displaying Images (p. 198)
4.15.5. Linear Transformations (p. 198)
4.15.6. Translation (p. 199)
4.15.7. Scaling (p. 199)
4.15.8. Rotation (p. 199)
4.15.9. Rotation about a Center Other than the Origin (p. 200)
4.15.10. Reflection (p. 200)
4.15.11. Color Transformations (p. 200)
4.15.12. Reflection More Generally (p. 200)
4.16. Review Questions (p. 200)
4.17. Problems (p. 201)
5. The Basis (p. 209)
5.1. Coordinate Systems (p. 209)
5.1.1. René Descartes' Idea (p. 209)
5.1.2. Coordinate Representation (p. 209)
5.1.3. Coordinate Representation and Matrix-Vector Multiplication (p. 210)
5.2. First Look at Lossy Compression (p. 210)
5.2.1. Strategy 1: Replace Vector with Closest Sparse Vector (p. 211)
5.2.2. Strategy 2: Represent Image Vector by Its Coordinate Representation (p. 212)
5.3. Two Greedy Algorithms for Finding a Set of Generators (p. 213)
5.3.1. Grow Algorithm (p. 213)
5.3.2. Shrink Algorithm (p. 214)
5.3.3. When Greed Fails (p. 214)
5.4. Minimum Spanning Forest and GF(2) (p. 216)
5.4.1. Definitions (p. 216)
5.4.2. The Grow Algorithm and the Shrink Algorithm for Minimum Spanning Forest (p. 217)
5.4.3. Formulating Minimum Spanning Forest in Linear Algebra (p. 218)
5.5. Linear Dependence (p. 219)
5.5.1. The Superfluous-Vector Lemma (p. 219)
5.5.2. Defining Linear Dependence (p. 220)
5.5.3. Linear Dependence in Minimum Spanning Forest (p. 221)
5.5.4. Properties of Linear (In)dependence (p. 222)
5.5.5. Analyzing the Grow Algorithm (p. 223)
5.5.6. Analyzing the Shrink Algorithm (p. 223)
5.6. Basis (p. 224)
5.6.1. Defining Basis (p. 224)
5.6.2. The Standard Basis for F^L (p. 226)
5.6.3. Towards Showing That Every Vector Space Has a Basis (p. 227)
5.6.4. Any Finite Set of Vectors Contains a Basis for Its Span (p. 227)
5.6.5. Any Linearly Independent Subset of Vectors Belonging to V Can Be Extended to Form a Basis for V (p. 228)
5.7. Unique Representation (p. 228)
5.7.1. Uniqueness of Representation in Terms of a Basis (p. 228)
5.8. Change of Basis, First Look (p. 229)
5.8.1. The Function from Representation to Vector (p. 229)
5.8.2. From One Representation to Another (p. 229)
5.9. Perspective Rendering (p. 230)
5.9.1. Points in the World (p. 230)
5.9.2. The Camera and the Image Plane (p. 231)
5.9.3. The Camera Coordinate System (p. 232)
5.9.4. From the Camera Coordinates of a Point in the Scene to the Camera Coordinates of the Corresponding Point in the Image Plane (p. 233)
5.9.5. From World Coordinates to Camera Coordinates (p. 235)
5.9.6. To Pixel Coordinates (p. 236)
5.10. Computational Problems Involving Finding a Basis (p. 236)
5.11. The Exchange Lemma (p. 237)
5.11.1. The Lemma (p. 237)
5.11.2. Proof of Correctness of the Grow Algorithm for MSF (p. 238)
5.12. Lab: Perspective Rectification (p. 238)
5.12.1. The Camera Basis (p. 240)
5.12.2. The Whiteboard Basis (p. 240)
5.12.3. Mapping from Pixels to Points on the Whiteboard (p. 241)
5.12.4. Mapping a Point Not on the Whiteboard to the Corresponding Point on the Whiteboard (p. 242)
5.12.5. The Change-of-Basis Matrix (p. 243)
5.12.6. Computing the Change-of-Basis Matrix (p. 243)
5.12.7. Image Representation (p. 246)
5.12.8. Synthesizing the Perspective-Free Image (p. 246)
5.13. Review Questions (p. 247)
5.14. Problems (p. 248)
6. Dimension (p. 257)
6.1. The Size of a Basis (p. 257)
6.1.1. The Morphing Lemma and Its Implications (p. 257)
6.1.2. Proof of the Morphing Lemma (p. 258)
6.2. Dimension and Rank (p. 260)
6.2.1. Definitions and Examples (p. 260)
6.2.2. Geometry (p. 262)
6.2.3. Dimension and Rank in Graphs (p. 262)
6.2.4. The Cardinality of a Vector Space over GF(2) (p. 263)
6.2.5. The Dimension Principle (p. 263)
6.2.6. For a Finite Set D, Every Subspace of F^D Has a Basis (p. 264)
6.2.7. The Rank Theorem (p. 264)
6.2.8. Simple Authentication Revisited (p. 266)
6.3. Direct Sum (p. 267)
6.3.1. Definition (p. 267)
6.3.2. Generators for the Direct Sum (p. 268)
6.3.3. Basis for the Direct Sum (p. 269)
6.3.4. Unique Decomposition of a Vector (p. 269)
6.3.5. Complementary Subspaces (p. 270)
6.4. Dimension and Linear Functions (p. 272)
6.4.1. Linear Function Invertibility (p. 272)
6.4.2. The Largest Invertible Subfunction (p. 272)
6.4.3. The Kernel-Image Theorem (p. 274)
6.4.4. Linear Function Invertibility, Revisited (p. 275)
6.4.5. The Rank-Nullity Theorem (p. 275)
6.4.6. Checksum Problem Revisited (p. 275)
6.4.7. Matrix Invertibility (p. 276)
6.4.8. Matrix Invertibility and Change of Basis (p. 277)
6.5. Duality (p. 278)
6.5.1. Conversions between Representations (p. 278)
6.5.2. The Dual of a Vector Space (p. 280)
6.5.3. The Dual Dimension Theorem (p. 281)
6.5.4. From Generators for V to Generators for V*, and Vice Versa (p. 282)
6.5.5. The Duality Theorem (p. 282)
6.6. Review Questions (p. 283)
6.7. Problems (p. 283)
7. Gaussian Elimination (p. 291)
7.1. Echelon Form (p. 292)
7.1.1. From Echelon Form to a Basis for Row Space (p. 293)
7.1.2. Rowlist in Echelon Form (p. 294)
7.1.3. Sorting Rows by Position of the Leftmost Nonzero (p. 294)
7.1.4. Elementary Row-Addition Operations (p. 295)
7.1.5. Multiplying by an Elementary Row-Addition Matrix (p. 296)
7.1.6. Row-Addition Operations Preserve Row Space (p. 297)
7.1.7. Basis, Rank, and Linear Independence through Gaussian Elimination (p. 298)
7.1.8. When Gaussian Elimination Fails (p. 298)
7.1.9. Pivoting, and Numerical Analysis (p. 299)
7.2. Gaussian Elimination over GF(2) (p. 299)
7.3. Using Gaussian Elimination for Other Problems (p. 300)
7.3.1. There Is an Invertible Matrix M Such That M A Is in Echelon Form (p. 301)
7.3.2. Computing M without Matrix Multiplications (p. 301)
7.4. Solving a Matrix-Vector Equation Using Gaussian Elimination (p. 304)
7.4.1. Solving a Matrix-Vector Equation When the Matrix Is in Echelon Form: The Invertible Case (p. 304)
7.4.2. Coping with Zero Rows (p. 304)
7.4.3. Coping with Irrelevant Columns (p. 305)
7.4.4. Finding a Basis for the Null Space (p. 305)
7.5. Factoring Integers (p. 306)
7.5.1. First Attempt at Factoring (p. 307)
7.6. Lab: Threshold Secret-Sharing (p. 308)
7.6.1. First Attempt (p. 308)
7.6.2. Scheme That Works (p. 309)
7.6.3. Implementing the Scheme (p. 310)
7.6.4. Generating u (p. 310)
7.6.5. Finding Vectors That Satisfy the Requirement (p. 311)
7.6.6. Sharing a String (p. 311)
7.7. Lab: Factoring Integers (p. 312)
7.7.1. First Attempt to Use Square Roots (p. 312)
7.7.2. Euclid's Algorithm for Greatest Common Divisor (p. 312)
7.7.3. Using Square Roots Revisited (p. 313)
7.8. Review Questions (p. 318)
8. The Inner Product (p. 325)
8.1. The Fire Engine Problem (p. 325)
8.1.1. Distance, Length, Norm, Inner Product (p. 326)
8.2. The Inner Product for Vectors over the Reals (p. 326)
8.2.1. Norms of Vectors over the Reals (p. 326)
8.3. Orthogonality (p. 328)
8.3.1. Properties of Orthogonality (p. 328)
8.3.2. Decomposition of b into Parallel and Perpendicular Components (p. 329)
8.3.3. Orthogonality Property of the Solution to the Fire Engine Problem (p. 330)
8.3.4. Finding the Projection and the Closest Point (p. 331)
8.3.5. Solution to the Fire Engine Problem (p. 332)
8.3.6. Outer Product and Projection (p. 333)
8.3.7. Towards Solving the Higher-Dimensional Version (p. 334)
8.4. Lab: Machine Learning (p. 334)
8.4.1. The Data (p. 334)
8.4.2. Supervised Learning (p. 335)
8.4.3. Selecting the Classifier That Minimizes the Error on the Training Data (p. 335)
8.4.4. Nonlinear Optimization by Hill-Climbing (p. 336)
8.4.5. Gradient (p. 337)
8.4.6. Getting the Data (p. 339)
8.4.7. Evaluating a Hypothesis (p. 339)
8.4.8. Finding a Good Hypothesis (p. 339)
8.5. Review Questions (p. 340)
8.6. Problems (p. 340)
9. Orthogonalization (p. 343)
9.1. Projection Orthogonal to Multiple Vectors (p. 344)
9.1.1. Orthogonal to a Set of Vectors (p. 344)
9.1.2. Projecting onto and Orthogonal to a Vector Space (p. 345)
9.1.3. First Attempt at Projecting Orthogonal to a List of Vectors (p. 346)
9.2. Projecting b Orthogonal to a List of Mutually Orthogonal Vectors (p. 347)
9.2.1. Proving the Correctness of project_orthogonal (p. 347)
9.2.2. Augmenting project_orthogonal (p. 349)
9.3. Building an Orthogonal Set of Generators (p. 351)
9.3.1. The orthogonalize Procedure (p. 351)
9.3.2. Proving the Correctness of orthogonalize (p. 352)
9.4. Solving the Computational Problem: Closest Point in the Span of Many Vectors (p. 354)
9.5. Solving Other Problems Using orthogonalize (p. 354)
9.5.1. Computing a Basis (p. 355)
9.5.2. Computing a Subset Basis (p. 355)
9.5.3. augmented_orthogonalize (p. 356)
9.5.4. Algorithms That Work in the Presence of Rounding Errors (p. 356)
9.6. Orthogonal Complement (p. 357)
9.6.1. Definition of Orthogonal Complement (p. 357)
9.6.2. Orthogonal Complement and Direct Sum (p. 357)
9.6.3. Normal to a Plane in R^3 Given as Span or Affine Hull (p. 358)
9.6.4. Orthogonal Complement and Null Space and Dual Space (p. 359)
9.6.5. Normal to a Plane in R^3 Given by an Equation (p. 359)
9.6.6. Computing the Orthogonal Complement (p. 359)
9.7. The QR Factorization (p. 360)
9.7.1. Orthogonal and Column-Orthogonal Matrices (p. 360)
9.7.2. Defining the QR Factorization of a Matrix (p. 361)
9.7.3. Requiring A to Have Linearly Independent Columns (p. 361)
9.8. Using the QR Factorization to Solve a Matrix Equation Ax = b (p. 362)
9.8.1. The Square Case (p. 362)
9.8.2. Correctness in the Square Case (p. 363)
9.8.3. The Least-Squares Problem (p. 364)
9.8.4. The Coordinate Representation in Terms of the Columns of a Column-Orthogonal Matrix (p. 364)
9.8.5. Using QR_solve When A Has More Rows Than Columns (p. 365)
9.9. Applications of Least Squares (p. 365)
9.9.1. Linear Regression (Line-Fitting) (p. 365)
9.9.2. Fitting to a Quadratic (p. 366)
9.9.3. Fitting to a Quadratic in Two Variables (p. 367)
9.9.4. Coping with Approximate Data in the Industrial Espionage Problem (p. 367)
9.9.5. Coping with Approximate Data in the Sensor Node Problem (p. 368)
9.9.6. Using the Method of Least Squares in the Machine-Learning Problem (p. 369)
9.10. Review Questions (p. 370)
9.11. Problems (p. 371)
10. Special Bases (p. 379)
10.1. Closest k-Sparse Vector (p. 379)
10.2. Closest Vector Whose Representation with Respect to a Given Basis Is k-Sparse (p. 380)
10.2.1. Finding the Coordinate Representation in Terms of an Orthonormal Basis (p. 380)
10.2.2. Multiplication by a Column-Orthogonal Matrix Preserves Norm (p. 381)
10.3. Wavelets (p. 382)
10.3.1. One-Dimensional "Images" of Different Resolutions (p. 382)
10.3.2. Decomposing V_n as a Direct Sum (p. 383)
10.3.3. The Wavelet Bases (p. 384)
10.3.4. The Basis for V_1 (p. 385)
10.3.5. General n (p. 385)
10.3.6. The First Stage of Wavelet Transformation (p. 386)
10.3.7. The Subsequent Levels of Wavelet Decomposition (p. 387)
10.3.8. Normalizing (p. 389)
10.3.9. The Backward Transform (p. 389)
10.3.10. Implementation (p. 389)
10.4. Polynomial Evaluation and Interpolation (p. 390)
10.5. Fourier Transform (p. 391)
10.6. Discrete Fourier Transform (p. 393)
10.6.1. The Laws of Exponentiation (p. 393)
10.6.2. The n Stopwatches (p. 394)
10.6.3. Discrete Fourier Space: Sampling the Basis Functions (p. 394)
10.6.4. The Inverse of the Fourier Matrix (p. 395)
10.6.5. The Fast Fourier Algorithm (p. 397)
10.7. The Inner Product for the Field of Complex Numbers (p. 399)
10.8. Circulant Matrices (p. 401)
10.8.1. Multiplying a Circulant Matrix by a Column of the Fourier Matrix (p. 402)
10.8.2. Circulant Matrices and Change of Basis (p. 403)
10.9. Lab: Using Wavelets for Compression (p. 403)
10.9.1. Unnormalized Forward Transform (p. 404)
10.9.2. Normalization in the Forward Transform (p. 405)
10.9.3. Compression by Suppression (p. 406)
10.9.4. Unnormalizing (p. 407)
10.9.5. Unnormalized Backward Transform (p. 407)
10.9.6. Backward Transform (p. 407)
10.9.7. Auxiliary Procedure (p. 407)
10.9.8. Two-Dimensional Wavelet Transform (p. 408)
10.9.9. Forward Two-Dimensional Transform (p. 409)
10.9.10. More Auxiliary Procedures (p. 410)
10.9.11. Two-Dimensional Backward Transform (p. 411)
10.9.12. Experimenting with Compression of Images (p. 412)
10.10. Review Questions (p. 412)
10.11. Problems (p. 412)
11. The Singular Value Decomposition (p. 415)
11.1. Approximation of a Matrix by a Low-Rank Matrix (p. 415)
11.1.1. The Benefits of Low-Rank Matrices (p. 415)
11.1.2. Matrix Norm (p. 416)
11.2. The Trolley-Line-Location Problem (p. 416)
11.2.1. Solution to the Trolley-Line-Location Problem (p. 417)
11.2.2. Rank-One Approximation to a Matrix (p. 420)
11.2.3. The Best Rank-One Approximation (p. 420)
11.2.4. An Expression for the Best Rank-One Approximation (p. 421)
11.2.5. The Closest One-Dimensional Affine Space (p. 422)
11.3. Closest Dimension-k Vector Space (p. 423)
11.3.1. A Gedanken Algorithm to Find the Singular Values and Vectors (p. 423)
11.3.2. Properties of the Singular Values and Right Singular Vectors (p. 424)
11.3.3. The Singular Value Decomposition (p. 425)
11.3.4. Using Right Singular Vectors to Find the Closest k-Dimensional Space (p. 427)
11.3.5. Best Rank-k Approximation to A (p. 429)
11.3.6. Matrix Form for Best Rank-k Approximation (p. 430)
11.3.7. Number of Nonzero Singular Values Is Rank A (p. 430)
11.3.8. Numerical Rank (p. 431)
11.3.9. Closest k-Dimensional Affine Space (p. 431)
11.3.10. Proof That U Is Column-Orthogonal (p. 432)
11.4. Using the Singular Value Decomposition (p. 433)
11.4.1. Using SVD to Do Least Squares (p. 434)
11.5. PCA (p. 434)
11.6. Lab: Eigenfaces (p. 435)
11.7. Review Questions (p. 437)
11.8. Problems (p. 437)
12. The Eigenvector (p. 441)
12.1. Modeling Discrete Dynamic Processes (p. 441)
12.1.1. Two Interest-Bearing Accounts (p. 441)
12.1.2. Fibonacci Numbers (p. 442)
12.2. Diagonalization of the Fibonacci Matrix (p. 444)
12.3. Eigenvalues and Eigenvectors (p. 445)
12.3.1. Similarity and Diagonalizability (p. 446)
12.4. Coordinate Representation in Terms of Eigenvectors (p. 448)
12.5. The Internet Worm (p. 449)
12.6. Existence of Eigenvalues (p. 450)
12.6.1. Positive-Definite Matrices (p. 450)
12.6.2. Matrices with Distinct Eigenvalues (p. 451)
12.6.3. Symmetric Matrices (p. 452)
12.6.4. Upper-Triangular Matrices (p. 452)
12.6.5. General Square Matrices (p. 453)
12.7. Power Method (p. 454)
12.8. Markov Chains (p. 454)
12.8.1. Modeling Population Movement (p. 454)
12.8.2. Modeling Randy (p. 455)
12.8.3. Markov Chain Definitions (p. 456)
12.8.4. Modeling Spatial Locality in Memory Fetches (p. 456)
12.8.5. Modeling Documents: Hamlet in Wonderland (p. 457)
12.8.6. Modeling Lots of Other Stuff (p. 458)
12.8.7. Stationary Distributions of Markov Chains (p. 458)
12.8.8. Sufficient Condition for Existence of a Stationary Distribution (p. 458)
12.9. Modeling a Web Surfer: PageRank (p. 459)
12.10. *The Determinant (p. 459)
12.10.1. Areas of Parallelograms (p. 459)
12.10.2. Volumes of Parallelepipeds (p. 461)
12.10.3. Expressing the Area of a Polygon in Terms of Areas of Parallelograms (p. 462)
12.10.4. The Determinant (p. 463)
12.10.5. *Characterizing Eigenvalues via the Determinant Function (p. 465)
12.11. *Proofs of Some Eigentheorems (p. 466)
12.11.1. Existence of Eigenvalues (p. 466)
12.11.2. Diagonalization of Symmetric Matrices (p. 467)
12.11.3. Triangularization (p. 469)
12.12. Lab: PageRank (p. 471)
12.12.1. Concepts (p. 471)
12.12.2. Working with a Big Dataset (p. 473)
12.12.3. Implementing PageRank Using the Power Method (p. 473)
12.12.4. The Dataset (p. 475)
12.12.5. Handling Queries (p. 475)
12.12.6. Biasing the PageRank (p. 476)
12.12.7. Optional: Handling Multiword Queries (p. 476)
12.13. Review Questions (p. 476)
12.14. Problems (p. 477)
13. The Linear Program (p. 481)
13.1. The Diet Problem (p. 481)
13.2. Formulating the Diet Problem as a Linear Program (p. 481)
13.3. The Origins of Linear Programming (p. 482)
13.3.1. Terminology (p. 483)
13.3.2. Linear Programming in Different Forms (p. 484)
13.3.3. Integer Linear Programming (p. 484)
13.4. Geometry of Linear Programming: Polyhedra and Vertices (p. 484)
13.5. There Is an Optimal Solution That Is a Vertex of the Polyhedron (p. 487)
13.6. An Enumerative Algorithm for Linear Programming (p. 488)
13.7. Introduction to Linear-Programming Duality (p. 488)
13.8. The Simplex Algorithm (p. 490)
13.8.1. Termination (p. 490)
13.8.2. Representing the Current Solution (p. 491)
13.8.3. A Pivot Step (p. 491)
13.8.4. Simple Example (p. 493)
13.9. Finding a Vertex (p. 495)
13.10. Game Theory (p. 497)
13.11. Formulation as a Linear Program (p. 499)
13.12. Nonzero-Sum Games (p. 500)
13.13. Lab: Learning through Linear Programming (p. 500)
13.13.1. Reading in the Training Data (p. 501)
13.13.2. Setting up the Linear Program (p. 501)
13.13.3. Main Constraints (p. 502)
13.13.4. Nonnegativity Constraints (p. 503)
13.13.5. The Matrix A (p. 503)
13.13.6. The Right-Hand Side Vector b (p. 503)
13.13.7. The Objective Function Vector c (p. 503)
13.13.8. Putting It Together (p. 503)
13.13.9. Finding a Vertex (p. 503)
13.13.10. Solving the Linear Program (p. 504)
13.13.11. Using the Result (p. 504)
13.14. Compressed Sensing (p. 504)
13.14.1. More Quickly Acquiring an MRI Image (p. 504)
13.14.2. Computation to the Rescue (p. 505)
13.14.3. Onwards (p. 506)
13.15. Review Questions (p. 506)
13.16. Problems (p. 506)

Edition Notes

Source title: Coding the Matrix: Linear Algebra through Computer Science Applications

The Physical Object

Format: paperback
Number of pages: 528

Edition Identifiers

Open Library: OL30400578M
Internet Archive: codingmatrixline0000phil
ISBN 10: 061585673X
ISBN 13: 9780615856735

Work Identifiers

Work ID: OL22321354W

