12 Pre-Class Assignment: Matrix Spaces
Let $A$ be an $n \times n$ matrix. The following statements are equivalent.
The column vectors of $A$ form a basis for $R^n$.
$A$ is row equivalent to $I_n$ (i.e., its reduced row echelon form is $I_n$).
The system of equations $Ax = b$ has a unique solution.
Consider the following example. We claim that the following set of vectors forms a basis for $R^3$:
Remember, for these vectors to be a basis they need to obey the following two properties:
They must be linearly independent.
They must span $R^3$.
Using the above statements we can show this is true in multiple ways.
The column vectors of $A$ form a basis for $R^n$
✅ DO THIS: Define a numpy matrix A consisting of the vectors in $B$ as columns:
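A minimal sketch of one way to do this. The assignment's actual vectors are not reproduced above, so the vectors below are hypothetical stand-ins; substitute the real set $B$:

```python
import numpy as np

# Hypothetical basis vectors (stand-ins for the assignment's actual set B).
b1, b2, b3 = [1, 2, 3], [4, 5, 6], [7, 8, 10]

# Stack the vectors as *columns* of A (hence the transpose).
A = np.array([b1, b2, b3]).T
print(A)
```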
$|A| \ne 0$
✅ DO THIS: The first of the above properties tells us that if the vectors in $B$ are truly a basis of $R^3$ then $|A| \ne 0$. Calculate the determinant of $A$ and store the value in det .
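A sketch using the same hypothetical stand-in vectors as above (substitute the assignment's actual matrix):

```python
import numpy as np

# Hypothetical stand-in for the assignment's matrix A (vectors as columns).
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]]).T

det = np.linalg.det(A)
print(det)  # non-zero, so the columns are linearly independent
```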
$A$ is invertible.
✅ DO THIS: Since the determinant is non-zero, we know that $A$ has an inverse. Use python to calculate that inverse and store it in a matrix called A_inv .
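A sketch, again with the hypothetical stand-in matrix:

```python
import numpy as np

# Hypothetical stand-in for the assignment's matrix A (vectors as columns).
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]]).T

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(3)))  # True: A_inv really inverts A
```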
$A$ is row equivalent to $I_n$ (i.e., its reduced row echelon form is $I_n$)
✅ DO THIS: According to the property above, the reduced row echelon form of an invertible matrix is the identity matrix. Verify this using the python sympy library, and store the reduced row echelon matrix in a variable called rref to check it.
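A sketch using sympy, with the same hypothetical stand-in matrix (columns $(1,2,3)$, $(4,5,6)$, $(7,8,10)$):

```python
import sympy as sym

# Hypothetical stand-in for the assignment's matrix A (vectors as columns).
A = sym.Matrix([[1, 4, 7], [2, 5, 8], [3, 6, 10]])

# Matrix.rref() returns (reduced matrix, tuple of pivot column indices).
rref, pivots = A.rref()
print(rref)  # the 3x3 identity, since A is invertible
```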
The system of equations $Ax = b$ has a unique solution.
Let us assume some arbitrary vector $b \in R^n$. According to the above properties, the system $Ax = b$ should have exactly one solution.
✅ DO THIS: Find the solution to $Ax=b$ for the vector $b=(-10,200,3)$. Store the solution in a variable called x .
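A sketch with the stated $b$ and the hypothetical stand-in matrix:

```python
import numpy as np

# Hypothetical stand-in for the assignment's matrix A (vectors as columns).
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]]).T
b = np.array([-10, 200, 3])

# Since A is invertible, solve() returns the unique solution of A x = b.
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```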
$\operatorname{rank}(A) = n$
The final property says that the rank should equal the dimension of $R^n$. In our example $n=3$. ✅ DO THIS: Find a python function to calculate the rank of $A$. Store the value in a variable named rank to check your answer.
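A sketch, once more with the hypothetical stand-in matrix:

```python
import numpy as np

# Hypothetical stand-in for the assignment's matrix A (vectors as columns).
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]]).T

rank = np.linalg.matrix_rank(A)
print(rank)  # 3, the dimension of R^3
```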
✅ QUESTION (assignment-specific): Without doing any calculations (i.e., only using the above properties), how many solutions are there to $Ax=0$? What is(are) the solution(s)?
Put your answer to the above question here.
Lecture topics and assigned reading (approximate number of lectures on this topic):
- Preliminaries: What is a linear dynamical system and how are they represented mathematically? Brief review of things you should already know: linear vector spaces, kernel, image, solution to linear algebraic equation, eigenvalues, eigenvectors, linearization, functions of a matrix, Cayley-Hamilton. (2)
- State equation representation, solutions to linear state equations for linear time-invariant (LTI) systems, transition matrix. Mostly focus on LTI, but mention fundamental results for linear time-varying systems (LTV). (4)
- Rugh Notes: Ch 1-3
- Rugh Book: Selected topics from Ch 2, 3, 5. (Skip 4)
- Bay: Ch 6
- Rugh Notes: Ch 4
- Rugh Book: Selected topics from Ch 6, 7, 8.
- Bay: Ch 7
- Rugh Notes: Ch 5
- Rugh Book: Ch 9
- Bay: Ch 8
- Rugh Notes: Ch 6
- Rugh Notes: Ch 8
- Rugh Book: Ch 14
- Bay: Ch 10
- Rugh Notes: Ch 10
- Rugh Book: Ch 15
- Bay: Ch 10
- Rugh Book: Selected topics from Ch 18-19
The digital memcomputing approach
In recent years, a different physics-inspired computational paradigm has been introduced, known as digital memcomputing 10,12 . Digital memcomputing machines (DMMs) are non-linear dynamical systems specifically designed to solve constraint satisfaction problems, e.g., 3-SAT, with the assistance of memory 10 (Fig. 1). The only equilibrium point(s) of a DMM is the solution(s) of the original problem. However, unlike previous work, DMMs are designed so that they have no other equilibrium points (see Sect. VI.D of the supplementary material (SM)). Additionally, the dynamics will never enter a periodic orbit or a state of chaos 13 (see Sect. IX of SM).
The ability of the continuous-time dynamics to perform the solution search without resorting to chaotic dynamics results in efficient simulations (an algorithmic implementation) of DMMs using computationally inexpensive integration schemes and modern computers. In addition, it was shown that DMMs find the solution of a given problem by employing topological objects, known as instantons, that connect critical points of increasing stability in the phase space 14,15 (see Sect. XI of SM). Simulations found that DMMs self-tune into a critical (collective) state which persists for the whole transient dynamics until a solution is found 16 . It is this critical branching behavior that allows DMMs to explore collective updates of variables during the solution search, without the need to check an exponentially-growing number of states. This is in contrast to local-search algorithms, which are characterized by a “small” (not collective) number of variable updates at each step of the computation 17 .
Here, we introduce a physical DMM to find solutions of the 3-SAT problem. (So as to facilitate the reading of our paper, we have contained the mathematical description of our physical DMM in the section below.) We then perform numerical simulations of the ODEs (discretized time) of the DMM to solve random 3-SAT instances with planted solutions. These instances are generated with a clause distribution control (CDC) procedure, known to require exponentially growing time to solve in the typical case for both complete and incomplete algorithms 18 . The CDC instances have found use as benchmarks in recent years of SAT competitions (satcompetition.org) 19,20,21 . The simulations have been performed using a forward-Euler integration scheme 22 with an adaptive time step, implemented in MATLAB R2019b with each solution attempt run on a single logic core of an AMD EPYC 7401 server (see also Sect. II of SM).
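The forward-Euler scheme with an adaptive time step mentioned above can be sketched generically. The loop below is an illustrative stand-in on a toy ODE, not the paper's actual DMM equations or step-control rule:

```python
import numpy as np

def forward_euler_adaptive(f, x0, t_end, dt0=0.1, dt_min=1e-6, dt_max=1.0, tol=1e-2):
    """Forward-Euler with a crude adaptive step: halve dt when the state
    would change too much in one step, and grow it when the dynamics are slow.
    (Illustrative sketch; the paper's actual scheme may differ.)"""
    x, t, dt = np.asarray(x0, float), 0.0, dt0
    while t < t_end:
        step = dt * f(t, x)
        if np.max(np.abs(step)) > tol and dt > dt_min:
            dt = max(dt / 2, dt_min)   # step too aggressive: reject and halve dt
            continue
        x, t = x + step, t + dt        # accept the step
        dt = min(dt * 1.2, dt_max)     # dynamics mild: grow dt again
    return x

# Toy ODE with a single stable equilibrium at x = 1 (plays the role of the
# DMM's solution-encoding equilibrium in this sketch).
x_final = forward_euler_adaptive(lambda t, x: 1.0 - x, x0=[0.0], t_end=20.0)
```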
We compare our results with those obtained from two well-known algorithms: WalkSAT, a stochastic local-search procedure 23 , and survey-inspired decimation, a message-passing procedure utilizing the cavity method from statistical physics 5 . (In Sect. II of the SM we also compare with the winner of a recent SAT competition and AnalogSAT 24 .) Comparison is achieved via the scalability of some indicator vs. the problem size. As expected, both algorithms show an exponential scaling for the CDC instances (Fig. 2). Our simulations instead show a power-law scalability of the number of integration steps with the problem size for typical cases, where the typical case is inferred from the median number of integration steps.
Finally, we show that the dynamics is capable of finding satisfying variable assignments for 3-SAT in polynomially-bounded (linear or sub-linear) continuous time, without the need of an exponentially increasing energy cost, as demonstrated via certain dissipative and topological properties of the system (see Secs. X-XI of SM).
While the reported numerical and analytical results do not resolve the famous P vs. NP debate (which, incidentally, is formulated for Turing machines, which compute in discrete, not continuous, time), they show the tremendous advantage of physics-based approaches to computation over traditional algorithmic approaches.
An idea is that for a given input $x(t)$, we get an output (a solution) $y(t)$ which depends on what $x(t)$ is. To show the system is linear, we must show that if the solution corresponding to the input $x_1(t)$ is $y_1(t)$, and if the solution corresponding to the input $x_2(t)$ is $y_2(t)$, then for any pair of constants $k_1, k_2$ it must hold that the solution corresponding to the input $k_1 x_1(t) + k_2 x_2(t)$ is $k_1 y_1(t) + k_2 y_2(t)$.
In response to your question about how he got $y(t)$: he didn't really derive $y(t) = k_1 y_1(t) + k_2 y_2(t)$. Instead he is saying: if we let $y(t) = k_1 y_1(t) + k_2 y_2(t)$ and we let $x(t) = k_1 x_1(t) + k_2 x_2(t)$ (where we are assuming that $y_1(t)$ corresponds to the input $x_1(t)$, that $y_2(t)$ corresponds to the input $x_2(t)$, and that $k_1, k_2$ are constants),
then we see that the equation (1) is satisfied.
In other words, the system has been proven to be linear by the time we finish writing out the equation under the line
"Multiply the first equation by $k_1$, the second by $k_2$, and adding them yields".
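As a quick numerical sanity check of this superposition argument (a sketch assuming a simple first-order linear system $\dot y = -ay + x$ with zero initial state, which is not the specific system from the question):

```python
import numpy as np

def simulate(x, a=0.5, dt=0.01):
    """Forward-Euler simulation of the linear system y' = -a*y + x(t), y(0) = 0."""
    y = np.zeros(len(x))
    for n in range(len(x) - 1):
        y[n + 1] = y[n] + dt * (-a * y[n] + x[n])
    return y

t = np.arange(0, 5, 0.01)
x1, x2 = np.sin(t), np.exp(-t)
k1, k2 = 2.0, -3.0

y1, y2 = simulate(x1), simulate(x2)
y12 = simulate(k1 * x1 + k2 * x2)

# Superposition: the response to k1*x1 + k2*x2 equals k1*y1 + k2*y2.
print(np.allclose(y12, k1 * y1 + k2 * y2))  # True
```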
SEE EE263 - Introduction to Linear Dynamical Systems (Fall, 2007)
Introduction to applied linear algebra and linear dynamical systems, with applications to circuits, signal processing, communications, and control systems.
- Least-squares approximations of over-determined equations and least-norm solutions of underdetermined equations.
- Symmetric matrices, matrix norm and singular value decomposition.
- Eigenvalues, left and right eigenvectors, and dynamical interpretation.
- Matrix exponential, stability, and asymptotic behavior.
- Multi-input multi-output systems; impulse and step matrices; convolution and transfer matrix descriptions.
- Control, reachability, state transfer, and least-norm inputs.
- Observability and least-squares state estimation.
- Exposure to linear algebra and matrices (as in Math. 103).
- You should have seen the following topics: matrices and vectors, (introductory) linear algebra, differential equations, Laplace transform, transfer functions.
- Exposure to topics such as control systems, circuits, signals and systems, or dynamics is not required, but can increase your appreciation.
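The first topic above — least-squares for over-determined systems and least-norm solutions for underdetermined ones — can be illustrated in a few lines. A numpy sketch (the data here is random and purely illustrative, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-determined system (more equations than unknowns): least-squares solution.
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizes ||A x - b||_2

# Under-determined system (fewer equations than unknowns): least-norm solution.
C = rng.standard_normal((3, 10))
d = rng.standard_normal(3)
x_ln = C.T @ np.linalg.solve(C @ C.T, d)       # minimizes ||x||_2 subject to C x = d

print(np.allclose(C @ x_ln, d))  # True: the constraint is met exactly
```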
Course features at Stanford Engineering Everywhere page:
Engaging in academic work at the university is challenging. This course is aimed at equipping incoming students to make the transition from pre-university level to university level. It assists them in engaging and succeeding in complex academic tasks in speaking, listening, reading and writing. It also provides an introduction to university studies by equipping students with skills that will help them to engage in academic discourse with confidence and fluency.
This course seeks to prepare students for advanced courses in Mathematics. Students will have a better appreciation of how to perform basic operations on sets, real numbers and matrices and to prove and apply trigonometric identities. The specific topics that will be covered are: commutative, associative and distributive properties of union and intersection of sets. DeMorgan’s laws. Cartesian product of sets. The real number system: natural numbers, integers, rational and irrational numbers. Properties of addition and multiplication on the set of real numbers. Relation of order in the system of real numbers. Linear, quadratic and other polynomial functions, rational algebraic functions, absolute value functions, functions containing radicals and their graphical representation. Inequalities in one and two variables. Application to linear programming. Indices and logarithms, their laws and applications. Binomial theorem for integral and rational indices and their application. Linear and exponential series. Circular functions of angles of any magnitude and their graphs. Trigonometric formulae including multiple angles, half angles and identities. Solutions of trigonometric equations.
This is a follow-up course on the first semester one. It takes students through writing correct sentences, devoid of ambiguity, through the paragraph and its appropriate development to the fully-developed essay. The course also emphasizes the importance and the processes of editing written work.
This course aims to provide a first approach to the subject of algebra, which is one of the basic pillars of modern mathematics. The focus of the course will be the study of certain structures called groups, rings, fields and some related structures. Abstract algebra gives students good mathematical maturity and enables them to build mathematical thinking and skills. The topics to be covered are injective, surjective and bijective mappings. Product of mappings, inverse of a mapping. Binary operations on a set. Properties of binary operations (commutative, associative and distributive properties). Identity element of a set and inverse of an element with respect to a binary operation. Relations on a set. Equivalence relations, equivalence classes. Partition of a set induced by an equivalence relation on the set. Partial and total order relations on a set. Well-ordered sets. Natural numbers: mathematical induction, sums of the powers of natural numbers and allied series. Integers: divisors, primes, greatest common divisor, relatively prime integers, the division algorithm, congruences, the algebra of residue classes. Rational and irrational numbers. Least upper bound and greatest lower bound of a bounded set of real numbers. Algebraic structures with one or two binary operations. Definition, examples and simple properties of groups, rings, integral domains and fields.
This course is designed to develop advanced topics of differential and integral calculus. Emphasis is placed on the applications of definite integrals, techniques of integration, indeterminate forms, improper integrals and functions of several variables. The topics to be covered are differentiation of inverse, circular, exponential, logarithmic, hyperbolic and inverse hyperbolic functions. Leibnitz’s theorem. Application of differentiation to stationary points, asymptotes, graph sketching, differentials, L’Hospital rule. Integration by substitution, by parts and by use of partial fractions. Reduction formulae. Applications of integration to plane areas, volumes and surfaces of revolution, arc length and moments of inertia. Functions of several variables, partial derivatives.
The construction of mathematical models to address real-world problems has been one of the most important aspects of each of the branches of science. It is often the case that these mathematical models are formulated in terms of equations involving functions as well as their derivatives. Such equations are called differential equations. If only one independent variable is involved, often time, the equations are called ordinary differential equations. The course will demonstrate the usefulness of ordinary differential equations for modeling physical and other phenomena. Complementary mathematical approaches for their solution will be presented. The topics to be covered are vector algebra with applications to three-dimensional geometry. First order differential equations: applications to integral curves and orthogonal trajectories. Ordinary linear differential equations with constant coefficients and equations reducible to this type. Simultaneous linear differential equations. Introduction to partial differential equations.
This course is designed to give an introduction to complex numbers and matrix algebra, which are very important in science and technology, as well as mathematics. The topics to be covered are complex numbers and the algebra of complex numbers. Argand diagram, modulus-argument form of a complex number. Trigonometric and exponential forms of a complex number. De Moivre’s theorem, roots of unity, roots of a general complex number, nth roots of a complex number. Complex conjugate roots of a polynomial equation with real coefficients. Geometrical applications, loci in the complex plane. Transformation from the z-plane to the w-plane. Matrices, the algebra of matrices and determinants. Operations on matrices; the inverse of a matrix and its applications in solving systems of equations. Gauss-Jordan method of solving systems of equations. Determinants and their use in solving systems of linear equations. Linear transformations and matrix representation of linear transformations.
This course covers the fundamentals of mathematical analysis: convergence of sequences and series, continuity, differentiability, Riemann integral, sequences and series of functions, uniformity, and the interchange of limit operations. It shows the utility of abstract concepts and teaches an understanding and construction of proofs. The topics to be covered include
limit of a sequence of real numbers, standard theorems on limits, bounded and monotonic sequences of real numbers, infinite series of real numbers, tests for convergence, power series, limit, continuity and differentiability of functions of one variable, Rolle’s theorem, mean value theorems, Taylor’s theorem, definition and simple properties of the Riemann integral.
This course introduces more algebraic methods needed to understand real-world questions. It develops fundamental algebraic tools involving matrices and vectors to study linear systems of equations and Gaussian elimination, linear transformations, orthogonal projection, least squares, determinants, eigenvalues and eigenvectors and their applications. The topics to be covered are axioms for vector spaces over the field of real and complex numbers. Subspaces, linear independence, bases and dimension. Row space, Column space, Null space, Rank and Nullity. Inner Product Spaces. Inner products, Angle and Orthogonality in Inner Product Spaces, Orthogonal Bases, Gram-Schmidt orthogonalization process. Best Approximation. Eigenvalues and Eigenvectors. Diagonalization. Linear transformation, Kernel and range of a linear transformation. Matrices of Linear Transformations.
Limit and continuity of functions of several variables; partial derivatives, differentials, composite, homogeneous and implicit functions; Jacobians; orthogonal curvilinear coordinates; multiple integrals, transformation of multiple integrals; Mean value and Taylor’s theorems for several variables; maxima and minima with applications.
This course covers vector valued functions. It introduces students to the concept of change and motion and the manner in which quantities approach other quantities. Topics include limits, continuity, derivatives of vector functions, gradient, divergence, curl, formulae involving gradient, divergence, laplacian, orthogonal curvilinear coordinates, line integrals, Green’s theorem in the plane, surface integrals. Other topics are the divergence theorem, improper integrals, Gamma functions, Beta functions, the Riemann Stieltjes Integral, pointwise and uniform convergence of sequence and series, integration and differentiation term by term.
Limits, continuity and derivatives of vector functions; gradient, divergence and curl; formulae involving gradient, divergence, curl and Laplacian; orthogonal curvilinear coordinates; line integrals; Green’s theorem in the plane; surface integrals; the divergence theorem; improper integrals; Gamma and Beta functions; the Riemann-Stieltjes integral; pointwise and uniform convergence of sequences and series; integration and differentiation term by term.
This course introduces more algebraic methods needed to understand real world questions. It develops fundamental algebraic tools involving direct sum of subspaces, complement of subspace in a vector space and dimension of the sum of two subspaces. Other topics to be covered are one-to one, onto and bijective linear transformations, isomorphism of vector spaces, matrix of a linear transformation relative to a basis, orthogonal transformations, rotations and reflections, real quadratic forms, and positive definite forms.
This course focuses on traditional algebra topics that have found greatest application in science and engineering as well as in mathematics. The topics to be covered are: axioms for groups with examples, subgroups, simple properties of groups, cyclic groups, homomorphism and isomorphism, axioms for rings and fields, with examples, simple properties of rings, cosets and index of a subgroup, Lagrange’s theorem, normal subgroups and quotient groups, the residue class ring, homomorphism and isomorphism of rings, subrings.
The construction of mathematical models to address real-world problems has been one of the most important aspects of each of the branches of science. It is often the case that these mathematical models are formulated in terms of equations involving functions as well as their derivatives. Such equations are called differential equations. If only one independent variable is involved, often time, the equations are called ordinary differential equations. The course will demonstrate the usefulness of ordinary differential equations for modelling physical and other phenomena. Complementary mathematical approaches for their solution will be presented. Topics covered include linear differential equations of order n with coefficients continuous on some interval J, existence-uniqueness theorem for linear equations of order n, determination of a particular solution of non-homogeneous equations by the method of variation of parameters, Wronskian matrix of n independent solutions of a homogeneous linear equation, ordinary and singular points for linear equations of the second order, solution near a singular point, method of Frobenius, singularities at infinity, simple examples of boundary value problems for ordinary linear equations of the second order, Green’s functions, eigenvalues, eigenfunctions, Sturm-Liouville systems, properties of the gamma and beta functions, definition of the gamma function for negative values of the argument, Legendre, Bessel, Chebyshev and Hypergeometric functions and their orthogonality properties.
This course introduces students to the theory of boundary value and initial value problems for partial differential equations with emphasis on linear equations. Topics covered include first and second order partial differential equations, classification of second order linear partial differential equations, derivation of standard equations, methods of solution of initial and boundary value problems, separation of variables, Fourier series and their applications to boundary value problems in partial differential equations of engineering and physics, and integral transform methods (Fourier and Laplace transforms) and their application to boundary value problems.
SIAM Journal on Control
A minimal basis of a vector space V of n-tuples of rational functions is defined as a polynomial basis such that the sum of the degrees of the basis n-tuples is minimum. Conditions for a matrix G to represent a minimal basis are derived. By imposing additional conditions on G we arrive at a minimal basis for V that is unique. We show how minimal bases can be used to factor a transfer function matrix G in the form $G = ND^{-1}$, where N and D are polynomial matrices that display the controllability indices of G and its controller canonical realization. Transfer function matrices G solving equations of the form $PG = Q$ are also obtained by this method; applications to the problem of finding minimal order inverse systems are given. Previous applications to convolutional coding theory are noted. This range of applications suggests that minimal basis ideas will be useful throughout the theory of multivariable linear systems. A restatement of these ideas in the language of valuation theory is given in an Appendix.
Please Note: Students currently registered in a University of Illinois Graduate Degree program will be restricted from registering in 16-week Academic Year-term NetMath courses. Matriculating UIUC Grad students will be allowed to register in Summer Session II NetMath courses.
This page has information regarding the self-paced, rolling enrollment course. If you are a UIUC student interested in taking a course during the summer, you may be interested in a Summer Session II course.
Stephen P. Boyd is the Samsung Professor of Engineering, and Professor of Electrical Engineering in the Information Systems Laboratory at Stanford University. His current research focus is on convex optimization applications in control, signal processing, and circuit design.
Professor Boyd received an AB degree in Mathematics, summa cum laude, from Harvard University in 1980, and a PhD in EECS from U. C. Berkeley in 1985. In 1985 he joined the faculty of Stanford’s Electrical Engineering Department. He has held visiting Professor positions at Katholieke University (Leuven), McGill University (Montreal), Ecole Polytechnique Federale (Lausanne), Qinghua University (Beijing), Universite Paul Sabatier (Toulouse), Royal Institute of Technology (Stockholm), Kyoto University, and Harbin Institute of Technology. He holds an honorary doctorate from Royal Institute of Technology (KTH), Stockholm.
Professor Boyd is the author of many research articles and three books: Linear Controller Design: Limits of Performance (with Craig Barratt, 1991), Linear Matrix Inequalities in System and Control Theory (with L. El Ghaoui, E. Feron, and V. Balakrishnan, 1994), and Convex Optimization (with Lieven Vandenberghe, 2004).
Professor Boyd has received many awards and honors for his research in control systems engineering and optimization, including an ONR Young Investigator Award, a Presidential Young Investigator Award, and an IBM faculty development award. In 1992 he received the AACC Donald P. Eckman Award, which is given annually for the greatest contribution to the field of control engineering by someone under the age of 35. In 1993 he was elected Distinguished Lecturer of the IEEE Control Systems Society, and in 1999, he was elected Fellow of the IEEE, with citation: “For contributions to the design and analysis of control systems using convex optimization based CAD tools.” He has been invited to deliver more than 30 plenary and keynote lectures at major conferences in both control and optimization.
In addition to teaching large graduate courses on Linear Dynamical Systems, Nonlinear Feedback Systems, and Convex Optimization, Professor Boyd has regularly taught introductory undergraduate Electrical Engineering courses on Circuits, Signals and Systems, Digital Signal Processing, and Automatic Control. In 1994 he received the Perrin Award for Outstanding Undergraduate Teaching in the School of Engineering, and in 1991, an ASSU Graduate Teaching Award. In 2003, he received the AACC Ragazzini Education award, for contributions to control education, with citation: “For excellence in classroom teaching, textbook and monograph preparation, and undergraduate and graduate mentoring of students in the area of systems, control, and optimization.”