
2.2.1: Exercises 2.2


In Exercises \(\PageIndex{1}\) - \(\PageIndex{12}\), row and column vectors \(\vec{u}\) and \(\vec{v}\) are defined. Find the product \(\vec{u}\vec{v}\), where possible.
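These products can be checked numerically. Here is a minimal NumPy sketch (NumPy is an assumption, not part of the original exercises) for Exercise \(\PageIndex{1}\):

```python
import numpy as np

# A 1x2 row vector times a 2x1 column vector yields a 1x1 matrix.
u = np.array([[1, -4]])
v = np.array([[-2], [5]])
print(u @ v)  # -> [[-22]]
```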

Exercise \(\PageIndex{1}\)

\(\vec{u}=\left[\begin{array}{cc}{1}&{-4}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{-2}\\{5}\end{array}\right]\)

Answer

\(-22\)

Exercise \(\PageIndex{2}\)

\(\vec{u}=\left[\begin{array}{cc}{2}&{3}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{7}\\{-4}\end{array}\right]\)

Answer

\(2\)

Exercise \(\PageIndex{3}\)

\(\vec{u}=\left[\begin{array}{cc}{1}&{-1}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{3}\\{3}\end{array}\right]\)

Answer

\(0\)

Exercise \(\PageIndex{4}\)

\(\vec{u}=\left[\begin{array}{cc}{0.6}&{0.8}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{0.6}\\{0.8}\end{array}\right]\)

Answer

\(1\)

Exercise \(\PageIndex{5}\)

\(\vec{u}=\left[\begin{array}{ccc}{1}&{2}&{-1}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{2}\\{1}\\{-1}\end{array}\right]\)

Answer

\(5\)

Exercise \(\PageIndex{6}\)

\(\vec{u}=\left[\begin{array}{ccc}{3}&{2}&{-2}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{-1}\\{0}\\{9}\end{array}\right]\)

Answer

\(-21\)

Exercise \(\PageIndex{7}\)

\(\vec{u}=\left[\begin{array}{ccc}{8}&{-4}&{3}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{2}\\{4}\\{5}\end{array}\right]\)

Answer

\(15\)

Exercise \(\PageIndex{8}\)

\(\vec{u}=\left[\begin{array}{ccc}{-3}&{6}&{1}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{1}\\{-1}\\{1}\end{array}\right]\)

Answer

\(-8\)

Exercise \(\PageIndex{9}\)

\(\vec{u}=\left[\begin{array}{cccc}{1}&{2}&{3}&{4}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{1}\\{-1}\\{1}\\{-1}\end{array}\right]\)

Answer

\(-2\)

Exercise \(\PageIndex{10}\)

\(\vec{u}=\left[\begin{array}{cccc}{6}&{2}&{-1}&{2}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{3}\\{2}\\{9}\\{5}\end{array}\right]\)

Answer

\(23\)

Exercise \(\PageIndex{11}\)

\(\vec{u}=\left[\begin{array}{ccc}{1}&{2}&{3}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{3}\\{2}\end{array}\right]\)

Answer

Not possible.

Exercise \(\PageIndex{12}\)

\(\vec{u}=\left[\begin{array}{cc}{2}&{-5}\end{array}\right]\quad\vec{v}=\left[\begin{array}{c}{1}\\{1}\\{1}\end{array}\right]\)

Answer

Not possible.

In Exercises \(\PageIndex{13}\) - \(\PageIndex{27}\), matrices \(A\) and \(B\) are defined. (A dimension-checking sketch follows the list below.)

  1. Give the dimensions of \(A\) and \(B\). If the dimensions properly match, give the dimensions of \(AB\) and \(BA\).
  2. Find the products \(AB\) and \(BA\), if possible.
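As a quick numerical check of both parts, here is a hedged NumPy sketch (NumPy assumed; `product_dims` is a helper invented for illustration), shown with Exercise \(\PageIndex{13}\):

```python
import numpy as np

def product_dims(A, B):
    """Return the shape of AB if the inner dimensions match, else None."""
    (m, n), (p, q) = A.shape, B.shape
    return (m, q) if n == p else None

A = np.array([[1, 2], [-1, 4]])   # Exercise 13
B = np.array([[2, 5], [3, -1]])
print(product_dims(A, B), product_dims(B, A))  # -> (2, 2) (2, 2)
print(A @ B)   # -> [[ 8  3], [10 -9]]
print(B @ A)   # -> [[-3 24], [ 4  2]]
```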

Exercise \(\PageIndex{13}\)

\(A=\left[\begin{array}{cc}{1}&{2}\\{-1}&{4}\end{array}\right]\) \(B=\left[\begin{array}{cc}{2}&{5}\\{3}&{-1}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{cc}{8}&{3}\\{10}&{-9}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{-3}&{24}\\{4}&{2}\end{array}\right]\)

Exercise \(\PageIndex{14}\)

\(A=\left[\begin{array}{cc}{3}&{7}\\{2}&{5}\end{array}\right]\) \(B=\left[\begin{array}{cc}{1}&{-1}\\{3}&{-3}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{cc}{24}&{-24}\\{17}&{-17}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{1}&{2}\\{3}&{6}\end{array}\right]\)

Exercise \(\PageIndex{15}\)

\(A=\left[\begin{array}{cc}{3}&{-1}\\{2}&{2}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{1}&{0}&{7}\\{4}&{2}&{9}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{-1}&{-2}&{12}\\{10}&{4}&{32}\end{array}\right]\)

\(BA\) is not possible.

Exercise \(\PageIndex{16}\)

\(A=\left[\begin{array}{cc}{0}&{1}\\{1}&{-1}\\{-2}&{-4}\end{array}\right]\) \(B=\left[\begin{array}{cc}{-2}&{0}\\{3}&{8}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{cc}{3}&{8}\\{-5}&{-8}\\{-8}&{-32}\end{array}\right]\)

\(BA\) is not possible.

Exercise \(\PageIndex{17}\)

\(A=\left[\begin{array}{ccc}{9}&{4}&{3}\\{9}&{-5}&{9}\end{array}\right]\) \(B=\left[\begin{array}{cc}{-2}&{5}\\{-2}&{-1}\end{array}\right]\)

Answer

\(AB\) is not possible.

\(BA=\left[\begin{array}{ccc}{27}&{-33}&{39}\\{-27}&{-3}&{-15}\end{array}\right]\)

Exercise \(\PageIndex{18}\)

\(A=\left[\begin{array}{cc}{-2}&{-1}\\{9}&{-5}\\{3}&{-1}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{-5}&{6}&{-4}\\{0}&{6}&{-3}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{10}&{-18}&{11}\\{-45}&{24}&{-21}\\{-15}&{12}&{-9}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{52}&{-21}\\{45}&{-27}\end{array}\right]\)

Exercise \(\PageIndex{19}\)

\(A=\left[\begin{array}{cc}{2}&{6}\\{6}&{2}\\{5}&{-1}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{-4}&{5}&{0}\\{-4}&{4}&{-4}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{-32}&{34}&{-24}\\{-32}&{38}&{-8}\\{-16}&{21}&{4}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{22}&{-14}\\{-4}&{-12}\end{array}\right]\)

Exercise \(\PageIndex{20}\)

\(A=\left[\begin{array}{cc}{-5}&{2}\\{-5}&{-2}\\{-5}&{-4}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{0}&{-5}&{6}\\{-5}&{-3}&{-1}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{-10}&{19}&{-32}\\{10}&{31}&{-28}\\{20}&{37}&{-26}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{-5}&{-14}\\{45}&{0}\end{array}\right]\)

Exercise \(\PageIndex{21}\)

\(A=\left[\begin{array}{cc}{8}&{-2}\\{4}&{5}\\{2}&{-5}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{-5}&{1}&{-5}\\{8}&{3}&{-2}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{-56}&{2}&{-36}\\{20}&{19}&{-30}\\{-50}&{-13}&{0}\end{array}\right]\)

\(BA=\left[\begin{array}{cc}{-46}&{40}\\{72}&{9}\end{array}\right]\)

Exercise \(\PageIndex{22}\)

\(A=\left[\begin{array}{cc}{1}&{4}\\{7}&{6}\end{array}\right]\) \(B=\left[\begin{array}{cccc}{1}&{-1}&{-5}&{5}\\{-2}&{1}&{3}&{-5}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{cccc}{-7}&{3}&{7}&{-15}\\{-5}&{-1}&{-17}&{5}\end{array}\right]\)

\(BA\) is not possible.

Exercise \(\PageIndex{23}\)

\(A=\left[\begin{array}{cc}{-1}&{5}\\{6}&{7}\end{array}\right]\) \(B=\left[\begin{array}{cccc}{5}&{-3}&{-4}&{-4}\\{-2}&{-5}&{-5}&{-1}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{cccc}{-15}&{-22}&{-21}&{-1}\\{16}&{-53}&{-59}&{-31}\end{array}\right]\)

\(BA\) is not possible.

Exercise \(\PageIndex{24}\)

\(A=\left[\begin{array}{ccc}{-1}&{2}&{1}\\{-1}&{2}&{-1}\\{0}&{0}&{-2}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{0}&{0}&{-2}\\{1}&{2}&{-1}\\{1}&{0}&{0}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{3}&{4}&{0}\\{1}&{4}&{0}\\{-2}&{0}&{0}\end{array}\right]\)

\(BA=\left[\begin{array}{ccc}{0}&{0}&{4}\\{-3}&{6}&{1}\\{-1}&{2}&{1}\end{array}\right]\)

Exercise \(\PageIndex{25}\)

\(A=\left[\begin{array}{ccc}{-1}&{1}&{1}\\{-1}&{-1}&{-2}\\{1}&{1}&{-2}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{-2}&{-2}&{-2}\\{0}&{-2}&{0}\\{-2}&{0}&{2}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{0}&{0}&{4}\\{6}&{4}&{-2}\\{2}&{-4}&{-6}\end{array}\right]\)

\(BA=\left[\begin{array}{ccc}{2}&{-2}&{6}\\{2}&{2}&{4}\\{4}&{0}&{-6}\end{array}\right]\)

Exercise \(\PageIndex{26}\)

\(A=\left[\begin{array}{ccc}{-4}&{3}&{3}\\{-5}&{-1}&{-5}\\{-5}&{0}&{-1}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{0}&{5}&{0}\\{-5}&{-4}&{3}\\{5}&{-4}&{3}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{0}&{-44}&{18}\\{-20}&{-1}&{-18}\\{-5}&{-21}&{-3}\end{array}\right]\)

\(BA=\left[\begin{array}{ccc}{-25}&{-5}&{-25}\\{25}&{-11}&{2}\\{-15}&{19}&{32}\end{array}\right]\)

Exercise \(\PageIndex{27}\)

\(A=\left[\begin{array}{ccc}{-4}&{-1}&{3}\\{2}&{-3}&{5}\\{1}&{5}&{3}\end{array}\right]\) \(B=\left[\begin{array}{ccc}{-2}&{4}&{3}\\{-1}&{1}&{-1}\\{4}&{0}&{2}\end{array}\right]\)

Answer

\(AB=\left[\begin{array}{ccc}{21}&{-17}&{-5}\\{19}&{5}&{19}\\{5}&{9}&{4}\end{array}\right]\)

\(BA=\left[\begin{array}{ccc}{19}&{5}&{23}\\{5}&{-7}&{-1}\\{-14}&{6}&{18}\end{array}\right]\)

In Exercises \(\PageIndex{28}\) - \(\PageIndex{33}\), a diagonal matrix \(D\) and a matrix \(A\) are given. Find the products \(DA\) and \(AD\), where possible.
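Since row versus column scaling is the whole point of these exercises, a small NumPy sketch (NumPy assumed, not part of the text) makes the pattern visible, using Exercise \(\PageIndex{28}\):

```python
import numpy as np

# DA scales the ROWS of A by D's diagonal entries; AD scales the COLUMNS.
D = np.diag([3, -1])               # Exercise 28
A = np.array([[2, 4], [6, 8]])
print(D @ A)  # -> [[ 6 12], [-6 -8]]  (rows scaled by 3 and -1)
print(A @ D)  # -> [[ 6 -4], [18 -8]]  (columns scaled by 3 and -1)
```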

Exercise \(\PageIndex{28}\)

\(D=\left[\begin{array}{cc}{3}&{0}\\{0}&{-1}\end{array}\right]\) \(A=\left[\begin{array}{cc}{2}&{4}\\{6}&{8}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{cc}{6}&{12}\\{-6}&{-8}\end{array}\right]\)

\(AD=\left[\begin{array}{cc}{6}&{-4}\\{18}&{-8}\end{array}\right]\)

Exercise \(\PageIndex{29}\)

\(D=\left[\begin{array}{cc}{4}&{0}\\{0}&{-3}\end{array}\right]\) \(A=\left[\begin{array}{cc}{1}&{2}\\{1}&{2}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{cc}{4}&{8}\\{-3}&{-6}\end{array}\right]\)

\(AD=\left[\begin{array}{cc}{4}&{-6}\\{4}&{-6}\end{array}\right]\)

Exercise \(\PageIndex{30}\)

\(D=\left[\begin{array}{ccc}{-1}&{0}&{0}\\{0}&{2}&{0}\\{0}&{0}&{3}\end{array}\right]\) \(A=\left[\begin{array}{ccc}{1}&{2}&{3}\\{4}&{5}&{6}\\{7}&{8}&{9}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{ccc}{-1}&{-2}&{-3}\\{8}&{10}&{12}\\{21}&{24}&{27}\end{array}\right]\)

\(AD=\left[\begin{array}{ccc}{-1}&{4}&{9}\\{-4}&{10}&{18}\\{-7}&{16}&{27}\end{array}\right]\)

Exercise \(\PageIndex{31}\)

\(D=\left[\begin{array}{ccc}{1}&{1}&{1}\\{2}&{2}&{2}\\{-3}&{-3}&{-3}\end{array}\right]\) \(A=\left[\begin{array}{ccc}{2}&{0}&{0}\\{0}&{-3}&{0}\\{0}&{0}&{5}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{ccc}{2}&{-3}&{5}\\{4}&{-6}&{10}\\{-6}&{9}&{-15}\end{array}\right]\)

\(AD=\left[\begin{array}{ccc}{2}&{2}&{2}\\{-6}&{-6}&{-6}\\{-15}&{-15}&{-15}\end{array}\right]\)

Exercise \(\PageIndex{32}\)

\(D=\left[\begin{array}{cc}{d_{1}}&{0}\\{0}&{d_{2}}\end{array}\right]\) \(A=\left[\begin{array}{cc}{a}&{b}\\{c}&{d}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{cc}{d_{1}a}&{d_{1}b}\\{d_{2}c}&{d_{2}d}\end{array}\right]\)

\(AD=\left[\begin{array}{cc}{d_{1}a}&{d_{2}b}\\{d_{1}c}&{d_{2}d}\end{array}\right]\)

Exercise \(\PageIndex{33}\)

\(D=\left[\begin{array}{ccc}{d_{1}}&{0}&{0}\\{0}&{d_{2}}&{0}\\{0}&{0}&{d_{3}}\end{array}\right]\) \(A=\left[\begin{array}{ccc}{a}&{b}&{c}\\{d}&{e}&{f}\\{g}&{h}&{i}\end{array}\right]\)

Answer

\(DA=\left[\begin{array}{ccc}{d_{1}a}&{d_{1}b}&{d_{1}c}\\{d_{2}d}&{d_{2}e}&{d_{2}f}\\{d_{3}g}&{d_{3}h}&{d_{3}i}\end{array}\right]\)

\(AD=\left[\begin{array}{ccc}{d_{1}a}&{d_{2}b}&{d_{3}c}\\{d_{1}d}&{d_{2}e}&{d_{3}f}\\{d_{1}g}&{d_{2}h}&{d_{3}i}\end{array}\right]\)

In Exercises \(\PageIndex{34}\) - \(\PageIndex{39}\), a matrix \(A\) and a vector \(\vec{x}\) are given. Find the product \(A\vec{x}\).
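A matrix-vector product is just a special case of matrix multiplication; here is a minimal NumPy sketch (NumPy assumed) for Exercise \(\PageIndex{34}\):

```python
import numpy as np

A = np.array([[2, 3], [1, -1]])    # Exercise 34
x = np.array([4, 9])
print(A @ x)  # -> [35 -5]
```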

Exercise \(\PageIndex{34}\)

\(A=\left[\begin{array}{cc}{2}&{3}\\{1}&{-1}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{4}\\{9}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{35}\\{-5}\end{array}\right]\)

Exercise \(\PageIndex{35}\)

\(A=\left[\begin{array}{cc}{-1}&{4}\\{7}&{3}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{2}\\{-1}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{-6}\\{11}\end{array}\right]\)

Exercise \(\PageIndex{36}\)

\(A=\left[\begin{array}{ccc}{2}&{0}&{3}\\{1}&{1}&{1}\\{3}&{-1}&{2}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{1}\\{4}\\{2}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{8}\\{7}\\{3}\end{array}\right]\)

Exercise \(\PageIndex{37}\)

\(A=\left[\begin{array}{ccc}{-2}&{0}&{3}\\{1}&{1}&{-2}\\{4}&{2}&{-1}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{4}\\{3}\\{1}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{-5}\\{5}\\{21}\end{array}\right]\)

Exercise \(\PageIndex{38}\)

\(A=\left[\begin{array}{cc}{2}&{-1}\\{4}&{3}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{x_{1}}\\{x_{2}}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{2x_{1}-x_{2}}\\{4x_{1}+3x_{2}}\end{array}\right]\)

Exercise \(\PageIndex{39}\)

\(A=\left[\begin{array}{ccc}{1}&{2}&{3}\\{1}&{0}&{2}\\{2}&{3}&{1}\end{array}\right]\), \(\vec{x}=\left[\begin{array}{c}{x_{1}}\\{x_{2}}\\{x_{3}}\end{array}\right]\)

Answer

\(A\vec{x}=\left[\begin{array}{c}{x_{1}+2x_{2}+3x_{3}}\\{x_{1}+2x_{3}}\\{2x_{1}+3x_{2}+x_{3}}\end{array}\right]\)

Exercise \(\PageIndex{40}\)

Let \(A=\left[\begin{array}{cc}{0}&{1}\\{1}&{0}\end{array}\right]\). Find \(A^{2}\) and \(A^{3}\).

Answer

\(A^{2}=\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right]\); \(A^{3}=\left[\begin{array}{cc}{0}&{1}\\{1}&{0}\end{array}\right]\)

Exercise \(\PageIndex{41}\)

Let \(A=\left[\begin{array}{cc}{2}&{0}\\{0}&{3}\end{array}\right]\). Find \(A^{2}\) and \(A^{3}\).

Answer

\(A^{2}=\left[\begin{array}{cc}{4}&{0}\\{0}&{9}\end{array}\right]\); \(A^{3}=\left[\begin{array}{cc}{8}&{0}\\{0}&{27}\end{array}\right]\)

Exercise \(\PageIndex{42}\)

Let \(A=\left[\begin{array}{ccc}{-1}&{0}&{0}\\{0}&{3}&{0}\\{0}&{0}&{5}\end{array}\right]\). Find \(A^{2}\) and \(A^{3}\).

Answer

\(A^{2}=\left[\begin{array}{ccc}{1}&{0}&{0}\\{0}&{9}&{0}\\{0}&{0}&{25}\end{array}\right]\); \(A^{3}=\left[\begin{array}{ccc}{-1}&{0}&{0}\\{0}&{27}&{0}\\{0}&{0}&{125}\end{array}\right]\)

Exercise \(\PageIndex{43}\)

Let \(A=\left[\begin{array}{ccc}{0}&{1}&{0}\\{0}&{0}&{1}\\{1}&{0}&{0}\end{array}\right]\). Find \(A^{2}\) and \(A^{3}\).

Answer

\(A^{2}=\left[\begin{array}{ccc}{0}&{0}&{1}\\{1}&{0}&{0}\\{0}&{1}&{0}\end{array}\right]\); \(A^{3}=\left[\begin{array}{ccc}{1}&{0}&{0}\\{0}&{1}&{0}\\{0}&{0}&{1}\end{array}\right]\)

Exercise \(\PageIndex{44}\)

Let \(A=\left[\begin{array}{ccc}{0}&{0}&{1}\\{0}&{0}&{0}\\{0}&{1}&{0}\end{array}\right]\). Find \(A^{2}\) and \(A^{3}\).

Answer

\(A^{2}=\left[\begin{array}{ccc}{0}&{1}&{0}\\{0}&{0}&{0}\\{0}&{0}&{0}\end{array}\right]\); \(A^{3}=\left[\begin{array}{ccc}{0}&{0}&{0}\\{0}&{0}&{0}\\{0}&{0}&{0}\end{array}\right]\)

Exercise \(\PageIndex{45}\)

In the text, we state that \((A+B)^{2}\neq A^{2}+2AB+B^{2}\). We investigate that claim here.

  1. Let \(A=\left[\begin{array}{cc}{5}&{3}\\{-3}&{-2}\end{array}\right]\) and let \(B=\left[\begin{array}{cc}{-5}&{-5}\\{-2}&{1}\end{array}\right]\). Compute \(A+B\).
  2. Find \((A+B)^{2}\) by using your answer from part 1.
  3. Compute \(A^{2}+2AB+B^{2}\).
  4. Are the results from parts 2 and 3 the same?
  5. Carefully expand the expression \((A+B)^{2}=(A+B)(A+B)\) and show why this is not equal to \(A^{2}+2AB+B^{2}\).
Answer
  1. \(\left[\begin{array}{cc}{0}&{-2}\\{-5}&{-1}\end{array}\right]\)
  2. \(\left[\begin{array}{cc}{10}&{2}\\{5}&{11}\end{array}\right]\)
  3. \(\left[\begin{array}{cc}{-11}&{-15}\\{37}&{32}\end{array}\right]\)
  4. No
  5. \((A+B)(A+B)=AA+AB+BA+BB=A^{2}+AB+BA+B^{2}\). Since matrix multiplication is not commutative, in general \(AB+BA\neq 2AB\), so the expansion does not simplify to \(A^{2}+2AB+B^{2}\). (A numerical check follows below.)
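Here is a hedged NumPy sketch (NumPy assumed) that reproduces parts 2-4 and exposes the culprit, \(AB\neq BA\):

```python
import numpy as np

A = np.array([[5, 3], [-3, -2]])
B = np.array([[-5, -5], [-2, 1]])

lhs = (A + B) @ (A + B)                   # part 2
rhs = A @ A + 2 * (A @ B) + B @ B         # part 3
print(lhs)                                # -> [[10  2], [ 5 11]]
print(rhs)                                # -> [[-11 -15], [ 37  32]]
print(np.array_equal(A @ B, B @ A))       # -> False, so AB + BA != 2AB
```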

Exercise 2.2.1 in Analysis by Terence Tao: Induction Proof of the Associative Law of Addition for Natural Numbers

Proposition 2.2.5 (and Exercise 2.2.1) of Analysis by Terence Tao reads as follows:

Show that for any natural numbers $a, b, c$, we have $(a+b)+c = a+(b+c)$ (the associative rule).

The proof should use induction.

Definition: $0 + n = n$ for any natural number $n$ ($0$ is also a natural number).

Addition definition: $(n++)+m = (n+m)++$, where $n++$ is the increment of $n$.

So here is my humble attempt (as I am not really good at math):

We fix $a$ and $b$ and induct on $c$.

(1) For the case $c = 0$, we want to show: $(a+b)+0 = a+(b+0)$.

The left hand side is $(a+b)+0 = a+b$ by Lemma 2.2.2, viewing $(a+b)$ as a single entity. The right hand side is $a+(b+0) = a+b$, also by Lemma 2.2.2, using the fact that $b+0 = b$.

(2) Use induction to show that this is true for case $n$.

For case $c = 1$: show $(a+b)+1 = a+(b+1)$.

The left hand side is $(a+b)+1 = (a+b)++$ by the definition of the natural number increment, if we view $(a+b)$ as an entity, so that $(a+b)+1$ is $1$ increment of it.

The right hand side is $a + (b+1) = a + (b++) = (a+b)++$ by Lemma 2.2.3.

For case $c = 2$: show $(a+b)+2 = a+(b+2)$.

The left hand side is $(a+b)+2 = ((a+b)++)++$ by the definition of the natural number increment, viewing $(a+b)$ as an entity, so that $(a+b)+2$ is $2$ increments of it.

The right hand side is $a + (b+2) = a + ((b+1)++) = (a+(b+1))++ = (a+(b++))++ = ((a+b)++)++$, by repeatedly applying Lemma 2.2.3.

For case $c = n$: show $(a+b)+n = a+(b+n)$.

The left hand side is $(a+b)+n = ((((a+b)++)++)\cdots)++$ by the definition of the natural number increment, viewing $(a+b)$ as an entity, so that $(a+b)+n$ is $n$ increments of it; there are $n$ signs of $++$.

The right hand side is $a + (b+n) = a + ((b+(n-1))++) = (a + (b+(n-1)))++ = (a + ((b+(n-2))++))++ = ((a + (b+(n-2)))++)++ = \cdots = ((((a+b)++)++)\cdots)++$, by repeatedly applying Lemma 2.2.3; again there are $n$ signs of $++$.

So we know the claim is also true for case $c = n$.

(3) Now we only need to show that the case $c = n+1$ is true to complete the induction.

We show $(a+b)+(n+1) = a+(b+(n+1))$ by:

Left hand side: $(a+b)+(n+1) = (a+b)+(n++) = ((a+b)+n)++$, using Lemma 2.2.3.

Right hand side: $a+(b+(n+1)) = a+(b+(n++)) = a+((b+n)++) = (a+(b+n))++$, again applying Lemma 2.2.3.

By the induction hypothesis, $(a+b)+n = a+(b+n)$, so $(a+b)+(n+1) = ((a+b)+n)++ = (a+(b+n))++ = a+(b+(n+1))$.

Thus the proof is complete.

**Could somebody help me check if the above is a rigorous proof of the associative rule for natural numbers?**
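For comparison, here is a compact LaTeX sketch of the streamlined induction on $c$ (the separate cases $c = 1, 2, \dots, n$ above are subsumed by the single inductive step), assuming only Tao's Lemma 2.2.2 ($n+0=n$) and Lemma 2.2.3 ($n+(m{+}{+})=(n+m){+}{+}$):

```latex
\begin{align*}
\text{Base case: } (a+b)+0 &= a+b = a+(b+0) && \text{(Lemma 2.2.2, twice)}\\
\text{Inductive step: } (a+b)+(c{+}{+}) &= ((a+b)+c){+}{+} && \text{(Lemma 2.2.3)}\\
&= (a+(b+c)){+}{+} && \text{(inductive hypothesis)}\\
&= a+((b+c){+}{+}) && \text{(Lemma 2.2.3)}\\
&= a+(b+(c{+}{+})) && \text{(Lemma 2.2.3)}
\end{align*}
```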


EXERCISES 2.2 "1. Which of the following subsets of R" is open? closed? neither? Prove your answer. (a) (x :0 x< 2> CR (b) (x:x= 2 for some k eN or =0) C R X CR2 y=x (g) y (h) C R" (i) x |x 1> C R" (G) x ||xl1> C R" (k) the set of rational numbers, Q C R Xr (c) X- (d) 1 X- 10 (1I) X: |x 1 or C R2 (e) CR2 (m) Ø (the empty set) X (f) xy 0 R2 y 2. Let be a sequence of points in R". For i = 1, .. . , n, let xki denote the ith coordinate of the vector x. Prove that xk a if and only if xki ai for all i = 1, . n 3. Suppose (x> is a sequence of points (vectors) in R" converging to a. (a) Prove that |x all. (Hint: See Exercise 1.2.17.) (b) Prove that if b e R" is any vector, then b xk b a.

I need help with the proof of question 2. Thank you very much!
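For question 2, the standard argument rests on a two-sided estimate between the norm and the coordinates; here is a minimal LaTeX sketch (assuming the Euclidean norm on $\mathbb{R}^n$):

```latex
% For each coordinate i and each index k:
|x_{k,i} - a_i| \;\le\; \|x_k - a\| \;\le\; \sum_{j=1}^{n} |x_{k,j} - a_j|.
% (=>) If x_k -> a, the left inequality forces x_{k,i} -> a_i for every i.
% (<=) If x_{k,i} -> a_i for all i, each summand on the right tends to 0,
%      so ||x_k - a|| -> 0, i.e. x_k -> a.
```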



PySpark package

class pyspark.SparkConf(loadDefaults=True, _jvm=None, _jconf=None)

Configuration for a Spark application. Used to set various Spark parameters as key-value pairs.

Most of the time, you would create a SparkConf object with SparkConf() , which will load values from spark.* Java system properties as well. In this case, any parameters you set directly on the SparkConf object take priority over system properties.

For unit tests, you can also call SparkConf(false) to skip loading external settings and get the same configuration no matter what the system properties are.

All setter methods in this class support chaining. For example, you can write conf.setMaster("local").setAppName("My app").

Once a SparkConf object is passed to Spark, it is cloned and can no longer be modified by the user.
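As a sketch of typical usage (the app name, master URL, and memory setting here are illustrative assumptions):

```python
from pyspark import SparkConf, SparkContext

# Setter methods chain, so a configuration can be built in one expression.
conf = (SparkConf()
        .setMaster("local[2]")                 # run locally with 2 threads
        .setAppName("My app")
        .set("spark.executor.memory", "1g"))

print(conf.toDebugString())                    # key=value pairs, one per line
sc = SparkContext(conf=conf)                   # the conf is cloned once passed in
```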

contains(key)

Does this configuration contain a given key?

get(key, defaultValue=None)

Get the configured value for some key, or return a default otherwise.

getAll()

Get all values as a list of key-value pairs.

set(key, value)

Set a configuration property.

setAll(pairs)

Set multiple parameters, passed as a list of key-value pairs.

Parameters: pairs – list of key-value pairs to set

setAppName(value)

Set application name.

setExecutorEnv(key=None, value=None, pairs=None)

Set an environment variable to be passed to executors.

setIfMissing(key, value)

Set a configuration property, if not already set.

setMaster(value)

Set master URL to connect to.

setSparkHome(value)

Set path where Spark is installed on worker nodes.

toDebugString()

Returns a printable version of the configuration, as a list of key=value pairs, one per line.

class pyspark.SparkContext(master=None, appName=None, sparkHome=None, pyFiles=None, environment=None, batchSize=0, serializer=PickleSerializer(), conf=None, gateway=None, jsc=None, profiler_cls=<class 'pyspark.profiler.BasicProfiler'>)

Main entry point for Spark functionality. A SparkContext represents the connection to a Spark cluster, and can be used to create RDD and broadcast variables on that cluster.

PACKAGE_EXTENSIONS = ('.zip', '.egg', '.jar')

accumulator(value, accum_param=None)

Create an Accumulator with the given initial value, using a given AccumulatorParam helper object to define how to add values of the data type if provided. Default AccumulatorParams are used for integers and floating-point numbers if you do not provide one. For other types, a custom AccumulatorParam can be used.
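A minimal accumulator sketch (assuming a SparkContext sc, e.g. from the configuration sketch above; values are illustrative): workers may only add to the accumulator, while the driver reads its value.

```python
acc = sc.accumulator(0)                        # default int AccumulatorParam

def tally(x):
    acc.add(x)                                 # executed on the workers

sc.parallelize([1, 2, 3, 4]).foreach(tally)
print(acc.value)                               # -> 10, read on the driver
```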

Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

To access the file in Spark jobs, use SparkFiles.get(fileName) with the filename to find its download location.

A directory can be given if the recursive option is set to True. Currently directories are only supported for Hadoop-supported filesystems.

Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

A unique identifier for the Spark application. Its format depends on the scheduler implementation.

  • in case of local spark app something like ‘local-1433865536131’
  • in case of YARN something like ‘application_1433865536131_34483’

Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

Small files are preferred; large files are also allowable, but may cause bad performance.

Load data from a flat binary file, assuming each record is a set of numbers with the specified numerical format (see ByteBuffer), and the number of bytes per record is constant.

  • path – Directory to the input data files
  • recordLength – The length at which to split the records

Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each cluster only once.
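A short broadcast sketch (assuming a SparkContext sc; the lookup table is an illustrative assumption): the dictionary is shipped to each executor once, instead of with every task.

```python
lookup = sc.broadcast({"a": 1, "b": 2})        # read-only on the workers

rdd = sc.parallelize(["a", "b", "a"])
print(rdd.map(lambda k: lookup.value[k]).collect())  # -> [1, 2, 1]
```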

Cancel all jobs that have been scheduled or are running.

Cancel active jobs for the specified group. See SparkContext.setJobGroup for more information.

Default min number of partitions for Hadoop RDDs when not given by user

Default level of parallelism to use when not given by user (e.g. for reduce tasks)

Dump the profile stats into directory path

Create an RDD that has no partitions or elements.

getConf()

getLocalProperty(key)

Get a local property set in this thread, or null if it is missing. See setLocalProperty

Get or instantiate a SparkContext and register it as a singleton object.

Parameters:conf – SparkConf (optional)
hadoopFile(path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=0)

Read an ‘old’ Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is the same as for sc.sequenceFile.

A Hadoop configuration can be passed in as a Python dict. This will be converted into a Configuration in Java.

  • path – path to Hadoop file
  • inputFormatClass – fully qualified classname of Hadoop InputFormat (e.g. “org.apache.hadoop.mapred.TextInputFormat”)
  • keyClass – fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.Text”)
  • valueClass – fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.LongWritable”)
  • keyConverter – (None by default)
  • valueConverter – (None by default)
  • conf – Hadoop configuration, passed in as a dict (None by default)
  • batchSize – The number of Python objects represented as a single Java object. (default 0, choose batchSize automatically)

Read an ‘old’ Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict. This will be converted into a Configuration in Java. The mechanism is the same as for sc.sequenceFile.

  • inputFormatClass – fully qualified classname of Hadoop InputFormat (e.g. “org.apache.hadoop.mapred.TextInputFormat”)
  • keyClass – fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.Text”)
  • valueClass – fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.LongWritable”)
  • keyConverter – (None by default)
  • valueConverter – (None by default)
  • conf – Hadoop configuration, passed in as a dict (None by default)
  • batchSize – The number of Python objects represented as a single Java object. (default 0, choose batchSize automatically)

Read a ‘new API’ Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is the same as for sc.sequenceFile.

A Hadoop configuration can be passed in as a Python dict. This will be converted into a Configuration in Java

  • path – path to Hadoop file
  • inputFormatClass – fully qualified classname of Hadoop InputFormat (e.g. “org.apache.hadoop.mapreduce.lib.input.TextInputFormat”)
  • keyClass – fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.Text”)
  • valueClass – fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.LongWritable”)
  • keyConverter – (None by default)
  • valueConverter – (None by default)
  • conf – Hadoop configuration, passed in as a dict (None by default)
  • batchSize – The number of Python objects represented as a single Java object. (default 0, choose batchSize automatically)

Read a ‘new API’ Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict. This will be converted into a Configuration in Java. The mechanism is the same as for sc.sequenceFile.

  • inputFormatClass – fully qualified classname of Hadoop InputFormat (e.g. “org.apache.hadoop.mapreduce.lib.input.TextInputFormat”)
  • keyClass – fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.Text”)
  • valueClass – fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.LongWritable”)
  • keyConverter – (None by default)
  • valueConverter – (None by default)
  • conf – Hadoop configuration, passed in as a dict (None by default)
  • batchSize – The number of Python objects represented as a single Java object. (default 0, choose batchSize automatically)

Distribute a local Python collection to form an RDD. Using xrange is recommended if the input represents a range for performance.

Load an RDD previously saved using RDD.saveAsPickleFile method.

Create a new RDD of int containing elements from start to end (exclusive), increased by step every element. Can be called the same way as Python's built-in range() function. If called with a single argument, the argument is interpreted as end, and start is set to 0. (A usage sketch follows the parameter list below.)

  • start – the start value
  • end – the end value (exclusive)
  • step – the incremental step (default: 1)
  • numSlices – the number of partitions of the new RDD
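A usage sketch (assuming a SparkContext sc; the outputs follow from the definition above):

```python
print(sc.range(5).collect())         # -> [0, 1, 2, 3, 4]
print(sc.range(2, 10, 3).collect())  # -> [2, 5, 8]
```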

Executes the given partitionFunc on the specified set of partitions, returning the result as an array of elements.

If ‘partitions’ is not specified, this will run over all partitions.

Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is as follows:

  1. A Java RDD is created from the SequenceFile or other InputFormat, and the key and value Writable classes
  2. Serialization is attempted via Pyrolite pickling
  3. If this fails, the fallback is to call 'toString' on each key and value
  4. PickleSerializer is used to deserialize pickled objects on the Python side
  • path – path to sequencefile
  • keyClass – fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.Text”)
  • valueClass – fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.LongWritable”)
  • keyConverter
  • valueConverter
  • minSplits – minimum splits in dataset (default min(2, sc.defaultParallelism))
  • batchSize – The number of Python objects represented as a single Java object. (default 0, choose batchSize automatically)

Set the directory under which RDDs are going to be checkpointed. The directory must be a HDFS path if running on a cluster.

setJobGroup(groupId, description, interruptOnCancel=False)

Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.

The application can use SparkContext.cancelJobGroup to cancel all running jobs in this group.

If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job’s executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.

Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.

Control our logLevel. This overrides any user-defined log settings. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN

Set a Java system property, such as spark.executor.memory. This must be invoked before instantiating SparkContext.

Print the profile stats to stdout

Get SPARK_USER for user who is running SparkContext.

Return the epoch time when the Spark Context was started.

Shut down the SparkContext.

textFile(name, minPartitions=None, use_unicode=True)

Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.

If use_unicode is False, the strings will be kept as str (encoding as utf-8 ), which is faster and smaller than unicode. (Added in Spark 1.2)

Return the URL of the SparkUI instance started by this SparkContext

Build the union of a list of RDDs.

This supports unions() of RDDs with different serialized formats, although this forces them to be reserialized using the default serializer.

The version of Spark on which this application is running.

wholeTextFiles(path, minPartitions=None, use_unicode=True)

Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

If use_unicode is False, the strings will be kept as str (encoding as utf-8 ), which is faster and smaller than unicode. (Added in Spark 1.2)

For example, if you have the following files: hdfs://a-hdfs-path/part-00000 through hdfs://a-hdfs-path/part-nnnnn.

Do rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path"), then rdd contains: (a-hdfs-path/part-00000, its content) through (a-hdfs-path/part-nnnnn, its content).

Small files are preferred, as each file will be loaded fully in memory.
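A short sketch contrasting the two (assuming a SparkContext sc; the HDFS path is the placeholder from the docs above, not a real cluster):

```python
# textFile: one record per LINE across the files under the path.
lines = sc.textFile("hdfs://a-hdfs-path")

# wholeTextFiles: one record per FILE, as (path, content) pairs.
files = sc.wholeTextFiles("hdfs://a-hdfs-path")
path, content = files.first()
```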

SparkFiles contains only classmethods users should not create SparkFiles instances.

Get the absolute path of a file added through SparkContext.addFile() .

Get the root directory that contains files added through SparkContext.addFile() .

class pyspark.RDD(jrdd, ctx, jrdd_deserializer=AutoBatchedSerializer(PickleSerializer()))

A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel.

Aggregate the elements of each partition, and then the results for all the partitions, using a given combine functions and a neutral “zero value.”

The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.

The first function (seqOp) can return a different result type, U, than the type of this RDD. Thus, we need one operation for merging a T into a U and one operation for merging two U's.

Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.
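A sketch of aggregate computing a mean in one pass (assuming a SparkContext sc; a classic use of a result type U = (sum, count) that differs from the element type T = int):

```python
rdd = sc.parallelize([1, 2, 3, 4])
seq_op  = lambda acc, x: (acc[0] + x, acc[1] + 1)   # fold a value into (sum, count)
comb_op = lambda a, b: (a[0] + b[0], a[1] + b[1])   # merge two (sum, count) pairs
total, count = rdd.aggregate((0, 0), seq_op, comb_op)
print(total / count)                                # -> 2.5
```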

Persist this RDD with the default storage level ( MEMORY_ONLY ).

Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in self and b is in other .

Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext.setCheckpointDir() and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.

coalesce ( numPartitions, shuffle=False ) [source] ¶

Return a new RDD that is reduced into numPartitions partitions.

For each key k in self or other , return a resulting RDD that contains a tuple with the list of values for that key in self as well as other .

Return a list that contains all of the elements in this RDD.

This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver’s memory.

Return the key-value pairs in this RDD to the master as a dictionary.

This method should only be used if the resulting data is expected to be small, as all the data is loaded into the driver’s memory.

Generic function to combine the elements for each key using a custom set of aggregation functions.

Turns an RDD[(K, V)] into a result of type RDD[(K, C)], for a “combined type” C.

Users provide three functions:

  • createCombiner , which turns a V into a C (e.g., creates a one-element list)
  • mergeValue , to merge a V into a C (e.g., adds it to the end of a list)
  • mergeCombiners , to combine two C’s into a single one (e.g., merges the lists)

To avoid memory allocation, both mergeValue and mergeCombiners are allowed to modify and return their first argument instead of creating a new C.

In addition, users can control the partitioning of the output RDD.

V and C can be different – for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, List[Int]).
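A sketch of the (Int, List[Int]) grouping mentioned above (assuming a SparkContext sc; the sample pairs are illustrative):

```python
pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
by_key = pairs.combineByKey(
    lambda v: [v],              # createCombiner: turn a V into a C
    lambda c, v: c + [v],       # mergeValue: merge a V into a C
    lambda c1, c2: c1 + c2)     # mergeCombiners: combine two C's
print(sorted(by_key.collect()))  # -> [('a', [1, 3]), ('b', [2])]
```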

The SparkContext that this RDD was created on.

Return the number of elements in this RDD.

Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

Return approximate number of distinct elements in the RDD.

Parameters:relativeSD – Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.

Count the number of elements for each key, and return the result to the master as a dictionary.

Return the count of each unique value in this RDD as a dictionary of (value, count) pairs.

Return a new RDD containing the distinct elements in this RDD.

Return a new RDD containing only the elements that satisfy a predicate.

Return the first element in this RDD.

Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.

Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD’s partitioning.

Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral “zero value.”

The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.

This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.

Merge the values for each key using an associative function “func” and a neutral “zeroValue” which may be added to the result an arbitrary number of times, and must not change the result (e.g., 0 for addition, or 1 for multiplication.).

Applies a function to all elements of this RDD.

Applies a function to each partition of this RDD.

Perform a full outer join of self and other .

For each element (k, v) in self , the resulting RDD will either contain all pairs (k, (v, w)) for w in other , or the pair (k, (v, None)) if no elements in other have key k.

Similarly, for each element (k, w) in other , the resulting RDD will either contain all pairs (k, (v, w)) for v in self , or the pair (k, (None, w)) if no elements in self have key k.

Hash-partitions the resulting RDD into the given number of partitions.

Gets the name of the file to which this RDD was checkpointed

Not defined if RDD is checkpointed locally.

Returns the number of partitions in RDD

Get the RDD’s current storage level.

Return an RDD created by coalescing all elements within each partition into a list.

Return an RDD of grouped items.

Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with numPartitions partitions.

If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using reduceByKey or aggregateByKey will provide much better performance.

Alias for cogroup but with support for multiple RDDs.

Compute a histogram using the provided buckets. The buckets are all open to the right except for the last which is closed. e.g. [1,10,20,50] means the buckets are [1,10) [10,20) [20,50], which means 1<=x<10, 10<=x<20, 20<=x<=50. And on the input of 1 and 50 we would have a histogram of 1,0,1.

If your histogram is evenly spaced (e.g. [0, 10, 20, 30]), this can be switched from an O(log n) insertion to O(1) per element (where n is the number of buckets).

Buckets must be sorted, not contain any duplicates, and have at least two elements.

If buckets is a number, it will generate buckets which are evenly spaced between the minimum and maximum of the RDD. For example, if the min value is 0 and the max is 100, given buckets as 2, the resulting buckets will be [0,50) [50,100]. buckets must be at least 1. An exception is raised if the RDD contains infinity. If the elements in the RDD do not vary (max == min), a single bucket will be used.

The return value is a tuple of buckets and histogram.
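A sketch using the [1, 10, 20, 50] buckets described above (assuming a SparkContext sc; the input values are illustrative):

```python
rdd = sc.parallelize([1, 5, 11, 50])
print(rdd.histogram([1, 10, 20, 50]))   # -> ([1, 10, 20, 50], [2, 1, 1])
```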

A unique ID for this RDD (within its SparkContext).

Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

This method performs a shuffle internally.

Return whether this RDD is checkpointed and materialized, either reliably or locally.

Returns true if and only if the RDD contains no elements at all.

Note: an RDD may be empty even when it has at least 1 partition.

Return whether this RDD is marked for local checkpointing.

Return an RDD containing all pairs of elements with matching keys in self and other .

Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in self and (k, v2) is in other .

Performs a hash join across the cluster.
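A small join sketch (assuming a SparkContext sc; the sample pairs are illustrative):

```python
x = sc.parallelize([("a", 1), ("b", 4)])
y = sc.parallelize([("a", 2), ("a", 3)])
print(sorted(x.join(y).collect()))      # -> [('a', (1, 2)), ('a', (1, 3))]
```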

Creates tuples of the elements in this RDD by applying f .

Return an RDD with the keys of each tuple.

Perform a left outer join of self and other .

For each element (k, v) in self , the resulting RDD will either contain all pairs (k, (v, w)) for w in other , or the pair (k, (v, None)) if no elements in other have key k.

Hash-partitions the resulting RDD into the given number of partitions.

Mark this RDD for local checkpointing using Spark’s existing caching layer.

This method is for users who wish to truncate RDD lineages while skipping the expensive step of replicating the materialized data in a reliable distributed file system. This is useful for RDDs with long lineages that need to be truncated periodically (e.g. GraphX).

Local checkpointing sacrifices fault-tolerance for performance. In particular, checkpointed data is written to ephemeral local storage in the executors instead of to a reliable, fault-tolerant storage. The effect is that if an executor fails during the computation, the checkpointed data may no longer be accessible, causing an irrecoverable job failure.

This is NOT safe to use with dynamic allocation, which removes executors along with their cached blocks. If you must use both features, you are advised to set spark.dynamicAllocation.cachedExecutorIdleTimeout to a high value.

The checkpoint directory set through SparkContext.setCheckpointDir() is not used.

Return the list of values in the RDD for key key . This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to.

Return a new RDD by applying a function to each element of this RDD.

Return a new RDD by applying a function to each partition of this RDD.

Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

Deprecated: use mapPartitionsWithIndex instead.

Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD’s partitioning.

Find the maximum item in this RDD.

Parameters:key – A function used to generate key for comparing

Compute the mean of this RDD’s elements.

Approximate operation to return the mean within a timeout or meet the confidence.

Find the minimum item in this RDD.

Parameters:key – A function used to generate key for comparing

Return the name of this RDD.

partitionBy(numPartitions, partitionFunc=<function portable_hash>)

Return a copy of the RDD partitioned using the specified partitioner.

Set this RDD’s storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet. If no storage level is specified defaults to ( MEMORY_ONLY ).

Return an RDD created by piping elements to a forked external process.

Parameters:checkCode – whether or not to check the return value of the shell command.
randomSplit(weights, seed=None)

Randomly splits this RDD with the provided weights.

  • weights – weights for splits, will be normalized if they don’t sum to 1
  • seed – random seed

Reduces the elements of this RDD using the specified commutative and associative binary operator. Currently reduces partitions locally.

Merge the values for each key using an associative and commutative reduce function.

This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a “combiner” in MapReduce.

Output will be partitioned with numPartitions partitions, or the default parallelism level if numPartitions is not specified. Default partitioner is hash-partition.
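A reduceByKey sketch with addition as the combining function (assuming a SparkContext sc; the sample pairs are illustrative):

```python
from operator import add

pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
print(sorted(pairs.reduceByKey(add).collect()))   # -> [('a', 2), ('b', 1)]
```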

Merge the values for each key using an associative and commutative reduce function, but return the results immediately to the master as a dictionary.

This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a “combiner” in MapReduce.

Return a new RDD that has exactly numPartitions partitions.

Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data. If you are decreasing the number of partitions in this RDD, consider using coalesce , which can avoid performing a shuffle.

Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

Perform a right outer join of self and other .

For each element (k, w) in other , the resulting RDD will either contain all pairs (k, (v, w)) for v in this, or the pair (k, (None, w)) if no elements in self have key k.

Hash-partitions the resulting RDD into the given number of partitions.

Return a sampled subset of this RDD.

  • withReplacement – can elements be sampled multiple times (replaced when sampled out)
  • fraction – expected size of the sample as a fraction of this RDD’s size. Without replacement: the probability that each element is chosen; fraction must be [0, 1]. With replacement: the expected number of times each element is chosen; fraction must be >= 0.
  • seed – seed for the random number generator

This is not guaranteed to provide exactly the fraction specified of the total count of the given RDD.

Return a subset of this RDD sampled by key (via stratified sampling). Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map.

Compute the sample standard deviation of this RDD’s elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).

Compute the sample variance of this RDD’s elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).


Downloads

  1. Includes the .NET Core and ASP.NET Core Runtimes
  2. For hosting stand-alone apps on Windows Servers. Includes the ASP.NET Core Module for IIS and can be installed separately on servers without installing .NET Core runtime.

Docker Images

The .NET Core Docker images have been updated for this release. Details on our Docker versioning and how to work with the images can be seen in "Staying up-to-date with .NET Container Images".

The following repos have been updated.

Azure AppServices

  • .NET Core 2.2.1 is being deployed to Azure App Services and the deployment is expected to complete in a couple of days.

2.2 Bonding and Lattices

As we’ve just seen, an atom seeks to have a full outer shell (i.e., eight electrons for most elements, or two electrons for hydrogen and helium) to be atomically stable. This is accomplished by transferring or sharing electrons with other atoms.

Figure 2.2.1 A very simplified electron configuration of sodium and chlorine atoms (top). Sodium gives up an electron to become a cation (bottom left) and chlorine accepts an electron to become an anion (bottom right). [Image Description]

Sodium has 11 electrons: two in the first shell, eight in the second, and one in the third (Figure 2.2.1). Sodium readily gives up that single third-shell electron, and when it loses this one negative charge, it becomes positively charged (because it now has 11 protons and only 10 electrons). By giving up its lone third-shell electron, sodium ends up with a full outer shell. Chlorine, on the other hand, has 17 electrons: two in the first shell, eight in the second, and seven in the third. Chlorine readily accepts an eighth electron to fill its third shell, and therefore becomes negatively charged because it has 17 protons and 18 electrons. In changing their number of electrons, these atoms become ions —the sodium loses an electron to become a positive ion or cation , and the chlorine gains an electron to become a negative ion or anion (Figure 2.2.1).

Since negative and positive charges attract, sodium and chlorine ions can stick together, creating an ionic bond . Electrons can be thought of as being transferred from one atom to another in an ionic bond. Common table salt (NaCl) is a mineral composed of chlorine and sodium linked together by ionic bonds (Figure 1.4.1). The mineral name for NaCl is halite.

An element like chlorine can also form bonds without forming ions. For example, two chlorine atoms, which each seek an eighth electron in their outer shell, can share an electron in what is known as a covalent bond to form chlorine gas (Cl2) (Figure 2.2.2). Electrons are shared in a covalent bond.

Figure 2.2.2 Depiction of a covalent bond between two chlorine atoms. The electrons are black in the left atom and blue in the right atom. Two electrons are shared (one black and one blue) so that each atom “appears” to have a full outer shell.

Exercise 2.1 Cations, anions, and ionic bonding

A number of elements are listed below along with their atomic numbers. Assuming that the first electron shell can hold two electrons and subsequent electron shells can hold eight electrons, sketch in the electron configurations for these elements. Predict whether the element is likely to form a cation (+) or an anion (−) when electron transfer takes place, and what charge it would have (e.g., +1, +2, −1).

The first one is done for you. Fluorine needed an extra electron to have 8 in its outermost shell, and in gaining that electron it became negatively charged.

A carbon atom has six protons and six electrons; two of the electrons are in the inner shell and four are in the outer shell (Figure 2.2.3). Carbon would need to gain or lose four electrons to have a filled outer shell, and this would create too great a charge imbalance for the ion to be stable. On the other hand, carbon can share electrons to create covalent bonds. In the mineral diamond, the carbon atoms are linked together in a three-dimensional framework, where each carbon atom is bonded to four other carbon atoms and every bond is a very strong covalent bond. In the mineral graphite, the carbon atoms are linked together in sheets or layers (Figure 2.2.3), and each carbon atom is covalently bonded to three others. Graphite-based compounds, which are strong because of the strong intra-layer covalent bonding, are used in high-end sports equipment such as ultralight racing bicycles. Graphite itself is soft because the bonding between these layers is relatively weak, and it is used in a variety of applications, including lubricants and pencils.

Figure 2.2.3 The electron configuration of carbon (left) and the sharing of electrons in covalent bonding of diamond (right). The electrons shown in blue are shared between adjacent carbon atoms. Although shown here in only two dimensions, diamond has a three-dimensional structure, as shown in Figure 2.2.5. [Image description]

Silicon and oxygen bond together to create a silica tetrahedron , which is a four-sided pyramid shape with O at each corner and Si in the middle (Figure 2.2.4). This structure is the building block of the silicate minerals (which are described in Section 2.4). The bonds in a silica tetrahedron have some of the properties of covalent bonds and some of the properties of ionic bonds. As a result of the ionic character, silicon becomes a cation (with a charge of +4) and oxygen becomes an anion (with a charge of –2). The net charge of a silica tetrahedron (SiO4) is: 4 + 4(−2) = 4 − 8 = −4. As we will see later, silica tetrahedra (plural of tetrahedron) link together in a variety of ways to form most of the common minerals of the crust.

Figure 2.2.4 The silica tetrahedron, the building block of all silicate minerals. (Because the silicon ion has a charge of +4 and the four oxygen ions each have a charge of −2, the silica tetrahedron has a net charge of −4.)

Most minerals are characterized by ionic bonds, covalent bonds, or a combination of the two, but there are other types of bonds that are important in minerals, including metallic bonds and weaker electrostatic forces (hydrogen or Van der Waals bonds). Metallic elements have outer electrons that are relatively loosely held. (The metals are highlighted on the periodic table in Appendix 1.) When bonds between such atoms are formed, these electrons can move freely from one atom to another. A metal can thus be thought of as an array of positively charged atomic nuclei immersed in a sea of mobile electrons. This feature accounts for two very important properties of metals: their electrical conductivity and their malleability (they can be deformed and shaped).

Molecules that are bonded ionically or covalently can also have other weaker electrostatic forces holding them together. Examples of this are the force holding graphite sheets together and the attraction between water molecules.

What’s with all of these “sili” names?

The element silicon is one of the most important geological elements and is the second-most abundant element in Earth’s crust (after oxygen). Silicon bonds readily with oxygen to form a silica tetrahedron (Figure 2.2.4). Pure silicon crystals (created in a lab) are used to make semi-conductive media for electronic devices. A silicate mineral is one in which silicon and oxygen are present as silica tetrahedra. Silica also refers to a chemical component of a rock and is expressed as % SiO2. The mineral quartz is made up entirely of silica tetrahedra, and some forms of quartz are also known as “silica”. Silicone is a synthetic product (e.g., silicone rubber, resin, or caulking) made from silicon-oxygen chains and various organic molecules. To help you keep the “sili” names straight, here is a summary table:

Table 2.3 Summary of “Sili” names
“Sili” name Definition
Silicon The 14th element
Silicon wafer A crystal of pure silicon sliced very thinly and used for electronics
Silica tetrahedron A combination of one silicon atom and four oxygen atoms that form a tetrahedron
% silica The proportion of a rock that is composed of the component SiO2
Silica A solid made out of SiO2 (but not necessarily a mineral – e.g., opal)
Silicate A mineral that contains silica tetrahedra (e.g., quartz, feldspar, mica, olivine)
Silicone A flexible synthetic material made up of Si–O chains with attached organic molecules

Elements that have a full outer shell are described as inert because they do not tend to react with other elements to form compounds. That’s because they don’t need to lose or gain any electrons to become stable, and so they don’t become ions. They all appear in the far-right column of the periodic table. Examples are: helium, neon, argon, etc.

As described in Chapter 1, all minerals are characterized by a specific three-dimensional pattern known as a lattice or crystal structure. These structures range from the simple cubic pattern of halite (NaCl) (Figure 1.4.1), to the very complex patterns of some silicate minerals. Two minerals may have the same composition, but very different crystal structures and properties. Graphite and diamond, for example, are both composed only of carbon, but while diamond is the hardest substance known, graphite is softer than paper. Their lattice structures are compared in Figure 2.2.5.

Figure 2.2.5 A depiction of the lattices of graphite and diamond.

Mineral lattices have important implications for mineral properties, as exemplified by the hardness of diamond and the softness of graphite. Lattices also determine the shape that mineral crystals grow in and how they break. For example, the right angles in the lattice of the mineral halite (Figure 1.4.1) influence both the shape of its crystals (cubic), and the way those crystals break (Figure 2.2.6).

Figure 2.2.6 Cubic crystals (left) and right-angle cleavage planes (right) of the mineral halite. If you look closely at the cleavage fragment on the right, you can see where it would break again (cleave) along a plane parallel to an existing surface. In most minerals, cleavage planes do not align with crystal surfaces.

Image Descriptions

Figure 2.2.1 image description: Sodium has one electron in its outer shell and chlorine has 7 electrons in its outer shell. Sodium’s one outer electron goes to chlorine, which makes chlorine slightly negative and sodium slightly positive. They attract each other and together they form sodium chloride. [Return to Figure 2.2.1]

Figure 2.2.3 image description: (Left) A carbon atom has two electrons in its inner shell and four electrons in its outer shell. (Right) One Carbon atom shares electrons with four other carbon atoms to form a complete outer shell. [Return to Figure 2.2.3]


Solutions to Exercises in Chapter 2

Exactly one is true if either (a is true, and b is false) or (a is false, and b is true). So, one way to define it is a ⊕ b ≡ (a ∧ ¬b) ∨ (¬a ∧ b). The two halves of that formula also correspond to the two true rows of xor's truth table:

a | b | a ⊕ b
T | T | F
T | F | T
F | T | T
F | F | F

Solution to Exercise 2.1.1.5

Solution to Exercise 2.1.2.1

Solution to Exercise 2.1.2.2

Solution to Exercise 2.1.2.3

Unsatisfiable, unless of course you interpret "nobody" as "nobody of note".

Solution to Exercise 2.1.2.4

Neither. If you interpret "gets late" as a social issue but "early" as a clock issue, then the statement might be true, depending on where "here" is.

Solution to Exercise 2.1.2.5

Unsatisfiable, except perhaps in a karmic sense.

Solution to Exercise 2.2.1.1

Table 2.28 Truth table to check associativity of implication

a | b | c | (a ⇒ b) ⇒ c | a ⇒ (b ⇒ c)
T | T | T | T | T
T | T | F | F | F
T | F | T | T | T
T | F | F | T | T
F | T | T | T | T
F | T | F | F | T
F | F | T | T | T
F | F | F | F | T

By inspecting the two right-most columns, we see that the formulas are indeed not equivalent. They have different values for two truth-settings, those with a = false and c = false.

Solution to Exercise 2.2.1.2

In the original code, we return value2 when the first case is false, but the second case is true. As a WFF, that is ¬(a ∧ b) ∧ (a ∨ b). Is this equivalent to the WFF (a ∧ ¬b) ∨ (¬a ∧ b)? Here is a truth table:

a | b | ¬(a ∧ b) ∧ (a ∨ b) | (a ∧ ¬b) ∨ (¬a ∧ b)
T | T | F | F
T | F | T | T
F | T | T | T
F | F | F | F

Yes, looking at the appropriate two columns we see they are equivalent.

Solution to Exercise 2.2.2.1

  • 2 variables: As we've seen, 4 rows.
  • 3 variables: 8 rows.
  • 5 variables: 32 rows.
  • 10 variables: 1024 rows.
  • n variables: 2^n rows.

Solution to Exercise 2.2.2.2

  • With 2 variables, we have 4 rows. How many different ways can we assign true and false to those 4 positions? If you write them all out, you should get 16 combinations.
  • With 3 variables, we have 8 rows and a total of 256 different functions.
  • With n variables, we have 2^n rows and a total of 2^(2^n) different functions. That's a lot!

Solution to Exercise 2.3.1.1

Solution to Exercise 2.3.1.2

Solution to Exercise 2.3.2.1

ASIDE: Karnaugh maps are a general technique for finding minimal CNF and DNF formulas.

They are most easily used when only a small number of variables are involved. We won't worry about minimizing formulas ourselves, though.

Solution to Exercise 2.3.3.1

We can indeed reduce the question of Tautology to the question of Equivalence: if somebody asks you whether φ is a tautology, you can just turn around and ask your friend whether the following two formulas are equivalent: φ, and true. Your friend's answer for this variant question will be your answer to your customer's question about φ. Thus, the Tautology problem isn't particularly harder than the Equivalence problem.

But also, Equivalence can be reduced to Tautology: if somebody asks you whether φ is equivalent to ψ, you can construct a new formula (φ ⇒ ψ) ∧ (ψ ⇒ φ). This formula is a tautology exactly when φ and ψ are equivalent. So, you ask your friend whether this bigger formula is a tautology, and you then have your answer to whether the two original formulas were equivalent. Thus, the Equivalence problem isn't particularly harder than the Tautology problem!

Given these two facts (that each problem reduces to the other), we realize that really they are essentially the same problem, in disguise.
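Here is a brute-force Python sketch of both reductions (ours, not from the text); a "formula" is modeled as a Python function from n booleans to a boolean, and each definition mirrors a step in the argument above:

from itertools import product

def is_tautology(phi, n):
    return all(phi(*vals) for vals in product([False, True], repeat=n))

def equivalent(phi, psi, n):
    # Equivalence reduced to Tautology: is (phi => psi) and (psi => phi) a tautology?
    def both_ways(*vals):
        forward = (not phi(*vals)) or psi(*vals)   # phi => psi
        backward = (not psi(*vals)) or phi(*vals)  # psi => phi
        return forward and backward
    return is_tautology(both_ways, n)

def is_tautology_via_equivalence(phi, n):
    # Tautology reduced to Equivalence: is phi equivalent to the constant true?
    return equivalent(phi, lambda *vals: True, n)

# Example: "a or not a" comes out as a tautology either way we ask.
phi = lambda a: a or not a
print(is_tautology(phi, 1), is_tautology_via_equivalence(phi, 1))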

Solution to Exercise 2.3.3.2

Compare the last two columns in the following:

Solution to Exercise 2.4.1.1

Intuitively, this is straightforward: since A has 2, both of its two neighbors, including B, must be unsafe. For this problem, let's be a bit more formal and use WFFs instead of prose in the steps.

A-has-2 ⇒ B-unsafe ∧ F-unsafe

WaterWorld domain axiom, i.e., definition of A-has-2

Solution to Exercise 2.4.1.2

Again, a similar idea: if A has 1, then exactly one of A's two neighbors must be unsafe. But since we know that one of these, G, isn't unsafe, the other, B, must be unsafe.

A-has-1 ⇒ (B-safe ∧ G-unsafe) ∨ (B-unsafe ∧ G-safe)

(B-safe ∧ G-unsafe) ∨ (B-unsafe ∧ G-safe)

Solution to Exercise 2.4.1.3

Here, we'll show only (χ ∧ υ) ∧ ω ⊢ χ ∧ (υ ∧ ω) and leave the other direction (and ∨'s associativity) to the reader. These are all very similar to the previous commutativity example (Example 2.14).

Note that we omitted the detailed explanation of how each rule applies, since this should be clear in each of these steps.

Solution to Exercise 2.4.1.4

First, if we know φ ⊢ ψ, then that means there is some written proof of ψ from the premise φ. By wrapping that proof in a subproof, we know ⊢ φ ⇒ ψ, simply by ⇒Intro.

If we know ⊢ φ ⇒ ψ, then if we add a premise φ, ψ follows by ⇒Elim. Note how this proof is about other proofs! (However, while we reason about this particular inference system, we're not using this system while proving things about it; this proof is necessarily outside the inference system.)

Solution to Exercise 2.4.2.1

Solution to Exercise 2.4.3.1

It would be sound: look at all the possible proofs that can be made in the original system; all those proofs lead to true conclusions (since that original system is sound, as we're claiming). If we just discard all those that include RAA, the remaining proofs still lead only to true conclusions, so the smaller system is sound.

It would not be complete, though: as pointed out, RAA is our only way to prove negations without premises. There are negated formulas that are true (and need no premises), for example ¬false. Without RAA, we cannot provide a proof of ¬false, so the smaller system is incomplete.

Solution to Exercise 2.5.1

Lots of possible counterexamples: "It is bad to be depressed. Doing homework makes me depressed, so it's good not to do my homework." Or, "It is bad for people to be in physical pain. Childbirth causes pain. Therefore childbirth should be avoided by all people." If the original conclusion is really correct, Tracy needs to elucidate some of his unspoken assumptions.

The flaw seems to be along the lines of "avoiding bad in the short run may not always be good in the long run" (or equivalently, sometimes you have to choose the lesser of two evils). No, you weren't asked to name a specific flaw, and reasonable people can differ on precisely what the flaw is. (And formal logic is not particularly helpful here.) Nonetheless, uncovering hidden assumptions in arguments often helps understand the real issues involved.

ASIDE: For fun, pick up the front page of the daily newspaper, and see how many arguments use faulty rules of inference and/or rely on unspoken premises (which not all might agree with). In particular, political issues as spun to the mainstream press are often riddled with error, even though there are usually reasonable arguments on both sides which policy-makers and courts debate.

Solution to Exercise 2.5.2

" Terry claims that encouraging human-rights is more important than playing Tetris. But Terry played Tetris yesterday rather than volunteering with Amnesty International 39 . " Most people wouldn't condemn Terry as a hypocrite just because of this even the most dedicated of people are entitled to some free time. If your friend wants to prove Terry hypocritical, they'll have to provide further evidence or arguments.

Or similarly, "Politician X claims to support science funding, but voted against a proposal to shift all Medicare funds to NASA. "

Solution to Exercise 2.5.3

  1. It can be socially acceptable to wear my swimsuit into a fast-food restaurant. My underwear is less revealing than my swimsuit, and yet it would still raise many more eyebrows to go to that restaurant in my underwear than in my swimsuit. Clothes (and style in general) somehow encompass a form of communication, and people may object to an outfit's mood or message without actually objecting to how much the outfit reveals. (Other examples of communication-through-style include team logos, t-shirts with humorous slogans, and arm bands.)
  2. Buses are a lot cheaper than light rail. Yet, the light rail here in Houston demonstrates that many people who wouldn't routinely take a bus are willing to take light rail. (Only after we recognize this can we try to figure out why the difference exists, and then brainstorm to find a better overall solution.)

Solution to Exercise 2.5.5

  1. r ∧ ¬q
  2. r ⇒ p. Think of the English being reworded to "If you got an A in this class, you must have gotten an A on the final."
  3. p ∧ q ⇒ r

Solution to Exercise 2.5.7

Mary Poppins

Solution to Exercise 2.5.10

  1. There are many simple answers, such as Y-has-1 and ¬W-has-1.
  2. There are many simple answers, such as N-has-1 and J-has-3.

For each, there are also many such formulas composed with connectives such as ∧ and ∨.


Tuesday, July 22, 2008

Chapter 2 Exercises

2.1 Consider the context-free grammar

S → S S + | S S * | a

a) Show how the string aa+a* can be generated by this grammar.
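With the grammar as given, one possible leftmost derivation (a worked example of ours) is:

S ⇒ S S *
  ⇒ S S + S *
  ⇒ a S + S *
  ⇒ a a + S *
  ⇒ a a + a *

Each step replaces the leftmost S, and reading off the final string gives aa+a*.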

b) Construct a parse tree for this string.

c) What language is generated by this grammar? Justify your answer.

This grammar generates the language of postfix arithmetic expressions over the operand a, using only the + and * operators. The only nonterminal is S; every occurrence of S is ultimately replaced either by a or by two subexpressions followed by an operator, leaving a string made up of a's, +'s, and *'s in postfix form.

2.2 What language is generated by the following grammars?

a) S → 0 S 1 | 0 1

This grammar generates the language {0^n 1^n | n ≥ 1}: one or more 0's followed by an equal number of 1's. Each use of the production S → 0 S 1 wraps a matching 0 and 1 around the string, so the counts of 0's and 1's always agree, with all 0's preceding all 1's.

b) S → +SS | -SS | a

This grammar generates the language of prefix arithmetic expressions over the operand a, using only the + and - operators. For example, + a - a a is the prefix form of a + (a - a).

c) S → S ( S ) S | ε

This grammar generates the language of all balanced strings of parentheses: matched pairs that may be nested or placed side by side.

d) S → a S b S | b S a S | ε

This grammar generates the language of all strings with an equal number of a's and b's, in any order.

e) S → a | S + S | S S | S * | ( S )

This grammar generates a language that consists of any possible arithmetic operations involving a with the use of only the + and * operations and sets of matched parentheses. It may involve both postfix and infix notations.

2.3 Which of the grammars in Exercise 2.2 are ambiguous?

Grammars (c), (d), and (e) in Exercise 2.2 are ambiguous. In (e), the string a+a* can be parsed in more than one way: the * can apply to just the second a, as in a+(a*), or to the entire sum, as in (a+a)*. Likewise, in (c) the string ()() has two parse trees (the leading pair can be produced by either the first or the last S of S → S ( S ) S), and in (d) the string abab has two parse trees (the first a can be matched with either b).

2.4 Construct unambiguous context-free grammars for each of the following languages.

a) Arithmetic expressions in postfix notation.

expr → expr expr +
expr → expr expr -
expr → digit
digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

b) Left-associative lists of identifiers separated by commas.

list → list , id
list → id

c) Right-associative lists of identifiers separated by commas.

list → id , list
list → id

d) Arithmetic expressions of integers and identifiers with the four binary operators + , - , * , /.

expr → expr + term | expr - term | term
term → term * factor | term / factor | factor
factor → digit | id | (expr)

e) Add unary plus and minus to the arithmetic operators of (d).

expr → expr + term | expr - term | term
term → term * factor | term / factor | factor
factor → digit | id | (expr) | +factor | -factor
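To see this grammar in action, here is a minimal recursive-descent sketch in Python (ours, not part of the original answer) for the grammar in (e), treating single digits and single letters as tokens. The while-loops make the binary operators left-associative, and handling unary +/- in factor makes them bind tighter than the binary operators:

def parse(s):
    toks = list(s.replace(" ", ""))
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(t):
        nonlocal pos
        assert peek() == t, f"expected {t!r} at position {pos}"
        pos += 1

    def expr():
        node = term()
        while peek() in ("+", "-"):            # expr -> expr + term | expr - term
            op = peek(); eat(op)
            node = (op, node, term())          # left-associative by construction
        return node

    def term():
        node = factor()
        while peek() in ("*", "/"):            # term -> term * factor | term / factor
            op = peek(); eat(op)
            node = (op, node, factor())
        return node

    def factor():
        t = peek()
        assert t is not None, "unexpected end of input"
        if t in ("+", "-"):                    # factor -> +factor | -factor
            eat(t)
            return (t + "unary", factor())
        if t == "(":                           # factor -> ( expr )
            eat("("); node = expr(); eat(")")
            return node
        eat(t)                                 # factor -> digit | id
        return t

    tree = expr()
    assert pos == len(toks), "trailing input"
    return tree

print(parse("-a*(1+b)/2"))  # ('/', ('*', ('-unary', 'a'), ('+', '1', 'b')), '2')

Because each level of the grammar handles exactly one precedence level, every string gets exactly one parse tree, which is what makes the grammar unambiguous.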


Reference record for OID 1.3.6.1.2.1.2.2.1.10 (ifInOctets)

The total number of octets received on the
interface, including framing characters.

Parsed from file MRVINREACH.mib
Module: MRVINREACH

Description by cisco_v1

The total number of octets received on the interface,
including framing characters.

Discontinuities in the value of this counter can occur at
re-initialization of the management system, and at other
times as indicated by the value of
ifCounterDiscontinuityTime.

Description by oid_info

Total number of octets received on the interface, including framing characters.
View at oid-info.com

Description by mibdepot

The total number of octets received on the
interface, including framing characters.

Parsed from file msh100.mib.txt
Company: None
Module: LBMSH-MIB

Description by cisco

The total number of octets received on the interface,
including framing characters.

Discontinuities in the value of this counter can occur at
re-initialization of the management system, and at other
times as indicated by the value of
ifCounterDiscontinuityTime.
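The discontinuity caveat above matters when turning two readings of this counter into a rate. Below is a minimal Python sketch (ours, not from any of the quoted MIBs) that assumes the object is a 32-bit counter and that at most one wrap occurred between polls; how the samples are fetched (e.g., an SNMP GET of this OID) is left out, and a change in ifCounterDiscontinuityTime would mean the delta should be discarded rather than used:

COUNTER32_MOD = 2**32

def byte_rate(prev, curr, interval_s):
    """prev/curr are successive counter readings taken interval_s seconds apart."""
    delta = (curr - prev) % COUNTER32_MOD   # tolerates a single 32-bit wraparound
    return delta / interval_s

# Example: the counter wrapped between polls taken 60 s apart.
print(byte_rate(4294967000, 704, 60.0))     # 1000 bytes in 60 s -> ~16.67 B/s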



The Apache Maven team would like to announce the release of Maven 2.2.1.

Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central place.

The core release is independent of the plugins available. Further releases of plugins will be made separately. See the Plugin List for more information.

We hope you enjoy using Maven! If you have any questions, please consult:

2.2.1 Release Notes

Maven 2.2.1 aims to correct several critical regressions related to the selection of the HttpClient-based Wagon implementation for HTTP/HTTPS transfers in Maven 2.2.0. The new release reverts this selection, reinstating the Sun-based - or lightweight - Wagon implementation as the default for this sort of traffic. However, Maven 2.2.1 goes a step further to provide a means of selecting which provider - or implementation - the user wishes to use for a particular transfer protocol. More information on providers can be found in our Guide to Wagon Providers.

In addition, Maven 2.2.1 addresses some long-standing problems related to injecting custom lifecycle mappings and artifact handlers. These custom components are now loaded correctly whether they come from a plugin with the extensions flag enabled or from a pure build extension. Custom artifact handlers are now also used to configure the attributes of the main project artifact, in addition to any artifacts related to dependencies or project attachments created during the build.

The full list of changes can be found in our issue management system, and is reproduced below.

