Calculate area of Ellipse without calculus? I like the way integration works, but the final formula $\pi ab$ is too simple.
I know there is a deeper way to derive it. I just don't want to use calculus here; too many equations.
I'd like to use simple math, which does offer deeper insight into it.
| You may use an affine map $\varphi$ to send an ellipse into a circle. Since affine maps preserve the ratios between areas, the area of the ellipse is $\frac{\text{Area}(\text{circle})}{\left|\det\varphi\right|}=\pi a b$.
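To make the map concrete (a sketch, assuming the ellipse is given by $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$): take $\varphi(x,y)=\left(\frac{x}{a},\frac{y}{b}\right)$, which sends the ellipse onto the unit circle and has $\det\varphi=\frac{1}{ab}$, so
$$\text{Area}(\text{ellipse})=\frac{\text{Area}(\text{unit circle})}{\left|\det\varphi\right|}=\frac{\pi}{1/(ab)}=\pi ab.$$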
|
$x$ is a solution of $0<|x-c|<\delta$
$x$ is a solution of $0<|x-c|<\delta$ iff $c-\delta<x<c+\delta$.
The book says that statement is false, but I can't understand why.
$0<|x-c|$ by definition of the absolute value and it's not 0 unless $x=c$.
$$x-c>0:x-c<\delta\iff x<c+\delta$$
$$x-c<0:c-x<\delta\iff x>c-\delta$$
Doesn't it prove it true?
| If $x=c$ then $c-\delta<x<c+\delta$ is true, but $|x-c|=0$ and $0<|x-c|<\delta$ is false. If we changed the first line to $0\le|x-c|<\delta$ then it would be true: $0\le|x-c|$ and not $0<|x-c|$ by the absolute value definition.
|
Why is the energy of the Korteweg-de Vries equation constant in time? The Korteweg-de Vries equation is $u_t+6uu_x+u_{xxx}=0$. I know that its energy is
$$E(t)=\int_{-\infty}^\infty \left(\frac12(u_x)^2-u^3\right)dx.$$
I know $u(x,t)$ and $u'(x,t)$ decay to 0 as $x\rightarrow \pm\infty$. To show that $E(t)$ is constant w.r.t. $t$, is it sufficient to only show $E'(t)=0$?
| This isn't a complete answer but it gives some insight into why the energy is constant for wave solutions, $u(x,t) = f(x - ct)$.
Plugging this into $u_t+6uu_x+u_{xxx}=0$ reduces to a third-order ODE, $-cf' +6ff' + f'''=0$, which can be integrated to $-cf + 3f^2 + f''= C_1$.
If you multiply by $f'$ and integrate a second time you get a constant of motion:
$$-\dfrac{1}{2} c f^2 + f^3 + \dfrac{1}{2}(f')^2 - C_1 f = E$$
This is conservation of energy for the KdV equation with wave solutions.
|
Prove that the gcd is 1. I need to show that for $x$ odd the $\gcd\bigg(x,\dfrac{x^2-1}{2},\dfrac{x^2+1}{2}\bigg)=1$. I'm trying to do it pairwise. I have already shown that $\gcd\bigg(\dfrac{x^2-1}{2},\dfrac{x^2+1}{2}\bigg)=1$, but I don't have an idea how to show that $\gcd\bigg(x,\dfrac{x^2-1}{2}\bigg)=1$ and $\gcd\bigg(x,\dfrac{x^2+1}{2}\bigg)=1$.
I will appreciate the help.
| If $p|x$ then $p|x^2$ and $p\not |x^2 \pm 1$ so $p\not|\frac{x^2 \pm 1}2$.
So $x$ and $\frac{x^2 \pm 1}2$ have no prime factors in common. So $\gcd(x, \frac{x^2 -1}2) = \gcd(x,\frac{x^2 +1}2) = 1$.
And $\gcd(\frac{x^2-1}2,\frac{x^2 + 1}2) = \gcd(\frac{x^2 -1}2, \frac{x^2 +1}2 - \frac{x^2-1}2) = \gcd(\frac{x^2-1}2, 1) = 1$.
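A quick numerical sanity check of both the pairwise claims and the overall gcd (a sketch in Python; the loop over odd $x$ is only an illustration, not a proof):

```python
from math import gcd

# Check the claims for a range of odd x (x odd, x >= 3).
for x in range(3, 200, 2):
    a, b = (x * x - 1) // 2, (x * x + 1) // 2
    assert gcd(x, a) == 1 and gcd(x, b) == 1 and gcd(a, b) == 1
    assert gcd(gcd(x, a), b) == 1
print("gcd(x, (x^2-1)/2, (x^2+1)/2) = 1 for all tested odd x")
```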
|
When does $\frac{\partial f}{\partial y} =0$ imply $f(x,y)=g(x)$? Let $U\subseteq\mathbb R^2$ be an open subset and $f:U\to\mathbb R$ a continuously differentiable function such that $$\forall (x,y)\in U:\frac{\partial f}{\partial y} =0.$$
Which conditions must $U$ and $f$ fulfill for $f$ to be a function in $x$ only, i. e. $\exists g:\mathbb R\to\mathbb R\forall (x,y)\in U:f(x,y)=g(x)$?
I was thinking about $U$ having to be connected, because elsewise we could have something like $U=\left\lbrace (x,y)\in\mathbb R^2\mid y<1\vee y>2\right\rbrace$ and define $f$ as $$f(x,y)=\begin{cases}x,&y<1\\2x,&y>2\end{cases}$$ which would make the partial derivative with respect to $y$ equal to zero without $f$ being independent of $y$.
However, is $U$ being a connected space a sufficient condition? How could I prove that or what would a counterexample look like?
| Here I give a proof of the theorem under the additional hypothesis that $U$ is convex.
By convexity, $\forall \ z_1=(x,y), \ z_2=(x,y') \in U,$ the line segment $\gamma$ between $z_1$ and $z_2$ is totally contained in $U.$ So, applying the "directional mean value theorem" along this segment, you have that $$f(x,y)-f(x,y')=(y-y') \ \partial_yf(x,c_{y,y'})=0,$$ where $c_{y,y'}$ lies between $y$ and $y',$ which means that $f(x,y)=f(x,y')=g(x).$
|
Determining which group of order 30 $G$ is given a relation of its elements. Let's say we have a group $G$ where $|G|=30$. I am familiar with the traditional argument using Sylow's theorems that gives rise to the fact that $G$ is isomorphic to one of the following groups:
$$\mathbb{Z}_{30},\mathbb{Z}_3\times D_5, \mathbb{Z}_5\times D_3, D_{15}$$
Obviously these groups are not isomorphic to one another since they have different numbers of elements of certain orders. We are given the additional information that $a,b\in G$ such that $|a|=2,|b|=15$, $ba=ab^4$. I am told that in this case $G\cong \mathbb{Z}_3\times D_5$; yet, I am not quite sure why this is the case.
If we write $ba=ab^4 \implies baa=ab^4a \implies b=ab^4a\implies b=ab^4a^{-1}$. Since $b^4$ and $b$ are conjugate in $G$, we have that $|b^4|=|b|=15$. Not quite sure where I'm going with this approach, though. A hint would be helpful!
| The order of $b^4$ is the same as that of $b$ without reference to $a,$ since $4$ is relatively prime to $15.$ However, notice that the fact that
$$b^4 = a b a^{-1}$$ implies (by raising both sides to the fifth power) that $$b^5 = a b^5 a^{-1}.$$ Which means that $b^5$ is central. Further, $b^5$ generates the Sylow $3$-subgroup, which means there is exactly one such subgroup. Consider now $a$ and $b^3.$ Note that $ab^3 a^{-1} = b^{12} = b^{-3},$ so $a$ and $b^3$ generate a $D_5.$ I claim now that $G$ is the direct product of the $C_3$ and the $D_5.$ Indeed, the intersection of the two of them has to be trivial (by order considerations), and pairs of elements commute (since one of them is central).
|
Why is $[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}]=4$? I need to use the so-called Ring Tower Theorem to show that $[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}]=4$, but I'm quite confused with the notation and some concepts. First of all, my book says that if $a$ in some extension $E$ of $\mathbb{Q}$ is algebraic over $\mathbb{Q}$ then the elements of $\mathbb{Q}(a)$ can be written in the form $p+qa$, with $p,q\in\mathbb{Q}$. Hence we know that, since $\mathbb{Q}(\sqrt2,\sqrt3)=\mathbb{Q}(\sqrt2+\sqrt3)$, elements of $\mathbb{Q}(\sqrt2,\sqrt3)$ can be written in the form $p+(\sqrt2+\sqrt3)q$. But doesn't this imply that the dimension of the basis for $\mathbb{Q}(\sqrt2,\sqrt3)$ is 2, like it is for $\mathbb{Q}(\sqrt2)$?
The Tower Theorem says that if $E$ is a finite extension field of the field $G$ and $G$ is a finite extension of the field $F$, then $E$ is a finite extension of the $F$, and $[E:F]=[E:G][G:F]$.
If I try to apply this theorem to the case in question, I would do as follows:
$$[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}]=[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}(\sqrt2)][\mathbb{Q}(\sqrt2):\mathbb{Q}] $$
So I know that $[\mathbb{Q}(\sqrt2):\mathbb{Q}]=2$, but how do I find $[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}(\sqrt2)]$?
| Let $x=\sqrt 2+\sqrt 3$. Then $x-\sqrt 3=\sqrt 2$. So $x^2-2\sqrt 3x+1=0$. But this is not the minimum degree polynomial over $\mathbb{Q}$ since $-2\sqrt 3 \notin\mathbb{Q}$. Thus with $x^2+1=2\sqrt 3x$ squaring both sides gives $x^4-10x^2+1=0$. One can check that this quartic is irreducible over $\mathbb{Q}$ (it has no rational roots and does not factor into two rational quadratics), so it is the minimal polynomial of $\sqrt2+\sqrt3$. Hence $[\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}]=4$.
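If you want to verify the degree computationally (a sketch, assuming sympy is available), the minimal polynomial of $\sqrt2+\sqrt3$ over $\mathbb{Q}$ can be computed directly:

```python
from sympy import sqrt, Symbol, minimal_polynomial, degree

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p)              # x**4 - 10*x**2 + 1
print(degree(p, x))   # 4, so [Q(sqrt2, sqrt3) : Q] = 4
```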
|
Find the number of members of a chess club The members of a chess club took part in a round-robin competition in which each plays everyone else once. All members scored the same number of points, except 4 juniors whose total score was 17.5. How many members were there in the club? Assume that for each win a player scores 1 point, for a draw 1/2 point and zero for losing.
Attempt:-
The total no. of games= C(n,2) . n being no. of members.
Every game distributes exactly 1 point in total, hence the sum of the points of all members = C(n,2).
Total points must remain conserved => C(n,2) = 17.5 + points scored by seniors in terms of n.
I'm not able to work out "points scored by seniors in terms of n".
Also I'm unable to use the fact that the points of the seniors are equal.
Answer=27
| Building on the hints and terminology of @Michael we can write$$\frac{(s+4)(s+3)}{2}=\frac{sm}{2}+17.5$$
or $$s^2+(7-m)s-23=0$$
or $$s=\frac 1 2 ((m-7)\pm \sqrt{m^2-14m+141})$$
The problem then becomes finding a whole number $m$ which gives a whole number solution for $s$. I don't know what method @Michael envisioned for solving this, but brute force computation quickly gives the answer $$m=29$$
resulting in $$s=23$$
and hence $$n=27$$
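For completeness, a sketch of what that brute force could look like (here `m` and `s` are as in the equations above, and the total number of members is $n=s+4$; the search bound is arbitrary):

```python
import math

# Look for integers m, s > 0 with s^2 + (7 - m)s - 23 = 0,
# i.e. whole-number solutions of (s+4)(s+3)/2 = s*m/2 + 17.5.
for m in range(1, 1000):
    disc = m * m - 14 * m + 141
    r = math.isqrt(disc)              # Python 3.8+
    if r * r == disc and (m - 7 + r) % 2 == 0:
        s = (m - 7 + r) // 2
        if s > 0:
            print(m, s, s + 4)        # prints: 29 23 27
```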
|
How find this maximum of the value $\sum_{i=1}^{6}x_{i}x_{i+1}x_{i+2}x_{i+3}$? Let
$$x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\ge 0$$ such that
$$x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=1$$
Find the maximum of the value of
$$\sum_{i=1}^{6}x_{i}\;x_{i+1}\;x_{i+2}\;x_{i+3}$$
where
$$x_{7}=x_{1},\quad x_{8}=x_{2},\quad x_{9}=x_{3}\,.$$
| Let $x_1=a,\ x_2=b,\ x_3=c,\ x_4=d,\ x_5=e,\ x_6=f.$
Objective function is
$$Z(a,b,c,d,e,f) = abcd + bcde + cdef + defa + efab + fabc.$$
Let us maximize $Z(a,b,c,d,e,f)$ using the Lagrange multipliers method, with the function
$$F(a,b,c,d,e,f,\lambda) = abcd + bcde + cdef + defa + efab + fabc + \lambda(1-a-b-c-d-e-f).$$
Setting the partial derivatives $F'_a,\ F'_b,\ F'_c,\ F'_d,\ F'_e,\ F'_f$ equal to zero, one obtains the system:
$$\begin{cases}
bcd+bcf+bef+def = \lambda\\
acd+acf+aef+cde = \lambda\\
abd+abf+bde+def = \lambda\\
abc+aef+bce+cef = \lambda\\
abf+adf+bcd+cdf = \lambda\\
abc+abe+ade+cde = \lambda\\
a+b+c+d+e+f = 1.
\end{cases}$$
Note that:
*
*The first and third equations contain the common term $def.$
*Second and fourth - $aef.$
*Third and fifth - $abf.$
*Fourth and sixth - $abc.$
*Fifth and first - $bcd.$
*Sixth and second - $cde.$
So
$$\begin{cases}
bcd+bcf+bef = abd+abf+bde\qquad(1)\\
acd+acf+cde = abc+bce+cef\qquad(2)\\
abd+bde+def = adf+bcd+cdf\qquad(3)\\
aef+bce+cef = abe+ade+cde\qquad(4)\\
abf+adf+cdf = bcf+bef+def\qquad(5)\\
abc+abe+ade = acd+acf+aef\qquad(6)\\
a+b+c+d+e+f = 1.\qquad\qquad\qquad(7)
\end{cases}$$
Easy to see that:
*
*Both $Z(a,b,c,d,e,f)$ and accordingly the system $(1-7)$ are invariant under any cyclic permutation of $(a,b,c,d,e,f)$, so WLOG we may fix which unknowns vanish.
*Setting any pair of unknowns to zero makes the objective function zero unless those unknowns are neighbours in the cyclic list of unknowns.
Let us consider zero cases in detail.
"Neighbouring zeros" case $b=c=0$ (and cyclic permutations).
Equations $(1)$ and $(2)$ give $LHS=RHS=0,$ remaining equations can be simplified to the form of
$$a=e,\quad d=f,\quad a+d = \frac12,$$
with the objective function
$$Z(a, 0, 0, d, a, d) = a^2\left(\frac12-a\right)^2\quad\text{for } a\in\left(0,\dfrac12\right).$$
That gives
$$Z_{max} = \frac1{256}\text{ for } a=\frac14.$$
"Single zero" case $b=0, a>0, c>0.$ (and cyclic permutations).
And condition $Z>0$ requires $d>0, e>0, f>0.$
The equations $(3,4,6)$ forms a system
$$\begin{cases}
def = adf+cdf\\
aef+cef = ade+cde\\
ade = acd+acf+aef
\end{cases}\rightarrow
\begin{cases}
e = a+c\\
(a+c)f = (a+c)d\\
de = cd+cf+ef,
\end{cases}$$
$ef=de,\ c(d+f)=0.$
This contradicts the conditions $c>0,\ d>0,\ f>0.$
Case $abcdef>0.$
This case allows reduction of $(1-6)$:
$$\begin{cases}
cd+cf+ef = ad+af+de\qquad(1a)\\
ad+af+de = ab+be+ef\qquad(2a)\\
ab+be+ef = af+bc+cf\qquad(3a)\\
af+bc+cf = ab+ad+cd\qquad(4a)\\
ab+ad+cd = bc+be+de\qquad(5a)\\
bc+be+de = cd+cf+ef\qquad(6a)\\
a+b+c+d+e+f = 1.\qquad\quad(7)
\end{cases}$$
The summations $(1a+2a),\ (2a+3a),\ (3a+4a),\ (4a+5a),\ (5a+6a)$ lead to the system
$$\begin{cases}
cd+cf = ab+be\qquad\qquad(1b)\\
ad+de = bc+cf\qquad\qquad(2b)\\
be+ef = ad+cd\qquad\qquad(3b)\\
af+cf = be+de\qquad\qquad(4b)\\
ab+ad = cf+ef\qquad\qquad(5b)\\
a+b+c+d+e+f = 1.\quad(7)
\end{cases}$$
Now the weighted sums $(1b)\cdot d+(2b)\cdot b,$ $(2b)\cdot e+(3b)\cdot c,$ $(3b)\cdot f+(4b)\cdot d,$ $(4b)\cdot a+(5b)\cdot e$ form the system
$$\begin{cases}
cd^2+cdf = b^2c+bcf\\
ade+de^2 = acd+c^2d\\
bef+ef^2 = bde+d^2e\\
a^2f+acf = cef+e^2f\\
a+b+c+d+e+f = 1
\end{cases}\rightarrow
\begin{cases}
(d-b)\cdot(b+d+f) = 0\\
(e-c)\cdot(c+e+a) = 0\\
(f-d)\cdot(f+d+b) = 0\\
(a-e)\cdot(a+e+c) = 0\\
a+b+c+d+e+f = 1
\end{cases}$$
$$f=d=b,\quad e=c=a,\quad a+b = \dfrac13.$$
The objective function is
$$Z(a, b, a, b,a,b) = 6a^2\left(\dfrac13 - a\right)^2$$
for $a\in\left(0,\dfrac13\right)$.
That gives
$$Z_{max} = \frac1{216}\text{ for } a=\frac16,$$
and finally
$$\boxed{\max Z(\vec x) = \dfrac 1{216}\quad\text{ for all }x_i=\dfrac16.}$$
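A crude numerical sanity check of the boxed value (a sketch with numpy, sampling random points of the simplex; it is not a proof, only a plausibility check):

```python
import numpy as np

rng = np.random.default_rng(0)

def Z(x):
    # sum over i of x_i * x_{i+1} * x_{i+2} * x_{i+3}, indices mod 6
    return sum(x[i] * x[(i + 1) % 6] * x[(i + 2) % 6] * x[(i + 3) % 6] for i in range(6))

best = max(Z(rng.dirichlet(np.ones(6))) for _ in range(100_000))
print(best, Z(np.full(6, 1 / 6)), 1 / 216)   # best stays below 1/216, attained at x_i = 1/6
```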
|
Find two bases $M=\left\{v_1,v_2,v_3\right\}$ and $N=\left\{w_1,w_2,w_3\right\}$ such that $T_{MN}(f)$
$A=\begin{pmatrix} 1 & 3 & -1\\
-1 & 0 & -2\\ 1 & 1 & 1 \end{pmatrix}$ is a real matrix and $f: \mathbb{R}^3 \rightarrow \mathbb{R}^3, \text{ } f(x)= A \cdot x$ is a
linear mapping.
Find two bases $M= \left\{v_1,v_2,v_3\right\}$ and $N=
\left\{w_1,w_2,w_3\right\}$ of $\mathbb{R}^3$ such that $T_{MN}(f) =
\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}$
I have absolutely no idea how to do this task, nor do I understand the notation $T_{MN}$. Maybe I could do it if I understood the notation. I hope someone will explain to me what it means / how this could be solved.
If it matters, $Ker(f)=\left\{\begin{pmatrix}
-2z\\
z\\
z
\end{pmatrix} \mid z \in \mathbb{R}\right\}$ and $Im(f)= span\left(\left\{\begin{pmatrix}
1\\
-1\\
1
\end{pmatrix},\begin{pmatrix}
0\\
-3\\
2
\end{pmatrix}\right\}\right)$
(This is no homework!)
I will set a bounty of 200 rep on this question because I really want to understand it.
| If $V$ is a vector space of finite dimension, then there exists a number $d$ (and only one) such that every basis of $V$ has exactly $d$ vectors; this $d$ is the dimension of $V.$
Assume $B = (v_1, \ldots, v_d)$ is one such basis of $V.$ I write a $d$-tuple instead of the more common (and, strictly speaking, erroneous) usage of a set, because the order of the vectors matters. One can create a mapping, which happens to be both linear and bijective, from $V$ to $\Bbb R^d$ via the basis $B$ by means of
$$v = \sum_{k = 1}^d \lambda_k v_k \mapsto (\lambda_1, \ldots, \lambda_d).$$
This mapping is called the coordinate map of $V$ (relative to the choice of basis $B$). Denote it by $[ \cdot ]_B:v \mapsto [v]_B.$
The coordinate map, as I said before, is a linear bijective mapping and hence it is an isomorphism of vector spaces.
Given two finite dimensional vector spaces $V$ and $W$ with corresponding basis $M$ and $N,$ one can create the coordinate spaces $[V]_M = \Bbb R^p$ and $[W]_N = \Bbb R^q,$ where $p$ and $q$ are the respective dimensions of $V$ and $W.$ Basically, what this does is changing the "abstract" spaces $V$ and $W$ for two more familiar ones, viz $\Bbb R^p$ and $\Bbb R^q.$
For a linear transformation $f:V \to W$ one can create a matrix and only one $T_{M,N} = [f]_M^N$ with the defining property that
$$[f]_M^N[v]_M = [f(v)]_N, \quad v \in V.$$
To your exercise. You know that $f(\xi_1, \xi_2, \xi_3) = (\xi_1 + 3\xi_2 - \xi_3, -\xi_1 - 2\xi_3, \xi_1 + \xi_2 + \xi_3).$ So, you can calculate $f(v_1), f(v_2)$ and $f(v_3).$ You want a basis $N = (w_1, w_2, w_3)$ of $\Bbb R^3$ such that $[f(v_1)]_N = (1, 0, 0),$ $[f(v_2)]_N = (0,1,0)$ and $[f(v_3)]_N = (0,0,0).$ Notice that you want $[f(v_3)]_N = 0$ (so by means of the isomorphism, $v_3 \in \ker f$) and $f(v_1)$ and $f(v_2)$ are the first and second vector in $N.$ The natural candidate for $M$ is $v_1 = (1, -1, 1), v_2 = (0, -3, 2)$ and $v_3 = (-2, 1, 1).$ Make $N = (f(v_1), f(v_2), w_3),$ where $w_3$ is any vector that completes the basis. Since $f(v_1) = (-3,-3,1), f(v_2) = (-11,-4,-1)$ one can take $w_1 = (-3,-3,1),$ $w_2 = (-11,-4,-1)$ and $w_3 = (0,0,1).$
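A quick numerical check of these choices (a sketch with numpy; the columns of `M` are the $v_i$, the columns of `N` are the $w_i$, and the matrix of $f$ in these bases is $N^{-1}AM$):

```python
import numpy as np

A = np.array([[1, 3, -1], [-1, 0, -2], [1, 1, 1]])
M = np.column_stack([(1, -1, 1), (0, -3, 2), (-2, 1, 1)])      # columns v1, v2, v3
N = np.column_stack([(-3, -3, 1), (-11, -4, -1), (0, 0, 1)])   # columns w1 = f(v1), w2 = f(v2), w3
print(np.linalg.solve(N, A @ M).round(10))                     # -> diag(1, 1, 0)
```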
|
Why is $P(X \geq a)$ the same as $E[\mathbf{1}_{X \geq a}]$ in a proof of Markov's inequality? I didn't understand the last step.
Markov's inequality: $$P(X \geq a) \leq \frac{E[X]}{a}$$
let $X : \Omega \to \Bbb{R}$ be a random variable
let $$\mathbf{1}_{X \geq a} :\mathbb{R} \to \{0,1\}$$
$$\mathbf{1}_{X \geq a}(X) =
\begin{cases}
0, & \text{if $X<a$ } \\
1, & \text{if $X\geq a$ }
\end{cases}$$
and let $$g = a \cdot \mathbf{1}_{X \geq a} :\mathbb{R} \to \{0,a\}$$
$$g(X) =
\begin{cases}
0, & \text{if $X<a$ } \\
a, & \text{if $X\geq a$ }
\end{cases}$$
then $$g(X) \leq X$$
$$E[g(X)] \leq E[X]$$
$$a\cdot E[\mathbf{1}_{X \geq a}(X)] \leq E[X]$$
$$E[\mathbf{1}_{X \geq a}(X)] \leq \frac{E[X]}{a} \implies P(X \geq a) \leq \frac{E[X]}{a}$$
I didn't understand this last step can someone please elucidate.
| Taking the expectation means integrating the random variable over the outcome space w.r.t. the probability measure. Recall that $\int_{\Omega} 1_{A} dP = \int_{A} 1 dP = P(A)$ by basic properties of Lebesgue integration.
Also note that the set $\{ X \geq a \}$ is shorthand for $\{ \omega \in \Omega \mid X(\omega) \geq a \}$, which is always a subset of $\Omega$ and which is simply the preimage $X^{-1}([a, +\infty))$.
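If a numerical illustration helps, here is a small Monte Carlo check (a sketch with numpy, using exponential samples as a stand-in for a nonnegative $X$) that $P(X\ge a)$ and $E[\mathbf 1_{X\ge a}]$ are literally the same number, and that both are bounded by $E[X]/a$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=1_000_000)   # nonnegative samples, E[X] = 2
a = 5.0

p_event     = np.mean(X >= a)            # empirical P(X >= a)
e_indicator = np.mean((X >= a) * 1.0)    # empirical E[1_{X >= a}] -- identical
print(p_event, e_indicator, X.mean() / a)
```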
|
Exercise in Weibel's book Lie cohomology, Homology Can you please help me out to prove the following (rather easy (but not for me obviously)) exercise which I didn't manage to solve (Exercise 7.4.2, page 229)?
Assume $\mathfrak{g}$ is the free $\mathbb{k}$-module on basis $\{ e_1, e_2,..., e_n \}$, made into an abelian Lie algebra. Show that $H_p(\mathfrak{g}, \mathbb{k}) = \bigwedge^p (\mathfrak{g})$, the $p^{th}$ exterior power of the $\mathbb{k}$-module $\mathfrak{g}$.
Thank you!
| Let $a$ be a finite dimensional abelian Lie algebra, let $\{x_1,\dots,x_n\}$ be a basis for $a$ and let $U$ be its enveloping algebra. It is easy to see that $U$ is a polynomial ring with generators $x_1,\dots,x_n$. You can check that $x_1,\dots,x_n$ is a regular sequence in $U$, so using the result of Chapter 4 in the book that refers to the Koszul complex, you get a free resolution of the module $U/(x_1,\dots,x_n)$, which is just the trivial module over $a$, viewed as a $U$-module. Use that resolution to compute $Ext_U(k,k)$, which is the Lie cohomology $H^*(a,k)$ (dually, $Tor^U(k,k)$ gives the homology $H_*(a,k)$).
|
Prove that $d(x,y) = \sum_{i\in\mathbb{N}} a_i\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)}$ satisfies the triangle inequality Let $(X_i, d_i)$, $i \in \Bbb N$, be a collection of metric spaces.
Define the metric \begin{align}d(x,y) = \sum_{i\in\mathbb{N}} a_i\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)} \end{align} on the infinite product $\prod_{i \in \Bbb N} X_i.$
Note that $(a_i)_{i\in\mathbb{N}}$ is positive and satisfies $\sum_{i\in\mathbb{N}} a_i < +\infty$. For example $a_i = 2^{-i}$.
I am wondering how I should go about proving that this metric satisfies the triangle inequality.
| Let $\rho_k(x_k,y_k) = \frac{d_k(x_k,y_k)}{1+d_k(x_k,y_k)}$ and note that each $\rho_k$ is a metric (since $t\mapsto \frac{t}{1+t}$ is increasing and subadditive on $[0,\infty)$), so for all $k$
$$
\rho_k(x_k,y_k)\le \rho_k(x_k,z_k) + \rho_k(z_k,y_k).
$$
Then multiplying by $a_k$ and summing from $k=1$ up to $k=n$ we have
$$
\sum_{k=1}^na_k\rho_k(x_k,y_k)\le \sum_{k=1}^na_k\rho_k(x_k,z_k) + \sum_{k=1}^na_k\rho_k(z_k,y_k)
$$
Since both sums on the right side converge, we can let $n\to\infty$, and the limit on the right side gives $d(x,z)+d(z,y)$, as we wanted.
|
A general way to solve $p(x)\cdot T(x)=1$ distributional equation?
DISCLAIMER: I should first apologize; it could be that my question does not make much sense or could be imprecise. Just take this as a naive question from a physics guy who is trying to understand what he is doing...
Is there a generic way to solve distributional equations like :
$$
p(x)\cdot T(x)=1
$$
where $p$ would be a second-order polynomial $p(x)=(x-z_1)(x-z_2)$, for instance?
I do know that the general solution to the distributional equation $x\cdot T(x)=1$ reads :
$$
T(x)=\text{vp}\frac{1}{x}+\alpha\,\delta(x)\;,\quad\alpha\in\mathbb{C}
$$
For instance, is there a way to treat the equation $p\cdot T=1$ "locally around the roots" of $p$ as we would treat the equation $x\cdot T=1$ and give a solution which would look like :
$$
T(x)=\text{vp}\frac{1}{x-z_1}+\alpha_1\,\delta(x-z_1)+\text{vp}\frac{1}{x-z_2}+\alpha_2\,\delta(x-z_2)
$$
Any advice/resources are appreciated. Thanks in advance.
| If the roots of $p$ are real and distinct, then the intuition suggests that $$T" =" \frac{1}{(x-x_1)(x-x_2)}.$$ We can rewrite it as $$T(x) "=" \frac{1}{x_1-x_2}\left(\frac{1}{x-x_1} - \frac{1}{x-x_2}\right).$$ Now we apply a standard workaround to avoid the non-local-integrability of the terms $\frac{1}{x-x_i}$: we consider principal values (and add delta-functions arising from the non-uniqueness of the solution):
$$T = \frac{1}{x_1-x_2}\left(PV\left(\frac{1}{x-x_1}\right) - PV\left(\frac{1}{x-x_2}\right)\right) + c_1\delta_{x_1} + c_2\delta_{x_2}.$$
If a root, say, $x_1$ has a non-zero imaginary part, then the function $x\to x-x_1$ is $C^\infty$ on $\Bbb R$ and is never zero there, so we can safely divide both parts of the equation (and therefore reduce our problem to a well-known one):
$$(x-x_2) T = \frac{1}{x-x_1}.$$
Finally, if $x_1=x_2\in\Bbb R$, then neither of the cases above apply. Let us take for simplicity $x_1=x_2=0$. We need to guess a solution of the equation $x^2T=1$. Obviously, we need to play around $PV(1/x)$. Let us take $G = -\left(PV(1/x)\right)'$, then
$$\langle x^2 G,\phi\rangle = \langle G,x^2\phi\rangle = \langle PV(1/x),x^2\phi'+2x\phi\rangle = $$
$$=\lim_{\varepsilon\to 0}\left( \int_{\varepsilon}^{+\infty}\frac{x^2\phi'(x)+2x\phi(x)}{x}dx+\int_{-\infty}^{-\varepsilon}\frac{x^2\phi'(x)+2x\phi(x)}{x}dx\right)$$
$$ = \int_{\Bbb R}(x\phi'(x)+2\phi(x))dx=\int_{\Bbb R}\left( \left( x\phi(x) \right)'+\phi(x) \right)dx=\langle 1,\phi\rangle,$$
hence $G$ is a solution of $x^2T=1$. We can conclude
$$T = -\left(PV(1/x)\right)' + c_0\delta_0 + c_1\delta_{0}' .$$
|
The dimension of the kernel of $X \to AX-XA$ is the sum of the squares of the multiplicities of the eigenvalues of $A$ Let $A$ be a diagonalizable matrix and let $F$ be this application
$$F:M(n,\mathbb{R})\to M(n,\mathbb{R})$$
$$F(X)=AX-XA$$
Prove that the dimension of the kernel of $F$ is the sum of the squares of the multiplicities of the eigenvalues of $A$.
First I tried to find a comfortable basis to write a good associated matrix, but I didn't find anything; then I saw that if $B$ is a matrix that diagonalizes $A$ ($B^{-1}AB=D$) then a generic $X$ in the kernel satisfies
$$B^{-1}XBD=DB^{-1}XB$$
How can I proceed?
| Let $E_{ij}$ be the elementary matrix whose only nonzero entry is in the $i$th row and $j$th column.
Lemma: Let $D \in Mat_n (\mathbb{R})$ be a diagonal matrix. Then $[ D, E_{ij} ] = 0 \iff D_{ii} = D_{jj}$.
Proof: $D E_{ij} = D_{ii} E_{ij}$ and $E_{ij} D = D_{jj} E_{ij}$.
Lemma: for a diagonal matrix $D \in Mat_n (\mathbb{R})$ define the $\mathbb{R}$-linear map $\Phi : Mat_n (\mathbb{R}) \rightarrow Mat_n (\mathbb{R})$ by sending $X$ to $[D, X]$. The kernel has basis $\{ E_{ij} | D_{ii} = D_{jj} \}$, the length of which is clearly $\sum_{\lambda \in \Lambda } \mu (\lambda)^2$ where $\Lambda$ is the set of eigenvalues and $\mu$ is the multiplicity of each eigenvalue.
Reduction from the case of diagonalizable matrices to the case of diagonal matrices: Take a diagonalizable matrix $M \in Mat_n (\mathbb{R} )$ with $A^{-1} M A = D$ for an invertible matrix $A$ and a diagonal matrix $D$. Then $\alpha_A: Mat_n (\mathbb{R}) \rightarrow Mat_n (\mathbb{R})$ where $X \mapsto A^{-1} X A$ is an isomorphism. As before, define an $\mathbb{R}$-linear map $\Phi : Mat_n (\mathbb{R}) \rightarrow Mat_n (\mathbb{R})$ by sending $X$ to $[M, X]$. Define an $\mathbb{R}$-linear map $\Psi : Mat_n (\mathbb{R}) \rightarrow Mat_n (\mathbb{R})$ by sending $X$ to $[D, X]$. $\Phi \circ \alpha_A = \alpha_A \circ \Psi$. We can conclude that $ker(\Phi) \cong ker(\Psi)$ as $\mathbb{R}$-vector spaces, so that their dimension is the same.
The claim therefore follows from the lemmas above. More detail upon request.
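The dimension count can also be checked numerically (a sketch with numpy): vectorizing, the map $X\mapsto AX-XA$ has matrix $I\otimes A-A^{T}\otimes I$, and its nullity should equal $\sum_{\lambda}\mu(\lambda)^2$.

```python
import numpy as np

# A diagonalizable matrix with eigenvalues 1 (multiplicity 2) and 5 (multiplicity 1).
P = np.array([[1., 2., 0.], [0., 1., 3.], [1., 0., 1.]])
A = P @ np.diag([1., 1., 5.]) @ np.linalg.inv(P)

n = A.shape[0]
K = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))   # matrix of X -> AX - XA acting on vec(X)
print(n * n - np.linalg.matrix_rank(K))                # 5 = 2^2 + 1^2
```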
|
Show such a function has a maximum Let $f:[0, \infty)\to\mathbb{R}$ be a continuous function.
$f(0) = 1 $ and $\forall x \in [0, \infty)$ $f(x)\leq \frac{x+2}{x+1}$
Show that $f$ gets a maximal value in $[0, \infty)$.
My intuition:
If $f(0)$ is the maximum I'm done; if not, the bounding function converges to $1$, and I want to show that from a certain point on $f$ will be lower than $1.$ Can you help me formalize it?
| Assume that $\sup_{x\geq 0}f(x)>1$ and that the maximum does not exist. Then there is a sequence $(x_n)_{n\in\Bbb N}$ in $[0,\infty)$ (wlog monotonically increasing) such that $f(x_n)\to\sup_{x\geq 0}f(x)$ for $n\to\infty$. Furthermore, the sequence $(x_n)_{n\in\Bbb N}$ can be chosen as $x_n\to\infty$ (otherwise the sequence were contained in a compact set $K$ and (because of monotonicity) it would converge to a $x_0\in K$, which is necessarily the maximum of $f$ on $K$ (because of continuity) - that is a contradiction to the assumption).
Now we obtain
$$1<\sup_{x\geq 0}f(x) = \lim_{n\to\infty}f(x_n) \leq \lim_{n\to\infty}\frac{x_n+2}{x_n+1} =1,$$
which is a contradiction. Therefore the assumption was wrong and the maximum exists or $\sup_{x\geq 0}f(x)=1$ holds. But then with $f(0)=1$ the maximum exists, too.
|
Prove $\lim_{(x,y) \to (-1,8)} xy = -8$ using only the definition. I'm having problems trying to prove this limit using only the definition with delta and epsilon.
I need to see that:
$$ \forall\epsilon \ \exists \delta \ : \sqrt{(x+1)^2+(y-8)^2} < \delta \Rightarrow |xy+8|<\epsilon$$
I want to make $|xy+8|$ look like the first half. I start by replacing $t=x+1$, so $x=t-1$ and $s=y-8$, so $y=s+8$. So I get:
$$ |(t-1)(s+8)+8| = |ts+8t-s-8+8| = |s(t-1)+8t| \leq |s(t-1)|+8|t|$$
Here I'm stuck, if I use that $\delta < 1$ then I can say that $t-1 > 0$ and since $|s(t-1)| = |s||t-1|$ then $|s||t-1| = |s|t-|s|$, which means I get:
$$ |(t-1)(s+8)+8| \leq |s|t-|s|+8|t| \leq |s|t+8|t| $$
I think I need to bound it by $||(s,t)||$, so I can add/substract and simplify, but I can't see how to do it. Any tips you can give me?
| For a slightly different approach consider the following inequality. \begin{align} |xy + 8| &= |(x+1)(y-8) \, \, + 8x - y + 16 |
\\ &= |(x+1)(y-8)\, \, + 8(x+1) - (y-8)|
\\&\leq |(x+1)(y-8)|+ 8|x+1| + |y-8|
\\&= |x+1|\cdot|y-8|\, + 8|x+1| + |y-8|\end{align}
If you want to use the euclidean norm, then note that using the above inequality we have
\begin{align} \| (x,y) - (-1,8) \| < \delta_0 &\Rightarrow |x+1| < \delta_0 \text{ and } |y-8| < \delta_0 \\&\Rightarrow | xy + 8 | < \delta_0^2 + 8\delta_0 + \delta_0.\end{align}
Therefore given $\epsilon > 0$, any $\delta$ such that $0 < \delta^2 + 8\delta + \delta < \epsilon$ will work.
|
Tan function and isosceles triangles I have a non-right-angled isosceles triangle with two longer sides, X, and a short base Y.
I know the length of the long sides, X.
I also know the acute, vertex angle opposite the base Y, let's call it angle 'a'
I have been told I can calculate the length of the base Y by:
Y = tan(a) x X
I've sketched this out with a few hand drawn triangles and it does seem to work...... But why?
I can't derive that formula from any of the trigonometry I know. What am I missing?
| Cut the isosceles triangle in half to get two right triangles with opposite side $\frac 12 y$ and hypotenuse $x$.
$\frac 12 Y = \sin (\frac 12 a) x$
So apparently this is claiming $ 2\sin(\frac 12 a) = \tan a$ which isn't true but is apparently an approximation. $\tan a = \frac{\sin (\frac 12 a + \frac 12 a)}{\cos(\frac 12 a + \frac 12 a)} = \frac {2\sin \frac 12 a\cos \frac 12 a}{\cos^2 \frac 12 a - \sin^2 \frac 12 a}=2\sin\frac 12 a*\frac {\cos \frac 12a}{\cos^2 \frac 12 a - \sin^2 \frac 12 a}$
And $\frac {\cos \frac 12a}{\cos^2 \frac 12 a - \sin^2 \frac 12 a}=\frac {\cos \frac 12a}{1 - 2\sin^2 \frac 12 a}=\frac {\cos \frac 12a}{\cos a}$ which, for small values of $a$, is close to $1$. (You said $x > y$, so $a < 60^\circ$ and $\frac 12 a < 30^\circ$. For $a = 60^\circ$ the term is $\frac{\cos 30^\circ}{\cos 60^\circ}=\sqrt{3}\approx 1.73$, and as $a$ decreases it gets closer to $1$.)
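A quick numerical comparison of the exact base $2x\sin(a/2)$ with the claimed $x\tan a$ (a sketch in Python) shows why the hand-drawn sketches looked convincing for small apex angles:

```python
import math

for deg in (5, 10, 20, 40, 60):
    a = math.radians(deg)
    exact, approx = 2 * math.sin(a / 2), math.tan(a)
    print(deg, round(exact, 4), round(approx, 4), round(approx / exact, 4))
# the ratio tan(a) / (2 sin(a/2)) grows from ~1.00 for small a up to sqrt(3) ≈ 1.73 at a = 60°
```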
|
Taking a sum of exponentials and turning it into a fraction I'm curious as to how they made this jump in logic:
$$A e^{i \omega t} \left[1 + e^{i \phi} + e^{2i\phi} + \dots + e^{(N-1)i\phi}\right] = A e^{i\omega t} \frac{e^{i N\phi} - 1}{e^{i\phi} - 1}$$
How did they convert the sum within the brackets into the expression below?
| They used:
1. Power laws: $(e^x)^a=e^{xa}$
2. The sum of a finite geometric series (a short derivation is sketched below)
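In more detail (a sketch of the second step): for $r\neq 1$ the finite geometric sum telescopes,
$$(r-1)\left(1+r+r^2+\cdots+r^{N-1}\right)=r^{N}-1\quad\Longrightarrow\quad\sum_{k=0}^{N-1}e^{ik\phi}=\frac{e^{iN\phi}-1}{e^{i\phi}-1}\qquad(r=e^{i\phi}),$$
and multiplying by $Ae^{i\omega t}$ gives the right-hand side above (valid when $e^{i\phi}\neq 1$).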
|
Prove $X^tX$, where $X$ is a matrix of full column rank, is positive definite? Let $X$ be a matrix of dimension $n\times k$ where $n>k$, $\text{rk}(X)=k$ so $X$ is of full column rank. Then how do I prove $X^tX$ is always positive definite, where $X^t$ is the transpose of $X$? This is given sort of like a lemma in our lecture slides without proof, but I would like to have some reasoning behind it. Thank you for your help!
| For any real matrix $X$ with full column rank we can show that the product $X^tX$ is a positive definite matrix. In fact, let's just take a non-zero vector $v$. Then we can easily see that: $$v^tX^tXv=||Xv||^2>0$$
because, since the matrix $X$ has full column rank, $Xv\neq 0.$
|
Why does $\epsilon$ come first in the $\epsilon-\delta$ definition of limit? As we know, $\underset{x\rightarrow c}{\lim}f(x)=L\Leftrightarrow$ for every $\epsilon>0$ there exists $\delta>0$ such that if $0<|x-c|<\delta$, then $|f(x)-L|<\epsilon$.
My question is: why do we say that for every $\epsilon>0$ there exists $\delta>0$ and not vice versa, why not for every $\delta>0$ there exists $\epsilon>0$?
| Good question!
The answer is that if we put $\delta$ first, then our definition would no longer correspond to what we mean by the limit of a function.
Proposition Let $f$ be the constant function $0$. Let $c,L$ be any real numbers. Then:
For all $\delta>0$ there exists $\epsilon>0$ such that if $0<|x-c|<\delta$, then $|f(x)-L|<\epsilon$.
In other words, if we define $\lim'$ by putting the $\delta$ first, then $\lim_{x\to c}'f(x)=L$.
Proof. Let $\delta>0$ and let $\epsilon=|L|+1$. If $0<|x-c|<\delta$ then $$|f(x)-L| = |0-L|=|L|<|L|+1=\epsilon\quad\Box$$
So our constant function $0$, which should clearly converge to $0$ at every point, under the new definition converges to every real number at every point.
In particular, limits aren't unique and the properties we expect to hold for a 'limit' no longer hold. It just isn't a very interesting definition and it no longer captures our intuition of what limits should be.
By contrast, the definition with $\epsilon$ first exactly captures the notion of a 'limit'.
|
problem involving inertia
Update my lagrangian is
$$L=\frac{ml^2}2\left(\dot \theta_1^2+\frac 13\dot \theta_2^2+\dot \theta_1\dot \theta_2\cos\delta\right)+mgl(cos\theta_1+\frac 12\cos\theta_2)$$
and for my linearised equations of motion question (5) i got the following
$$(\ddot \theta_1) +\frac{2}{3}\ddot\theta_2=\frac{-2g}{l}\theta_2$$
and $$2\ddot \theta_1 +\ddot \theta_2=\frac{-2g}{l}\theta_1$$
These seem to give numerically unpleasant normal mode frequencies though, so I'm not entirely sure that they're right; maybe I went wrong in calculating the equations of motion. A further solution would be ideal. Thanks
my differentials were
$$\frac{dL}{d\theta_1}=-\frac{ml^2}{2}\dot\theta_1\dot\theta_2\cos(\theta_1)sin(\theta_2)-\frac{ml^2}{2}\dot\theta_1\dot\theta_2\sin(\theta_1)\cos(\theta_2)-mgl\sin(\theta_1)$$
$$\frac{dL}{d\theta_2}=-\frac{ml^2}{2}\dot\theta_1\dot\theta_2\cos(\theta_1)\sin(\theta_2)+\frac{ml^2}{2}\dot\theta_1\dot\theta_2\sin(\theta_1)\cos(\theta_2)-\frac{mgl}{2}\sin(\theta_2)$$
$$\frac{dL}{d\dot\theta_1}=ml^2\dot\theta_1+\frac{ml^2}{2}\dot\theta_2\cos(\theta_2-\theta_1)$$
and finally $$\frac{dL}{d\dot\theta_2}=\frac{ml^2}{3}\dot\theta_2+\frac{ml^2}{2}\dot\theta_1\cos(\theta_2-\theta_1)$$
so the two equations of motion are
$$ml^2\ddot\theta_1+\frac{ml^2}{2}\dot\theta_1\dot\theta_2\cos(\theta_1)sin(\theta_2)+\frac{ml^2}{2}\dot\theta_1\dot\theta_2\sin(\theta_1)\cos(\theta_2)+mgl\sin(\theta_1)=0$$
and $$\frac{ml^2}{3}\ddot\theta_2+\frac{ml^2}{2}\dot\theta_1\dot\theta_2\cos(\theta_1)\sin(\theta_2)-\frac{ml^2}{2}\dot\theta_1\dot\theta_2\sin(\theta_1)\cos(\theta_2)+\frac{mgl}{2}\sin(\theta_2)=0$$
| *
*For an arbitrary point on the rod, let $p$ be its distance from the point of connection to the wire. Since the mass is uniformly distributed, then
$$dm=\frac ml dp$$
and the distance of that point from the reference of rotation can be obtained from the law of cosines:
$$r^2=l^2+p^2+2lp\cos\delta$$
Here, $\delta=\theta_2 - \theta_1\,$. The moment of inertia is defined by:
$$dI=r^2 dm\implies I=\frac ml\int_0^l r^2 dp$$
Consequently
$$I=\frac ml\int_0^l(l^2+p^2+2lp\cos\delta)dp=m l^2\left(\frac 43+\cos(\theta_2-\theta_1)\right)$$
*The kinetic energy is calculated from a similar procedure.
$$dK=\frac 12v^2dm\implies K=\frac m{2l}\int_0^l v^2 dp$$
where
$$\begin{align}v^2&=v_x^2+v_y^2=[\partial_t(l\sin\theta_1+p\sin\theta_2)]^2+[\partial_t(l\cos\theta_1+p\cos\theta_2)]^2\\&=
(l\dot{\theta_1}\cos\theta_1+p\dot{\theta_2}\cos\theta_2)^2+(l\dot{\theta_1}\sin\theta_1+p\dot{\theta_2}\sin\theta_2)^2\\&=
(l\dot{\theta_1})^2+(p\dot{\theta_2})^2+2lp\dot{\theta_1}\dot{\theta_2}\cos\delta\end{align}$$
Thus
$$K=\frac{ml^2}2\left(\dot{\theta_1}^2+\frac 13\dot{\theta_2}^2+\dot{\theta_1}\dot{\theta_2}\cos\delta\right)$$
*And similarly, the potential energy is the result of an integral as well:
$$dU=-gh\,dm\implies U=-\frac {mg}l\int_0^l hdp$$
where
$$h=l\cos\theta_1+p\cos\theta_2$$
which results in $$U=-mgl(\cos\theta_1+\frac 12\cos\theta_2)$$
*Lagrangian is given by $K-U$. Let $\omega_1=\dot\theta_1$ and $\omega_2=\dot\theta_2$ then
$$L=\frac{ml^2}2\left(\omega_1^2+\frac 13\omega_2^2+\omega_1\omega_2\cos(\theta_2-\theta_1)+\frac gl(2\cos\theta_1+\cos\theta_2)\right)$$
and from Euler-Lagrange equations
$$\frac d{dt}\left(\frac{\partial L}{\partial\omega_i}\right)=\frac{\partial L}{\partial\theta_i},\quad i=1,2$$
we have
$$2\dot\omega_1+\dot\omega_2\cos(\theta_2-\theta_1)-\omega_2^2\sin(\theta_2-\theta_1)+2\frac gl\sin\theta_1=0\\
\frac 23\dot\omega_2+\dot\omega_1\cos(\theta_2-\theta_1)+\omega_1^2\sin(\theta_2-\theta_1)+\frac gl\sin\theta_2=0$$
Thus the state-space equations, knowing that the state variables are $\theta_1,\theta_2,\omega_1,\omega_2$ can be written as:
$$\begin{align}\pmatrix{\dot\theta_1\\\dot\theta_2}&=\pmatrix{\omega_1\\ \omega_2}\\
\pmatrix{\dot\omega_1\\\dot\omega_2}&=\pmatrix{2&\cos(\theta_2-\theta_1)\\ \cos(\theta_2-\theta_1)&\frac 23}^{-1}
\pmatrix{\omega_2^2\sin(\theta_2-\theta_1)-2\frac gl\sin\theta_1\\
-\omega_1^2\sin(\theta_2-\theta_1)-\frac gl\sin\theta_2}
\end{align}$$
and you can linearize the system by calculating the Jacobian, $\frac{\partial f}{\partial x}|_{x=0}$ which is a straightforward but tedious process.
$$x=(\theta_1,\theta_2,\omega_1,\omega_2)^T\longrightarrow\left[\frac{\partial f}{\partial x}\right]_{x=0}=\pmatrix{0&I_{2\times 2}\\J_{2\times 2}&0}$$
where
$$J=\frac gl\pmatrix{-4&3\\6&-6}$$
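Regarding the "numerically unpleasant" normal mode frequencies mentioned in the question: with this $J$ they follow from its eigenvalues, $\omega^2=-\lambda$ (a sketch with numpy, taking $g/l=1$ so the frequencies come out in units of $\sqrt{g/l}$):

```python
import numpy as np

J = np.array([[-4., 3.], [6., -6.]])   # Jacobian block above, in units of g/l
lam = np.linalg.eigvals(J)             # eigenvalues: -5 ± sqrt(19)
print(np.sort(np.sqrt(-lam)))          # mode frequencies ≈ [0.80, 3.06] (times sqrt(g/l))
```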
|
Equality of limit and a sequence and its terms Two numbers $a$ and $b$ are equal if and only if for every $\epsilon > 0 $,
$|a-b| < \epsilon $
A sequence is said to converge to a limit $L$ if for any $\epsilon >0$, $\exists m \in \mathbb{N}$ such that $|x_n - L| <\epsilon$ for all $n\ge m$.
Does the second statement imply that after a certain $m$, all the terms in the sequence are equal to the limit?
| No, here is a counterexample. Let $x_n = \frac{1}{n}$. We want to show that it converges to $L = 0$. Let $\epsilon > 0$. Then pick some $m$ such that $m > \frac{1}{\epsilon}$. For all $n \geq m$ we have
$$ |x_n - 0 | = | \frac{1}{n} | \leq |\frac{1}{m} | < \epsilon. $$
But, obviously, no $x_n$ is equal to $0$ for any $n$.
The reason why you can't use the definition of equality in the equation $|x_n - L |< \epsilon$ is that the choice of $n$ depends on the choice of $\epsilon$. When taking the limit, you do the following:
*
*Pick an $\epsilon > 0$. Find a lower bound $m_\epsilon$ (corresponding to that value of $\epsilon$.)
*For all $n \geq m$, verify $|x_n - L| < \epsilon$.
When testing for equality of $x_n$ and $L$, you do the following:
*
*Pick some $x_n$ and $L$.
*Now for all $\epsilon > 0$, verify $|x_n - L| < \epsilon$.
The key here is that $\forall A\ \exists B$ is not the same as $\exists B \text{ s.t. } \forall A$. The order of the quantifiers you use here matters. (A similar principle is at play with continuity and uniform continuity.)
|
Looking for help in understanding a solution to a Calc III problem about surfaces Studying for a Calc III midterm and I'm trying to shore up my intuitions. The question I'm looking at asks:
Show that the curve with parametric equations $x = \sin (t)$, $y = \cos (t)$, $z = \sin^2 (t)$ is the curve of intersection of the surfaces $z = x^2$ and $x^2 + y^2 = 1$.
Now I have the solution to the above problem but I'd like to understand why its the right answer.
The solution takes the equations of the two surfaces and sets the equal to each other, as both equations equal 0 with some algebraic manipulation. The resultant equation is:
$2x^2+y^2-z\:=\:1$
Straightforward enough. My first thought is that this equation represents a surface in 3d space, no? I believe it's a paraboloid. So this is currently the surface that represents the intersection of the two surfaces. But the question gives us a parametric equation of a curve (as an aside, are parametric equations typically used to describe surfaces, or only curves/lines?), so obviously we have more work to do. Is the curve basically the outline of the surface or am I missing something in my visualization of what this should look like?
The parametric equations can be subbed in for x, y, and z and put into the equation for the surface intersection given above (the $2x^2+y^2-z\:=\:1$ equation). After some more algebra, the resulting statement is found:
$\sin^2\left(t\right)+\cos^2\left(t\right)=1$
So this is a well know trig identity, right? I'm not sure how this shows that the given parametric equations are a curve of the intersection of the two equations above. Could someone unpack this for me a bit?
| It's helpful to visualize what the surfaces you are trying to intersect look like. In this case, you have two "cylinders". One is perpendicular to the xy plane, while the other is perpendicular to the xz plane. So when you intersect these surfaces and obtain $$2x^2+y^2-z=1,$$ you have to keep in mind that this intersection is now a curve, not a surface. And to set that in stone, you introduce a parametrization for the curve. Set $x=\sin(t)$ and $y=\cos(t)$ (as you said), then solve for $z=2x^2+y^2-1=2\sin^2(t)+\cos^2(t)-1=\sin^2(t)$.
|
Differentiating $ x^{a}y^{b} = c $, in its simplest form. $$ x^{a}y^{b} = c, $$
where a, b and c are constants. My attempts so far
$$ \frac{dy}{dx} = ax^{a - 1}by^{b - 1}$$
$$ \frac{d^2y}{dx^2} = (a^2 - a)x^{a-2}(b^2 - b)y^{b - 2} $$
I think that these first and second derivatives are correct, however my issue is, are these the derivatives in their simplest form?
Any hints or inputs are welcomed.
| Assuming this is implicit differentiation, then
$$\frac{d}{dx}y^n=ny^{n-1}\frac{dy}{dx}$$
Use this together with the product rule to obtain the correct solution.
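Concretely, a sketch of where the hint leads (assuming $x,y>0$ and $b\neq 0$): taking logarithms of $x^a y^b=c$ gives $a\ln x+b\ln y=\ln c$, and differentiating,
$$\frac{a}{x}+\frac{b}{y}\frac{dy}{dx}=0\quad\Longrightarrow\quad\frac{dy}{dx}=-\frac{a\,y}{b\,x},$$
which is the first derivative in its simplest form; differentiating once more (using the quotient rule and substituting $y'$ back in) gives a correspondingly simple second derivative.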
|
Dense transfer of a set with positive lebesgue measure: is it conull? I'm facing a problem in measure theory and I need to prove the following conjecture to move on.
Attention: I'm not sure the following statement is true.
Let $A \subset \mathbb{R}$ be a measurable set such that $m(A)>0$ and $H$ be a countable, dense subset of $\mathbb{R}$. If $A+H=\{a+h: a \in A, h \in H\}$, prove that $m((A+H)^c)=0$.
I'm totally stuck. It's easy to see that $A+H=\displaystyle{\bigcup_{h \in H} A+h}$, so it's definitely a measurable set, but that's the only progress I've been able to make. Any help would be greatly appreciated!
| Use Lebesgue density theorem (LDT) which has an elementary proof.
Towards a contradiction, suppose $B = \mathbb{R} \setminus (A + H)$ has positive measure. Using LDT, choose open intervals $I, J$ of same length such that $B \cap I$ has $\geq 99$ percent measure of $I$ and $A \cap J$ has $\geq 99$ percent measure of $J$. Choose $h \in H$ (using the density of $H$) such that $J + h$ meets $I$ on a set of measure $\geq 99$ percent of $I$ (which has same length as $J$). Do you see a problem now?
|
Square classes p-adic numbers isomorphism I was reading about the Hilbert symbol and the Hasse-Minkowski theorem and found this statement in the book I was reading :
$ |\frac{Q_p^{*}}{(Q_p^{*})^{2}} |=2^r$; with $r=2$ for $p \neq 2$ and $r=3$ for $p=2$
I was thinking about proving it directly by exhibiting an isomorphism, but I got a bit stuck. I'd appreciate any advice or hint you could give me in order to prove it.
Thank you so much in advance.
| It may be possible to prove this via Hensel's lemma, but I think a more persuasive argument (perhaps the most standard one) is by p-adic versions of exponential and logarithm (yes, taking p-adic inputs and producing p-adic outputs). The $n!$ in the denominator of the exponential is no longer large, p-adically, indeed, so this causes some trouble, rather than helping convergence. The specifics about the convergence (in most textbooks...) show, qualitatively, that for given $n$, anything sufficiently near $1$ has an $n$-th root. The details work out to give what you are trying to prove in the square-root/square case. (Among other sources, my notes on alg no. th. cover such things: http://www-users.math.umn.edu/~garrett/m/number_theory/.)
|
How do you construct a function that is continuous over $(0,1)$ whose image is the entire real line? How do you construct a continuous function over the interval $(0,1)$ whose image is the entire real line?
When I first saw this problem, I thought $\frac{1}{x(x-1)}$ might work since it is continuous on $(0,1)$, but when I graphed it, I saw that there is a maximum at $(1/2,-4)$, so the image is $(-\infty,-4]$ and not $(-\infty,\infty)$.
Apparently, one answer to this question is:
$$\frac{2x-1}{x(x-1)}$$
But how is one supposed to arrive at this answer without using a graphing calculator?
| Here's another example. Take the continuous function $f:(0,1)\to\mathbb R$ defined by $$f(x)=\frac{1}{x}\cos\frac{\pi}x.$$ For $n\in\mathbb N$ we have $f(\frac1{2n})=2n$ and $f(\frac1{2n+1})=-2n-1$, so $[-2n-1,2n]$ lies in the image of $f$ for each $n\in\mathbb N$. Therefore, the image of $f$ is $\mathbb R$.
This example has the further property that $\lim_{x\to1}f(x)=-1\in\mathbb R$ exists. (So $f$ can be extended to yield a surjection from $(0,1]$ onto $\mathbb R$.)
|
Galois Group of any polynomial over $\mathbb{R}$ This problem came up on a homework and I immediately came up with the following: The Galois Group is $Z/2Z$ if the polynomial has a complex root ($\mathbb{C} = \mathbb{R}[i]$ is algebraically closed).
Am I correct or have I made a trivial error?
| This is correct.
By the fundamental theorem of algebra, any polynomial $f(x) \in \mathbb R[x]$ splits completely into linear factors over $\mathbb C$. So the splitting field of $f(x)$ is certainly some intermediate field between $\mathbb R$ and $\mathbb C$.
But $[\mathbb C: \mathbb R] = 2$. So the only such intermediate fields are $\mathbb R$ and $\mathbb C$ themselves!
Therefore, there are two cases:
*
*All roots of $f(x)$ are in $\mathbb R$. If this is true, then the splitting field of $f(x)$ is $\mathbb R$ itself. The degree of the extension is one, and the Galois group is trivial.
*Some root of $f(x)$ is not contained in $\mathbb R$. In this scenario, the splitting field is bigger than $\mathbb R$, and the only possibility is that it is $\mathbb C$. The degree of the extension is two, and the Galois group is therefore of order two, i.e. the Galois group is $\mathbb Z_2$.
|
If I roll D copies of an S-sided dice, which is the probability that I will get at least M matches? If I roll $D$ copies of an $S$-sided dice, which is the probability that I will get at least $M$ matches?
For example, consider $D=3$, $S=4$, $M=2$: rolling three $4$-sided dice, looking for two or more of a kind.
There are $4^3=64$ possible rolls. The probability is $\dfrac{40}{64}$ ($36$ ways of getting exactly a pair plus four ways of getting three of a kind, out of $4^3$ possible rolls).
What is the probability in the general case? Assuming there isn't a closed-form solution, is there a good upper bound? My application involves values in the range of $D=10^{12}$, $S=10^9$, $M=10^1$.
| Edit: this was an answer given for a version with the question with a typo on it. It's an approximation assuming that $D$ and $S$ are both large, and $D$ is much smaller than $S$.
If we let $X_k$ count the number of dice that come up $k$, then $X_k$ counts lots and lots of independent events that are each very very unlikely: a textbook case for the Poisson approximation. We have $\mathbb E[X_k] = \frac{D}{S}$, so $X_k$ is approximately a Poisson random variable with rate $\frac{D}{S}$: we have $$\Pr[X_k = n] \approx e^{-D/S} \frac{(D/S)^n}{n!}.$$
Moreover, if $D$ is much smaller than $S$, then most values are not going to show up at all, and are even less likely to show up $M$ times. If a value does show up at least $M$ times, it has very good odds of showing up exactly $M$ times: we have $\Pr[X_k = M+1] = \Pr[X_k = M] \cdot \frac{D}{S(M+1)}$, which is on the order of $10^{-4}$ for your application. So we can approximate the probability that $X_k \ge M$ by the probability that $X_k = M$ pretty darn well. Let $$p^* = \Pr[X_k = M] \approx e^{-D/S} \frac{(D/S)^M}{M!}$$ so that we can refer to this value later.
Then we now have very very many events ($S$ of them) that are each very very unlikely ($p^*$ is very small): the event that $X_1 \ge M$, that $X_2 \ge M$, that $X_3 \ge M$, and so on, all the way to $X_S \ge M$. So the Poisson approximation is once again an excellent estimate. If $Y$ is the number of values $k$ such that $X_k \ge M$, then $\mathbb E[Y] \approx p^* S$, so $Y$ is Poisson with rate $p^* S$, and we have $$\Pr[Y=0] \approx e^{-p^*S} \approx \exp\left(-Se^{-D/S} \frac{(D/S)^M}{M!}\right).$$
This, then, is our estimate for the probability that there are no values occurring $M$ times.
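A small numerical check of the final estimate (a sketch with numpy; the Monte Carlo side is only feasible for modest $D$, $S$, $M$, and the approximation assumes $D\ll S$):

```python
import math
import numpy as np

def approx_no_match(D, S, M):
    # exp(-S * e^{-D/S} * (D/S)^M / M!), the estimated P(no value appears >= M times)
    return math.exp(-S * math.exp(-D / S) * (D / S) ** M / math.factorial(M))

def mc_no_match(D, S, M, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        counts = np.bincount(rng.integers(0, S, size=D), minlength=S)
        hits += counts.max() < M
    return hits / trials

D, S, M = 30, 2000, 3
print(approx_no_match(D, S, M), mc_no_match(D, S, M))   # both ≈ 0.999
```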
|
Prove that the real projective line cannot be embedded into Euclidean space Can the real projective line $RP^1$ be embedded into $\mathbb{R}^n$ for any $n$?
At first, I thought it could because $RP^2$ can be embedded in four-dimensional space, and $RP^1$ seemed like a simpler object to deal with than $RP^2$. But after drawing diagrams and attempting to find an embedding, I'm getting less and less sure that it can be embedded... unrigorously it always seems to require that 'point at infinity' that comes from say the one-point compactification of $\mathbb{R}$, so is there a way that I could prove this one way or the other?
| $\mathbb{R}P^1$ is just the circle $S^1$ and this cannot be embedded in the reals: an infinite connected subset of $\mathbb{R}$ (is an interval so) always has a cut-point (a point we can remove to leave a disconnected subset), and the circle remains connected if we remove any point.
|
Integral and limit: $\lim_{n \to \infty}\int_{-1}^{1}\frac{t{e}^{nt}}{{\left( {{e}^{2nt}}+1 \right)}^{2}}dt$ Show that, $$\underset{n\to \infty }{\mathop{\lim }}\,{{n}^{2}}\int_{-1}^{1}{\frac{t{{e}^{nt}}}{{{\left( {{e}^{2nt}}+1 \right)}^{2}}}dt=-\frac{\pi }{4}}$$
I reached this result after using two steps of substitution. First, with $x=nt$ and $dx=n\,dt$, I get
$$\int_{-\infty}^{+\infty}\frac{xe^x}{(e^{2x}+1)^2}dx$$
Then I take $y=e^x$, so $dy=e^xdx$, hence I get
$$ \int_0^{\infty} \frac{\ln y}{\left(y^2+1\right)^2}\, dy$$
Now how shall I continue with this problem?
| Hint. Let's consider
$$
\int _0^{\infty} \frac {\ln y} {y^2+a^2} \:dy,\qquad a>0,
$$ then, by the change of variable $y=ax$, $\ln y= \ln a +\ln x$, $dy=adx$, one has
$$
\int _0^{\infty} \frac {\ln y} {y^2+a^2} \:dy=\frac{\ln a}{a}\int _0^{\infty} \frac {1} {x^2+1} \:dx+\frac{1}{a}\int _0^{\infty} \frac {\ln x} {x^2+1} \:dx,
$$ the latter integral being equal to $0$ (make the change of variable $x \to 1/x$), this gives
$$
\int _0^{\infty} \frac {\ln y} {y^2+a^2} \:dy=\frac \pi2\frac{\ln a}{a}
$$ and, by differentiating with respect to $a$,
$$
\int _0^{\infty} \frac {\ln y} {(y^2+a^2)^2} \:dy=\frac \pi4\frac{\ln a}{a^3}-\frac{\pi }{4 a^3},\qquad a>0,
$$ by putting $a=1$ one obtains
$$
\int _0^{\infty} \frac {\ln y} {(y^2+1)^2} \:dy=-\frac{\pi}{4}.
$$
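As a numerical cross-check of the boxed constant (a sketch, assuming scipy is available; $-\pi/4\approx-0.785398$):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda y: np.log(y) / (y**2 + 1)**2, 0, np.inf)
print(val, -np.pi / 4)   # both ≈ -0.7853981...
```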
|
Morphism of varieties from $\mathbb A^1 \to \mathbb A^1 \backslash \{0\}$?
Suppose $f$ is a morphism of varieties from $\mathbb A^1 \to \mathbb A^1 \backslash \{0\}$. What can we say about $f$ ?
I am unable to deduce much. I know that $f$ will induce a $k$ algebra map $k[x] \to k[x]$ by pulling back the regular functions. What more can be said about $f$?
| Are you aware that $\mathbb A^1\backslash \{ 0 \}$ is also affine? It is isomorphic to $V(xy - 1) \subset \mathbb A^2$ via the mapping $x \mapsto (x, 1/x)$. The coordinate ring of $V(xy - 1) \subset \mathbb A^2$ is
$$ k[x,y]/(xy - 1) \cong k[x,x^{-1}].$$
Therefore, following your line of reasoning, your geometric problem reduces to the algebraic problem of finding $k$-algebra morphisms from $k[x,x^{-1}] $ to $k[x]$. I'll leave this to you.
Alternatively, extend your morphism $\mathbb A^1 \to \mathbb A^1 \backslash \{ 0 \}$ to a morphism $\mathbb A^1 \to \mathbb A^1$ by composing with the inclusion $\mathbb A^1 \backslash \{ 0 \} \hookrightarrow \mathbb A^1$. The composition $\mathbb A^1 \to \mathbb A^1$ must be given by $x \mapsto f(x)$ for some polynomial $f(x) \in k[x]$. Assuming $k$ is algebraically closed, is it possible for the image of this map to avoid $0$ if $f(x)$ is a non-constant polynomial?
|
Preference relation Let $X= \{(a,b)|a,b\in \mathbb R\}$. Suppose that we have a weak preference relation $R$ (its strict part is denoted by $P$) defined on $X$. Assume that $P$ is coordinate-wise strictly monotonic increasing, that is,
if $a>c$, then $(a,b)P(c,b)$ for all $b$, and
if $b>d$, then $(a,b)P(a,d)$ for all $a$.
Now I want to find an example for $R$ with the properties above that can be represented by a value function, and a counterexample that cannot be represented by a value function.
I was wondering if someone could help me?
| For a positive example, define $(a,b) R (c,d)$ if and only if $a+b\ge c+d$. The representing function is $v(x,y) = x + y$ which is of course component-wise strictly monotonic.
EDIT
For a counterexample, use the lexicographic preference relation, defined by $(a,b) R (c,d)$ if and only if $a > c$ or [$a=c$ and $b>d$]. The proof that the lexicographic preference relation admits no value function is well-known; see f.i. https://economics.stackexchange.com/questions/6889/lexicographic-preference-relation-cannot-be-represented-by-a-utility-function
|
A multiple choice question related to singularities , poles. 1)Let $f(z)=\frac{1}{e^z-1}$ for all z$\in C$ such that $e^z\ne1$ then
a) f is meromorphic
b) the only singularities of f are poles
c) f has infinitely many poles on the imaginary axis
d) each pole of f is simple.
2) For $z \in \mathbb C$, define $f(z)=\frac{e^z}{e^z-1}$, then
a) f is entire
b) the only singularities of f are poles
c) f has infinitely many poles on the imaginary axis
d) each pole of f is simple.
For 1st example I guess $e^z$ is entire function so option a) is correct. Also option b) is correct.
For 2nd example, also I guess options a) & b) are correct.
Please help me to get correct options.
| Hint: Using L'Hospital, calculate $\lim_{z\to 2n\pi i}\frac{z-2n\pi i}{e^z-1}$; hence deduce that the given function has simple poles at $z= 2n \pi i$, $\forall n \in \mathbb Z$
|
Definition of the gamma function for non-integer negative values The gamma function is defined as
$$\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}dt$$
for $x>0$.
Through integration by parts, it can be shown that for $x>0$,
$$\Gamma(x)=\frac{1}{x}\Gamma(x+1).$$
Now, my textbook says we can use this definition to define $\Gamma(x)$ for non-integer negative values. I don't understand why. The latter definition was derived by assuming $x>0$. So shouldn't the whole definition be invalid for any $x$ value less than zero?
P.S. I have read other mathematical sources and most of them explain things in mathematical terms that are beyond my level. It would be appreciated if things could be kept in relatively simple terms.
| The integral $\int_0^\infty t^{x-1}e^{-t}\,dt$ is a "representation" of the Gamma function for $x>0$. That is, for $x>0$
$$\int_0^\infty t^{x-1}e^{-t}\,dt=\Gamma(x)$$
But the Gamma Function exists for all complex values of $x$ provided $x$ is not $0$ or a negative integer.
The idea of "representing" a function on a subset of the domain of definition is introduced in elementary calculus courses. For example, we can represent the function $f(x)=\log(1+x)$ as the series
$$f(x)=\log(1+x)=\sum_{n=1}^\infty\frac{(-1)^{n-1}x^n}{n} \tag 1$$
which is valid for $-1<x\le 1$. However, $f(x)$ exists for all $-1<x$.
|
Graph Theory - Application of Kirchoff's Matrix Tree Theorem Calculate the number of spanning trees of the graph that you obtain by removing one edge from $K_n$.
(Hint: How many of the spanning trees of $K_n$ contain the edge?)
I know the number is $(n-2)n^{n-3}$ and that Kirchoff's matrix tree theorem applies but how do I show this?
| All edges of $K_n$ are identical. So if there are $n^{n-2}$ spanning trees of $K_n$, and each includes $n-1$ edges out of $\binom n2$, then each edge is included in $$\frac{n-1}{\binom n2} \cdot n^{n-2} = \frac2n \cdot n^{n-2} = 2n^{n-3}$$ spanning trees.
(Normally, this would be "included in an average of $2n^{n-3}$ spanning trees", but because all edges of $K_n$ are identical, all of them are contained in exactly the average number.)
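Since the question mentions Kirchhoff's matrix-tree theorem, here is a quick computational confirmation of $(n-2)n^{n-3}$ for small $n$ (a sketch with numpy: remove one edge from $K_n$, form the Laplacian, and take any cofactor):

```python
import numpy as np

def spanning_trees_Kn_minus_edge(n):
    A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_n
    A[0, 1] = A[1, 0] = 0                    # delete one edge
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    return int(round(np.linalg.det(L[1:, 1:])))   # matrix-tree theorem: any cofactor

for n in range(3, 8):
    print(n, spanning_trees_Kn_minus_edge(n), (n - 2) * n ** (n - 3))
```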
|
Sequence and subsequence problem Let $(x_n)$ be a bounded sequence and for each $n ∈ ℕ$ let $s_n:=\sup\{x_k \mid k\geq n\}$ and $S:=\inf\{s_n\}.$ Show that there exist a subsequence of $(x_n)$ that converges to $S$.
I have that $(s_n)$ is a bounded decreasing sequence (by intuition, I don't know how to prove that).
Because $(s_n)$ is bounded an monotone, converges to $S$ then exists $N_1 ∈ ℕ$ such that $s_n-S=|s_n-S|<1/2 ∀ n\geq N_1$
In particular $s_{N_1}=\sup\{x_k \mid k\geq N_1\} < S+1/2$ then by definition of supremum exists $n_1\ge N_1$ such that $s_{N_1}>x_{n_1}>s_{N_1} - 1/2$ hence $|x_{n_1} - S|<1$.
If I continue the process I get $n_{k+1}>n_k$ such that $|x_{n_{k+1}} - S|<1/(k+1)$, so $x_{n_k}\to S$ by the squeeze theorem.
First I need help to prove that $s_n$ is bounded and decreasing, and to know whether the rest of my proof is correct or I'm missing anything.
| $s_n \ge s_{n+1}$, hence $(s_n)$ is a decreasing sequence.
$(s_n)$ is bounded,
as $\inf(x_n)\le s_n\le \sup(x_n)$.
Now the infimum of $(s_n)$ is $S$. So for $\epsilon>0$ there exists $k$ such that $S \le s_n < S+\epsilon$ for $n \ge k$. So $S-\epsilon < S \le s_n < S+\epsilon$,
hence $|S-s_n| < \epsilon$ for $n\ge k$, i.e. $s_n\to S$.
Combining this with the definition of the supremum (as in your construction above) yields a subsequence of $(x_n)$ converging to $S$.
|
Proving that $T:V \rightarrow W$ has a unique adjoint transformation Let $(V, \langle . , \rangle_{V})$,$(W, \langle . , \rangle_{W})$ be inner product spaces, and let $T: V \rightarrow W$ be a linear transformation. A function $T^{*}: W \rightarrow V$ is called an adjoint of $T$ if $\langle T(x),y \rangle_{W} = \langle x, T^{*}(y) \rangle_{V}$ for all $x \in V$ and $y \in W$.
Show that if $V$ and $W$ are finite dimensional, then there is a unique adjoint $T^{*}$ of $T$, and $T^{*}$ is linear.
My approach to this problem was basically one of experimentation, I feel like I need to construct $T^{*}$ in the general sense, so I began by constructing it in a specific case to gain some sort of intuition, but it doesn't seem helpful.
Let $V = \mathbb{R}^{2}$ and $W = \mathbb{R}^{3}$, both with standard inner product. Let $T: V \rightarrow W$ be defined as $T(x_{1},x_{2}) = (x_{1},x_{2},x_{1}+x_{2})$.
Then $\langle T(x_{1},x_{2}), (y_{1},y_{2},y_{3}) \rangle = \langle (x_{1},x_{2},x_{1}+x_{2}), (y_{1},y_{2},y_{3}) \rangle = x_{1}y_{1} + x_{2}y_{2} + (x_{1}+x_{2})y_{3} = x_{1}(y_{1}+y_{3}) + x_{2}(y_{2}+y_{3})$
This last expression makes it evident that I want $T^{*}(y_{1},y_{2},y_{3}) = (y_{1}+y_{3},y_{2}+y_{3})$, but from this, I'm not really getting any insight to what it looks like in general.
| Hint/sketch: choose orthonormal bases $\{v_i\}$ and $\{w_j\}$ of $V$ and $W$, and represent $T$ by a matrix $A$ with respect to these bases. Then, the conjugate transpose $A^*$ will define a function $T^*:W\to V$, and you can use the orthogonality + the definition of the action of a matrix on a basis element to show that $\langle Av_i,w_j\rangle=\langle v_i,A^*w_j\rangle$ for all $i,j$. Then you can expand to all elements of $V$ and $W$ using bilinearity of the inner product, showing $T^*$ really is adjoint to $T$.
|
Let ${a_n}$ and ${b_n}$ be sequences. Prove that if $\lim_{n\rightarrow \infty}(a_n^2 + b_n^2) = 0$ then $\lim_{n\rightarrow \infty}a_n=0$ Let ${a_n}$ and ${b_n}$ be sequences. Prove that if $\lim_{n\rightarrow \infty}(a_n^2 + b_n^2) = 0$ then $\lim_{n\rightarrow \infty}a_n=0$
This was a question on my Real Analysis undergrad exam. Which I tried to prove by wrongly using the contrapositive. Can someone direct me on the right path?
| Hint Let $\epsilon >0$. If $N$ is such that ${a_n}^2+{b_n}^2 < \epsilon^2$ for all $n >N$, show that
$$\left| a_n -0 \right| < \epsilon \qquad \forall n >N$$
|
Proof C(n,r) = C(n, n-r) Hello, I just want to see if my proof is right, and if not, could someone please guide me, because I am not clearly seeing the steps of this proof. I don't know if I correctly carried out the second-to-last step. If I made any mistake it would be great if someone could point it out.
$$
C(n, n-r) = \frac{n!}{r!(n-r)!} = \frac{n!}{(n-r)!(n-(n-r))!} = \frac{n!}{(n-r)! (r)!} = C(n,r)
$$
| $ C(n, k)$ denotes the number of ways to select $k$ out of $n$ objects without regard for the order in which they are selected. To prove $C(n,r) = C(n, n-r)$ one needs to observe that whenever $k$ items are selected, $n-k$ items are left over, (un)selected of sorts.
|
Why is $5^{n+1}+2\cdot 3^n+1$ a multiple of $8$ for every natural number $n$? I have to show by induction that this function is a multiple of 8. I have tried everything but I can only show that is multiple of 4, some hints? The function is
$$5^{n+1}+2\cdot 3^n+1 \hspace{1cm}\forall n\ge 0.$$ Because it is a multiple of $8$, you can say that $$5^{n+1}+2\cdot 3^n+1=8\cdot m \hspace{1cm}\text{for some } m\in\mathbb{N}.$$
| $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{\large n\ \mathsf{even}:}$
\begin{align}
5^{n + 1} + 2 \times 3^{n} + 1 & =
5\pars{5^{n} - 1} + 2\pars{3^{n} - 1} + 8 =
5\pars{4\overbrace{\sum_{k = 0}^{n - 1}5^{k}}^{\ds{even}}}\ +\
4\ \overbrace{\sum_{k = 0}^{n - 1}3^{k}}^{\ds{even}} + 8
\end{align}
$\ds{\large n\ \mathsf{odd}:}$
\begin{align}
5^{n + 1} + 2 \times 3^{n} + 1 & =
\pars{5^{n + 1} - 1} + 6\pars{3^{n - 1} - 1} + 8
\\[5mm] & =
4\overbrace{\sum_{k = 0}^{n}5^{k}}^{\ds{even:\ 2p}}\ +\
12\ \overbrace{\sum_{k = 0}^{n - 2}3^{k}}^{\ds{even:\ 2q}} + 8
\qquad\qquad\pars{~\mbox{each sum has an even number of odd terms, hence is even}~}
\\[5mm] & = 8p + 24q + 8
\end{align}
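As a quick numerical sanity check of the claim (not a substitute for the argument above):

    # verify 5^(n+1) + 2*3^n + 1 is divisible by 8 for small n
    for n in range(50):
        assert (5**(n + 1) + 2 * 3**n + 1) % 8 == 0
    print("divisible by 8 for n = 0..49")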
|
Are regular languages closed by a full-shuffle operation?
Given two languages $L_1,L_2$, we define the operation full-shuffle as $S(L_1,L_2)=\{w \mid w=w_1w_2...w_k\}$ such that $w_1,w_3,...\in L_1$ and $w_2,w_4,...\in L_2$. In other words, the language only contains words that we can build from a word from $L_1$ followed by a word from $L_2$, followed by a word from $L_1$, followed by a word from $L_2$, and so on...
How can I show that regular languages are closed under this operation?
I have some difficulty imagining the formal automaton resulting from this.
I did the following, following p. 49 of Introduction to Theory of Computation by A. Maheshwari and M. Smid.
Let $M_1=(Q_1,\Sigma,\delta_1,q_1,F_1)$ be an NFA such that $A_1=L(M_1)$ and $M_2=(Q_2,\Sigma,\delta_2,q_2,F_2)$ be an NFA such that $A_2=L(M_2)$. We will construct an NFA $M=(Q,\Sigma,\delta,q_0,F)$ such that $L(M)=A_1A_2A_1A_2A_1...$
*
*$Q = Q_1 \cup Q_2$ ($\cup Q_1 \cup Q_2...?$ but it remains the same isn't it ?)
*$q_0=q_1$
*$F=F_1 \cap F_2$ as far as we have to have words from both $L_1$ and $L_2$.
*$\delta : Q\times\Sigma_\epsilon \rightarrow P(Q)$ is defined as folows :
$$\delta(r,a)=\begin{cases}
\delta_1(r,a)\mbox{ if $r\in Q_1$}\\
\delta_2(r,a)\mbox{ if $r\in Q_2$}\\
\end{cases}$$
I'm new to this kind of proof and I'm not sure about it.
| Just use regular expressions instead of automata. Indeed,
$$S(L_1,L_2) = (L_1L_2)^* \cup (L_1L_2)^*L_1$$
|
Sketching for isotherms $T(x, y) = \text{ const}$ Hi I came across this question
Let the temperature $$T(x,y) = 4x^2+16y^2$$ in a body be independent of $z$, identify the isotherms $T(x, y) = \text{ const}$. Sketch it.
I do not understand the question. What type of equation(s) do I have to sketch?
| For several different values of $C$, you should draw lines containing all the points $(x, y)$ that solve $4x^2 + 16y^2 = C$, and you should also describe what the isotherms look like for a general value of $C$. For example, if $C = 16$, then the graph you would draw is $\frac{x^2}{4} + y^2 = 1$ (this is an ellipse with center at the origin, semi-major axis $2$, and semi-minor axis $1$).
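If it helps to see the picture, here is a small plotting sketch (assuming NumPy and Matplotlib are available; the chosen levels are arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-3, 3, 400)
    y = np.linspace(-1.5, 1.5, 400)
    X, Y = np.meshgrid(x, y)
    T = 4 * X**2 + 16 * Y**2

    # each level C gives the ellipse 4x^2 + 16y^2 = C
    cs = plt.contour(X, Y, T, levels=[4, 16, 36])
    plt.clabel(cs, inline=True)
    plt.gca().set_aspect("equal")
    plt.title("Isotherms of T(x, y) = 4x^2 + 16y^2")
    plt.show()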
|
Doubt in Category Theory about dualization. I have to prove that the Category of Abelian Groups has pushouts.
I want to use a theorem that says that if a category has equalizers and products, then it has pulbacks. Then I want to dualize the statement and say that if a category has coequalizer and coproducts, then it has pushouts.
But I don't understand quite well why dualize works so I don't know if I'm using it the right way. Also, I want to know if my strategy is correct.
Thanks in advance.
| Your strategy is correct; knowing that products and equalizers gives you pullbacks tells you that coproducts and coequalizers give you pushouts.
The reason for this is that for any category $\mathcal C$, you get $\mathcal C^{op}$ simply by having the new domain function of $\mathcal C^{op}$ be the codomain function of $\mathcal C$, and likewise for $\mathcal C^{op}$'s codomain function. If you draw the diagrams for various limits, and swap the directions of all the arrows (i.e. swap domain and codomain), you will see that you end up with colimit diagrams (because dualizing doesn't change commutativity, or unique commutativity, of diagrams).
Since a $\mathcal C$ has coproducts exactly when $\mathcal C^{op}$ has products, and it has coequalizers exactly when $\mathcal C^{op}$ has equalizers, the dual of a category with coequalizers and coproducts will always have pullbacks due to the theorem you know. But the dual of $\mathcal C$ has pullbacks if and only if $\mathcal C$ has pushouts, so you get your desired result.
|
Finding general solution $\frac{d^2f}{dx^2}$ for an implicit function Let's say I have a function $F:\mathbb R \times \mathbb R \rightarrow \mathbb R$, which meets the requirements of the implicit function theorem. Now I would like to find a general expression for:
$$\frac{d^2f}{dx^2}$$
From the theorem I know that $$\frac{df}{dx} = - \Big(\frac{\partial F}{\partial y} \Big )^{-1} \frac{\partial F}{\partial x}$$
So I went on to say that $$\frac{d^2f}{dx^2} = - \frac{\frac{d}{dx}(\frac{\partial F}{\partial x})*\frac{\partial F}{\partial y}-\frac{\partial F}{\partial x}*\frac{d}{dx}(\frac{\partial F}{\partial y})}{(\frac{\partial F}{\partial y})^2}$$
By using the quotient rule. Is this correct, and if so: How can I simplify this any further? I found online that $$\frac{d^2f}{dx^2}=-\frac
{\frac{\partial^2 F}{\partial x^2}\left(\frac{\partial F}{\partial y}\right)^2
-2·\frac{\partial^2 F}{\partial x\partial y}·\frac{\partial F}{\partial y}·\frac{\partial F}{\partial x}
+\frac{\partial^2 F}{\partial y^2}\left(\frac{\partial F}{\partial x}\right)^2}
{\left(\frac{\partial F}{\partial y}\right)^3}$$
which is correct I guess, but I do not quite know how to get there. Any help is greatly appreciated!
| HINT:
Let $\displaystyle F_x(x,y)=\frac{\partial F(x,y)}{\partial x}$ and $\displaystyle F_y(x,y)=\frac{\partial F(x,y)}{\partial y}$.
$$\begin{align}
\frac{d}{dx}F_x(x,f(x))&=F_{xx}(x,f(x))+F_{xy}(x,f(x))\frac{df(x)}{dx}\\\\
&=F_{xx}(x,f(x))-F_{xy}(x,f(x))\frac{F_x(x,f(x))}{F_y(x,f(x))}
\end{align}$$
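If you want to convince yourself that the quoted closed form is consistent with this hint, here is a small SymPy check on a concrete test case (the circle $F=x^2+y^2-1$ is just my choice for illustration, not part of the question):

    import sympy as sp

    x = sp.symbols('x')
    f = sp.sqrt(1 - x**2)           # explicit branch of F(x, y) = x^2 + y^2 - 1 = 0
    d2_direct = sp.diff(f, x, 2)    # d^2 f / dx^2 computed directly

    # the quoted formula, with the partials of F evaluated at (x, f(x))
    X, Y = sp.symbols('X Y')
    F = X**2 + Y**2 - 1
    Fx, Fy = sp.diff(F, X), sp.diff(F, Y)
    Fxx, Fxy, Fyy = sp.diff(F, X, 2), sp.diff(F, X, Y), sp.diff(F, Y, 2)
    formula = -(Fxx * Fy**2 - 2 * Fxy * Fy * Fx + Fyy * Fx**2) / Fy**3
    formula = formula.subs({X: x, Y: f})

    print(sp.simplify(d2_direct - formula))   # 0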
|
Solving $\lim_{ x \to 0^+} \sqrt{\tan x}^{\sqrt{x}}$ The limit to be calculated is:
$$\lim_{x \to 0^+}\sqrt{\tan x}^{\sqrt{x}}$$
I tried:
$$L =\lim_{x \to 0^+}\sqrt{\tan x}^{\sqrt{x}}$$
$$\log L = \lim _{x\to 0^+} \ \ \dfrac{1}{2}\cdot\dfrac{\sqrt{x}}{\frac{1}{\log(\tan x)}}$$
To apply the L'hospital theorem but failed, as it went more complex.
How can we solve this, with or without L'Hospital?
| Well, we have
$$
\begin{align}\lim_{x\to 0^+}\sqrt{x}\ln\sqrt{\tan x}&=\lim_{x\to 0^+}\frac{\sqrt{x}}{2}\ln(\tan x)\\
&=\lim_{x\to 0^+}\frac{\ln(\tan x)}{\frac{2}{\sqrt{x}}}\qquad\text{which takes the form } \frac{-\infty}{+\infty}\text{and apply LHR to get}\\
&=\lim_{x\to 0^+}\frac{\frac{\sec^2x}{\tan x}}{-\frac{1}{\sqrt{x^3}}}\\
&=\lim_{x\to 0^+}\frac{\frac{2}{\sin 2x}}{-\frac{1}{x\sqrt{x}}}\\
&=\lim_{x\to 0^+}\bigg[-\sqrt{x}\cdot\frac{2x}{\sin 2x}\bigg]\\
&=-\sqrt{0}\cdot 1=0.
\end{align}$$
Hence, the required limit is
$$\lim_{x \to 0^+}\sqrt{\tan x}^{\sqrt{x}}=e^{\lim_{x \to 0^+}\ln\sqrt{\tan x}^{\sqrt{x}}}=e^{\lim_{x\to 0^+}\sqrt{x}\ln\sqrt{\tan x}}=e^0=1.$$
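A rough numerical check of the value (the convergence is slow, but the trend towards $1$ is visible):

    import math

    for x in [1e-2, 1e-4, 1e-6, 1e-8]:
        print(x, math.tan(x) ** (math.sqrt(x) / 2))   # sqrt(tan x)^sqrt(x)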
|
A function with convex level sets but is not a convex function I have the function $$f(x)=\sqrt{|x|},$$ with $x\in\mathbb{R}$. I understand that for some values $x,y\in$ dom$f$ and $t\in[0,1]$ that the function is not convex. However, I am having trouble proving that this function has convex level sets. We can define the level set as, $$L_{c}(f):=\{x\in\mathbb{R}\mid f(x)\leq c\}.$$
I understand that for the level sets to be convex means that $\forall x,y\in$ dom$f$, and $\forall t\in[0,1]$, $f(tx+(1-t)y)\leq c$. But I do not know how to use the fact that $f(x)$ is not convex.
Based on the level sets for $f(x)$, we know that $-c^2\leq x\leq c^2$. And so if we find that $-c^2\leq tx+(1-t)y\leq c^2$, then the level sets are convex?
| You can attain your goal by proving that $\sqrt{|X|}$ is a quasi-convex function; that is, $f(tx+(1-t)y) \le \max \{f(x),f(y)\}$.
Assume wlog $x \le y$.
For $x \ge 0$, $f(x)$ is increasing; so, if $0<x<y$, $f(tx+(1-t)y) \le f(y)$.
Similarly, for $x \le 0$, $f(x)$ is decreasing; so, if $y \le 0$, $f(tx+(1-t)y) \le f(x)$.
Finally, suppose $x < 0 < y$. If $|y| \ge |x|$, then $f(tx+(1-t)y) \le f(y)$. And if $|y| \le |x|$, then $f(tx+(1-t)y) \le f(x)$.
|
Surface area of a sphere with integration of disks Why it is not correct to say that the surface area of a sphere is:
$$
2 \int_{0}^{R} 2\pi r \text{ } dr
$$
In my mind we are summing up the perimeters of disks from $r=0$ to $r=R$, so by 1 integration, we would have $\frac{1}{2}$ of the surface area of the sphere.
I know that it's not correct because that will give us $2\pi R^2$ that it's different from $4\pi R^2$, but why???
Thanks!
| $$\text{Area will be}~~: 2 \int_{0}^{R} 2\pi x ~ ds$$
Here $ds$ is the width of the strip bounded by the circles of radius $x$ and $x+dx$, situated at height $y$. Note that $ds \neq dx$: the strip is tilted in the $y$ direction too, and only the horizontal projection of $ds$ is $dx$. What you have done is valid for a flat disk, not for the sphere.
$$(ds)^2=(dx)^2+(dy)^2$$
Therefore :
$$ds = \sqrt{1+\Bigg( \frac{dy}{dx}\Bigg)^2} \; dx$$
Now,
$$x^2 + y^2 = R^2$$
$$y = \sqrt{R^2 - x^2}$$
$$\dfrac{dy}{dx} = \dfrac{-2x}{2\sqrt{R^2 - x^2}}$$
$$\dfrac{dy}{dx} = \dfrac{-x}{\sqrt{R^2 - x^2}}$$
$$\left( \dfrac{dy}{dx} \right)^2 = \dfrac{x^2}{R^2 - x^2}$$
Thus,
$$\displaystyle A = 4\pi \int_0^R x \sqrt{1 + \dfrac{x^2}{R^2 - x^2}} \, dx$$
$$\displaystyle A = 4\pi \int_0^R x \sqrt{\dfrac{(R^2 - x^2) + x^2}{R^2 - x^2}} \, dx$$
$$\displaystyle A = 4\pi \int_0^R x \sqrt{\dfrac{R^2}{R^2 - x^2}} \, dx$$
Let
$$x = R \sin θ \implies
dx = R \cos θ dθ$$
When $x = 0, θ = 0$
When $x = R, θ = \pi/2$
Thus,
$$\displaystyle A = 4\pi \int_0^{\pi/2} R \sin \theta \sqrt{\dfrac{R^2}{R^2 - R^2 \sin^2 \theta}} \, (R \cos \theta \, d\theta)$$
$$\displaystyle A = 4\pi \int_0^{\pi/2} R^2 \sin \theta \cos \theta\sqrt{\dfrac{R^2}{R^2(1 - \sin^2 \theta)}} \, d\theta$$
$$\displaystyle A = 4\pi R^2 \int_0^{\pi/2} \sin \theta \cos \theta\sqrt{\dfrac{1}{\cos^2 \theta}} \, d\theta$$
$$\displaystyle A = 4\pi R^2 \int_0^{\pi/2} \sin \theta \cos \theta \left( \dfrac{1}{\cos \theta} \right) \, d\theta$$
$$\displaystyle A = 4\pi R^2 \int_0^{\pi/2} \sin \theta \, d\theta$$
$$A = 4\pi R^2 \bigg[-\cos \theta \bigg]_0^{\pi/2}$$
$$A = 4\pi R^2 \bigg[-\cos \frac{1}{2}\pi + \cos 0 \bigg]$$
$$A = 4\pi R^2 \bigg[ -0 + 1 \bigg]$$
$$A = 4\pi R^2$$
|
Finding an integral curve (applications of differential equations) Find a curve whose distance of every tangent from the origin $ON$ is equal to the $x$ axis coordinate of the point of intersection between the curve and that tangent $OU$.
How to set up the graph for this kind of problems in general?
How to form an ODE?
How to choose constant of integration and the resulting graph?
EDIT:
Here is the sketch:
Line $OB$ is orthogonal to tangent. The condition is that $OA=OB$
| The equation of the tangent at abscissa $x$ is
$$Y-y(x)-y'(x)(X-x)=0$$ so that the distance to the origin is
$$\frac{|0-y-y'\cdot(0-x)|}{\sqrt{1+y'^2}}$$ and is known to equal $x$.
Squaring and rearranging,
$$y^2-2xyy'+x^2y'^2=x^2+x^2y'^2,$$
$$y^2-2xyy'=x^2,$$
$$\left(\frac{y^2}x\right)'=-1.$$
Then after integration
$$y^2=x(C-x).$$
Discussion:
As the distance is an absolute value, both signs in $y=\pm\sqrt{x(C-x)}$ are valid and we can leave the equation in its quadratic form.
|
For what values of λ does Rank(A) = 1? Rank(A)= 2? Rank(A) = 3? $$ A = \begin{pmatrix} λ & 1 & 1 \\ 1 & λ & 1 \\ 1 & 1 & λ \end{pmatrix} $$
My attempt:
For Rank(A) = 1, the rows of matrix A must be scalar multiples of each other. Thus, the only value that works in this case would be λ = 1.
For Rank(A) = 2, the determinant of A must equal zero in order to find the values of λ.
The determinant is λ^3 -3λ +2 = 0 which gives values of λ = -2 and λ = 1. Since we know that λ = 1 makes Rank(A) = 1, we can disregard λ = 1 in this case. Thus λ = -2 would make Rank(A) = 2.
For Rank(A) = 3, the determinant of A must not equal zero. Since we found values of λ for which det(A) = 0, the values of λ for which Rank(A) = 3 should be numbers that do not equal 1 or -2.
Am I missing any λ values for each case?
| The rank is not three if and only if $A$ is not invertible, i.e. if and only if its determinant is $0$. Computing the determinant as a polynomial in $\lambda$, we can see when this happens. $$\left\vert\begin{pmatrix} λ & 1 & 1 \\ 1 & λ & 1 \\ 1 & 1 & λ \end{pmatrix}\right\vert=\lambda\cdot\left\vert\begin{pmatrix} \lambda & 1 \\ 1 & \lambda\end{pmatrix}\right\vert-\left\vert\begin{pmatrix} 1 & 1\\1&\lambda\end{pmatrix}\right\vert+\left\vert\begin{pmatrix} 1 & \lambda\\1&1\end{pmatrix}\right\vert=\lambda(\lambda^2-1)-\lambda+1+1-\lambda=\lambda^3-\lambda+2-2\lambda=\lambda^3-3\lambda+2=(\lambda-1)^2(\lambda+2)$$
which implies $\lambda=1,-2$ as you stated. Now, check these two values if you need to to see which give rank to be $1$ and which give rank to be $2$. All other values clearly don't give determinant equal to $0$ and hence give rank=$3$. So, your reasoning was just about right. Good job.
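If you want to double-check the three cases numerically (a quick sketch with NumPy; the value $3.7$ is just an arbitrary "generic" choice):

    import numpy as np

    def rank_for(lam):
        A = np.array([[lam, 1, 1],
                      [1, lam, 1],
                      [1, 1, lam]], dtype=float)
        return np.linalg.matrix_rank(A)

    print(rank_for(1))     # 1
    print(rank_for(-2))    # 2
    print(rank_for(3.7))   # 3, as for any lambda other than 1 and -2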
|
Find sum binomial coefficients $$
\sum_{k\ge 1} {2N \choose N-k}k
$$
How to find this sum?
I know that the answer is $ \frac{1}{2}N{2N \choose N}$
But it is very interesting to know the solution :)
| Our first goal in dealing with this sum is to get rid of the $k$. The standard approach is to rewrite something like $\binom{n}{k} \cdot k$ as $\frac nk \binom{n-1}{k-1} \cdot k$, or $n \binom{n-1}{k-1}$. Here, the bottom index doesn't match the extra factor, but we can make it so with a little extra work:
\begin{align}
\binom{2N}{N-k} k &= \binom{2N}{N+k}k \\
&= \binom{2N}{N+k}(N+k) - \binom{2N}{N+k} N \\
&= \frac{2N}{N+k} \binom{2N-1}{N+k-1}(N+k) - N \binom{2N}{N+k} \\
&= 2N \binom{2N-1}{N+k-1} - N \binom{2N}{N+k}.
\end{align}
At this point, we have two sums that are both easier to deal with:
$$\sum_{k \ge 1} \binom{2N}{N-k} k = 2N \sum_{k \ge 1} \binom{2N-1}{N+k-1} - N \sum_{k \ge 1} \binom{2N}{N+k}.$$
The sum of all binomial coefficients of the form $\binom{2N-1}{i}$ is $2^{2N-1}$, and our first sum takes only those binomial coefficients of this form where $i \ge N$. These are the second half, which by symmetry is equal to the first half, so the first sum simplifies to $2^{2N-2}$.
We're in much the same position with the second sum, except that the $\binom{2N}{i}$ coefficients also have a central coefficient $\binom{2N}{N}$, which is left out here. The sum of all coefficients that aren't the central one is $2^{2N} - \binom{2N}{N}$, and this sum is half of that.
Putting these facts together, we get
\begin{align}
\sum_{k \ge 1} \binom{2N}{N-k} k
&= 2N \Bigg(2^{2N-2}\Bigg) - N \Bigg(2^{2N-1} - \frac12\binom{2N}{N}\Bigg) \\
&= N \cdot 2^{2N-1} - N \cdot 2^{2N-1} + \frac N2 \cdot \binom{2N}{N} \\
&= \frac N2 \cdot \binom{2N}{N}.
\end{align}
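For reassurance, the identity is easy to check for small $N$ with exact arithmetic (a quick script, separate from the derivation):

    from math import comb
    from fractions import Fraction

    for N in range(1, 12):
        lhs = sum(comb(2 * N, N - k) * k for k in range(1, N + 1))
        rhs = Fraction(N, 2) * comb(2 * N, N)
        assert lhs == rhs
    print("identity holds for N = 1..11")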
|
Linear transformation and its matrix I have two bases:
$A = \{v_1, v_2, v_3\}$ and $B = \{2v_1, v_2+v_3, -v_1+2v_2-v_3\}$
There is also a linear transformation: $T: \mathbb R^3 \rightarrow \mathbb R^3$
Matrix in base $A$:
$M_{T}^{A} = \begin{bmatrix}1 & 2 &3\\4 & 5 & 6\\1 & 1 & 0\end{bmatrix}$
Now I am to find matrix of the linear transformation $T$ in base B.
I have found two transition matrixes (from base $A$ to $B$ and from $B$ to $A$):
$P_{A}^{B} = \begin{bmatrix}2 & 0 & -1\\0 & 1 & 2\\0 & 1 & -1\end{bmatrix}$
$(P_{A}^{B})^{-1} = P_{B}^{A} = \begin{bmatrix}\frac{1}{2} & \frac{1}{6} & \frac{-1}{6}\\0 & \frac{1}{3} & \frac{2}{3}\\0 & \frac{1}{3} & \frac{-1}{3}\end{bmatrix}$
How can I find $M_{T}^{B}$?
Is it equal to:
$(P_{A}^{B})^{-1}M_{T}^{A}P_{A}^{B}$?
If yes why?
| Some notation: for a vector $v \in \Bbb R^3$, let $[v]_A$ denote the coordinate vector of $v$ with respect to the basis $A$, and let $[v]_B$ denote the coordinate vector of $v$ with respect to the basis $B$. both of these are column vectors. To put this another way,
$$
[v]_A = \pmatrix{a_1\\a_2\\a_3} \iff v = a_1v_1 + a_2 v_2 + a_3v_3
$$
We can think of $M_T^A$ as a "machine" with the property that, with the usual matrix multiplication, $M_T^A [v]_A = [T(v)]_A$. Similarly, $P^B_A$ satisfies $P^B_A [v]_B = [v]_A$, whereas $P^A_B$ satisfies $P^A_B[v]_A = [v]_B$. What we want is to "build" is a machine $M_T^B$ for which $M_T^B[v]_B = [T(v)]_B$.
We can break the process of going from $[v]_B$ to $[T(v)]_B$ into three steps, each of which uses machinery that we already have. First, go from $[v]_B$ to $[v]_A$ with $P^B_A[v]_B = [v]_A$. Then, go from $[v]_A$ to $[T(v)]_A$ using $M_T^A [v]_A = [T(v)]_A$. Then, go from $[T(v)]_A$ to $[T(v)]_B$ using $P^A_B[T(v)]_A = [T(v)]_B$.
Putting it all together, we have
$$
[T(v)]_B = P^A_B[T(v)]_A = P^A_B(M_T^A [v]_A) =
P^A_BM_T^A (P^B_A[v]_B) = (P^A_BM_T^A P^B_A) [v]_B
$$
What we have found, then, is that the matrix which takes us from $[v]_B$ to $[T(v)]_B$ is the product $P^A_BM_T^A P^B_A = (P^B_A)^{-1} M_T^A P^B_A$. So, this is our matrix $M_T^B$.
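Numerically, with the coordinate matrices already written down in the question, this is a single matrix product (a sketch with NumPy; only the coordinate matrices matter here, not the abstract vectors $v_i$):

    import numpy as np

    M_A = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [1, 1, 0]], dtype=float)
    P = np.array([[2, 0, -1],
                  [0, 1, 2],
                  [0, 1, -1]], dtype=float)   # P^B_A: columns are the B-vectors in A-coordinates

    M_B = np.linalg.inv(P) @ M_A @ P          # M_T^B = (P^B_A)^{-1} M_T^A P^B_A
    print(M_B)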
|
Zero is least element of ordinal Definition. An ordinal is a well-ordered set $X$ such that for all $x\in X$, $(−∞, x) = x$.
Lemma. Zero is least element of ordinal.
Proof. Let $\alpha$ be an ordinal. Let $x$ be least element of $\alpha$. So, $x=x\cap\alpha=\emptyset$. Thus $\emptyset$ is least element of $\alpha$, that is $0$ is least element of $\alpha$.
My questions: Why $x\cap\alpha=\emptyset$?
| If $\alpha$ is an ordinal and $x\in\alpha$ then, according to the definition you just gave, $x=(-\infty,x)$ and so $x\cap\alpha=(-\infty,x)\cap\alpha.$ If $x$ is the least element of $\alpha,$ then $(-\infty,x)
\cap\alpha=\emptyset.$
|
Show that $\| f \|_{Lip(\mathbb{R^n})} = \| \nabla f \|_{L^{\infty}{\mathbb{(R^n)}}}.$ For a Lipschitz function $f : \mathbb{R}^n \rightarrow \mathbb{R}$, define the Lipschitz norm acting on the set of Lipschitz functions as $\| f \|_{Lip(\mathbb{R^n})} = \sup_{x \neq y}\frac{|f(x) - f(y)|}{\| x - y \|}.$
In this paper, the author quoted that $\| f \|_{Lip(\mathbb{R^n})} = \| \nabla f \|_{L^{\infty}{\mathbb{(R^n)}}},$ where $\nabla f$ is a distribution gradient in $L^{\infty}(\mathbb{R}^n).$
I have trouble showing the equality $\| f \|_{Lip(\mathbb{R^n})} = \| \nabla f \|_{L^{\infty}{\mathbb{(R^n)}}}.$ Any hint would be much appreciated.
The part where I have trouble with is the definition of $\| \nabla f \|_{L^{\infty}{\mathbb{(R^n)}}}.$
| By definition $\Vert \nabla f\Vert_{\infty}$ is the $L^\infty$ norm of the Euclidean norm of $\nabla f$. Note that when $f$ is Lipschitz, $f$ is differentiable (in the multivariable sense) almost everywhere, this is Rademacher's theorem.
Call the Lipschitz norm $L$.
Consider then that $\left|\frac{f(x+h)-f(x)}{\Vert h \Vert}\right| \leq L$, and sending $h \to 0$ gives $|\nabla f| \leq L$ at every point of differentiability of $f$, and hence almost every $x$. Thus $\Vert \nabla f\Vert_\infty \leq L$.
On the other hand, show that the supremum $L$ can be achieved by taking $x,y$ closer and closer to some fixed $z$.
Then use that $\left|\left|\frac{f(x)-f(y)}{\Vert x-y\Vert}\right|-L\right| \leq \epsilon$ for $x,y$ close enough $z$. Then for every point of differentiability of $f$ close enough to $z$, you find $|\nabla f| \geq L-2\epsilon$, and hence $\Vert \nabla f \Vert_\infty \geq L-2\epsilon$. Send $\epsilon \to 0$ to finish the claim.
|
5x5 board Bingo Question There is a game which I play, it is like bingo. It starts with a 5x5.
Lets say horizontally it goes ABCDE from left to right and vertically it goes 12345 from bottom to top.
I have 2 random generators which will generate a letter and a number giving me a box to cross. So for example A2.
Suppose the generators don't generate a box that has been generated before. What is the probability that after $N$ random generations I get a bingo horizontally or vertically? I would also like to know the probability that the $N$th generation is the one that completes a bingo.
| Another simple way to evaluate this probability is to use Monte Carlo simulation.
See my Python code:

    from random import randint
    from functools import reduce

    def isBingo(board):
        # a bingo is any fully marked row or column
        for i in range(5):
            horizontal = reduce(lambda x, y: x and y, map(lambda j: board[i][j], range(5)), True)
            vertical = reduce(lambda x, y: x and y, map(lambda j: board[j][i], range(5)), True)
            if horizontal or vertical:
                return True
        return False

    def monte_carlo(n, iterations=10000):
        count = 0
        for i in range(iterations):
            # clean board
            board = [[False, False, False, False, False] for j in range(5)]
            for j in range(n):
                # mark a random square that has not been marked before
                row = None
                column = None
                while row is None or board[row][column]:
                    row = randint(0, 4)
                    column = randint(0, 4)
                board[row][column] = True
            if isBingo(board):
                count += 1.0
        return count / iterations

For example:

    print(monte_carlo(15))
    0.4876
|
Why have the Mathematicians defined the boundedness in Functional Analysis so much different from that in Calculus? Let us consider $R$ with the norm $||x||=|x| $ for the whole discussion. Now the identity mapping $I$ from $R$ to $R$ is unbounded in Calculus but in Functional Analysis treating the same identity mapping $I$ as a linear transformation from the vector space $R$ to the vector space $R$, it becomes bounded. Why have the Mathematicians defined the boundedness in Functional Analysis so much different from that in Calculus?
| Let $X$ be a Banach space. One way to connect the two definitions of boundedness is to say that a linear map $T : X \to X$ is bounded in the sense of functional analysis, if its restriction to the unit ball $B = \{ x \,:\, |x| < 1 \}$ is bounded in the sense of calculus.
Why the restriction to the unit ball? The only linear map that is bounded in the sense of calculus is $T=0$. Hence for linear maps taking the supremum over the whole space is not an interesting operation. However, taking the supremum over the unit ball (any ball actually) is interesting because of the following property: a linear map $T$ is continuous if and only if $T(B)$ is bounded.
Another interpretation of the name bounded linear map in functional analysis is the following: There is a notion of bounded subsets of Banach spaces (and more generally locally convex spaces), i.e., $B \subseteq X$ is bounded if and only if given any $U \subseteq X$ open with $0 \in U$ there exists $\lambda > 0$ such that $B \subseteq \lambda U$. Then a bounded linear map has the property that it maps bounded sets to bounded sets.
|
Summation of Binomial Coefficient Proof Prove that $$\sum_{k=0}^n(-1)^k {{n}\choose{k}} = 0$$
I have no idea about how to approach this problem.
| By the binomial theorem, we know that $$\sum_{k=0}^n(-1)^k {{n}\choose{k}} =\sum_{k=0}^n(-1)^k \times 1^{n-k} {{n}\choose{k}} =(1+(-1))^{n}= 0$$
We have the desired result. Also note that elementary sums involving binomial coefficients usually involve the binomial theorem.
|
Order of $f(n) = \sqrt[\log n]{n} \cdot n^{\sqrt[\log n]{n}}$ This is as far as i've gone:
$$f(n) = \sqrt[\log n]{n} \cdot n^{\sqrt[\log n]{n}} \iff f(n)^{1 / \sqrt[\log n]{n}} = (\sqrt[\log n]{n})^{1 / \sqrt[\log n]{n} } \cdot n \iff f(n) = n^{\sqrt[\log n]{n}} $$
since $ \lim \limits_{x\to \infty} \sqrt[x]{x} = 1$
Now i haven't been able to proceed from there. If i divide $f(n)$ by $\sqrt[\log n]{n}$ i will prove that it's $\mathcal{O}\left(\sqrt[\log n]{n}\right)$ but i think i need to do better than that.. Help?
| $$
\sqrt[\log n]{n} = n^{1/\log(n)} = \left(e^{\log(n)}\right)^{1/\log(n)} = e
$$
So overall you have $f(n) = e \cdot n^e$
|
Showing that the hyperintegers are uncountable In class, we constructed the hyperintegers as follows:
Let $N$ be a normal model of the natural number with domain $\mathbb{N}$ in the language $\{0, 1, +, \cdot, <, =\} $. Also let $F$ be a fixed nonprincipal ultrafilter on $\omega$. Then we have $N^*$ as the ultrapower $N^\omega /F$ with domain $\mathbb{N}^{\omega} / F$ .
Now I need to show that $N^*$ is uncountable (this means that it's cardinality is $2^{\aleph_0}$ I guess).
The domain of $N^*$ are the equivalence classes of functions $\{f_F : f \mbox{ a function from }\mathbb{N} \to \mathbb{N}\}$
Further on, we have $N^{\sharp}$ which is the equivalence classes of all the constant functions.
This is my intuition: I know that $N$ and $N^{\sharp}$ are isomorphic (hence have the same cardinality, which is $\aleph_0$ since $N$ is a model for the natural numbers). Furthermore I know that $N^{\sharp}$ is elementarily equivalent to $N^*$. But $N^*$ has an extra 'copy' of $N^\sharp$ on top of it, the hyperintegers. So then the cardinality of $N^*$ should be $2^{\aleph_0}$ (does that even exist? And is that uncountable?). I hope someone can help me with a formal proof.
| A further variation on this theme is to choose a fixed infinite hyperinteger $H$ and note that the partial map $f\colon{}^\ast\mathbb{N}\to\mathbb{R}$ given by $f(n)=\text{st}(\frac{n}{H})$, whenever this is defined, is surjective onto $[0,\infty)$; in particular ${}^\ast\mathbb{N}$ has cardinality at least $2^{\aleph_0}$.
|
Showing that $\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\sum_{j=0}^{k}{H_{j+1}\over j+1}={1\over (n+1)^3}$ Consider this double sums $(1)$
$$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\sum_{j=0}^{k}{H_{j+1}\over j+1}={1\over (n+1)^3}\tag1$$
Where $H_n$ is the n-th harmonic
An attempt:
Rewrite $(1)$ as
$$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\left(H_1+{H_2\over 2}+{H_3\over 3}+\cdots+{H_{k+1}\over k+1}\right)\tag2$$
Recall $$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}={1\over n+1}\tag3$$
Not sure how to continue
| We seek to show that
$$\sum_{k=0}^n (-1)^k {n\choose k} \frac{1}{k+1}
\sum_{j=0}^k \frac{H_{j+1}}{j+1} = \frac{1}{(1+n)^3}.$$
This is
$$\sum_{k=0}^n (-1)^k {n+1\choose k+1} \frac{k+1}{n+1} \frac{1}{k+1}
\sum_{j=0}^k \frac{H_{j+1}}{j+1} = \frac{1}{(1+n)^3}$$
or
$$\sum_{k=0}^n (-1)^k {n+1\choose k+1}
\sum_{j=0}^k \frac{H_{j+1}}{j+1} = \frac{1}{(1+n)^2}.$$
The LHS is
$$\sum_{j=0}^n \frac{H_{j+1}}{j+1} \sum_{k=j}^n (-1)^k {n+1\choose k+1}.$$
Writing
$${n+1\choose k+1} = {n+1\choose n-k} =
\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{n-k+1}} (1+z)^{n+1}
\; dz$$
we get range control (vanishes for $k\gt n$) so we may write for the
inner sum
$$\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{n+1}} (1+z)^{n+1}
\sum_{k\ge j} (-1)^k z^k
\; dz
\\ = (-1)^j \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{n-j+1}} (1+z)^{n+1}
\sum_{k\ge 0} (-1)^k z^k
\; dz
\\ = (-1)^j \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{n-j+1}} (1+z)^{n+1}
\frac{1}{1+z}
\; dz
\\ = (-1)^j \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{n-j+1}} (1+z)^{n}
\; dz
= (-1)^j {n\choose n-j} = (-1)^j {n\choose j}.$$
We thus have to show that
$$\sum_{j=0}^n \frac{H_{j+1}}{j+1} (-1)^j {n\choose j}
= \frac{1}{(1+n)^2}$$
or alternatively
$$\sum_{j=0}^n \frac{H_{j+1}}{j+1} (-1)^j
\frac{j+1}{n+1} {n+1\choose j+1}
= \frac{1}{(1+n)^2}$$
which is
$$\sum_{j=0}^n H_{j+1} (-1)^j
{n+1\choose j+1}
= \frac{1}{1+n}$$
The LHS is
$$\sum_{j=0}^n (-1)^j
{n+1\choose j+1} \sum_{q=1}^{j+1} \frac{1}{q}
= \sum_{j=0}^n (-1)^j
{n+1\choose j+1} \sum_{q=0}^{j} \frac{1}{q+1}
\\ = \sum_{q=0}^n \frac{1}{q+1}
\sum_{j=q}^n (-1)^j {n+1\choose j+1}.$$
We re-use the computation from before to get
$$\sum_{q=0}^n \frac{1}{q+1} (-1)^q {n\choose q}
= \sum_{q=0}^n \frac{1}{q+1} (-1)^q \frac{q+1}{n+1} {n+1\choose q+1}
\\ = \frac{1}{n+1} \sum_{q=0}^n (-1)^q {n+1\choose q+1}.$$
We have reduced the claim to
$$\sum_{q=0}^n (-1)^q {n+1\choose q+1} = 1$$
which holds by inspection or by writing
$$- \sum_{q=1}^{n+1} (-1)^q {n+1\choose q}
= 1 - \sum_{q=0}^{n+1} (-1)^q {n+1\choose q} = 1 - (1-1)^{n+1} = 1.$$
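The original identity (1) can also be confirmed for small $n$ with exact rational arithmetic, independently of the derivation:

    from fractions import Fraction
    from math import comb

    def H(m):
        # m-th harmonic number as an exact fraction
        return sum(Fraction(1, i) for i in range(1, m + 1))

    for n in range(12):
        lhs = sum(
            Fraction((-1) ** k * comb(n, k), k + 1)
            * sum(H(j + 1) / (j + 1) for j in range(k + 1))
            for k in range(n + 1)
        )
        assert lhs == Fraction(1, (n + 1) ** 3)
    print("identity verified for n = 0..11")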
|
Lesson in an induction problem I'm trying to do this problem but I'm having a basic misunderstanding that just needs some clarification.
Consider the proposition that $P(n) = n^2 + 5n + 1$ is even.
Prove $P(k) \to P(k+1)$ $\forall k \in \mathbb N$.
For which values is this actually true?
What is the moral here?
This problem is meant to illustrate a moral about induction. I'm aware that $P(n)$ is odd for all integers (I think), so I can't think of where to start on this. This is in regard to induction specifically, even if the problem doesn't explicitly state it.
| While you can prove the step that $P(k) \rightarrow P(k+1)$ for all $k \in \mathbb{N}$, it does not follow that "$n^2+5n+1$ is even" for all $n \in \mathbb{N}$.
In other words, the moral is to never forget to prove the base case for induction, for otherwise you might be proving things that are just not true.
Here is a simpler example:
Suppose I want to use induction to prove $P(n): n > n + 1$ for all $n \in \mathbb{N}$
OK, so take arbitrary $k \in \mathbb{N}$, and suppose (inductive hypothesis) that $k > k + 1$. Well, then obviously $k + 1 > (k + 1) + 1$, and so the inductive step is proven. So there!
But wait! Clearly $n > n + 1$ is not true! What happened? I forgot the base case!
|
Can I apply L'Hôpital to $\lim_{x \to \infty} \frac{x+\ln x}{x-\ln x}$? Can I apply L'Hôpital to this limit:
$$\lim_{x \to \infty} \frac{x+\ln x}{x-\ln x}?$$
I am not sure if I can because I learnt that I use L'Hôpital only if we have $\frac{0}{0}$ or $\frac{\infty}{\infty}$ and here $x-\ln x$ is $\infty-\infty$ and x tends to infinity.
| $x -\ln x$ goes to $+\infty$ if and only if $e^{x-\ln x}$ does. And this is the case, since
$$e^{x-\ln x} = \frac{e^x}{x} \ge \frac{1+x+\dfrac{x^2}{2}}{x} \to +\infty $$
as $x \to +\infty$. So yes, you can apply de l'Hopital from the beginning.
|
How to explain why Integration by parts apparently "fails" in the case of $\int \frac{f'(x)}{f(x)}dx$ without resorting to definite integrals? Integrating by parts the following integral $$I=\int \frac{f'(x)}{f(x)}dx$$
gives us
$$\begin{align*}
I&=\int \frac{f'(x)}{f(x)}\,dx\\
&=\int\frac1{f(x)}f'(x)\,dx\\
&=\frac1{f(x)}f(x)-\int\left(\frac1{f(x)}\right)'f(x)\,dx\\
&=1+\int \frac{f'(x)}{f(x)}\,dx,
\end{align*}
$$
so
$$I=1+I\implies0=1,$$
which seems like a contradiction but is in reality a mistake as we can see by being somewhat more rigorous:
$$
\begin{align*}
I&=\int_{a}^x \frac{f'(t)}{f(t)}\,dt\\
&=\int_{a}^x\frac1{f(t)}f'(t)\,dt\\
&=\left[\frac1{f(t)}f(t)\right]_a^x-\int_{a}^x\left(\frac1{f(t)}\right)'f(t)\,dt,
\end{align*}$$
so $I=I.$
How do I explain this to a student (of economics, for what it's worth) who has not yet learned about definite integrals?
(I suspect I could insert some constant on the upper part of the "failed" relations, but I am not sure where or how to properly explain it. I also understand that in a couple of lessons we will talk about definite integrals, but what can I say now? That indefinite integrals are in reality a form of definite ones?)
|
Integrating by parts the following integral $$I=\int \frac{f'(x)}{f(x)}dx$$
gives us
$$I=\int \frac{f'(x)}{f(x)}dx=\int\frac1{f(x)}f'(x)dx=\\\frac1{f(x)}f(x)-\int\Big(\frac1{f(x)}\Big)'f(x)dx=\\1+\int \frac{f'(x)}{f(x)}dx\Rightarrow\\
I=1+I$$
Be careful here. The $I$ on the left and the $I$ on the right are not exactly the same quantity.
How can that make sense? It makes sense because indefinite integrals have an arbitrary constant of integration, and for any given indefinite integral, its arbitrary constant is not guaranteed or required to have the same value everywhere the integral is written.
In fact, the process you followed shows that the arbitrary constant for the $I$ on the left is necessarily $1 + $ the arbitrary constant for the $I$ on the right.
|
Question about determining whether vector field is conservative and about determining a potential function for said vector field. So, I've received this question to solve. Can anyone help me? I do not understand how to show that the vector field is conservative in this case. A detailed solution would help to understand what I've been doing wrong so far. (I seem to get answers that indicate that the vector field in question is not conservative, but it should be.)
The question is listed below:
Show that the vector field
F$(x, y, z) = (2x + y)$i$ + (z\cos(yz) + x)$j$ + y\cos(yz)$k
is conservative and determine a potential function.
| If we manage to find a potential function $U$ for $\mathbf{F}$, we are done so let's try to do that. We need:
$$ \frac{\partial U}{\partial x} = 2x + y \implies U(x,y,z) = \int (2x + y) \, dx + G(y,z) = x^2 + yx + G(y,z) $$
for some function $G = G(y,z)$ which depends only on $y,z$. Next,
$$ \frac{\partial U}{\partial y} = z \cos(yz) + x \implies x + \frac{\partial G}{\partial y} = x + z \cos(yz) \implies \\
G(y,z) = \int z \cos(y z) \, dy + H(z) = \sin(yz) + H(z)$$
for some function $H = H(z)$ that depends only on $z$.
Finally,
$$ \frac{\partial U}{\partial z} = y \cos(yz) \implies y \cos(yz) + \frac{\partial H}{\partial z} = y \cos(yz) \implies H \equiv C $$
so a potential for $\mathbf{F}$ is given by
$$ U(x,y,z) = x^2 + yx + \sin(yz) + C $$
where $C \in \mathbb{R}$ is arbitrary.
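Both claims are easy to confirm symbolically (a sketch with SymPy): the curl of $\mathbf F$ vanishes, and $\nabla U$ recovers $\mathbf F$.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    F = sp.Matrix([2*x + y, z*sp.cos(y*z) + x, y*sp.cos(y*z)])
    U = x**2 + y*x + sp.sin(y*z)

    # curl F = 0, so F is conservative on R^3
    curl = sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])
    print(curl.applyfunc(sp.simplify))         # zero vector

    # grad U = F
    grad = sp.Matrix([sp.diff(U, v) for v in (x, y, z)])
    print((grad - F).applyfunc(sp.simplify))   # zero vector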
|
Equation translation: $V=\{f\in C^0([0,1]);\ f(0)=f(1)=0\}$ What does this equation say in English? More specifically, what does $C^0([0,1])$ mean?
$V=\{f\in C^0([0,1]);\ f(0)=f(1)=0\}$
If it helps, this is a set that I need to prove whether or not is a vector space.
EDIT: Okay, I just read something that says,
Here we denote the set of all continuous functions on $I\subset \mathbb{R}$ by $C^0(I)$.
However, I'm still confused by what it means to say, $C^0([0,1])$.
| $C^0([0,1])$ represents the set of continuous functions on the interval $[0,1]$. Therefore, $V$ is the set of continuous functions on the interval $[0,1]$ vanishing at the endpoints. For example, the functions $|x-\frac{1}{2}|-\frac{1}{2}$ and $\sin(\pi x)$ both belong to $V$.
More generally, the notation $C^n(A)$ represents the set of all $n$-times continuously differentiable functions on $A$.
|
Context free grammar for language { {a,b}*: where the number of a's = the number of b's}
My professor said that converting a language to a CFG is more of an art than anything else. I looked at this problem and didn't even know how to get started, or how I would reason my way to the solution (which I understand how it is correct).
Are there any useful patterns to deal with these conversions?
| For the language given, you need the number of $a$'s to match up with the number of $b$'s. Notice that the strategy used to find a CFG for the language is to make sure that whenever we introduce an $a$, that we also introduce a $b$ at the same time. By doing this, we make sure that the number of $a$'s and $b$'s are equal. Also, whenever we introduce an $a$ or a $b$, we also want the ability to put substrings on either side of them or inbetween them, hence the $S \to SASBS$ and $S \to SBSAS$ production rules. Notice that since the empty string also has the same number of $a$'s and $b$'s, then we need the $S \to \varepsilon$ production rule.
From this point, it is clear that everything that is in the grammar is also in the language. What's not so clear is that everything in the language is produced by the grammar; however, it turns out that these production rules suffice for generating the entire language (this isn't too difficult to prove).
|
Summing sines of different frequencies Is there a general formula for solving the following equation:
$$A \sin(Bt+C) + D \sin(Et+F) = G \sin(Ht+I)$$
All constants on the left side of the equation are known (t is a variable). Is there a formula for calculating G, H and I? Is this even solvable in general?
I searched the web and found some semi-relevant hits, like here, but it didn't really seem to answer my question.
| If you mean $G$, $H$, $I$ to be constant (i.e. independent of $t$), then the answer is No. The sum of sinusoids of different frequencies is never a sinusoid.
|
sphere bundles isomorphic to vector bundle. Bredon claims that sphere bundles in certain cases are isomorphic to vector bundles. For example he says just replace $S^{n-1}$ with $R^n$. But for example the circle and the plane are not even isomorphic. How can we talk about bundle isomorphism when even the fibers are not isomorphic?
"A disk or sphere bundle gives rise to a vector bundle with the orthogonal group as the structure group just by replacing the fiber $D^n$ or $S^{n-1}$ with $R^n$ and using the same change of coordinate function".
We cannot just "replace" the fibers with something not isomorphic can we?
| Apparently, in this context you have a sphere bundle say $M$ with the orthogonal group as structure group. This is a very special case of a sphere bundle. This means that you have a family of trivialization $\phi _i$ of your bundel $F$ over open sets $U_i$, say $\phi _i : U_i\times S \to M$ such that the change of chart over $U_i\cap U_j$ is of the form $\phi _{i,j} (u, v)= O_{i,j}(u).v$ where $O_{i,j}: U_i\cap U_j \to O(n)$ is a certain (continuous or smooth) function. Bredon says that you can use the same functions to define an $\bf R^n$ bundle $E$. First defined its trivialization over $U_i$, say $E_i$ by the formula $\Phi _i : U_i\times {\bf R}^n = E _i$. Then $E$ is the union of the $E_i=U_i\times {\bf R}^n$ glued over $U_i \cap U_j$ by the map $\phi _{i,j} (u, v)= O_{i,j}(u).v$.
Another approach is through the theory of principal bundles. To a sphere bundle with $O(n)$ as structure group you can associate its frame bundle $P$, a principal $O(n)$ bundle, which is the bundle whose fiber at a point $p$ is the set of isometries between the fiber over the point and the standard $n$ sphere. Once this frame bundle is defined, you can construct its associate euclidan bundle by the formula $E= P\times _{O(n)} {\bf R}^n$
|
Find the values of x for which the series $\sum_{n=0}^{\infty} \frac{(x-1)^n}{(-3)^n}$ Find the values of x for which the series
$$\sum_{n=0}^{\infty} \frac{(x-1)^n}{(-3)^n}$$
converge?
Not really sure how to properly answer this question considering its edge terms. Here goes my attempt: $$\sum_{n=0}^{\infty} \left(\frac{x-1}{-3}\right)^n$$
I know by the geometric series test that the only way for this geometric serise to be convergent is if $|r| < 1$
$$|x-1| < 3$$
$$\leftrightarrow -3 < (x-1) < 3$$
$$\leftrightarrow -2 < x < 4$$
Therefore this series converges for $x \in (-2,4)$
Is this a good enough answer? I know it doesn't converge for 4 or -2.
| You close by saying you don't know if it converges when $x = 4$ or $x = -2$.
At these values, note that $x-1 = 3$ or $-3$, respectively, so that in the former case you would be considering the infinite series $1-1+1-1+\cdots$ and in the latter you have $1+1+1+\cdots$
Each of these series diverges, as the $n^{\text{th}}$ terms do not converge to $0$.
|
Find the particular solution that satisfies the initial condition This is how I am going about it
$yy'-e^{x}=0$ ; $y(0)= 4$
I put it in standard form
$\frac{dy}{dx}-\frac{e^{x}}{y}=0$
$P(x)=e^{x}$
$Q(x)=0$
$I(x) = e^{\int e^{x}\,dx} = \ ?$
I'm not sure if I am doing it correctly but if so, what would I(x) come out to be?
| This equation is separable (it is not linear, so an integrating factor is not the right tool):
$$y\frac{dy}{dx} - e^x =0$$
$$\frac{dy}{dx} = \frac{e^x}{y}$$
$$\int y\,dy = \int e^x\,dx$$
$$\frac{y^{2}}{2} = e^x + c$$
$$y^2 = 2 e^x + c_{1}, \qquad \text{where } c_1 = 2c$$
$$y = \sqrt{2e^x + c_{1}}$$
Now $y(0) = 4$ gives $16 = 2 + c_1$, so $c_1 = 14$.
So the answer is $y = \sqrt{2e^x + 14}$.
|
Prove there exists $t>0$ such that $\cos(t) < 0$. I want to prove that there exists a positive real number $t$ such that $\cos(t)$ is negative.
Here's what I know
$$\cos(x) := \sum_{n=0}^\infty{x^{2n}(-1)^n\over(2n)!}, \;\;(x\in\mathbb R)$$ $${d\over dx}\cos(x) = -\sin(x)$$ $$\cos\left({\pi\over2}\right) = 0, \;\; \cos(0) = 1,$$ $$\sin\left({\pi\over2}\right) = 1, \;\; \sin(0) = 0.$$
I should also specify: The only thing I know about ${\pi\over2}$ is that it is the smallest positive number such that $\cos(\cdot)$ vanishes.
Here's what I tried so far: Since ${d\over dx}\cos(x) = -\sin(x)$ for all $x\in\mathbb R$, we use the fact that $\sin(\pi/2) = 1$ to give us $$\left.{d\over dx}\cos\left(x\right)\right|_{x = {\pi\over2}} = -\sin\left({\pi\over2}\right) = -1.$$ So $\cos(x)$ is decreasing at $x={\pi\over2}$. Using the fact that $\sin(\cdot)$ and $\cos(\cdot)$ are continuous (since differentiable $\implies$ continuous), we know that (and here's where I'm not sure) there exists $\epsilon > 0 $ such that $\cos\left({\pi\over2} + \epsilon\right) < 0$. Call $t = {\pi\over2} + \epsilon > 0$. This completes the proof.
Is there enough justification to make this claim? Since $\sin(\cdot)$ is continuous on $\mathbb R$, for some $\epsilon > 0$ we have $\left.{d\over dx}\cos(x + \epsilon)\right|_{x={\pi\over2}} = -\sin({\pi\over2}+\epsilon) < 0$. Hence $\cos(\cdot)$ is still decreasing at ${\pi\over2} + \epsilon$, and since $\cos({\pi\over2})=0$, it must be that $\cos(t) = \cos({\pi\over2}+\epsilon) < 0 $.
| Perhaps a more formal way is to invoke the mean value theorem which says that
$$\cos \left( \frac{\pi}{2}+\epsilon \right)-\cos \left( \frac{\pi}{2} \right)=-\sin \left( c \right) \epsilon $$
for some $c \in \left( \frac{\pi}{2},\frac{\pi}{2}+\epsilon \right)$. Using the continuity of $\sin$, you can take $\epsilon$ small enough to guarantee that $\sin(c)$ is sufficiently close to $1$, forcing it to be positive, and therefore finishing the proof.
|
How do we get the final formula of the Bernoulli number? I was trying to understand Bernoulli numbers. When I googled, I found this link.
It starts by saying that, The Bernoulli numbers are defined via the coefficients of the power series expansion of
$\frac{t}{e^{t}-1}$,
then they write the expansion,for $m \geq 0$.
$\frac{t}{e^{t}-1} = \sum_{m=0}^{\infty} \frac{B_m}{m!}t^{m}$ (1)
Then, the tutorial says that we multiply both sides by $(e^{t} -1)$ and get
$B_0=1$,
$B_m= -\frac{1}{m+1}\sum_{k=0}^{m-1}\binom{m+1}{k}B_k$. (2)
How is this happening?
I tried doing this with hand and I get a recursive relation, that looks like,
$\frac{t}{e^{t}-1} = \sum_{m=0}^{\infty} \frac{B_m}{m!}t^{m}$
$\implies t = e^{t}\sum_{m=0}^{\infty} \frac{B_m}{m!}t^{m} - \sum_{m=0}^{\infty} \frac{B_m}{m!}t^{m}$
$\implies t = e^{t}\sum_{m=0}^{\infty} \frac{B_m}{m!}t^{m} - \frac{t}{e^{t}-1}$
What next?
How do I get to (2)?
I am new to number theory, please excuse me if the doubt is very basic. Any kind of help will be appreciated. :)
| With recurrences of this type I like the representation in a matrix format. Note, that the binomial-coefficients in your recursive formula occur in the manner of the lower triangular Pascal-matrix ( = "P") and that we need a modified/trimmed version (= "Q" ) of it.
The following is the multiplication-scheme for the Bernoulli-numbers [update] corrected :
| | 1 |
| | -1/2 | vector B of
| | 1/6 | Bernoulli-numbers
| -dZ * Q * B = B' | 0 |
| | -1/30 |
| | 0 |
| | 1/42 |
| | 0 |
- + - - - - - - - - + - +
-Z | modified Pascalmatrix Q | |
- + - - - - - - - - + - +
-1 | . . . . . . . . | 0 | reduced (first entry =0)
-1/2 | 1 . . . . . . . | -1/2 | vector B' of
-1/3 | 1 3 . . . . . . | 1/6 | Bernoulli-numbers
-1/4 | 1 4 6 . . . . . | 0 |
-1/5 | 1 5 10 10 . . . . | -1/30 |
-1/6 | 1 6 15 20 15 . . . | 0 |
-1/7 | 1 7 21 35 35 21 . . | 1/42 |
-1/8 | 1 8 28 56 70 56 28 . | 0 |
- + - - - - - - - - + - +
This scheme illustrates the matrix-product $$ -\,^dZ \cdot Q \cdot B = B' $$ where $\,^d Z = \operatorname{diagonal}([1,1/2,1/3,...]) $ .
This is an explication of your formula (2) where $m$ is the matrix-row-index beginning at $0$ .
Of course, so far this is no proof, just an illustration of how to understand the ingredients of formula (2). However, this formula can be manipulated to relate it to known, simpler and proven relations between binomial coefficients and Bernoulli numbers, if that is what you want.
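If it helps, the recurrence (2) can simply be run directly; the following small script (exact rational arithmetic, my own naming) reproduces the column vector $B$ of the scheme above:

    from fractions import Fraction
    from math import comb

    def bernoulli(n_max):
        # B_0 = 1,  B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k
        B = [Fraction(1)]
        for m in range(1, n_max + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(Fraction(-1, m + 1) * s)
        return B

    print([str(b) for b in bernoulli(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']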
(P.s.: you might also be interested in this small treatise on exactly that subject / method)
|
Properties of gamma function What is the simplest way to prove the two following properties of $
\mathit{\Gamma}{\mathrm{(}}{z}{\mathrm{)}}
$
a)
$
\mathit{\Gamma}{\mathrm{(}}\frac{3}{2}{\mathrm{)}}\mathrm{{=}}\frac{1}{2}\mathit{\Gamma}{\mathrm{(}}\frac{1}{2}{\mathrm{)}}\mathrm{{=}}\frac{\sqrt{\mathit{\pi}}}{2}
$
b) $
\mathit{\Gamma}{\mathrm{(}}{z}{\mathrm{)}}\mathit{\Gamma}{\mathrm{(}}{1}\mathrm{{-}}{z}{\mathrm{)}}\mathrm{{=}}\frac{\mathit{\pi}}{\sin{\mathrm{(}}\mathit{\pi}{z}{\mathrm{)}}}
$
Thanks very much in advance.
| For the first property, use the functional equation
$\Gamma (z+1)=z\Gamma(z)$ (the function $\Gamma(z+1)$ was introduced by Gauss and is sometimes called $\Pi (z)$).
This can be derived from the integral representation of Gamma by integrating by parts. Combined with $\Gamma\left(\frac12\right)=\int_0^\infty t^{-1/2}e^{-t}\,dt=2\int_0^\infty e^{-u^2}\,du=\sqrt{\pi}$ (substitute $t=u^2$), it gives $\Gamma\left(\frac32\right)=\frac12\Gamma\left(\frac12\right)=\frac{\sqrt{\pi}}{2}$.
The second property is called Euler' reflection formula. The proof is available here :
https://proofwiki.org/wiki/Euler's_Reflection_Formula#Proof
|
When two unbiased dice are rolled one by one, what is the probability that either the first one is $2$ or the sum of the two is less than $5$?
When two unbiased dice are rolled one by one, what is the probability that either the first one is $2$ or the sum of the two is less than $5$?
a) $\dfrac 16$
b) $\dfrac 29$
c) $\dfrac 5{18}$
d) $\dfrac 13$
(Not homework, I'm doing some mock exams I found online)
Now I was pretty sure I got this. Take probability of rolling $2$ on the first one $\left(\frac 16\right)$ and the probability that the sum is $< 5 \left(\frac 6{36}\right)$, add them together and bam, $\frac 13$.
Wrong, seems the answer is c) $\frac5{18}$.
My problem is I've been trying to brush on my probability skills by googling and can see problems I made, such as the wording meaning that rolling a $2$ AND having a total of $5$ NOT being included in the result, but no matter what angle I attack from, I never end up with $\frac5{18}$.
So which little thing did I miss? Thanks very much for helping.
| You need to calculate the probability that either the first die is $2$, OR the sum is less than $5$. In math, "OR" always includes the possibility that both things occur (i.e. "either A or B" includes when A and B both happen).
However, you added the probability that the first die was $2$ ($6/36$) and the probability that the sum was less than $5$ ($6/36$)-- that means that in the case that both occur, you counted it twice, since it will be included in both first term and the second term. This is called double-counting. To fix double-counting, you need to subtract off what you counted twice.
In this case, what you counted twice is when the first die is $2$ AND the sum is less than $5$ -- this happens if the two dice are $(2,1)$ or $(2,2)$, so the probability is $2/36$. Your final answer will be
$$
\frac{6}{36} + \frac{6}{36} - \frac{2}{36} = \frac{10}{36} = \frac{5}{18}.
$$
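Enumerating the $36$ equally likely outcomes confirms this (a quick check):

    from fractions import Fraction

    favourable = sum(1 for d1 in range(1, 7) for d2 in range(1, 7)
                     if d1 == 2 or d1 + d2 < 5)
    print(Fraction(favourable, 36))   # 5/18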
|
Calculate the volume of the solid bounded laterally. How do i find the volume bounded below by the plane $xy$ and bounded above by $x^2+y^2+4z^2=16$ and laterally by the cylinder $x^2+y^2-4y=0$.
Since when i change to polar coordinates $x^2+y^2-4y=0$. is equal to $4sin(\theta)$.
And for the limits. $z=\frac{\sqrt{16-x^2-y^2}}{4}$ is equal to $z=\sqrt{4-\frac{r^2}{4}}$
So i think the integral for this is:
$$\int_{0}^{\pi}\int_{0}^{\sqrt{4-\frac{r^2}{4}}} 4rsin(\theta)drd\theta$$
| You have some mistake in the limits of integration. Using the symmetry of the solid around the $y-z$ plane ( see the figure), we can take for $\theta$ the values between $0$ and $\frac{\pi}{2}$ and duplicate the integral, so the limits becomes:
$$
0<\theta<\frac{\pi}{2} \qquad 0<r<4\sin \theta \qquad 0<z< \frac{1}{2}\sqrt{16-r^2}
$$
so the volume is:
$$
V= 2\int_0^{\frac{\pi}{2}}\int_0^{4\sin \theta}\int_0^{\frac{1}{2}\sqrt{16-r^2}}r\,dz\,dr\,d\theta=2\int_0^{\frac{\pi}{2}}\int_0^{4\sin \theta}\frac{r}{2}\sqrt{16-r^2}\,dr\, d\theta=
$$
$$
=\frac{1}{3}\int_0^{\frac{\pi}{2}}64(1-\cos^3\theta)d\theta=\frac{32}{3}\pi-\frac{128}{9}
$$
(if my calculations are correct).
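As a numeric cross-check of that last step and the final value (a sketch with SymPy):

    import sympy as sp

    t = sp.symbols('theta')
    V = sp.Rational(1, 3) * sp.integrate(64 * (1 - sp.cos(t)**3), (t, 0, sp.pi / 2))
    print(sp.simplify(V))   # 32*pi/3 - 128/9
    print(sp.N(V))          # approximately 19.29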
|
Duality between universal enveloping algebra and algebras of functions There is a well known duality (of Hopf algebras) between universal enveloping algebra $U(\mathfrak{g})$ of a complex Lie algebra $\mathfrak{g}$ of a compact group $G$ and the algebra of continuous functions $C(G)$.
My question is, is there in the literature any place where this is presented in some detail? Bonus: also some references pointing to the generalization of this fact to quantized universal enveloping algebras?
Wikipedia says it is related to Tannaka-Krein theory, but I don`t know much about it and from a preliminary search I found nothing about this duality in the texts.
| I would suggest Hochschild's "Basic Theory of Algebraic Groups and Lie Algebras". It is about affine algebraic groups in general and not just compact Lie groups, but I think it is worthy to have a look there.
|
Entire function bounded on every horizontal and vertical line , then is it bounded on every horizontal and vertical strip? Let $f:\mathbb C \to \mathbb C$ be an entire function such that $f$ is bounded on every horizontal and every vertical line , then is it true that $f$ is bounded on any set of the form $V_{[a,b]}:=\{x+iy : y\in \mathbb R , a \le x \le b\}$ and any set of the form $H_{[a,b]}:=\{x+iy : x\in \mathbb R , a \le y \le b\}$ ?
| This is not true. Let $G\subset \mathbb{C}$ be the set $\{x+iy:|x|<\pi/2, y> -1, |y-\tan x|<1\}$. This is a connected open set that does not contain any line, or even a half-line. Let $E=\mathbb{C}\setminus G$. The function $f(z)=1/z$ is holomorphic on $E$. By Arakelyan's approximation theorem there exists an entire function $F$ such that $|F-f|<1/3$ on $E$. Consequently,
*
*$F$ is bounded on every line (not just on the vertical and horizontal ones)
*$F$ is nonconstant, since a constant $c$ cannot satisfy $|c-f|<1/3$ on $E$.
*Being nonconstant, $F$ is not bounded on $\mathbb{C}$. Yet it is bounded on $E$; thus, it is not bounded on $G$.
*Since the set $\{x+iy:y\in\mathbb{R},|x|\le \pi/2\}$ contains $G$, the function $F$ is not bounded on this set.
Remarks
*
*To check the assumptions of Arakelyan's theorem, as stated on Wikipedia, take $\Omega=\mathbb{C}$, so that $\Omega^*\setminus E = G\cup\{\infty\}$, which is a connected set. It's important that $G$ stretches out to infinity.
*Stronger tangential approximation is possible, where $|F(z)-f(z)|<1/|z|$ on $E$. The references in Wikipedia article should have this; in any case, Lectures on Complex Approximation by Gaier presents this and many other approximation theorems. In this case, $F$ tends to zero along every line in the complex plane.
|
$n^{n-1}-1$ is a multiple of $k$
Find the number of integers $k$ with $2 \leq k \leq 1000$ satisfying the following property:
*
*For every positive integer $n$ relatively prime to $k$, $n^{n-1}-1$ is a multiple of $k$.
Let $k = 2^{\alpha_1}3^{\alpha_2} \cdots p_n^{\alpha_n}$ be the prime decomposition of $k$. Then by the Chinese Remainder Theorem $n^{n-1} \equiv 1 \pmod{k}$ for all $n$ such that $\gcd(n,k) = 1$ if and only if\begin{align*}n^{n-1} &\equiv 1 \pmod{2^{\alpha_1}}\\n^{n-1} &\equiv 1 \pmod{3^{\alpha_2}}\\&\vdots\\n^{n-1} &\equiv 1 \pmod{p_n^{\alpha_n}}\end{align*} for all $n$ such that $\gcd(n,k) = 1$. How can we continue?
| If $\gcd(n,k)=1$ then $\gcd(n+k,k)=1$. So:
$$1\equiv (n+k)^{n+k-1}\equiv n^{n+k-1}=n^{n-1}n^k\pmod{k}$$
and hence $n^k\equiv 1\pmod{k}$ for all $n$ relatively prime to $k$.
Now if $0<n<k$ with $\gcd(n,k)=1$, then:
$$1\equiv (k-n)^{k-n-1} \equiv (-1)^{k-n-1} n^kn^{-(n-1)}n^{-2}\pmod{k}$$
But $n^k\equiv n^{-(n-1)}\equiv 1\pmod{k}$ so you have $$n^2\equiv (-1)^{k-n-1}\pmod{k}$$
That should reduce the problem greatly for case-by-case analysis. The only prime powers where every relatively prime square is $\pm 1$ are $2,4,8,3,5$. And $k$ has to be a product of these.
We know $k$ must be even, since otherwise $(2,k)=1$ and $k$ must divide $2^{2-1}-1=1$, we are most of the way.
When $k$ is even, we get that $n$ is odd, and hence $(-1)^{k-n-1}=1$ so we need $n^2\equiv 1\pmod{k}$ for all relevant $n$. But that means that $5$ can't be a factor.
This leaves us with $k=2,4,8,6,12,24$. We can quickly check each case.
Okay, more verbosely, you need to prove this lemma:
If $p^{a}\mid k$ and there is some $n_0$ such that $\gcd(n_0,p^a)=1$ and $n_0^2\not\equiv 1\pmod{p^a}$, then there is an $n$ with $\gcd(n,k)= 1$ such that $n^{2}\not\equiv 1\pmod{k}$.
Essentially, this is because you can write $k=p^bk'$ for $\gcd(k',p)=1$ and then solve the Chinese remainder question:
$$n\equiv n_0\pmod{p^b}\\
n\equiv 1\pmod{k'}$$
This gives you an $n$ with $n^2\not\equiv 1\pmod{p^b}$, so $n^2\not\equiv 1\pmod{k}$.
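A brute-force scan supports this case analysis (a heuristic check only, since it tests finitely many $n$ per $k$, but it singles out exactly the six survivors):

    from math import gcd

    def looks_good(k, n_max=2000):
        # heuristic: only tests n up to n_max, so this is evidence, not a proof
        return all(pow(n, n - 1, k) == 1
                   for n in range(1, n_max + 1) if gcd(n, k) == 1)

    print([k for k in range(2, 1001) if looks_good(k)])
    # [2, 4, 6, 8, 12, 24]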
|
Variation of Weierstrass Approximation Theorem Let $f:[-1,1] \to \mathbb R$ be a continuous even function. Show that for any $\epsilon >0$ there exists an even polynomial $p(x)= \sum _{k=0} ^{n} a_k x^{2k}$ such that $|f(x)-p(x)|<\epsilon$ for any $x \in [-1,1]$. Show a similar result for a continuous odd function.
I know that since $f(x)$ is continuous, that a polynomial exists such that $|f(x)-p(x)|<\epsilon$ by the Weierstrass Approximation Theorem. However, I do not know how to show that this polynomial is also even. The proof for the odd case should be basically the same.
| The polynomial furnished by the Weierstrass Approximation Theorem doesn't need to be even/odd in this case. But you can decompose $p$ into its even and odd parts $p=p_e+p_o$, where $p_e(x)=\frac{p(x)+p(-x)}{2}$ and $p_o(x)=\frac{p(x)-p(-x)}{2}$. Then $p_e$ is still a good enough approximation to $f$: since $f$ is even, $|f(x)-p_e(x)|\le \frac{1}{2}|f(x)-p(x)|+\frac{1}{2}|f(-x)-p(-x)|<\epsilon$.
|
Proof by Induction: $n! > 2^{n+1}$ for all integers $n \geq 5.$ I have to answer this question for my math class and am having a little trouble with it.
Use mathematical induction to prove that $n! > 2^{n+1}$ for all integers $n \geq 5.$
For the basis step: $(n = 5)$
$5! = 120$
$2^{5+1} = 2^6 = 64$
So $120 > 64$, which is true.
For the induction step, this is as far as I've gotten:
Prove that $(n! > 2^{n+1}) \rightarrow \left((n+1)! > 2^{(n+1)+1}\right)$
Assume $n! > 2^{(n+1)}$
Then $(n+1)! = (n+1) \cdot n!$
After this, I'm stuck. Any assistance would be appreciated.
Thanks!
| $(n+1)!=(n+1)\times n!>(n+1)2^{n+1}>2\times2^{n+1}=2^{n+2}$
Make sure you understand all the steps and ask if you got trouble.
|
Euclid 1999 Question 4(a) - Circle Tangent Intersection Below is a question and the intended solution to a math contest problem.
I understand that if for both circles, if you assume that a circle's centre, the two points on the circumference that touch a tangent line each, and the intersection of the tangent lines; that these four points form a square, then the distances can be calculated trivially.
Thus, I would like to know how it is known that the points form two squares.
To clarify, this is what I would like to know:
Given a circle and two perpendicular lines both tangent to the circle, how is it known that the tangent points + center + intersection point forms a square?
| It is given that the two tangents form a 90 degree angle with each other. it is also known that the angle formed by the tangent to a circle and the radius to that point is 90 degrees as well. Since the figure emerging is a quadrilateral, the sum of the interior angles is 360 degrees, so the last remaining angle formd by the two tangent points and the center must be 90 degrees as well.
The distance from the center to the two tangent points is the radius and is thus equal. So we now have a quadrilateral where all the angles are 90 degrees and two adjacent sides are equal. So one can conclude it must be a square.
Note that this is not a 'proof', more of an intuition or a way to 'see' why it is a square. That is what I understood from your question.
|
Roots of Complex Polynomials
Find all $x \in \mathbb{C}$ satisfying $(x - \sqrt{3} + 2i)^3 - 8i = 0$
I was able to find one value, $x = \sqrt{3} - 4i$.
I can also see that $x = -i$ also works although I am not able to devise a formal way for ending up with this value.
I am also unsure of how to find other values of $x$.
| Hint: Divide your polynomial by $(x-(\sqrt{3} - 4i))$. You will get a quadratic polynomial. You found that $x = -i$ is also a root (great work!) so you should be able to divide again by $(x+i)$.
|
If $A$ & $B$ are $4\times 4$ matrices with $\det(A)=-5$ & $\det(B)=10$ then evaluate... If $A$ & $B$ are $4\times 4$ matrices with $\det(A)=-5 $ & $\det(B)=10$ then evaluate...
a) $\det\left(A+\operatorname{adj}\left(A^{-1}\right)\right)$
b) $\det(A+B)$
Yes, those are meant to be addition signs. I wouldn't be asking if it were multiplication. ANS for a) is $-256/125.$
| Answer to a):
It holds that:
$$ A \cdot \operatorname{adj}(A) = \det (A) \cdot I .$$
Thus, we have:
$$ A^{-1}\cdot \operatorname{adj}\left(A^{-1}\right) = \det \left(A^{-1}\right) \cdot I.$$
However, $\det\left(A^{-1}\right) = -\frac{1}{5}$ and you need to apply the identity $\det(\lambda A) = \lambda^n \det A,$ where $A$ is a $n\times n$ matrix and $\lambda \in \mathbb R.$
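Carrying the identities through: $\operatorname{adj}\left(A^{-1}\right)=\det\left(A^{-1}\right)A=-\tfrac15 A$, so $A+\operatorname{adj}\left(A^{-1}\right)=\tfrac45 A$ and $\det\left(\tfrac45 A\right)=\left(\tfrac45\right)^4(-5)=-\tfrac{256}{125}$. A quick numerical check (my own sketch; the random matrix, seed, and rescaling just manufacture some $A$ with $\det(A)=-5$, and $\operatorname{adj}(M)=\det(M)\,M^{-1}$ is used for invertible $M$):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A *= (5 / abs(np.linalg.det(A))) ** 0.25     # rescale so that |det(A)| = 5
if np.linalg.det(A) > 0:
    A[0] *= -1                               # flip one row so that det(A) = -5

Ainv = np.linalg.inv(A)
adj_Ainv = np.linalg.det(Ainv) * np.linalg.inv(Ainv)   # adj(M) = det(M) * M^{-1}

print(np.linalg.det(A))                      # -5 (up to rounding)
print(np.linalg.det(A + adj_Ainv))           # about -2.048 = -256/125
```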
|
Show $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$ I know there are various methods showing that $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$, but I want to know how to derive it from letting $t\rightarrow 0^{+}$ for the following identity:
$$\sum_{n=-\infty}^{\infty}\frac{1}{t^2+n^2}=\frac{\pi}{t}\frac{1+e^{-2\pi t}}{1-e^{-2\pi t}}$$
| Note that we have
$$\sum_{n=-\infty}^\infty \frac{1}{t^2+n^2}=\frac1{t^2}+2\sum_{n=1}^\infty\frac{1}{t^2+n^2}$$
Therefore, using $\sum_{n=-\infty}^\infty \frac{1}{t^2+n^2}=\frac\pi t \frac{1+e^{-2\pi t}}{1-e^{-2\pi t}}$, we find that
$$\sum_{n=1}^\infty\frac{1}{t^2+n^2}=\frac12\left(\frac\pi t \frac{1+e^{-2\pi t}}{1-e^{-2\pi t}}-\frac{1}{t^2}\right) \tag 1$$
The limit of the left-hand side of $(1)$ is the series of interest, $\sum_{n=1}^\infty\frac{1}{n^2}$. The limit of the term on the right-hand side is
$$\begin{align}
\frac12\lim_{t\to 0}\left(\frac\pi t \frac{1+1-2\pi t+2\pi^2t^2+O(t^3)}{2\pi t-2\pi^2t^2+\frac43 \pi^3t^3+O(t^4)}-\frac{1}{t^2}\right)\\\\
&=\frac12\lim_{t\to 0}\left(\frac{\frac23 \pi^3t^2+O(t^3)}{2\pi t^2\left(1+O(t)\right)}\right)\\\\
&=\pi^2/6
\end{align} $$
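As a quick numerical sanity check (my own sketch, not part of the derivation), evaluating the right-hand side of $(1)$ for small $t$ indeed approaches $\pi^2/6\approx1.6449$:
```python
import numpy as np

def rhs(t):
    # (1/2) * ( (pi/t) * (1 + e^{-2 pi t}) / (1 - e^{-2 pi t}) - 1/t^2 )
    return 0.5 * ((np.pi / t) * (1 + np.exp(-2 * np.pi * t))
                  / (1 - np.exp(-2 * np.pi * t)) - 1 / t**2)

for t in [0.5, 0.1, 0.01, 0.001]:
    print(t, rhs(t))
print("pi^2/6 =", np.pi ** 2 / 6)            # 1.6449...
```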
|
Determine the mode of the gamma distribution with parameters $\alpha$ and $\beta$ How do you determine the mode of a gamma distribution with parameters $\alpha$ and $\beta$ ? Without looking on Wikipedia.
| Hint: you want to maximize $x^{\alpha-1} e^{-\beta x}$ over $x \in (0,\infty)$.
The derivative is $e^{-\beta x}[(\alpha-1)x^{\alpha-2} - \beta x^{\alpha-1}] = x^{\alpha-2} e^{-\beta x} (\alpha-1-\beta x)$, which is zero when $x= \frac{\alpha-1}{\beta}$ or $x=0$.
* If $\alpha > 1$, direct inspection shows that $x=0$ is not the mode, since the pdf is zero there. Thus the other critical point $\frac{\alpha-1}{\beta}$ must be the mode. (For $\alpha = 1$ the pdf is $\beta e^{-\beta x}$, which is decreasing, so the mode is $0 = \frac{\alpha-1}{\beta}$ as well.)
* If $\alpha < 1$ the pdf has a positive asymptote at $x=0$. Moreover the derivative is strictly negative for all $x>0$, so the pdf decreases from $\infty$ to $0$ as $x$ goes from $0$ to $\infty$.
Thanks to JeanMarie for the clarifications.
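As a quick check with SciPy (my own sketch; the values $\alpha=3.5$, $\beta=2$ are arbitrary, and SciPy's `scale` parameter is $1/\beta$ when $\beta$ is a rate):
```python
import numpy as np
from scipy.stats import gamma

alpha, beta = 3.5, 2.0
x = np.linspace(1e-6, 10, 200001)
pdf = gamma.pdf(x, a=alpha, scale=1 / beta)  # shape alpha, rate beta  <=>  scale 1/beta

print(x[np.argmax(pdf)])                     # numerically located mode, about 1.25
print((alpha - 1) / beta)                    # (alpha - 1) / beta = 1.25
```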
|
Expected absolute difference between two iid variables Suppose $X$ and $Y$ are iid random variables taking values in $[0,1]$, and let $\alpha > 0$. What is the maximum possible value of $\mathbb{E}|X-Y|^\alpha$?
I have already asked this question for $\alpha = 1$ here: one can show that $\mathbb{E}|X-Y| \leq 1/2$ by integrating directly, and using some clever calculations. Basically, one has the useful identity $|X-Y| = \max{X,Y} - \min{X,Y}$, which allows a direct calculation. There is an easier argument to show $\mathbb{E}|X - Y|^2 \leq 1/2$. In both cases, the maximum is attained when the distribution is Bernoulli 1/2, i.e. $\mathbb{P}(X = 0) = \mathbb{P}(X = 1) = 1/2$. I suspect that this solution achieves the maximum for all $\alpha$ (it is always 1/2), but I have no ideas about how to try and prove this.
Edit 1: @Shalop points out an easy proof for $\alpha > 1$, using the case $\alpha = 1$. Since $|x-y|^\alpha \leq |x-y|$ when $\alpha > 1$ and $x,y \in [0,1]$,
$E|X-Y|^\alpha \leq E|X-Y| \leq 1/2$.
So it only remains to deal with the case when $\alpha \in (0,1)$.
| This isn't a full solution, but it's too long for a comment.
For fixed $0<\alpha<1$ we can get an approximate solution by considering the problem discretized to distributions that only take on values of the form $\frac{k}{n}$ for some reasonably large $n$. Then the problem becomes equivalent to
$$\max_x x^T A x$$
where $A$ is the $(n+1) \times (n+1)$ matrix whose $(i,j)$ entry is $\left(\frac{|i-j|}{n}\right)^{\alpha}$, and the maximum is taken over all non-negative vectors summing to $1$.
If we further assume that there is a maximum where all entries of $x$ are non-zero, Lagrange multipliers implies that the optimal $x$ in this case is a solution to
$$Ax=\lambda {\mathbb 1_{n+1}}$$
(where $1_{n+1}$ is the all ones vector), so we can just take $A^{-1} \mathbb{1_{n+1}}$ and rescale.
For $n=1000$ and $\alpha=\frac{1}{2}$, this gives a maximum of approximately $0.5990$, with a vector whose first few entries are $(0.07382, 0.02756, 0.01603, 0.01143)$.
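For concreteness, here is a small sketch of the computation just described (my own code; it assumes, as above, an interior maximizer so the Lagrange condition $Ax=\lambda\mathbb 1$ applies):
```python
import numpy as np

n = 1000
alpha = 0.5
i = np.arange(n + 1)
A = (np.abs(i[:, None] - i[None, :]) / n) ** alpha   # A[i, j] = (|i - j| / n)^alpha

x = np.linalg.solve(A, np.ones(n + 1))   # Lagrange condition A x = lambda * 1 (up to scale)
x /= x.sum()                             # rescale to a probability vector

print(x[:4])        # roughly (0.0738, 0.0276, 0.0160, 0.0114)
print(x @ A @ x)    # roughly 0.5990
```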
If the optimal $x$ has a density $f(x)$ that's positive everywhere,
and we want to maximize
$\int_0^1 \int_0^1 f(x) f(y) |x-y|^{\alpha}\,dx\,dy$
the density "should" (by analogue to the above, which can probably be made rigorous) satisfy
$$\int_{0}^1 f(y) |x-y|^{\alpha} \, dy= \textrm{ constant independent of } x,$$
but I'm not familiar enough with integral transforms to know if there's a standard way of inverting this.
|
Plotkin bound for the minimal distance of linear code over $\mathbb{F_q}$ I want to understand the proof that the minimal distance $d_C$ of a linear $[n,k]$ code $C$ over $\mathbb{F_q}$ is less than or equal to $\frac{nq^{k-1}(q-1)}{q^k-1}$. The proof I'm reading says that the sum of the weights of all the words in $C$ is not bigger than $nq^{k-1}(q-1)$, and then the minimal distance (equivalently, the minimal weight) is less than or equal to the average of the weights of all non-zero code words, $\frac{nq^{k-1}(q-1)}{q^k-1}$.
However I don't see why the sum of the weights of all code words is $\leq nq^{k-1}(q-1)$.
These are my calculations. Suppose that $C$ has a word $c$ which is not zero. Then $c = (c_1, ... c_n)$ has a non-zero component $c_i \neq 0$. If $D = \{(x_1, ..., x_n) \in C \ | \ x_i = 0 \}$ is the subspace of $C$ of all the words having $0$ in the $i^{th}$ component, then $C/D$ contains exactly $q$ elements. This is true because, for every $a \in \mathbb{F_q}$ $C$ contains a word with $i^{th}$ component equal to $a$ (namely this is the word $c_i^{-1}a . c$) and two words $s, t$ with different $i^{th}$ components $s_i \neq t_i$ represent different elements in $C/D$. So $q = |C/D| = |C| / |D| \Rightarrow |D| = |C| / q = q^k / q = q^{k-1}$. So there are $(q-1)|D| = (q-1)q^{k-1}$ words in $C$ that has non zero $i^{th}$ component and potentially has weight of $n$ and the sum of their weights is $\leq n(q-1)q^{k-1}$. However this doesn't include the words with $i^{th}$ component equal to $0$ (the elements of $D$). Any help how to handle them too ?
Thanks in advance!
| In a linear code over $\mathbb F_q$, in each coordinate position, either all the codewords have a $0$, or each element of $\mathbb F_q$ appears equally often. That is, if we form a $q^k\times n$ array in which the rows are the codewords, then in each column, we either have all zeroes, or each element of $\mathbb F_q$ appears $q^{k-1}$ times. The total weight of all
the codewords is the total number of nonzero elements in this array, and since the total number of nonzero elements in a column is upper-bounded by $q^{k-1}(q-1)$, the total weight is no larger than $nq^{k-1}(q-1)$.
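To see the counting argument in a concrete case, here is a small sketch with the binary $[7,4]$ Hamming code (my own example; in this code no coordinate is identically zero, so the column bound is attained with equality):
```python
import itertools
import numpy as np

# A generator matrix for the binary [7,4] Hamming code (n = 7, k = 4, q = 2).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)]
weights = [int(c.sum()) for c in codewords]

n, k, q = 7, 4, 2
print(sum(weights), n * q ** (k - 1) * (q - 1))             # 56 = 56: total weight attains the bound
print(min(w for w in weights if w > 0), 56 / (q ** k - 1))  # minimum distance 3 <= 56/15 = 3.73...
```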
|
Expansion of this expression Let $x$ be a real number in $\left[0,\frac{1}{2}\right].$ It is well known that
$$\frac{1}{1-x}=\sum_{n=0}^{+\infty} x^n.$$
What is the expansion or the series of the expression $(\frac{1}{1-x})^2$?
Many thanks.
| Or do it by brute-force multiplication and gathering terms:
$$\frac{1}{(1-x)^2}=$$
$$
(1+x+x^2+x^3+x^4+\ldots)(1+x+x^2+x^3+x^4+\ldots)\\
=(1+x+x^2+x^3+x^4+\ldots)\\
+x+x^2+x^3+x^4+\ldots\\
\quad +x^2+x^3+x^4+\ldots\\
\quad \quad +x^3+x^4+\ldots\\
=1+2x+3x^2+4x^3+\ldots
$$
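If you want to double-check the expansion symbolically, a one-liner with SymPy does it (just a verification, not part of the derivation):
```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(1 / (1 - x) ** 2, x, 0, 6))
# 1 + 2*x + 3*x**2 + 4*x**3 + 5*x**4 + 6*x**5 + O(x**6)
```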
|
Tangent to a fiber bundle I am trying to prove that the kernel of a push-forward is the fiber.
Let $π : E → M $ be a fiber be bundle with a fiber $F$ . What is the meaning of a tangent space to a bundle? Does it means that if we have a vector, $X$ tangent to curve $\lambda$, that curve must pass to all points of the fiber or in just one point of the bundle?
| The definition of the tangent space of the bundle $E$ is the same as the definition of the tangent space to any manifold. A tangent vector at a point $p \in E$ is just an equivalence class of curves $[\alpha]$ with $\alpha(0) = p$. Since $E$ is a fiber bundle, instead of considering all the curves you can consider curves $\alpha$ which start at $p$ (so $\alpha(0) = p$) and stay in the fiber $E_{\pi(p)} = \pi^{-1}(\pi(p))$. Such curves will satisfy $d\pi_{p}([\alpha]) = 0$ and they will span the tangent space $T_{p}(E_{\pi(p)})$ which is the tangent space to the fiber $E_{\pi(p)}$ at $p$.
|
Improper integral $\int_a^bf(x)dx$ is convergent and $g(x)\ge0$. If $\int_a^bg(x)dx=0$, show that $\int_a^bf(x)g(x)dx=0.$ Assume $f(x)\in\mathrm{C}[a,b),\lim_{x\to b}f(x)=+\infty$, $\int_a^bf(x)dx$ is convergent. And the non-negative function $g(x)$ on the interval $[a,b]$ is Riemann-integrable. If $\int_a^bg(x)dx=0$, show that
$$\int_a^bf(x)g(x)dx=0.$$
Since $\int_a^bf(x)dx$ is an improper integral, the normal Integral mean value theorem can't work. How can I deal with it?
| Hint: $g(x)$ is non-negative on $[a,b]$ and its integral is zero.
|
Is this normed linear space a Banach space? Let $E$ be a measurable set of finite measure and $1 < p_1 < p_2 < \infty$. Consider the linear space $L^{p_2} (E)$ normed by $||.||_{p_1}$ . Is this normed linear space a Banach space?
| Okay, let's suppose that $(L^{p_2},\|.\|_{p_1})$ is a Banach space; then the map
\begin{align*}
\Phi:(L^{p_2},\|.\|_{p_2}) &\to (L^{p_2},\|.\|_{p_1}), \\f &\mapsto f
\end{align*}
is not only continuous but also bicontinuous. This follows from the fact that $\|.\|_{p_1} \leq C \|.\|_{p_2}$ and the open mapping theorem. Now this yields that there is a $K>0$ such that
\begin{align*}
\|\Phi(f)\|_{p_1} \geq K \|f\|_{p_2} \\
\|f\|_{p_1}\geq K \|f\|_{p_2}
.
\end{align*}
Now take a sequence of measurable sets $A_n \subseteq E$ with $0<\lambda(A_n)\to 0$ ($\lambda$ denotes the Lebesgue measure) and apply the previous inequality to the indicator functions $f=\chi_{A_n}$, for which $\|\chi_{A_n}\|_p=\lambda(A_n)^{1/p}$:
\begin{align*}
\lambda(A_n)^{\frac{1}{p_1}} \geq K \lambda(A_n)^{\frac{1}{p_2}} \quad \text{for all} \quad n\in\mathbb{N}\\
\lambda(A_n)^{\frac{1}{p_1}-\frac{1}{p_2}} \geq K \quad \text{for all} \quad n\in\mathbb{N}
\end{align*}
But $\frac{1}{p_1}-\frac{1}{p_2}>0$ because of our assumptions, so the left-hand side tends to $0$ as $n\to\infty$, which contradicts $K>0$. So it can't be a Banach space.
|
Is exponentiation open? Already for $2\times 2$ matrices the exponential map is not open. However, the diagonalization trick does not work for algebras of functions. Hence the question
Is the map $f\mapsto \exp(f)$ open on the complex space $C[0,1]$?
| Yes. Suppose $f\in C[0,1]$ and let $$C=\inf\{|\exp(f(x))|:x\in[0,1]\}.$$ Given $g\in C[0,1]$, let $H:[0,1]\times[0,1]\to\mathbb{C}$ be the linear homotopy from $\exp(f)$ to $g$ (that is, $H(x,t)=(1-t)\exp(f(x))+tg(x)$). Note that if $\|g-\exp(f)\|<C$, then the image of $H$ is contained in $\mathbb{C}\setminus\{0\}$. It follows that the homotopy $H$ lifts to the universal cover $\exp:\mathbb{C}\to\mathbb{C}\setminus\{0\}$. More precisely, there is a unique map $\tilde{H}:[0,1]\times[0,1]\to\mathbb{C}$ such that ${\exp}\circ\tilde{H}=H$ and $\tilde{H}(x,0)=f(x)$ for all $x$. Explicitly, we can compute $\tilde{H}$ using a contour integral: $$\tilde{H}(x,t)=f(x)+\int_{\exp(f(x))}^{H(x,t)}\frac{dz}{z}$$ where the integral is along the linear path. Taking $\tilde{g}(x)=\tilde{H}(x,1)$, we then have $\exp(\tilde{g})=g$.
To conclude that $\exp$ is open on $C[0,1]$, we now just need to show that we can ensure $\tilde{g}$ is arbitrarily close to $f$ by making $g$ sufficiently close to $\exp(f)$. This follows from the fact that $$\tilde{g}(x)-f(x)=\int_{\exp(f(x))}^{g(x)}\frac{dz}{z}$$ where the integral is computed along the linear path. If $\delta<C/2$ and $\|g-\exp(f)\|<\delta$, then $\frac{1}{z}$ is bounded by $2/C$ along this entire path and the path has length $<\delta$, and so $|\tilde{g}(x)-f(x)|<\frac{2\delta}{C}$. This goes to $0$ as $\delta$ goes to $0$, and it follows that $\exp$ is open.
|
How to find the limit of this sequence $u_n$? (defined by recurrence) $\left(u_n\right)$ is a sequence defined by recurrence as follows:
$
\begin{cases}
u_1=\displaystyle\frac{8}{3}\\
u_{n+1}=\displaystyle\frac{16}{8-u_n}, \forall n\in \mathbb{N}
\end{cases}
$
The first part of this question is to show that $u_n<4, \forall n\in \mathbb{N}$ which I have done by induction, the second part is to show that the sequence is monotonically increasing and I have done that too.
The third part is to show that $\left(u_n\right)$ converges, and that is easy with the previous two parts done, but it also asks to determine the limit, and I'm not sure it's obvious that the limit is 4. I've done it computationally and verified it should be so, but I don't find it immediate, just because we have shown $$u_n<4, \forall n\in \mathbb{N},$$ that this value should be considered the limit. Why not $3.9$?
Is there an analytic way of determining the value of this limit?
| As $n \rightarrow \infty$, $u_n$ and $u_{n+1}$ have the same limit $L$ (which exists because the sequence is increasing and bounded above), so passing to the limit in the recurrence gives
$$L=\lim_{n \to \infty}\displaystyle\frac{16}{8-u_n}=\frac{16}{8-L} \implies L(8-L)=16 \implies (L-4)^2=0 \implies L=4$$
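To see the convergence concretely (my own sketch, not needed for the proof): setting $v_n=\frac1{u_n-4}$ turns the recurrence into $v_{n+1}=v_n-\frac14$, which gives the closed form $u_n=4-\frac{4}{n+2}$, so the approach to $4$ is quite slow.
```python
u = 8 / 3                  # u_1
for n in range(1, 60):
    u = 16 / (8 - u)       # u_{n+1}
    assert abs(u - (4 - 4 / (n + 3))) < 1e-12   # agrees with u_{n+1} = 4 - 4/((n+1)+2)
print(u)                   # about 3.935..., slowly climbing towards 4
```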
|
What is wrong with this way of solving trig equations? Let's suppose I have to find the values of $\theta$ and $\alpha$ that satisfy these equations:
* $\cos^3 \theta = \cos \theta$
* $3\tan^3 \alpha = \tan \alpha$
on the interval $[0; 2 \pi]$.
If I try to solve, for instance, the first equation like this:
$$\cos^3 \theta = \cos \theta $$
$$\cos^2 \theta \cdot \cos \theta = \cos \theta$$
$$\cos^2 \theta = \cos \theta \div \cos \theta $$
$$\cos^2 \theta = 1$$
$$\cos \theta = \pm \sqrt {1}$$
$$\cos \theta = \pm 1$$
I end up getting only:
$\theta = 0 ; \pi ; 2 \pi$
But I know that $\pi /2$ and $3\pi /2$ would also make the equation true since $\cos^3 (\pi /2) = \cos (\pi /2) = 0$ and $\cos^3 ( 3 \pi /2) = \cos (3 \pi /2) = 0$.
The same problem arises when I try to solve the second equation:
$$3\tan^3 \alpha = \tan \alpha$$
$$3\tan^2 \alpha \cdot \tan \alpha = \tan \alpha$$
$$3\tan^2 \alpha = \tan \alpha \div \tan \alpha$$
$$3\tan^2 \alpha =1$$
$$\tan^2 \alpha =1/3$$
$$\tan \alpha =\pm \sqrt{1/3}$$
$$\tan \alpha =\pm 1/\sqrt{3}$$
The values of $\alpha$ that make the tangent equal to $\pm 1/\sqrt{3}$ between 0 and $2 \pi$ are only these:
$ \alpha = \pi / 6 ; 5 \pi /6 ; 7 \pi / 6 ; 11 \pi /6$
However, I should also find $0$ and $\pi$ because $3\tan^3 (0) = \tan (0) = 3\tan^3 (\pi) = \tan(\pi) = 0$.
Something similar happens when I'm looking for the local minima and maxima of this function:
$$ f(x) = \sin^2(x) + \cos(x)$$
on the interval $[0; 2 \pi]$
$f'(x) = 2 \sin(x) \cos(x) - \sin(x)$
$ 0 = 2 \sin(x) \cos(x) - \sin(x)$
$ \sin(x) = 2 \sin(x) \cos(x)$
$ \sin(x) / \sin(x) = 2 \cos(x)$
$1 = 2 \cos(x)$
$1/2 = \cos(x)$
$x = \pi / 3 ; 5 \pi /3$
And again, plugging $x = 0$ or $x= \pi$ or $x = 2 \pi$ also make the derivative 0.
I've noticed that in all of these 3 cases I have the same trigonometric function on both sides of the equation and I'm dividing both sides by that function. This is making one side of the equation equal to 1 at some point. What is wrong or incomplete with this method? Why am I missing some results when I do this?
| The equation $x^3=x$ has three solutions: you can write it as
$$
x^3-x=0
$$
so
$$
x(x-1)(x+1)=0
$$
and the roots are $0$, $1$ and $-1$.
You cannot “divide by $x$”, which is the mistake you make when you “divide by $\cos\theta$”.
Thus your equation becomes
$$
\cos\theta=0
\quad\text{or}\quad
\cos\theta=1
\quad\text{or}\quad
\cos\theta=-1
$$
The second equation is similar:
$$
\tan\alpha=0
\quad\text{or}\quad
\tan\alpha=\frac{1}{\sqrt{3}}
\quad\text{or}\quad
\tan\alpha=-\frac{1}{\sqrt{3}}
$$
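If you want to check this with a computer algebra system, a small sketch with SymPy (my own code): factoring, rather than dividing, keeps the $\cos\theta=0$ case.
```python
import numpy as np
import sympy as sp

theta = sp.symbols('theta')
expr = sp.cos(theta) ** 3 - sp.cos(theta)

print(sp.factor(expr))     # factors as cos(theta)*(cos(theta) - 1)*(cos(theta) + 1), up to ordering/sign

# The solutions lost by dividing through by cos(theta):
for t in (np.pi / 2, 3 * np.pi / 2):
    print(t, np.cos(t) ** 3 - np.cos(t))   # both residuals are numerically zero
```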
|
If symmetric matrix $A\geq0$, $P>0$, does $APA\leq \lambda_{max}^2(A) P$ always hold? If symmetric matrix $A\geq0$, $P>0$, can $APA\leq \lambda_{max}^2(A) P$ always hold?
Notation:
$\lambda_{max}(A)$ means matrix $A$'s largest eigenvalue.
$A\geq0$ means matrix A is a positive semi-definite matrix.
$P>0$ means matrix P is a positive definite matrix.
| No, it is not always the case. For example
$$A=\left(\begin{array}{cc}1&1\\1&1\end{array}\right), P=\left(\begin{array}{cc}100&0\\0&1\end{array}\right)$$
Then
$$APA = \left(\begin{array}{cc}101&101\\101&101\end{array}\right)\not\le 4P$$
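A quick numerical confirmation of the counterexample (my own sketch): $\lambda_{max}(A)=2$, and $4P-APA$ has a negative eigenvalue, so $APA\le 4P$ fails.
```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
P = np.diag([100.0, 1.0])

lam_max = np.linalg.eigvalsh(A).max()    # largest eigenvalue of A, here 2
M = lam_max ** 2 * P - A @ P @ A         # would have to be positive semi-definite for the claim

print(A @ P @ A)                         # [[101, 101], [101, 101]]
print(np.linalg.eigvalsh(M))             # one eigenvalue is negative, so APA <= 4P fails
```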
|
Sum of Gaussian Curves I have two Gaussian curves, and I would like to sum them to have a curve with two bells, to fit some bimodal histogram.
Is doing N(m1 + m2; sig1 + sig2) the good way or should I do something else ?
For instance, I would like to obtain something like the green curve :
Gaussian curves
Thanks for the help !
EDIT: to fit my histogram curve, I have developed the EM algorithm, giving me back the best means / sigma for the laws, and also laws coefficients, but I don't really know how to use these coefficients to build my mixed curve.
| $N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)$ corresponds to the curve $\displaystyle x \mapsto \frac 1 {\sqrt{2\pi}\sqrt{\sigma_1^2+\sigma_2^2}} e^{-(x-(\mu_1+\mu_2))^2/(2(\sigma_1^2+\sigma_2^2))},$ and that has just one "bell", centered at $\mu_1+\mu_2.$
What you need is
$$
w_1 \frac 1 {\sigma_1\sqrt{2\pi}} e^{-(x-\mu_1)^2/(2\sigma_1^2)} + w_2 \frac 1 {\sigma_2\sqrt{2\pi}} e^{-(x-\mu_2)^2/(2\sigma_2^2)}
$$
where $w_1$ and $w_2$ are weights, i.e. positive numbers whose sum is $1$.
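Regarding the edit about using the EM output: here is a minimal sketch (my own code, with made-up means, sigmas, and coefficients standing in for your fitted values) of how the weighted mixture density is assembled and evaluated.
```python
import numpy as np

def mixture_pdf(x, means, sigmas, weights):
    """Weighted sum of Gaussian densities; the weights are the mixture coefficients from EM."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for mu, sigma, w in zip(means, sigmas, weights):
        total += w * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return total

# Hypothetical two-component fit; the weights must sum to 1.
xs = np.linspace(-5, 10, 2000)
ys = mixture_pdf(xs, means=[0.0, 5.0], sigmas=[1.0, 1.5], weights=[0.4, 0.6])
print((ys * (xs[1] - xs[0])).sum())   # about 1, since the mixture is itself a probability density
```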
|
Two-sided ideal $I$ in exterior algebra $T(V)/I$. I have a confusion regarding two definitions of the two-sided ideal in exterior algebra.
Def 1)
In one definition, the exterior algebra $\Lambda(V)$ is defined as $T(V)/I$, where $I$ is the two-sided ideal generated by the graded commutators $$[a,b]=ab-(-1)^{|a||b|}ba$$ for $a\in T(V)_{|a|}$ and $b\in T(V)_{|b|}$.
Def 2) In another definition, $J_k$ is defined as the vector subspace of $V^{\otimes k}$ spanned by the $k$-fold tensors $$\dots\otimes \alpha\otimes\dots\otimes\alpha\otimes\dots$$ where $\alpha$ appears in the $i$th and $j$th position, $i<j$, $\alpha\in V$. Then $J$ is defined as $\bigoplus_{k=0}^\infty J_k$. And again, $\Lambda(V)=T(V)/J$.
My issue is I can't see how the $I$ and $J$ (in the two definitions respectively) are related. The $I$ and $J$ ought to be the same (aren't they?) but that is not so clear to me.
Thanks for any help.
| Since for all $a,b\in V$ we have $$(a+b)\otimes(a+b)=a\otimes a+a\otimes b+b\otimes a+b\otimes b\in J$$ and since also $(a+b)\otimes(a+b),a\otimes a,b\otimes b\in J$, we deduce that all $a\otimes b+b\otimes a\in J$, so that $I\subset J$ and we have a surjective algebra map $f:T(V)/I\to T(V)/J=\Lambda V $.
Over a field of characteristic $\neq 2$ we actually have $J=I$: indeed for all $a\in V$ we have $a\otimes a+a\otimes a=2(a\otimes a)\in I$ and thus also $a\otimes a\in I$ for all $a\in V$, so that $J\subset I$ and since we knew that $I\subset J$ we have equality.
Thus our map $f$ is an isomorphism of algebras in characteristic $\neq 2$.
In characteristic $2$ however the algebra $T(V)/I$ is none other than the symmetric algebra $Sym(V)$, and $f:Sym (V)\to \Lambda V $ cannot be an isomorphism because (for $V$ nonzero and finite-dimensional) $Sym(V)$ is infinite dimensional whereas $\Lambda(V)$ is finite dimensional.
|
Range of gradient map for coercive function I am given a differentiable (and therefore continuous) function $f: \mathbf{E} \to \mathbb{R}$ which satisfies the following growth condition:
$$
\lim_{||x|| \to \infty} \frac{f(x)}{||x||} \to +\infty
$$
Just for the sake of notation, $\mathbf{E}$ is some arbitrary Euclidean space.
The point is to prove that the gradient map of $f$ has range $\mathbf{E}$. The exercise has a hint: "try minimizing the function $f(x) - \langle a, x \rangle $"
Since $f$ is coercive, we know that it has bounded level sets
$$
\mathcal{L}_c = \{ x | f(x) \leq c \}
$$
for otherwise we would be able to find a sequence $\{x^n \}_n$ with $||x_n|| \to \infty$ so that $f(x_n) \leq c$ which contradicts the given growth condition. Also $f$ is continuous, so its level sets are also closed. Therefore $f$ has compact level sets which implies that it has a (global) minimizer.
Question: how do I prove that $g(x) = f(x) - a^T x$ also has a global minimizer for every choice $a \in \mathbf{E}$? This would suffice to prove that the gradient map of $f$ is $\mathbf{E}$, since the first-order optimality condition for $g$ would imply $\nabla f = a$.
Edit: An attempt to show that $g$'s level sets are compact: closedness follows from continuity. As for boundedness:
$$
\begin{align*}
\mathcal{L}^{g}_c &= \{x: f(x) - a^Tx \leq c \} \\
&= \bigcup_{\substack{c_1, c_2 :\\ c_1 + c_2 = c}} {\underbrace{\{ x: f(x) \leq c_1 \}}_{\mbox{bounded}} \cap \{x: -a^T x \leq c_2 \}}
\end{align*}
$$
The problem is that the above is an infinite union of sets which are bounded (every term is an intersection of a bounded with an unbounded set), for which I am not aware of any way to show boundedness.
| The assumption
$$\lim_{\|x\| \to \infty} \frac{f(x)}{\|x\|} = +\infty$$
implies that the function $g(x) = f(x) - a^T x$ satisfies
$$\lim_{\|x\| \to \infty} \frac{g(x)}{\|x\|} = +\infty$$
as the contribution of $a^T x/\|x\|$ is bounded.
Hence $g(x)\to +\infty$ as $\|x\|\to \infty$. A continuous function with this property attains a global minimum. Indeed, if $\{x_n\}$ is a sequence such that $g(x_n)\to \inf g$, then $\{x_n\}$ must be bounded and therefore has a convergent subsequence; by continuity, its limit is a global minimizer.
|
Zeros of complex function Consider the function
$f(z)=e^{z}+\varepsilon_1e^{\varepsilon_1 z}+\varepsilon_2e^{\varepsilon_2 z}$ of a complex variable $z=x+i y$, where
$\varepsilon_1=-\frac{1}{2}+i\frac{\sqrt{3}}{2}$, $\varepsilon_2=-\frac{1}{2}-i\frac{\sqrt{3}}{2}$.
Numerical calculations show that all zeros of the function $f(z)$ are located on the lines $y=0$, $y=\pm \sqrt{3} x$.
Are there any ideas how to prove it theoretically?
| Since $f(\varepsilon_1 z) = \varepsilon_2f(z)$, it is enough to study $f$ on a "third" of the complex plane, for example the infinite cone centered on the negative real axis with an angle of $2\pi/3$.
There, $|\exp(z)|$ will quickly get negligible compared to the other two exponentials :
$f(z) = \exp(z) + \exp(2i\pi/3+z\frac{-1+\sqrt {-3}}2) + \exp(-2i\pi/3+z\frac{-1-\sqrt {-3}}2) \\
= \exp(z) + 2\exp(-z/2)\cos(2\pi/3+z\sqrt3/2)$
If, for $k \in \Bbb N$ (including $0$) you pick $z_k = -(2/\sqrt 3)(k+2/3)\pi $, so that $z_k\sqrt 3 / 2 + 2\pi/3 = -k\pi$, you get
$f(z_k) = \exp(z_k) + 2(-1)^k\exp(-z_k/2)$.
Since $z_k$ is negative, the first term is smaller than the second (quite an understatement), so this has the sign of $(-1)^k$.
Then by the Intermediate Value Theorem, you get a sequence of zeroes on the negative real axis, one on each interval $(z_{k+1} ; z_k)$ (it should be exponentially close to $(z_{k+1}+z_k)/2$).
Additionally, by expanding $f$ as a power series around zero, you get that there is a double zero at $0$.
To show that there aren't any other zero, we use the argument principle on $f$, or rather, on $g(z) = 2\exp(-z/2)\cos(2\pi/3+z\sqrt 3/2)$.
If $z$ is on the vertical line $z_k+iy$, a symmetry argument shows that the cosine $\cos(-k\pi+iy\sqrt3/2)$ stays real (but gets larger and larger as $y$ grows), and so $|g(z_k+iy)| \ge |g(z_k)| = 2\exp(-z_k/2)$, and $arg(g(z_k+iy)) = -y/2 + \arg(g(z_k))$.
Therefore, when $z$ moves from $z_k-i\sqrt 3 z_k$ down to $z_k+ i\sqrt 3 z_k$, the argument of $g(z)$ increases by $-\sqrt 3 z_k = 2(k+2/3)\pi$.
Because $|f-g| <<< |g|$, the argument of $f(z)$ does the same thing up to an exponentially small error. In fact, since $f$ is a positive real on the positive real axis, with the equation from earlier we know exactly what the argument is at the endpoints modulo $2\pi$, and therefore the argument of $f(z)$ also increases by exactly $2(k+2/3)\pi$.
Now we use the symmetry $f(\varepsilon_1z) = \varepsilon_2f(z)$ to get that the variation of the argument on each of the two rotated line segments is also exactly $2(k+2/3)\pi$, and so the variation on the completed triangle is three times that, $(3k+2)(2\pi)$.
From this we get that the number of zeroes of $f$ inside each triangle is $3k+2$, and we are done.
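A small numerical sanity check of the sign changes used above (my own sketch): $f$ is real on the real axis, and $f(z_k)$ alternates in sign, confirming a zero in each interval $(z_{k+1},z_k)$.
```python
import numpy as np

eps1 = (-1 + 1j * np.sqrt(3)) / 2
eps2 = (-1 - 1j * np.sqrt(3)) / 2
f = lambda z: np.exp(z) + eps1 * np.exp(eps1 * z) + eps2 * np.exp(eps2 * z)

z = lambda k: -(2 / np.sqrt(3)) * (k + 2 / 3) * np.pi   # the points z_k used above

for k in range(6):
    v = f(z(k))
    print(k, np.real(v), abs(np.imag(v)))   # real part alternates in sign, imaginary part ~ 0
```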
|