The Library.
Mapping the analytical architecture of mathematics, from foundational axioms to advanced research frontiers.
Advanced Probability Theory
4 Institutional Proofs
Advanced
Borel-Cantelli
Infinite predictors.
Enter Proof →
Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $\{A_n\}_{n=1}^{\infty}$ be a sequence of events.
First Borel-Cantelli Lemma: If $\sum_{n=1}^{\infty} P(A_n) < \infty$, then $P(\limsup_{n\to\infty} A_n) = 0$.
Second Borel-Cantelli Lemma: If the events $\{A_n\}_{n=1}^{\infty}$ are independent and $\sum_{n=1}^{\infty} P(A_n) = \infty$, then $P(\limsup_{n\to\infty} A_n) = 1$.
Advanced
Proof: Borel-Cantelli Lemma 2 (Independence, Divergent Sum)
The second Borel-Cantelli Lemma for independent events with a divergent sum of probabilities.
Enter Proof →
$\sum P(A_n) = \infty \text{ and independence} \implies P(A_n \text{ i.o.}) = 1$
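A minimal numerical sketch of the second lemma (the events, seed, and sample size are illustrative assumptions, not from the proof): take independent events $A_n = \{U_n < 1/n\}$ with $P(A_n) = 1/n$, whose probabilities form a divergent harmonic series, and watch the running count of occurrences keep growing as the lemma predicts.

```python
import numpy as np

# Independent events A_n with P(A_n) = 1/n: sum P(A_n) diverges, so the
# second Borel-Cantelli lemma says A_n occurs infinitely often a.s.
# The running count of occurrences should track the harmonic sum ~ log N.
rng = np.random.default_rng(0)
N = 100_000
n = np.arange(1, N + 1)
occurred = rng.random(N) < 1.0 / n     # simulate each A_n independently

print("occurrences up to N:", occurred.sum())
print("harmonic sum (expected count):", (1.0 / n).sum())
```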
Advanced Stochastic Processes
55 Institutional Proofs
Advanced
Solving the SDE: Unveiling the Log-Normal Distribution for Geometric Brownian Motion
Exploring the cinematic intuition of Solving the SDE: Unveiling the Log-Normal Distribution for Geometric Brownian Motion.
Enter Proof →
Let $S_t$ be a stochastic process satisfying the Geometric Brownian Motion (GBM) SDE given by $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$, where $\mu \in \mathbb{R}$ is the drift, $\sigma > 0$ is the volatility, and $W_t$ is a standard Wiener process. Given an initial condition $S_0 > 0$, the unique solution is:
$$S_t = S_0 \exp\!\left(\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma W_t\right)$$
Consequently, the random variable $\ln(S_t)$ is normally distributed with mean $\ln(S_0) + \left(\mu - \tfrac{1}{2}\sigma^2\right)t$ and variance $\sigma^2 t$.
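A quick Monte Carlo check of the stated log-normality (all parameter values are illustrative assumptions): simulate $W_t \sim N(0, t)$ directly, form $\ln S_t$ from the closed-form solution, and compare sample moments with the theoretical ones.

```python
import numpy as np

# Verify: ln(S_t) ~ Normal(ln S0 + (mu - sigma^2/2) t, sigma^2 t).
rng = np.random.default_rng(1)
mu, sigma, S0, t = 0.05, 0.2, 100.0, 2.0   # illustrative parameters
W_t = rng.normal(0.0, np.sqrt(t), size=200_000)
log_S = np.log(S0) + (mu - 0.5 * sigma**2) * t + sigma * W_t

print(log_S.mean(), np.log(S0) + (mu - 0.5 * sigma**2) * t)  # means agree
print(log_S.var(),  sigma**2 * t)                            # variances agree
```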
Advanced
Ito's Lemma: The Cornerstone of Stochastic Calculus
Exploring the cinematic intuition of Ito's Lemma: The Cornerstone of Stochastic Calculus.
Enter Proof →
Let $X_t$ be an Ito process satisfying the stochastic differential equation $dX_t = \mu_t\,dt + \sigma_t\,dW_t$, where $W_t$ is a standard Wiener process. If $f(t, x)$ is a scalar-valued function that is $C^{1,2}$ (continuously differentiable in $t$ and twice continuously differentiable in $x$), then the differential of the stochastic process $Y_t = f(t, X_t)$ is given by:
$$df(t, X_t) = \left(\frac{\partial f}{\partial t} + \mu_t \frac{\partial f}{\partial x} + \frac{1}{2}\sigma_t^2 \frac{\partial^2 f}{\partial x^2}\right)dt + \sigma_t \frac{\partial f}{\partial x}\,dW_t$$
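As a standard worked instance (not part of the card itself), applying the lemma with $f(t, x) = \ln x$ to the GBM SDE $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ from the previous entry, where $\partial f/\partial t = 0$, $\partial f/\partial x = 1/x$, and $\partial^2 f/\partial x^2 = -1/x^2$, gives:
$$d\ln S_t = \left(\mu S_t \cdot \frac{1}{S_t} - \frac{1}{2}\sigma^2 S_t^2 \cdot \frac{1}{S_t^2}\right)dt + \sigma S_t \cdot \frac{1}{S_t}\,dW_t = \left(\mu - \tfrac{1}{2}\sigma^2\right)dt + \sigma\,dW_t$$
Integrating from $0$ to $t$ recovers the log-normal solution stated above.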
Algebra
1 Institutional Proof
Analytical Mechanics
3 Institutional Proofs
Applied Statistics
58 Institutional Proofs
Intermediate
Proof of Chebyshev's Inequality
Exploring the cinematic intuition of Proof of Chebyshev's Inequality.
Enter Proof →
Let $X$ be a random variable with finite expected value $\mu = E[X]$ and finite non-zero variance $\sigma^2 = \operatorname{Var}(X) = E[(X - \mu)^2]$. For any $k > 0$, Chebyshev's Inequality states that the probability that $X$ deviates from its mean by at least $k$ standard deviations is at most $1/k^2$:
$$P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}$$
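A short empirical illustration (the distribution choice and sample size are illustrative assumptions): for an exponential sample, the observed tail frequency never exceeds the Chebyshev bound.

```python
import numpy as np

# Compare observed P(|X - mu| >= k sigma) with the bound 1/k^2.
rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=1_000_000)   # mean 1, variance 1
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    observed = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}: observed {observed:.4f} <= bound {1 / k**2:.4f}")
```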
Intermediate
Derivation of the Mean and Variance of the Binomial Distribution
Exploring the cinematic intuition of Derivation of the Mean and Variance of the Binomial Distribution.
Enter Proof →
Let $X$ be a discrete random variable following a Binomial distribution, denoted as $X \sim B(n, p)$, where $n \in \mathbb{N}$ and $p \in [0, 1]$. The Probability Mass Function is given by $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$ for $k = 0, 1, \ldots, n$. The expected value and variance are:
$$E[X] = np, \qquad \operatorname{Var}(X) = np(1-p)$$
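A direct check by summing over the PMF (the values of $n$ and $p$ are arbitrary illustrative choices):

```python
from math import comb

# E[X] and Var(X) computed directly from the binomial PMF.
n, p = 12, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean)**2 * pk for k, pk in enumerate(pmf))

print(mean, n * p)             # both 3.6
print(var, n * p * (1 - p))    # both 2.52
```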
Biometry
5 Institutional Proofs
Calculus
23 Institutional Proofs
Foundational
The Definition of a Limit
Visualizing limits.
Enter Proof →
For a function $f: A \to \mathbb{R}$ where $A \subseteq \mathbb{R}$, and a point $c$ that is a limit point of $A$, we say that the limit of $f(x)$ as $x$ approaches $c$ is $L$, denoted by $\lim_{x \to c} f(x) = L$, if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that if $0 < |x - c| < \delta$, then
$$|f(x) - L| < \varepsilon$$
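A one-line worked instance (a standard example, not from the card): to show $\lim_{x \to 2} 3x = 6$, take $\delta = \varepsilon / 3$, since
$$0 < |x - 2| < \delta \implies |3x - 6| = 3|x - 2| < 3\delta = \varepsilon.$$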
Foundational
The Power Rule & Slope
Seeing the derivative.
Enter Proof →
$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$
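A worked instance of the difference quotient (standard, not from the card) for $f(x) = x^2$:
$$f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0} \frac{2xh + h^2}{h} = \lim_{h \to 0} (2x + h) = 2x,$$
matching the power rule $\frac{d}{dx} x^n = n x^{n-1}$ with $n = 2$.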
Chaos Theory
2 Institutional Proofs
Advanced
The Butterfly Effect
Initial sensitivity.
Enter Proof →
Let $(X, d)$ be a metric space, and let $f: X \to X$ be a continuous map representing a discrete-time dynamical system. The system exhibits Sensitive Dependence on Initial Conditions (often referred to as the Butterfly Effect) if there exists a positive Lyapunov exponent $\lambda > 0$ such that for a typical initial condition $x_0 \in X$ and for any infinitesimally small perturbation $\delta x_0$ (where $d(x_0, x_0 + \delta x_0)$ is very small), the distance between the evolved trajectories $f^n(x_0)$ and $f^n(x_0 + \delta x_0)$ grows approximately exponentially with the number of iterations $n$ as:
$$d(f^n(x_0), f^n(x_0 + \delta x_0)) \approx d(x_0, x_0 + \delta x_0)\, e^{\lambda n}$$
This approximation holds for sufficiently small $d(x_0, x_0 + \delta x_0)$ and for a range of $n$ before the trajectories become decorrelated or constrained by the phase space.
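A compact demonstration on the logistic map $x \mapsto 4x(1-x)$, whose Lyapunov exponent is $\ln 2$ (a classical example; the initial conditions and perturbation size are illustrative assumptions):

```python
# Two nearby initial conditions under the chaotic logistic map.
x, y = 0.4, 0.4 + 1e-12
for n in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 0:
        print(f"n={n:2d}  separation={abs(x - y):.3e}")
# The separation grows roughly like 1e-12 * e^(n ln 2) = 1e-12 * 2^n,
# until it saturates at the size of the interval [0, 1] near n ~ 40.
```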
Advanced
Fractals & Self-Similarity
Infinite complexity.
Enter Proof →
$D = \frac{\log N}{\log S}$
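Reading $N$ as the number of self-similar copies and $S$ as the linear scaling factor (the standard similarity-dimension convention, stated here as an assumption about the card's notation), the Sierpinski triangle, which splits into $N = 3$ copies at scale $S = 2$, has
$$D = \frac{\log 3}{\log 2} \approx 1.585,$$
a non-integer dimension between a curve and a surface.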
Complex Variables
1 Institutional Proof
Computational Fluid Dynamics
1 Institutional Proof
Control Theory
1 Institutional Proof
Differential Equations
5 Institutional Proofs
Differential Geometry
2 Institutional Proofs
Discrete Mathematics
5 Institutional Proofs
Financial Mathematics
2 Institutional Proofs
Fluid Mechanics
2 Institutional Proofs
Advanced
Bernoulli's Law
Unravel Bernoulli's Law in Fluid Mechanics. Explore its rigorous derivation, cinematic intuition, and crucial applications for BSc Mathematics and Statistics students.
Enter Proof →
For an incompressible, inviscid fluid in steady, irrotational flow along a streamline, the sum of its static pressure, dynamic pressure, and hydrostatic pressure remains constant. This is expressed as:
$$P + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant}$$
where $P$ is the static pressure of the fluid, $\rho$ is the fluid density, $v$ is the fluid velocity, $g$ is the acceleration due to gravity, and $h$ is the elevation above a reference datum.
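A classic application sketch (the numbers are illustrative assumptions): applying the law between the free surface of an open tank and a small outlet at depth $h$, with both points at atmospheric pressure and negligible surface velocity, gives Torricelli's outflow speed $v = \sqrt{2gh}$.

```python
from math import sqrt

# Torricelli's result v = sqrt(2 g h), a direct consequence of
# Bernoulli's law between the tank surface and the outlet.
g = 9.81                      # m/s^2
for h in (0.5, 2.0, 5.0):     # outlet depth in metres
    print(f"h = {h:3.1f} m  ->  v = {sqrt(2 * g * h):.2f} m/s")
```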
Advanced
Vorticity Dynamics
Explore Vorticity Dynamics: the mathematical heart of fluid rotation. Understand vortex stretching, baroclinic generation, and viscous effects in fluid flows. Essential for BSc Math & Stats.
Enter Proof →
Let $\omega = \nabla \times u$ be the vorticity vector for a fluid flow $u(x, t)$. The dynamics of $\omega$ are governed by the Vorticity Equation, which, for a general compressible, viscous fluid with density $\rho$, pressure $p$, and viscous stress tensor $\tau$, states:
$$\frac{D\omega}{Dt} = \underbrace{(\omega \cdot \nabla)u}_{\text{Vortex Stretching and Tilting}} + \underbrace{\frac{1}{\rho^2}\,\nabla\rho \times \nabla p}_{\text{Baroclinic Torque}} + \underbrace{\nabla \times \left(\frac{1}{\rho}\,\nabla \cdot \tau\right)}_{\text{Viscous Diffusion and Generation}}$$
where $\frac{D}{Dt} = \frac{\partial}{\partial t} + u \cdot \nabla$ is the material derivative. In the case of an incompressible, inviscid fluid with conservative body forces, the equation simplifies to $\frac{D\omega}{Dt} = (\omega \cdot \nabla)u$.

Fundamentals of Optimization
27 Institutional Proofs
Intermediate
Weierstrass Extreme Value Theorem: Guaranteeing Existence of Optima
Exploring the cinematic intuition of Weierstrass Extreme Value Theorem: Guaranteeing Existence of Optima.
Enter Proof →
Let $f$ be a real-valued continuous function defined on a compact set $K$ in $\mathbb{R}^n$. Then $f$ attains both a global maximum and a global minimum on $K$. That is, there exist points $c$ and $d$ in $K$ such that for all $x \in K$,
$$f(c) \geq f(x) \quad \text{and} \quad f(d) \leq f(x)$$
Intermediate
Local Optima are Global Optima for Convex Functions
Exploring the cinematic intuition of Local Optima are Global Optima for Convex Functions.
Enter Proof →
Let $f: S \to \mathbb{R}$ be a convex function defined on a convex set $S \subseteq \mathbb{R}^n$. If $x^* \in S$ is a local minimum of $f$, then $x^*$ is a global minimum of $f$. That is, for all $x \in S$:
$$f(x^*) \leq f(x)$$
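The key step of the standard argument, sketched here for orientation: if some $x \in S$ had $f(x) < f(x^*)$, then for every $\lambda \in (0, 1)$ convexity would give
$$f\big((1 - \lambda)x^* + \lambda x\big) \leq (1 - \lambda) f(x^*) + \lambda f(x) < f(x^*),$$
and taking $\lambda$ arbitrarily small produces points arbitrarily close to $x^*$ with strictly smaller values, contradicting local minimality.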
Game Theory
2 Institutional Proofs
Group Theory
2 Institutional Proofs
Information Technology
17 Institutional Proofs
Linear Mathematics
8 Institutional Proofs
Intermediate
Rank-Nullity Theorem
Conservation of dimensions.
Enter Proof →
Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be a linear transformation. Then the dimension of the domain $V$ is equal to the sum of the dimension of the image (rank) of $T$ and the dimension of the kernel (nullity) of $T$. Mathematically:
$$\dim(V) = \operatorname{rank}(T) + \operatorname{nullity}(T)$$
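A numerical sanity check (the matrix is an arbitrary illustrative example, with its third row the sum of the first two):

```python
import numpy as np
from scipy.linalg import null_space

# T: R^5 -> R^3 represented by a 3x5 matrix of rank 2.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 1.],
              [1., 3., 1., 1., 4.]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]      # dimension of the kernel
print(rank, nullity, rank + nullity)  # 2 3 5 = dim(R^5)
```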
Foundational
Orthogonal Projections
Shadow geometry.
Enter Proof →
$\operatorname{proj}_V(x)$
Linear and Integer Programming
29 Institutional Proofs
Foundational
The Convexity of the Feasible Region of a Linear Program
Exploring the cinematic intuition of The Convexity of the Feasible Region of a Linear Program.
Enter Proof →
Let $S$ be the feasible region of a Linear Program (LP) defined by a system of linear inequalities and equalities, specifically $S = \{x \in \mathbb{R}^n \mid Ax \leq b,\ x \geq 0\}$, where $A$ is an $m \times n$ matrix, $x \in \mathbb{R}^n$, and $b \in \mathbb{R}^m$. The feasible region $S$ is a convex set. This means that if any two points $x_1$ and $x_2$ belong to $S$, then every point on the line segment connecting them also belongs to $S$. Formally, for any $x_1 \in S$, $x_2 \in S$, and any scalar $\lambda \in [0, 1]$, the convex combination
$$x_\lambda = \lambda x_1 + (1 - \lambda) x_2$$
must also satisfy $x_\lambda \in S$.
Intermediate
The Fundamental Theorem of Linear Programming: Existence of an Optimal Extreme Point Solution
Exploring the cinematic intuition of The Fundamental Theorem of Linear Programming: Existence of an Optimal Extreme Point Solution.
Enter Proof →
Consider a linear programming problem (LP) seeking to optimize an objective function $f(x) = c^T x$ for $x \in \mathbb{R}^n$ subject to a set of linear constraints, forming a feasible region $S$. The set $S$ is assumed to be a non-empty, convex polyhedron in $\mathbb{R}^n$. The Fundamental Theorem of Linear Programming states:
$$\text{If an optimal solution exists for the LP over } S\text{, then at least one optimal solution is an extreme point (vertex) of } S.$$
Mathematical Discourse
6 Institutional Proofs
Number Theory
5 Institutional Proofs
Numerical Analysis
5 Institutional Proofs
Operations Research
4 Institutional Proofs
Advanced
The Simplex Algorithm: A Visual Intuition
Mastering the Simplex algorithm through a geometric journey across the vertices of a high-dimensional feasible region.
Enter Proof →
$\max\; c^T x$
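A small worked LP (the coefficients are a textbook-style example chosen for illustration), solved with SciPy's linprog, whose HiGHS backend includes a simplex solver; linprog minimizes, so the objective is negated:

```python
import numpy as np
from scipy.optimize import linprog

# max 3x1 + 5x2  s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)   # optimum at the vertex (2, 6), value 36
```

Consistent with the Fundamental Theorem above, the optimum lands on a vertex of the feasible polygon.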
Advanced
Dynamic Programming
Master Dynamic Programming: A rigorous dive into Bellman's Principle, state transitions, and optimal control for BSc Mathematics and Statistics students.
Enter Proof →
Given a system evolving through $N$ stages, let $s_k$ be the state at stage $k$, and $x_k$ be the decision made at stage $k$. Let $f_k(s_k, x_k)$ be the immediate cost incurred at stage $k$, and $T_k(s_k, x_k)$ be the state transition function such that $s_{k+1} = T_k(s_k, x_k)$. The optimal value function $V_k(s_k)$, representing the minimum total cost from stage $k$ to $N$ starting from state $s_k$, is governed by **Bellman's Principle of Optimality** and is given by the recursive relation:
$$V_k(s_k) = \min_{x_k \in X_k(s_k)} \left\{ f_k(s_k, x_k) + V_{k+1}\big(T_k(s_k, x_k)\big) \right\}, \qquad k = N, N-1, \ldots, 1$$
with the terminal condition $V_{N+1}(s_{N+1}) = 0$ (or some other specified cost for the final state).
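A minimal backward-recursion sketch of this relation on a toy problem (every name and number below is an illustrative assumption, not part of the card): three states, cost $f_k(s, x) = |s - x| + x$ for moving to state $x$, and transition $T_k(s, x) = x$.

```python
# Backward recursion: V_k(s) = min_x { f_k(s,x) + V_{k+1}(T_k(s,x)) }.
N = 4
states = range(3)

V = {s: 0.0 for s in states}          # terminal condition V_{N+1} = 0
policy = {}
for k in range(N, 0, -1):             # k = N, N-1, ..., 1
    V_new = {}
    for s in states:
        costs = {x: abs(s - x) + x + V[x] for x in states}
        x_best = min(costs, key=costs.get)
        V_new[s], policy[(k, s)] = costs[x_best], x_best
    V = V_new

print(V)                 # optimal cost-to-go from each state at stage 1
print(policy[(1, 2)])    # optimal first decision when starting in state 2
```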
Probability Theory
10 Institutional Proofs
Quantum Information
1 Institutional Proof
Real Analysis
7 Institutional Proofs
Risk Theory
1 Institutional Proof
Statistical Inference I
36 Institutional Proofs
Foundational
Classifying Statistics: Descriptive vs. Inferential
Exploring the cinematic intuition of Classifying Statistics: Descriptive vs. Inferential.
Enter Proof →
Let $D$ be a finite dataset of $n$ observations, $D = \{x_1, \ldots, x_n\}$. Let $P$ denote the underlying population from which $D$ is either a complete census or a sample. The classification of statistical methods hinges on their primary objective concerning $D$ and $P$:

1. **Descriptive Statistics**: Involves methods that organize, summarize, and present the features of $D$ itself. The objective is to characterize the observed data without making generalizations beyond it. For example, the sample mean $\bar{x}$ for dataset $D$ is given by:
$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$

2. **Inferential Statistics**: Involves methods that use data from a sample $D_{\text{sample}} \subseteq P$ to draw conclusions or make predictions about the characteristics of the larger population $P$ from which the sample was drawn. The objective is to generalize from the sample to the population, often quantifying uncertainty. For example, using $\bar{x}$ as an estimator for the population mean $\mu$ involves an inferential step: $\hat{\mu} = \bar{x}$.
Intermediate
Scales of Measurement: From Nominal to Ratio
Exploring the cinematic intuition of Scales of Measurement: From Nominal to Ratio.
Enter Proof →
Let $X$ be a set of observations. A scale of measurement defines an operation $\oplus$ and a set of functions $F$ that map $X$ to a numerical set $N$, such that the properties of $\oplus$ and $F$ satisfy specific invariance criteria. The four primary scales (Nominal, Ordinal, Interval, Ratio) are characterized by the set of permissible transformations $T$ that preserve the structure of the data. Specifically, for a transformation $f: N \to N$, we have:
- **Nominal:** $f$ is any permutation of the numbers. $T = \{\text{permutations}\}$.
- **Ordinal:** $f$ is strictly increasing. $T = \{\text{strictly increasing functions}\}$.
- **Interval:** $f$ is strictly increasing and linear (i.e., of the form $f(x) = ax + b$ with $a > 0$). $T = \{\text{affine transformations with } a > 0\}$.
- **Ratio:** $f$ is strictly increasing and multiplicative (i.e., of the form $f(x) = ax$ with $a > 0$). $T = \{\text{scaling transformations with } a > 0\}$.
Stochastic Calculus
3 Institutional Proofs
Intermediate
Ito's Lemma
Explore Ito's Lemma in stochastic calculus with rigorous proofs and cinematic intuition for BSc Math/Stats students.
Enter Proof →
Let $X_t$ be a stochastic process adapted to a filtration $\mathcal{F}_t$ such that $dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dW_t$, where $W_t$ is a standard Brownian motion and $\mu, \sigma$ are suitable functions. If $Y_t = f(t, X_t)$ where $f(t, x)$ is a twice continuously differentiable function with respect to $x$ and once continuously differentiable with respect to $t$, then $Y_t$ satisfies the stochastic differential equation:
$$dY_t = \frac{\partial f}{\partial t}(t, X_t)\,dt + \frac{\partial f}{\partial x}(t, X_t)\,dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}(t, X_t)\,(dX_t)^2 = \left(\frac{\partial f}{\partial t} + \mu\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma\frac{\partial f}{\partial x}\,dW_t$$
Intermediate
Martingales
Fair game math.
Enter Proof →
$E[X_{t+1} \mid \mathcal{F}_t] = X_t$
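An empirical sketch of the defining property on the simplest example, a symmetric random walk (the seed, horizon, and conditioning point are illustrative assumptions):

```python
import numpy as np

# S_n = X_1 + ... + X_n with P(X_i = +-1) = 1/2 is a martingale:
# E[S_{n+1} | F_n] = S_n, i.e. the conditional next increment is 0.
rng = np.random.default_rng(3)
steps = rng.choice([-1, 1], size=(100_000, 20))
S = steps.cumsum(axis=1)

mask = S[:, 9] == 0                       # condition on S_10 = 0
print(np.mean(S[mask, 10] - S[mask, 9]))  # close to 0
```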
Stochastic DE
2 Institutional Proofs
Time Series Analysis
26 Institutional Proofs
Intermediate
Proof that Autocovariance Depends Only on Lag for Weakly Stationary Processes
Exploring the cinematic intuition of Proof that Autocovariance Depends Only on Lag for Weakly Stationary Processes.
Enter Proof →
Let $\{X_t : t \in \mathbb{Z}\}$ be a stochastic process. The autocovariance function between $X_t$ and $X_s$ is generally defined as $\gamma_X(t, s) = \operatorname{Cov}(X_t, X_s) = E[(X_t - E[X_t])(X_s - E[X_s])]$.

A process $\{X_t\}$ is defined to be *weakly stationary* if it satisfies the following two conditions:
1. **Constant Mean:** $E[X_t] = \mu$ for all $t \in \mathbb{Z}$, where $\mu$ is a finite constant.
2. **Time-Invariant Autocovariance:** For any integers $t, s, k$, the autocovariance between $X_t$ and $X_s$ is invariant under time shifts: $\operatorname{Cov}(X_t, X_s) = \operatorname{Cov}(X_{t+k}, X_{s+k})$.

**Theorem Statement:** For a weakly stationary process $\{X_t\}$, its autocovariance function $\gamma_X(t, s)$ depends solely on the time lag $h = t - s$, and not on the individual time points $t$ or $s$. Specifically, there exists a function $\gamma_X: \mathbb{Z} \to \mathbb{R}$ such that
$$\gamma_X(t, s) = \gamma_X(t - s)$$

**Proof:** Let $\{X_t\}$ be a weakly stationary process. By definition, its mean is constant, $E[X_t] = \mu$, and its autocovariance is time-invariant, meaning for any integers $t, s, k$: $\operatorname{Cov}(X_t, X_s) = \operatorname{Cov}(X_{t+k}, X_{s+k})$. Let $\gamma_X(t, s)$ denote $\operatorname{Cov}(X_t, X_s)$, so $\gamma_X(t, s) = \gamma_X(t+k, s+k)$. To show that this depends only on the lag $h = t - s$, choose the specific value $k = -s$ (which shifts the second time index to 0). Substituting $k = -s$ into the time-invariance property:
$$\gamma_X(t, s) = \gamma_X(t + (-s), s + (-s)) = \gamma_X(t - s, 0)$$
This result shows that the autocovariance function $\gamma_X(t, s)$ depends only on the difference $t - s$ and the fixed time point 0. Therefore, it is effectively a function of the lag $h = t - s$, conventionally denoted $\gamma_X(h)$. Thus, for a weakly stationary process,
$$\gamma_X(t, s) = \gamma_X(t - s)$$
Furthermore, setting $s = t$ yields $\operatorname{Var}(X_t) = \operatorname{Cov}(X_t, X_t) = \gamma_X(t - t) = \gamma_X(0)$. Since $\gamma_X(0)$ is a constant (not depending on $t$), the variance of a weakly stationary process is also constant.

Foundational
Derivation of the Autocorrelation Function (ACF) for a White Noise Process
Exploring the cinematic intuition of Derivation of the Autocorrelation Function (ACF) for a White Noise Process.
Enter Proof →
Let $\{\epsilon_t\}_{t \in \mathbb{Z}}$ be a discrete-time white noise process satisfying the following properties for all $t \in \mathbb{Z}$:
1. Zero Mean: $E[\epsilon_t] = 0$
2. Constant Variance: $\operatorname{Var}(\epsilon_t) = E[\epsilon_t^2] = \sigma^2$ for some $0 < \sigma^2 < \infty$
3. Uncorrelatedness: $\operatorname{Cov}(\epsilon_t, \epsilon_s) = E[\epsilon_t \epsilon_s] = 0$ for all $t \neq s$

Then, the Autocorrelation Function (ACF) at lag $k$, denoted $\rho_k$, for the white noise process is given by:
$$\rho_k = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{if } k \neq 0 \end{cases}$$
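A sample-ACF check on simulated Gaussian white noise (the estimator below is the usual biased sample ACF; the seed and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
eps = rng.normal(0.0, 1.0, size=10_000)

def sample_acf(x, max_lag):
    """Biased sample autocorrelation at lags 0..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[: len(x) - k], x[k:]) / denom for k in range(max_lag + 1)]

print([round(r, 3) for r in sample_acf(eps, 5)])
# Lag 0 gives exactly 1.0; other lags fall within ~±2/sqrt(n) ≈ ±0.02.
```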
Topology
4 Institutional Proofs
Vector Calculus
4 Institutional Proofs