
Exchangeability.Probability.CondIndep.Bounded.Projection

Conditional Independence - Projection Theorems #

This file proves that conditional independence allows projecting conditional expectations from σ(Z,W) to σ(W).

Main results #

  • condExp_project_of_condIndep: helper projection identity for indicator functions of Y
  • condIndep_project: the projection theorem E[f(Y)|σ(Z,W)] =ᵐ[μ] E[f(Y)|σ(W)] under CondIndep

Key insight #

Conditional independence means that knowing Z provides no additional information about Y beyond what W already provides. Therefore E[f(Y)|σ(Z,W)] = E[f(Y)|σ(W)].
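In display form, specializing to an indicator f = 1_A (the form the theorem statements below actually use), the identity reads:

```latex
% Under Y ⊥⊥_W Z, μ-almost everywhere:
\mathbb{E}\!\left[\mathbf{1}_A(Y)\mid \sigma(Z,W)\right]
  \;=\; \mathbb{E}\!\left[\mathbf{1}_A(Y)\mid \sigma(W)\right],
\qquad A \text{ a measurable set.}
```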

References #

theorem condExp_project_of_condIndep {Ω : Type u_1} {α : Type u_2} {β : Type u_3} {γ : Type u_4} [MeasurableSpace Ω] [MeasurableSpace α] [MeasurableSpace β] [MeasurableSpace γ] (μ : MeasureTheory.Measure Ω) [MeasureTheory.IsProbabilityMeasure μ] (Y : Ω → α) (Z : Ω → β) (W : Ω → γ) (hY : Measurable Y) (hZ : Measurable Z) (hW : Measurable W) (h_indep : CondIndep μ Y Z W) {A : Set α} (hA : MeasurableSet A) :
μ[(Y ⁻¹' A).indicator fun (x : Ω) => 1|MeasurableSpace.comap (fun (ω : Ω) => (Z ω, W ω)) inferInstance] =ᵐ[μ] μ[(Y ⁻¹' A).indicator fun (x : Ω) => 1|MeasurableSpace.comap W inferInstance]

Conditional expectation projection from conditional independence (helper).

When Y ⊥⊥_W Z, conditioning on (Z,W) gives the same result as conditioning on W alone for indicator functions of Y.

This is a key technical lemma used to prove the main projection theorem.

theorem condIndep_project {Ω : Type u_1} {α : Type u_2} {β : Type u_3} {γ : Type u_4} [MeasurableSpace Ω] [MeasurableSpace α] [MeasurableSpace β] [MeasurableSpace γ] (μ : MeasureTheory.Measure Ω) [MeasureTheory.IsProbabilityMeasure μ] (Y : Ω → α) (Z : Ω → β) (W : Ω → γ) (hY : Measurable Y) (hZ : Measurable Z) (hW : Measurable W) (h_indep : CondIndep μ Y Z W) {A : Set α} (hA : MeasurableSet A) :
μ[(Y ⁻¹' A).indicator fun (x : Ω) => 1|MeasurableSpace.comap (fun (ω : Ω) => (Z ω, W ω)) inferInstance] =ᵐ[μ] μ[(Y ⁻¹' A).indicator fun (x : Ω) => 1|MeasurableSpace.comap W inferInstance]

Conditional expectation projection from conditional independence.

When Y ⊥⊥_W Z, conditioning on (Z,W) gives the same result as conditioning on W alone for functions of Y.

Key insight: Conditional independence means that knowing Z provides no additional information about Y beyond what W already provides. Therefore E[f(Y)|σ(Z,W)] = E[f(Y)|σ(W)].

Proof strategy:

  1. By a.e. uniqueness of conditional expectation, it suffices to show that the integrals agree on σ(W)-measurable sets
  2. For S ∈ σ(W), we have S ∈ σ(Z,W) since σ(W) ≤ σ(Z,W)
  3. So ∫_S E[f(Y)|σ(Z,W)] = ∫_S f(Y) by conditional expectation property
  4. And ∫_S E[f(Y)|σ(W)] = ∫_S f(Y) by conditional expectation property
  5. Therefore the integrals match, giving the result
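The chain of equalities in steps 3–4 can be written out explicitly:

```latex
% For every S ∈ σ(W) ⊆ σ(Z,W), the defining property of
% conditional expectation gives
\int_S \mathbb{E}[f(Y)\mid\sigma(Z,W)]\,d\mu
  \;=\; \int_S f(Y)\,d\mu
  \;=\; \int_S \mathbb{E}[f(Y)\mid\sigma(W)]\,d\mu .
```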

Alternative via conditional independence definition:

  • Can show E[f(Y)|σ(Z,W)] is σ(W)-measurable by using the factorization from CondIndep
  • Then apply the fact that taking conditional expectation with respect to σ(W) leaves a σ(W)-measurable function unchanged (a.e.)
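As a usage sketch (assuming the declarations of this file are in scope), `condIndep_project` discharges the projection identity directly; all hypotheses are taken as assumptions of the `example`:

```lean
-- Sketch: applying `condIndep_project` to the indicator of `Y ⁻¹' A`.
example {Ω α β γ : Type*} [MeasurableSpace Ω] [MeasurableSpace α]
    [MeasurableSpace β] [MeasurableSpace γ]
    (μ : MeasureTheory.Measure Ω) [MeasureTheory.IsProbabilityMeasure μ]
    (Y : Ω → α) (Z : Ω → β) (W : Ω → γ)
    (hY : Measurable Y) (hZ : Measurable Z) (hW : Measurable W)
    (h_indep : CondIndep μ Y Z W) {A : Set α} (hA : MeasurableSet A) :
    μ[(Y ⁻¹' A).indicator fun _ => 1 |
        MeasurableSpace.comap (fun ω => (Z ω, W ω)) inferInstance]
      =ᵐ[μ]
    μ[(Y ⁻¹' A).indicator fun _ => 1 | MeasurableSpace.comap W inferInstance] :=
  condIndep_project μ Y Z W hY hZ hW h_indep hA
```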

Kallenberg 1.3: Indicator Conditional Independence from Drop-Info #

Infrastructure for deriving conditional independence from distributional equality via the "drop information" property for Y.

Note: Helper lemmas integrable_mul_of_bound_one and condExp_indicator_ae_bound_one are available from CondExpHelpers.lean.