Github: https://github.com/quangtiencs/bayesian-cognitive-modeling-with-turing.jl
Bayesian Cognitive Modeling is one of the classic books on Bayesian inference. The original edition used WinBUGS/JAGS as its main implementation; other implementations, such as Stan and PyMC, can be found at the link below. In this tutorial I reimplement the models in the Julia programming language with the Turing library.
using Pkg
using Logging
using DynamicPPL, Turing
using Zygote, ReverseDiff
using StatsPlots, Random
using LaTeXStrings
using CSV
using DataFrames
using SpecialFunctions
using LinearAlgebra
using FillArrays
using LogExpFunctions
using KernelDensity
format=:png
:png
Random.seed!(6)
TaskLocalRNG()
$$ z_{i} \sim \text{Bernoulli}(0.5) $$
$$ \phi \sim \text{Uniform}(0.5, 1) $$
$$ \psi = 0.5 $$
$$ \theta_{i} \sim
\begin{cases}
\phi & \text{if $z_{i} = 1$} \\
\psi & \text{if $z_{i} = 0$}
\end{cases} $$
$$ k_{i} \sim \text{Binomial}(\theta_{i}, n) $$
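NUTS cannot sample the discrete group indicators z_i, so the implementation below marginalizes them out. With equal prior weight on the two groups, each person's likelihood is a two-component mixture, which the code evaluates on the log scale with logsumexp:

$$ p(k_{i} \mid \phi) = \tfrac{1}{2}\,\text{Binomial}(k_{i} \mid \psi, n) + \tfrac{1}{2}\,\text{Binomial}(k_{i} \mid \phi, n) $$
$$ \log p(k_{i} \mid \phi) = \text{logsumexp}\left( \log\tfrac{1}{2} + \log\text{Binomial}(k_{i} \mid \psi, n),\ \log\tfrac{1}{2} + \log\text{Binomial}(k_{i} \mid \phi, n) \right) $$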
x = [21, 17, 21, 18, 22, 31, 31, 34, 34, 35, 35, 36, 39, 36, 35];
num_people = length(x)
num_question = 40
psi = 0.5
# log(0.5) is the log prior weight of each latent group, since z_i ~ Bernoulli(0.5)
function exam_loglikelihood(e, p)
    return log(0.5) + loglikelihood(Binomial(num_question, p), e)
end
@model function ExamScores(x)
    phi ~ Uniform(0.5, 1.0)
    for i in eachindex(x)
        Turing.@addlogprob! logsumexp([exam_loglikelihood(x[i], psi), exam_loglikelihood(x[i], phi)])
    end
end
iterations = 10_000
burnin = 1000
model_exam_scores = ExamScores(x)
chain = sample(model_exam_scores, NUTS(), iterations, burnin=burnin)
plot(chain, size=(800, 200), left_margin=10Plots.mm, bottom_margin=10Plots.mm, fmt=format)
┌ Info: Found initial step size
└ ϵ = 0.4
Sampling: 100%|█████████████████████████████████████████| Time: 0:00:00
chain
Chains MCMC chain (10000×13×1 Array{Float64, 3}):
Iterations = 1001:1:11000
Number of chains = 1
Samples per chain = 10000
Wall duration = 8.06 seconds
Compute duration = 8.06 seconds
parameters = phi
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
phi 0.8636 0.0174 0.0002 6757.6793 7148.4507 1.0000 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
phi 0.8278 0.8520 0.8644 0.8758 0.8959
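The latent indicators can be recovered after sampling. As a sketch (not part of the original code), the posterior probability that person i belongs to the phi group is the softmax weight of that mixture component, averaged over the posterior draws of phi:

using Statistics: mean
# posterior mean of P(z_i = 1 | data) for each person
membership = [mean(
    softmax([exam_loglikelihood(x[i], psi), exam_loglikelihood(x[i], p)])[2]
    for p in collect(chain["phi"])
) for i in 1:num_people]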
$$ z_{i} \sim \text{Bernoulli}(0.5) $$
$$ \mu \sim \text{Uniform}(0.5, 1) $$
$$ \lambda \sim \text{Gamma}(.001, .001) $$
$$ \phi_{i} \sim \text{Gaussian}(\mu, \lambda)_{\mathcal I(0,1)} $$
$$ \psi = 0.5 $$
$$ \theta_{i} \sim
\begin{cases}
\phi_{i} & \text{if $z_{i} = 1$} \\
\psi & \text{if $z_{i} = 0$}
\end{cases} $$
$$ k_{i} \sim \text{Binomial}(\theta_{i}, n) $$
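A note on parameterization: the Gaussian above is written with a precision lambda (the BUGS convention), while Normal in Distributions.jl takes a standard deviation, so the code converts between the two:

$$ \sigma = \frac{1}{\sqrt{\lambda}} $$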
y = [21, 17, 21, 18, 22, 31, 31, 34, 34, 35, 35, 36, 39, 36, 35]
num_people = length(y)
num_question = 40
psi = 0.5
function exam_loglikelihood_individual_differences(e, p)
    return log(0.5) + loglikelihood(Binomial(num_question, p), e)
end
@model function ExamScoresIndividualDifferences(y)
    mu ~ Uniform(0.5, 1.0)
    # BUGS's Gamma(0.001, 0.001) is shape/rate; Distributions.jl's Gamma is shape/scale
    lambda ~ truncated(Gamma(0.001, 1 / 0.001), lower=1e-15, upper=10_000)
    phi ~ filldist(truncated(Normal(mu, 1.0 / sqrt(lambda)), lower=0, upper=1), length(y))
    for i in eachindex(y)
        log_prob_parts = [
            exam_loglikelihood_individual_differences(y[i], psi),
            exam_loglikelihood_individual_differences(y[i], phi[i])
        ]
        Turing.@addlogprob! logsumexp(log_prob_parts)
    end
end
iterations = 10_000
burnin = 10_000
model_exam_scores_individual_differences = ExamScoresIndividualDifferences(y)
chain = sample(model_exam_scores_individual_differences, NUTS(), iterations, burnin=burnin)
┌ Info: Found initial step size
└ ϵ = 0.05
Sampling: 100%|█████████████████████████████████████████| Time: 0:00:06
Chains MCMC chain (10000×29×1 Array{Float64, 3}):
Iterations = 1001:1:11000
Number of chains = 1
Samples per chain = 10000
Wall duration = 11.63 seconds
Compute duration = 11.63 seconds
parameters = mu, lambda, phi[1], phi[2], phi[3], phi[4], phi[5], phi[6], phi[7], phi[8], phi[9], phi[10], phi[11], phi[12], phi[13], phi[14], phi[15]
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
mu 0.8701 0.0355 0.0009 1884.3975 1564.6791 0.9999 ⋯
lambda 581.1297 627.3758 36.8015 617.8060 356.4243 1.0000 ⋯
phi[1] 0.8553 0.0705 0.0009 5690.4724 3210.9659 1.0000 ⋯
phi[2] 0.8563 0.0708 0.0007 7657.9989 3981.1402 1.0002 ⋯
phi[3] 0.8566 0.0710 0.0008 5447.2197 2810.6473 1.0006 ⋯
phi[4] 0.8569 0.0708 0.0008 6620.4931 3154.4206 0.9999 ⋯
phi[5] 0.8539 0.0711 0.0008 6060.8583 3126.6366 1.0003 ⋯
phi[6] 0.8200 0.0453 0.0007 4229.1002 3971.6397 1.0001 ⋯
phi[7] 0.8203 0.0451 0.0008 3461.1052 5032.2733 1.0011 ⋯
phi[8] 0.8516 0.0400 0.0004 8875.7797 6444.9193 1.0000 ⋯
phi[9] 0.8523 0.0391 0.0004 7733.5561 5975.8721 1.0000 ⋯
phi[10] 0.8643 0.0389 0.0004 7190.2563 4902.4894 1.0003 ⋯
phi[11] 0.8639 0.0385 0.0005 6263.9946 4989.7034 1.0002 ⋯
phi[12] 0.8764 0.0388 0.0005 5519.4627 5283.2530 0.9999 ⋯
phi[13] 0.9199 0.0421 0.0011 1488.0146 1153.8822 1.0002 ⋯
phi[14] 0.8760 0.0385 0.0005 6784.6000 6032.8708 0.9999 ⋯
phi[15] 0.8629 0.0394 0.0005 5974.0947 4209.4951 1.0000 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
mu 0.8118 0.8482 0.8658 0.8858 0.9682
lambda 32.3931 171.9491 382.8516 754.7453 2276.5144
phi[1] 0.6841 0.8237 0.8614 0.8976 0.9747
phi[2] 0.6931 0.8246 0.8615 0.8986 0.9751
phi[3] 0.6884 0.8237 0.8625 0.8994 0.9747
phi[4] 0.6895 0.8239 0.8620 0.8991 0.9784
phi[5] 0.6781 0.8220 0.8598 0.8968 0.9728
phi[6] 0.7156 0.7943 0.8249 0.8513 0.8947
phi[7] 0.7175 0.7946 0.8256 0.8519 0.8941
phi[8] 0.7651 0.8277 0.8540 0.8791 0.9224
phi[9] 0.7676 0.8291 0.8553 0.8798 0.9196
phi[10] 0.7814 0.8403 0.8660 0.8902 0.9360
phi[11] 0.7809 0.8400 0.8658 0.8901 0.9335
phi[12] 0.7955 0.8515 0.8780 0.9028 0.9478
phi[13] 0.8342 0.8902 0.9223 0.9532 0.9885
phi[14] 0.7968 0.8515 0.8775 0.9023 0.9464
phi[15] 0.7797 0.8379 0.8643 0.8905 0.9337
list_of_plot = []
for i in 1:num_people
    v = chain["phi[$i]"] |> collect
    # for each posterior draw of phi[i], compute P(z_i = 1) via softmax and resample
    # the indicator; draws assigned to the guessing group are plotted at psi = 0.5
    z = [rand(Bernoulli(p)) for p in [softmax([
        exam_loglikelihood_individual_differences(y[i], 0.5),
        exam_loglikelihood_individual_differences(y[i], e)
    ])[2] for e in v]]
    push!(list_of_plot, vcat(0.5 * ones(length(v) - sum(z)), v[z]))
end
boxplot(list_of_plot; legend=:outerright, outlier=true, fmt=format)
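A numerically cheaper alternative (a sketch, not from the original notebook) skips the Bernoulli resampling and averages the membership weights over the chain directly:

using Statistics: mean
# posterior mean of P(z_i = 1 | data) for each person
membership = [mean(
    softmax([
        exam_loglikelihood_individual_differences(y[i], 0.5),
        exam_loglikelihood_individual_differences(y[i], e)
    ])[2] for e in collect(chain["phi[$i]"])
) for i in 1:num_people]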
Suppose a group of 10 people attends a lecture and is asked a set of 20 questions afterwards, with every answer being either correct or incorrect.
$$ p_{i},q_{j} \sim \text{Beta}(1,1)$$
$$ \theta_{ij} = p_{i}q_{j} $$
$$ k_{ij} \sim \text{Bernoulli}(\theta_{ij}) $$
choose_dataset = 2
data_1 = [
1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
1 0 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 0 0
1 1 1 0 1 1 0 0 1 1 0 1 0 0 0 0 1 1 0 1
1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0
1 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0
1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1 1 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 1 1 1 0
];
data_2 = [
1 1 1 1 0 0 1 1 0 1 0 0 missing 0 0 1 0 1 0 0
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0
1 0 1 1 0 1 1 1 0 1 0 0 1 0 0 0 0 1 0 0
1 1 0 1 0 0 0 1 0 1 0 1 1 0 0 1 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 missing 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1
1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 missing 0 0
]
if choose_dataset == 1
    k = data_1
else
    k = data_2
end
10×20 Matrix{Union{Missing, Int64}}:
1 1 1 1 0 0 1 1 0 1 0 0 … 0 0 1 0 1 0 0
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0
1 0 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 0
1 1 0 1 0 0 0 1 0 1 0 1 … 0 0 1 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 missing 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1
1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 missing 0 0
n_p, n_q = size(k)  # number of people, number of questions
(10, 20)
@model function TwentyQuestions(k)
    pi ~ filldist(Beta(1.0, 1.0), n_p)
    qi ~ filldist(Beta(1.0, 1.0), n_q)
    theta = pi .* qi'
    for i in 1:n_p
        for j in 1:n_q
            # skip missing answers: they contribute nothing to the likelihood
            if !ismissing(k[i, j])
                k[i, j] ~ Bernoulli(theta[i, j])
            end
        end
    end
end
TwentyQuestions (generic function with 2 methods)
iterations = 2_000
burnin = 200
model_twenty_questions = TwentyQuestions(k)
chain = sample(model_twenty_questions, NUTS(), iterations, burnin=burnin)
┌ Info: Found initial step size
└ ϵ = 1.6
Sampling: 100%|█████████████████████████████████████████| Time: 0:00:09
Chains MCMC chain (2000×42×1 Array{Float64, 3}):
Iterations = 1001:1:3000
Number of chains = 1
Samples per chain = 2000
Wall duration = 13.85 seconds
Compute duration = 13.85 seconds
parameters = pi[1], pi[2], pi[3], pi[4], pi[5], pi[6], pi[7], pi[8], pi[9], pi[10], qi[1], qi[2], qi[3], qi[4], qi[5], qi[6], qi[7], qi[8], qi[9], qi[10], qi[11], qi[12], qi[13], qi[14], qi[15], qi[16], qi[17], qi[18], qi[19], qi[20]
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
pi[1] 0.8760 0.1098 0.0020 2688.7650 1435.2553 0.9995 ⋯
pi[2] 0.2754 0.1395 0.0026 3071.1026 1456.0573 1.0001 ⋯
pi[3] 0.4780 0.1741 0.0037 2205.2416 1142.0367 0.9999 ⋯
pi[4] 0.3618 0.1551 0.0032 2496.5662 1425.7038 1.0000 ⋯
pi[5] 0.8462 0.1177 0.0021 2570.1171 1456.8339 0.9996 ⋯
pi[6] 0.8220 0.1229 0.0022 2418.2627 1142.1895 0.9996 ⋯
pi[7] 0.1778 0.1156 0.0022 2502.7246 1265.2019 1.0020 ⋯
pi[8] 0.0944 0.0883 0.0016 2799.4447 1000.3629 1.0038 ⋯
pi[9] 0.7248 0.1607 0.0036 1649.0833 826.0313 1.0012 ⋯
pi[10] 0.5214 0.1859 0.0045 1682.4494 883.1077 1.0013 ⋯
qi[1] 0.7353 0.1743 0.0035 1780.7636 946.5882 0.9999 ⋯
qi[2] 0.6900 0.1842 0.0033 2552.2948 989.2726 0.9997 ⋯
qi[3] 0.7642 0.1561 0.0029 2330.5301 1053.4839 1.0006 ⋯
qi[4] 0.6386 0.2047 0.0039 2619.7144 1235.6202 1.0004 ⋯
qi[5] 0.1576 0.1373 0.0029 2059.7807 1227.9001 1.0014 ⋯
qi[6] 0.3173 0.1936 0.0039 2616.0150 1393.1606 0.9998 ⋯
qi[7] 0.7455 0.1622 0.0029 2400.5159 969.2226 0.9999 ⋯
qi[8] 0.8146 0.1495 0.0026 2648.4667 1260.2918 0.9998 ⋯
qi[9] 0.2843 0.1693 0.0037 2210.8002 1310.6773 0.9999 ⋯
qi[10] 0.8520 0.1261 0.0022 2240.3937 1008.1558 1.0009 ⋯
qi[11] 0.1500 0.1296 0.0021 3605.1469 1267.4127 1.0012 ⋯
qi[12] 0.4179 0.1831 0.0039 2401.5869 973.8931 0.9997 ⋯
qi[13] 0.8307 0.1404 0.0025 2856.6643 1267.5161 0.9995 ⋯
⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
1 column and 7 rows omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
pi[1] 0.5925 0.8234 0.9064 0.9603 0.9961
pi[2] 0.0611 0.1708 0.2541 0.3563 0.5991
pi[3] 0.1788 0.3471 0.4697 0.6032 0.8323
pi[4] 0.1132 0.2450 0.3462 0.4611 0.7099
pi[5] 0.5609 0.7821 0.8704 0.9392 0.9939
pi[6] 0.5563 0.7431 0.8449 0.9221 0.9908
pi[7] 0.0213 0.0928 0.1569 0.2415 0.4584
pi[8] 0.0032 0.0280 0.0675 0.1331 0.3313
pi[9] 0.3894 0.6172 0.7391 0.8515 0.9830
pi[10] 0.1987 0.3824 0.5107 0.6470 0.8957
qi[1] 0.3605 0.6172 0.7588 0.8775 0.9896
qi[2] 0.2960 0.5594 0.7099 0.8323 0.9808
qi[3] 0.4196 0.6662 0.7850 0.8905 0.9865
qi[4] 0.2191 0.4928 0.6467 0.8001 0.9750
qi[5] 0.0049 0.0505 0.1195 0.2307 0.5028
qi[6] 0.0463 0.1658 0.2826 0.4339 0.7680
qi[7] 0.3803 0.6425 0.7669 0.8771 0.9853
qi[8] 0.4520 0.7206 0.8527 0.9357 0.9940
qi[9] 0.0415 0.1551 0.2537 0.3833 0.6663
qi[10] 0.5360 0.7884 0.8874 0.9482 0.9957
qi[11] 0.0065 0.0509 0.1111 0.2161 0.4773
qi[12] 0.1115 0.2842 0.3982 0.5392 0.8189
qi[13] 0.4704 0.7553 0.8671 0.9416 0.9946
⋮ ⋮ ⋮ ⋮ ⋮ ⋮
7 rows omitted
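The three missing answers never enter the likelihood, but the model still predicts them. As a hedged sketch (not in the original notebook), the posterior predictive probability of a correct answer for a missing cell (i, j) is the posterior mean of p_i * q_j; the cell indices below are read off data_2:

using Statistics: mean
# posterior predictive P(k[i, j] = 1) = E[p_i * q_j] for the three missing cells
for (i, j) in [(1, 13), (8, 5), (10, 18)]
    prob = mean(collect(chain["pi[$i]"]) .* collect(chain["qi[$j]"]))
    println("P(k[$i, $j] = 1) ≈ ", round(prob, digits=3))
end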
p_plot = density(chain["pi[1]"], label="p1")
for i in 2:n_p
    density!(chain["pi[$i]"], label="p$i")
end
xlims!(0, 1)
plot(p_plot, size=(800, 200); legend=:outerright)
q_plot = density(chain["qi[1]"], label="q1")
for i in 2:n_q
    density!(chain["qi[$i]"], label="q$i")
end
xlims!(0, 1)
plot(q_plot, size=(800, 200); legend=:outerright)
plot(p_plot, q_plot, size=(800, 400), layout=(2, 1), legend=:outerright, fmt=format)
list_of_plot = []
for i in 1:n_p
    v = chain["pi[$i]"] |> collect
    push!(list_of_plot, v)
end
boxplot(list_of_plot; legend=:outerright, outlier=true, fmt=format)
list_of_plot = []
for i in 1:n_q
    v = chain["qi[$i]"] |> collect
    push!(list_of_plot, v)
end
boxplot(list_of_plot; legend=:outerright, outlier=true, fmt=format)
$$ \alpha \sim \text{Uniform}(0,1) $$
$$ \beta \sim \text{Uniform}(0,\alpha) $$
$$ x_{i} \sim \text{Bernoulli}(0.5) $$
$$ z_{j} \sim \text{Bernoulli}(0.5) $$
$$ \theta_{ij} \sim
\begin{cases}
\alpha & \text{if $x_{i} = z_{j}$} \\
\beta & \text{if $x_{i} \neq z_{j}$}
\end{cases} $$
$$ k_{ij} \sim \text{Bernoulli}(\theta_{ij}) $$
# TODO: WIP
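Until this example is fully ported, here is a minimal sketch (untested) of how the model could be written in Turing. Because x_i and z_j are discrete, NUTS alone cannot sample them; one option is to keep them explicit and pair a particle sampler with NUTS via Gibbs:

@model function TwoCountryQuiz(k)
    n_x, n_z = size(k)
    alpha ~ Uniform(0, 1)
    beta ~ Uniform(0, alpha)
    x ~ filldist(Bernoulli(0.5), n_x)  # nationality of each person
    z ~ filldist(Bernoulli(0.5), n_z)  # origin of each question
    for i in 1:n_x, j in 1:n_z
        # matched person/question pairs succeed at rate alpha, mismatched at rate beta
        k[i, j] ~ Bernoulli(x[i] == z[j] ? alpha : beta)
    end
end
# discrete x and z need a particle sampler; alpha and beta can use NUTS
sampler = Gibbs(PG(20, :x, :z), NUTS(500, 0.65, :alpha, :beta))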
$$ \psi^b \sim \text{Uniform}(0.5,1) $$
$$ \psi^m \sim \text{Uniform}(0,\psi^b) $$
$$ z_{i} \sim \text{Bernoulli}(0.5) $$
$$ \theta_{i} \sim
\begin{cases}
\psi^b & \text{if $z_{i} = 0$} \\
\psi^m & \text{if $z_{i} = 1$}
\end{cases} $$
$$ k_{i} \sim \text{Binomial}(\theta_{i},n) $$
k = [45,45,44,45,44,45,45,45,45,45,30,20,6,44,44,27,25,17,14,27,35,30]
p = length(k) # number of people
n = 45 # number of questions
45
function malingering_loglikelihood(e, p)
    return log(0.5) + loglikelihood(Binomial(n, p), e)
end
@model function AssessmentOfMalingering(k)
    psi_b ~ Uniform(0.5, 1)
    psi_m ~ Uniform(0, psi_b)
    for i in eachindex(k)
        Turing.@addlogprob! logsumexp([malingering_loglikelihood(k[i], psi_b), malingering_loglikelihood(k[i], psi_m)])
    end
end
AssessmentOfMalingering (generic function with 2 methods)
iterations = 5_000
burnin = 2_000
model_malingering = AssessmentOfMalingering(k)
# z_i is marginalized out, so plain NUTS suffices (no Gibbs/PG needed)
sampler = NUTS()
chain = sample(model_malingering, sampler, iterations, burnin=burnin)
┌ Info: Found initial step size
└ ϵ = 0.05
Sampling: 100%|█████████████████████████████████████████| Time: 0:00:00
Chains MCMC chain (5000×14×1 Array{Float64, 3}):
Iterations = 1001:1:6000
Number of chains = 1
Samples per chain = 5000
Wall duration = 3.55 seconds
Compute duration = 3.55 seconds
parameters = psi_b, psi_m
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
psi_b 0.9907 0.0041 0.0001 4814.5755 3334.3599 1.0000 ⋯
psi_m 0.3073 0.0143 0.0002 4563.7220 3895.3421 1.0000 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
psi_b 0.9810 0.9883 0.9913 0.9938 0.9969
psi_m 0.2798 0.2976 0.3073 0.3172 0.3353
plot(chain, fmt=format)
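As with the exam-scores model, each participant's latent group can be recovered from the chain. A sketch (not in the original code): the posterior probability that participant i is a malingerer is the softmax weight of the psi^m component, averaged over the joint posterior draws:

using Statistics: mean
malingerer_prob = [mean(
    softmax([malingering_loglikelihood(k[i], b), malingering_loglikelihood(k[i], m)])[2]
    for (b, m) in zip(collect(chain["psi_b"]), collect(chain["psi_m"]))
) for i in 1:p]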
$$ \mu_{b} \sim \text{Beta}(1,1) $$
$$ \mu_{d} \sim \text{Gaussian}(0,0.5)_{\mathcal I(0,∞)} $$
$$ \lambda_{b} \sim \text{Uniform}(40,800) $$
$$ \lambda_{m} \sim \text{Uniform}(4,100) $$
$$ \phi \sim \text{Beta}(5,5) $$
$$ z_{i} \sim \text{Bernoulli}(\phi) $$
$$ \theta_{i} \sim
\begin{cases}
\text{Beta}(\mu_{b}\lambda_{b},(1-\mu_{b})\lambda_{b}) & \text{if $z_{i} = 0$} \\
\text{Beta}(\mu_{m}\lambda_{m},(1-\mu_{m})\lambda_{m}) & \text{if $z_{i} = 1$}
\end{cases} $$
$$ \text{logit}\,\mu_{m} = \text{logit}\,\mu_{b} - \mu_{d} $$
$$ k_{i} \sim \text{Binomial}(\theta_{i},n) $$
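Solving the logit identity for mu_m gives the form used in the code; since mu_d > 0, malingerers always have a lower mean success rate than bona fide participants:

$$ \mu_{m} = \text{logistic}\left(\text{logit}\,\mu_{b} - \mu_{d}\right) $$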
k = [45,45,44,45,44,45,45,45,45,45,30,20,6,44,44,27,25,17,14,27,35,30]
p = length(k) # number of people
n = 45 # number of questions
45
@model function AssessmentOfMalingering2(k)
    mubon ~ Beta(1, 1)
    mudiff ~ truncated(Normal(0, 1 / sqrt(0.5)), lower=0)  # Gaussian with precision 0.5
    phi ~ Beta(5, 5)
    lambdabon ~ Uniform(40, 800)
    lambdamal ~ Uniform(4, 100)
    mumal = logistic(logit(mubon) - mudiff)
    alpha_1 = mubon * lambdabon
    beta_1 = lambdabon * (1 - mubon)
    alpha_2 = mumal * lambdamal
    beta_2 = lambdamal * (1 - mumal)
    # theta_1 and theta_2 are single random variables shared across people,
    # so they must be drawn once, outside the loop
    theta_1 ~ Beta(alpha_1, beta_1)
    theta_2 ~ Beta(alpha_2, beta_2)
    for i in eachindex(k)
        log_prob_parts = [
            log(1 - phi) + loglikelihood(Binomial(n, theta_1), k[i]),
            log(phi) + loglikelihood(Binomial(n, theta_2), k[i])
        ]
        Turing.@addlogprob! logsumexp(log_prob_parts)
    end
end
AssessmentOfMalingering2 (generic function with 2 methods)
iterations = 5_000
burnin = 2_000
model_malingering2 = AssessmentOfMalingering2(k)
sampler = NUTS()
chain = sample(model_malingering2, sampler, iterations, burnin=burnin)
┌ Info: Found initial step size
└ ϵ = 0.05
Sampling: 100%|█████████████████████████████████████████| Time: 0:00:05
Chains MCMC chain (5000×19×1 Array{Float64, 3}):
Iterations = 1001:1:6000
Number of chains = 1
Samples per chain = 5000
Wall duration = 12.7 seconds
Compute duration = 12.7 seconds
parameters = mubon, mudiff, phi, lambdabon, lambdamal, theta_1, theta_2
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
mubon 0.9681 0.0080 0.0002 2280.3896 2759.9929 0.9999 ⋯
mudiff 3.3793 0.2657 0.0054 2395.8248 2786.6139 1.0000 ⋯
phi 0.4664 0.0892 0.0013 4615.1989 2996.3230 1.0008 ⋯
lambdabon 735.1961 59.0944 0.8359 3326.8658 2077.9180 0.9999 ⋯
lambdamal 91.8997 7.3924 0.0988 3680.7223 1884.8936 1.0000 ⋯
theta_1 0.9689 0.0078 0.0002 2286.5540 2684.9849 0.9998 ⋯
theta_2 0.5153 0.0236 0.0003 5327.2069 3176.6236 0.9998 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
mubon 0.9498 0.9637 0.9689 0.9736 0.9812
mudiff 2.8813 3.1994 3.3716 3.5500 3.9083
phi 0.2899 0.4069 0.4663 0.5274 0.6393
lambdabon 580.1090 708.0150 752.5085 779.1408 798.1863
lambdamal 72.6337 88.4719 93.9651 97.4364 99.8046
theta_1 0.9507 0.9645 0.9698 0.9743 0.9817
theta_2 0.4691 0.5002 0.5154 0.5310 0.5614
plot(chain, fmt=format)
$$ \mu_{b} \sim \text{Beta}(1,1) $$
$$ \mu_{d} \sim \text{Gaussian}(0,0.5)_{\mathcal I(0,∞)} $$
$$ \lambda_{b} \sim \text{Uniform}(5,50) $$
$$ \lambda_{c} \sim \text{Uniform}(5,50) $$
$$ \phi \sim \text{Beta}(5,5) $$
$$ z_{i} \sim \text{Bernoulli}(\phi) $$
$$ \theta_{i} \sim
\begin{cases}
\text{Beta}(\mu_{b}\lambda_{b},(1-\mu_{b})\lambda_{b}) & \text{if $z_{i} = 0$} \\
\text{Beta}(\mu_{c}\lambda_{c},(1-\mu_{c})\lambda_{c}) & \text{if $z_{i} = 1$}
\end{cases} $$
$$ \text{logit}\,\mu_{c} = \text{logit}\,\mu_{b} + \mu_{d} $$
$$ k_{i} \sim \text{Binomial}(\theta_{i},n) $$
# TODO: WIP
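Pending the full port, this model differs from AssessmentOfMalingering2 mainly in the lambda priors and the sign of mu_d. A minimal sketch (untested, using the same shared-theta simplification as above; the cheating-study data are not included here):

@model function CheatingStudy(k, n)
    mubon ~ Beta(1, 1)
    mudiff ~ truncated(Normal(0, 1 / sqrt(0.5)), lower=0)
    phi ~ Beta(5, 5)
    lambdabon ~ Uniform(5, 50)
    lambdache ~ Uniform(5, 50)
    mucheat = logistic(logit(mubon) + mudiff)  # note the + sign: cheaters score higher
    theta_bon ~ Beta(mubon * lambdabon, (1 - mubon) * lambdabon)
    theta_che ~ Beta(mucheat * lambdache, (1 - mucheat) * lambdache)
    for i in eachindex(k)
        Turing.@addlogprob! logsumexp([
            log(1 - phi) + loglikelihood(Binomial(n, theta_bon), k[i]),
            log(phi) + loglikelihood(Binomial(n, theta_che), k[i])
        ])
    end
end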
Pkg.status()
Status `~/quangtiencs_projects/bayesian-cognitive-modeling-with-turing.jl/Project.toml`
[336ed68f] CSV v0.10.11
[a93c6f00] DataFrames v1.6.1
⌃ [366bfd00] DynamicPPL v0.23.0
[7073ff75] IJulia v1.24.2
[5ab0869b] KernelDensity v0.6.7
[b964fa9f] LaTeXStrings v1.3.0
[2ab3a3ac] LogExpFunctions v0.3.24
[91a5bcdd] Plots v1.38.17
[37e2e3b7] ReverseDiff v1.15.0
[276daf66] SpecialFunctions v2.3.0
[f3b207a7] StatsPlots v0.15.6
[fce5fe82] Turing v0.28.1
[e88e6eb3] Zygote v0.6.62
Info Packages marked with ⌃ have new versions available and may be upgradable.