mathstodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for maths people. We have LaTeX rendering in the web interface!


#bayesian


#statstab #334 Workflow Techniques for the Robust Use of Bayes Factors

Thoughts: "We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis"

#bayesfactors #bayesian #r #robust

arxiv.org/abs/2103.08744

arXiv.org: Workflow Techniques for the Robust Use of Bayes Factors

Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions. Moreover it's not clear how straightforwardly this approach can be implemented in practice, and in particular how sensitive it is to the details of the computational implementation. Here, we investigate these questions for Bayes factor analyses in the cognitive sciences. We explain the statistics underlying Bayes factors as a tool for Bayesian inferences and discuss that utility functions are needed for principled decisions on hypotheses. Next, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors. Importantly, it is unknown whether Bayes factor estimates based on bridge sampling are unbiased for complex analyses. We are the first to use simulation-based calibration as a tool to test the accuracy of Bayes factor estimates. Moreover, we study how stable Bayes factors are against different MCMC draws. We moreover study how Bayes factors depend on variation in the data. We also look at variability of decisions based on Bayes factors and how to optimize decisions using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis, and we illustrate this workflow using an example from the cognitive sciences. We hope that this study will provide a workflow to test the strengths and limitations of Bayes factors as a way to quantify evidence in support of scientific hypotheses. Reproducible code is available from https://osf.io/y354c/.
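The abstract's point that Bayes factors "are highly sensitive to details of the data/model assumptions" can be seen in even the simplest setting. Here is a minimal sketch (my own toy example, not from the paper): an analytic Bayes factor for a binomial rate, where merely changing the width of the Beta prior under H1 changes the evidence noticeably.

```python
import math

def log_beta(a, b):
    # log of the Beta function, computed via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bf10_binomial(k, n, a=1.0, b=1.0):
    """Bayes factor for H1: p ~ Beta(a, b) versus H0: p = 0.5,
    given k successes in n trials. Marginal likelihoods are analytic;
    the binomial coefficient cancels in the ratio."""
    log_m1 = log_beta(k + a, n - k + b) - log_beta(a, b)
    log_m0 = n * math.log(0.5)
    return math.exp(log_m1 - log_m0)

# 15 successes in 20 trials: the same data, three different H1 priors
for a in (1, 5, 20):
    print(f"Beta({a},{a}) prior: BF10 = {bf10_binomial(15, 20, a, a):.2f}")
```

With a flat Beta(1,1) prior the Bayes factor is about 3.2, while a tightly concentrated Beta(20,20) prior pulls it below 2 — same data, different "evidence". A workflow like the one in the paper is meant to surface exactly this kind of dependence.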

I'd like to move the Git repo for my R protopackage ("Inferno") from GitHub to Codeberg. I was wondering whether Codeberg offers (even as a paid feature, of course) something similar to GitHub Pages, by which I mean something like this: pglpm.github.io/inferno

From what I understand, the Codeberg pages <docs.codeberg.org/codeberg-pag> should be something equivalent – but I'm not fully sure. Is that correct? And does anyone have examples of Codeberg pages of R packages, just to see their functionality? I didn't manage to find any examples.

Cheers!

pglpm.github.io: Bayesian nonparametric exchangeable inference – model-free uncertainty-quantified prediction

Offers several functions for Bayesian nonparametric exchangeable inference (also called "density" or "population" inference), including Monte Carlo calculation of posterior densities. From a machine-learning perspective, it offers a model-free, uncertainty-quantified prediction algorithm.
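Not an authoritative answer, but a rough sketch of how this is commonly done, assuming the Codeberg Pages docs linked above are current: Codeberg reportedly serves static HTML from a branch named `pages` (or a repo named `pages` for a user site), so a pkgdown site like the one shown can be built locally and pushed there. The repo name `inferno` is taken from the post; `<user>` is a placeholder.

```shell
# Sketch (unverified against Codeberg's current docs):
# Codeberg Pages serves static files from a branch named "pages",
# reachable at https://<user>.codeberg.page/<repo>/.

Rscript -e 'pkgdown::build_site()'   # renders the site into docs/

cd docs
git init -b pages                    # fresh history holding only the site
git add .
git commit -m "Publish pkgdown site"
# replace <user> with your Codeberg username
git remote add pages git@codeberg.org:<user>/inferno.git
git push --force pages pages
```

The same idea should work for any static-site generator, since pkgdown output is plain HTML with no server-side component.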
Replied in thread

@Posit

It's important to emphasize that "realistic-looking" data does *not* mean "realistic" data – especially high-dimensional data (unfortunately that post doesn't warn against this).

If one had an algorithm that generated genuinely realistic data for a given inference problem, it would mean that inference problem had already been solved. So: for educational purposes, why not. But for validation-like purposes, use with utmost caution and at your own peril.

Happy Birthday, Laplace! 🎂 🪐 🎓 One of the first to use Bayesian probability theory in the modern way!

"One sees in this essay that the theory of probabilities is basically only common sense reduced to a calculus. It makes one estimate accurately what right-minded people feel by a sort of instinct, often without being able to give a reason for it. It leaves nothing arbitrary in the choice of opinions and of making up one's mind, every time one is able, by this means, to determine the most advantageous choice. Thereby, it becomes the most happy supplement to ignorance and to the weakness of the human mind. If one considers the analytical methods to which this theory has given rise, the truth of the principles that serve as the groundwork, the subtle and delicate logic needed to use them in the solution of the problems, the public-benefit businesses that depend on it, and the extension that it has received and may still receive from its application to the most important questions of natural philosophy and the moral sciences; if one observes also that even in matters which cannot be handled by the calculus, it gives the best rough estimates to guide us in our judgements, and that it teaches us to guard ourselves from the illusions which often mislead us, one will see that there is no science at all more worthy of our consideration, and that it would be a most useful part of the system of public education."

*Philosophical Essay on Probabilities*, 1814 <doi.org/10.1007/978-1-4612-418>

After a long collaboration with @martinbiehl, @mc and @Nathaniel I’m excited to share the first of (hopefully) many outputs:
“A Bayesian Interpretation of the Internal Model Principle”
arxiv.org/abs/2503.00511.

This work combines ideas from control theory, applied category theory, and Bayesian reasoning, with ramifications for cognitive science, ML, and biology to be further explored in the future.

In these fields, we come across ideas of “models”, “internal models”, “world models”, etc. but it is hard to find formal definitions, and when one does, they usually aren’t general enough to cover all the aspects these different fields consider important.

In this work, we focus on two specific definitions of models, and show their connections. One is inspired by work in control theory, and one comes from Bayesian inference/filtering for cognitive science, AI and ALife, and is formalised with Markov categories.
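For readers unfamiliar with the filtering side, here is a minimal, hypothetical sketch of the standard construction the post alludes to — a discrete Bayes filter, where an agent's "internal model" is a belief over hidden states updated by alternating a prediction step (push the belief through the transition kernel) and a Bayesian update (condition on the observation). This is the textbook recipe, not the paper's formalism.

```python
def predict(belief, transition):
    """Push the belief forward through the transition kernel P(s' | s)."""
    k = len(belief)
    return [sum(belief[s] * transition[s][s2] for s in range(k))
            for s2 in range(k)]

def update(belief, likelihood):
    """Condition on an observation via Bayes' rule: P(s | o) ∝ P(o | s) P(s)."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# Hypothetical two-state world ("rain" / "dry") with sticky dynamics
T = [[0.8, 0.2],
     [0.2, 0.8]]
obs_lik = {"umbrella": [0.9, 0.2], "no_umbrella": [0.1, 0.8]}

belief = [0.5, 0.5]
for o in ["umbrella", "umbrella", "no_umbrella"]:
    belief = update(predict(belief, T), obs_lik[o])
print(belief)
```

In the Markov-category formulation, `predict` and `update` become composition of Markov kernels and Bayesian inversion respectively, which is what lets the paper compare this notion of model with the control-theoretic one.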

In the first part, we review and reformulate the “internal model principle” from control theory (at least, one of its versions) in a more modern language heavily inspired by categorical systems theory (davidjaz.com/Papers/DynamicalB, github.com/mattecapu/categoric).