#poweranalysis


Last week I attended the 6th Perspectives on Scientific Error Conference at @TUEindhoven
I learned so much! About #metascience, #preregistration, #replicability, #qrp (questionable research practices), methods to detect data fabrication, #peerreview, #poweranalysis, and artefacts in #ML (machine learning)...
I'm impressed by the participants' commitment to improving science through error detection & prevention. Thanks to the organizers Noah van Dongen, @lakens @annescheel Felipe Romero and @annaveer

The labor movement needs tech workers organized and acting in solidarity with the warehouse and production workers who depend on their algorithms! What an awesome transformative power!

stansburyforum.com/2024/01/25/

labornotes.org/blogs/2024/01/b

See Labor Power and Strategy by John Womack, Jr. for more discussion of #chokepoints and #poweranalysis:
pmpress.org/index.php?l=produc

stansburyforum.com: Tech Workers Deserve a Union | The Stansbury Forum

Hello #statstodon! A reviewer asks me to perform a post-hoc #PowerAnalysis. I know this is generally not advised, because if you replace the a priori effect size with the effect size measured in the experiment, you introduce a spurious relationship between the significance level of the test and the measured power.
… but does that mean there is no proper way of assessing power retrospectively? For example, if you refrain from using the measured effect size and instead simulate over a range of “a priori” effect sizes unrelated to the results of the test, shouldn't that dependence of the power on the significance level disappear? (A sketch of what I mean follows this post.)
#stats #statschat @lakens
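A minimal sketch of that kind of sensitivity simulation, assuming a two-sample t-test; the effect-size grid and n_per_group are illustrative choices made independently of the data, which is exactly what keeps observed power's circularity out of the picture:

```python
# Sensitivity-style power analysis: instead of plugging the observed effect
# size back in (classic post-hoc power), sweep a grid of hypothetical
# "a priori" effect sizes chosen independently of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def simulated_power(effect_size, n_per_group, alpha=0.05, n_sims=5000):
    """Monte Carlo power of a two-sample t-test at a given Cohen's d."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

# Hypothetical effect sizes -- not the estimate from the experiment.
for d in [0.1, 0.2, 0.3, 0.5, 0.8]:
    print(f"d = {d:.1f}: power = {simulated_power(d, n_per_group=40):.2f}")
```

Reporting the whole curve (power against assumed effect size) answers “what effects could this design plausibly have detected?” without the circularity of observed power.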

Online #workshop:
Simulation-based power analyses in (generalized) linear mixed models
17.05.2023, 10-12h CEST

The workshop will cover the basics of power analysis, linear mixed models, and why the combination of the two requires a simulation-based approach (a rough sketch of the idea follows after this post).

In my experience, this is a key problem when designing studies in many areas of #HealthSciences and #HRQL research.

Maybe worth a read as well:
link.springer.com/article/10.3
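Not the workshop materials, just a minimal sketch of the recipe the announcement describes (simulate data from the assumed model, fit the mixed model, count significant fits), assuming a random-intercept model fitted with statsmodels; every parameter value here is an illustrative assumption:

```python
# Simulation-based power for a linear mixed model (random intercept):
# (1) simulate from the assumed model, (2) fit, (3) test the fixed effect,
# (4) repeat and average. All parameter values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def lmm_power(n_clusters=30, n_per_cluster=10, beta=0.4,
              sd_intercept=0.5, sd_resid=1.0, alpha=0.05, n_sims=200):
    hits = 0
    for _ in range(n_sims):
        cluster = np.repeat(np.arange(n_clusters), n_per_cluster)
        x = rng.normal(size=cluster.size)
        u = rng.normal(0.0, sd_intercept, n_clusters)[cluster]  # random intercepts
        y = beta * x + u + rng.normal(0.0, sd_resid, cluster.size)
        data = pd.DataFrame({"y": y, "x": x, "cluster": cluster})
        # In practice you would also handle convergence warnings here.
        fit = smf.mixedlm("y ~ x", data, groups=data["cluster"]).fit()
        if fit.pvalues["x"] < alpha:
            hits += 1
    return hits / n_sims

print(f"estimated power for the fixed effect of x: {lmm_power():.2f}")
```

The same skeleton extends to generalized LMMs: swap the data-generating step and the model call, and keep the simulate-fit-count loop.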

Suppose you've designed a study and found a "null" result, and you'd like to argue that you've found evidence against the phenomenon in question producing a large effect. Should you emphasize how statistically powerful your study was?
You can, but there are much better ways to describe the evidence. Provide an upper bound on the plausible magnitude of the effect, given the tests of the theory that your data permit. For example, report a confidence interval (a sketch follows below). [1/3]
#Statistics #StatsTeaching #PowerAnalysis
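A minimal sketch of the suggestion above, with made-up two-group data; the point is that the interval's upper bound, not observed power, tells readers which effect magnitudes the data rule out:

```python
# After a "null" result, bound the plausible effect with a confidence
# interval instead of quoting post-hoc power. Data here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, 50)   # control group (simulated)
b = rng.normal(0.1, 1.0, 50)   # treatment group, small true effect (simulated)

diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
dof = a.size + b.size - 2            # pooled df; fine for equal group sizes
t_crit = stats.t.ppf(0.975, dof)
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"difference = {diff:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
print(f"effects larger than {hi:.2f} are implausible given these data")
```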

It really is fun to objectively choose a level of alpha for NHST using Mudge's optimal alpha. It's so simple with a little simulation and elbow grease. And then you feel kinda boss-level when someone asks you why you did not use 0.05 (a sketch of the calculation follows the link). journals.plos.org/plosone/arti #NHST #PowerAnalysis

journals.plos.org: Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests

Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternate hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
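A minimal sketch of the optimal-α idea from the abstract (find the α minimizing the average of α and β at a critical effect size), worked out for a two-sample t-test; n_per_group and d_crit are illustrative assumptions:

```python
# Mudge et al.'s optimal alpha: pick the alpha that minimizes the average
# of Type I (alpha) and Type II (beta) error at a critical effect size.
# Shown for a two-sided two-sample t-test with equal group sizes.
import numpy as np
from scipy import stats

def avg_error(alpha, d_crit, n_per_group):
    dof = 2 * n_per_group - 2
    nc = d_crit * np.sqrt(n_per_group / 2)     # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, dof)
    # beta = P(fail to reject H0 | true effect d_crit), via the noncentral t
    beta = stats.nct.cdf(t_crit, dof, nc) - stats.nct.cdf(-t_crit, dof, nc)
    return (alpha + beta) / 2

alphas = np.linspace(0.001, 0.3, 600)          # search grid (illustrative)
errors = [avg_error(a, d_crit=0.5, n_per_group=30) for a in alphas]
best = alphas[int(np.argmin(errors))]
print(f"optimal alpha = {best:.3f}, average error = {min(errors):.3f}")
```

Priors or unequal error costs, as the abstract notes, would simply reweight the two terms in the average.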