"Palm oil may be the world's most hated product and it’s in everything. But if economics is about anything, it’s about tradeoffs...

the rapid expansion of palm oil exports since 2000 led to 2.7 percentage point faster poverty reduction and 4% faster consumption growth, at the cost of more rapid forest loss and more fire.

A back-of-the-envelope calculation finds that 2.6 of the 10 million Indonesians lifted from poverty this century were lifted because of palm oil"

Worse: We never actually observe anything unconditional, nor anything asymptotic.

Statistical tools are justified by maths. But almost none of the users of those tools understand the maths. Under internalism (the traditional philosophical view that only fully understood beliefs are justified), this is a gigantic crisis.

(It is, but not for this reason.)

Two kinds of highest esteem:

1. You can't see any flaws

2. You love the flaws

Anyone else noticed the difference between Captcha annoyance on Firefox and Chrome?

I'm using a VPN to be fair, but Chrome waves me through with one round while Firefox takes 10. I smell corporate bullshit.

Impromptu holiday in Gatwick. Free food, free entertainment, and I think this is the first day in which I've really seen *all* of the sunlight (been surrounded by giant windows since sunrise, 8am)

Wheeler misattributes the stack (actually Turing 1945) and subroutines (actually Zuse 1945) but that is almost completely beside the point.

Two great lists, which would make a good, maximally efficient curriculum:

'The Most Important Software Innovations' (Babbage to MapReduce)

'Insights required for AI' (Matrices to CycleGANs)

Why care about this? Besides mere trust in one's hardware, or a mere preference not to be watched, it's to do with the increasing tail risk of being in principle vulnerable to one oddball with a vendetta. This risk will grow for two reasons: growth in the online population, and advances in ML-assisted fuzzing and intrusion methods.

Only half of humanity are online at the moment; a single script-kiddie troll can do quite a lot; the internet is about to get bigger, louder, and stranger.

I know a decent amount about

Existential risks
Passive cybersecurity
Data science careers
Python, SciPy, Spark
Numerical methods
Development economics
Nietzsche, Frege, Wittgenstein
Economic history
History of computing
History of philosophy
The replication crisis and social science
Quantified self
Biochemistry (esp. veganism & nootropics)
Analytic philosophy (esp. science, ethics)
Contemporary scifi
UK poetry
Punk, jazz, ambient music

and I like answering questions, AMA.

How many people with "scientist" in their job title do science? (As opposed to knowing science, using science, liking science, talking about science...)

Some Oriental vocabulary for enjoying Ada Palmer's 'Terra Ignota' series even more:

* Xiaoheiwang, 小黑王 (Mandarin): Little Dark Lord.

* Sanling, 三菱 (Mandarin): Three Water Chestnuts, Three Diamonds. (The Mandarin reading of the characters for Mitsubishi.)

* Tai-kun, 太君 (Japanese): Supreme Child OR Supreme Leader (Shogun, tycoon). A good rendition for both meanings would be "great upstart".

Things I have seen marking machine learning take-homes:

- No test split.

- Test split, but forgot to use it.

- An exquisite description of the mathematical and practical motivation behind PCA, followed by them taking as many components as there were original columns.

- Tree depths of 50 on data with 1000 rows (this is saturated, with one row per perverse leaf, after 10 splits).

- Excellent performance in the notebook, which disappears once I rerun it (with the same seed). This was probably cheeky .ipynb file hacking.

- Most people forget to normalise the data for linear regression, making everything else in the analysis nonsense.

- No code reuse, ever.

- Blithe recommendation of multiple comparisons (cheating).

- Editing the response data, instead of binning or rescaling.

- Absolutely everyone uses classifiers on ordinal/interval responses. I seriously want to know why.
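For contrast, a minimal sketch (scikit-learn, my own toy example, not from any submission) of the workflow these take-homes get wrong: make a test split and actually use it, normalise, keep only the PCA components that carry variance, and cap tree depth well below saturation.

```python
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

# Toy interval-valued response, 1000 rows, 20 columns.
X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

# "No test split" / "test split but forgot to use it":
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Normalise before anything variance-based; fit the scaler on train only.
scaler = StandardScaler().fit(X_train)

# Taking as many components as original columns defeats the point of PCA;
# asking for 95% of the variance keeps fewer.
pca = PCA(n_components=0.95).fit(scaler.transform(X_train))

# Depth 50 on 1000 rows is memorisation: 2**10 = 1024 leaves is already
# roughly one row per leaf, so stop well short of depth 10.
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(
    pca.transform(scaler.transform(X_train)), y_train
)

# A regressor (not a classifier) for an interval response,
# scored on the held-out test set.
test_r2 = tree.score(pca.transform(scaler.transform(X_test)), y_test)
```

Rerunning this top to bottom (same seed) reproduces the score, unlike the cheeky hand-edited notebooks.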

"what just happened presents a serious indictment of academic science. There are dozens of academic groups, with researchers likely numbering in the (low) hundreds, working on protein structure prediction. We have been working on this problem for decades, with vast expertise built up, and not insignificant computational resources...

For DeepMind’s ~10 researchers to so thoroughly rout everyone surely demonstrates the structural inefficiency of academic science..."

A Mastodon instance for maths people. The kind of people who make \(\pi z^2 \times a\) jokes.

Use \( and \) for inline LaTeX, and \[ and \] for display mode.
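As a quick, made-up illustration of the two delimiter styles in a post:

```latex
Inline: Euler's identity \(e^{i\pi} + 1 = 0\) sits inside a sentence.

Display mode gets its own line:
\[ \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \]
```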