#cache


Being greeted by the PC in the morning with the words "Session cannot be started" can safely be taken as a false start to the day.

Was my clean-up operation on Wednesday a bit too radical after all? But #pacman couldn't run a system update because the partition had no space left. I must have looked thoroughly baffled. After the clean-up almost 50 % was free again, so the rest was just temporary #Cache junk. Well, almost: #i3wm somehow got run over in the process. But everything is running again now.
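(For reference, and not necessarily what was run above: on an Arch-style system the package cache can be inspected and trimmed with commands along these lines; paccache ships in the pacman-contrib package.)

» du -sh /var/cache/pacman/pkg/   # show how much space the package cache takes
» sudo paccache -rk2              # keep only the two newest cached versions of each package
» sudo pacman -Sc                 # drop cached packages that are no longer installed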

🐘 Mastodon Account Archives 🐘

TL;DR Sometimes Mastodon account backup archives fail to download via the browser, but will do so via fetch with some flags in the terminal. YMMV.

the following are notes from recent efforts to get around browser errors while downloading an account archive link.

yes, surely most will not encounter this issue, and that's fine. there's no need to add a "works fine for me" if this doesn't apply to your situation, and that's fine too. however, if one does encounter browser errors (there were several unique ones and I don't feel like digging them out of the logs), the notes below may help.

moving on... after some experimentation with discarding the majority of the URL's dynamic parameters, I have it working on the cli as follows:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

the primary download URL (everything before the query initiator "?") has been substituted as ${URL_PRE_QMARK}, and then I only included Amazon's algorithm parameter; the rest of the URL (especially the "expire" parameters) seems to be unnecessary.

IIRC the reasoning is that the CDN defaults to computationally inexpensive front-line cache management, where the expiry information is embedded in the URL rather than looked up from metrics internal to the CDN clusters.

shorter version: dropping all of the params except the hash algo will initiate a fresh, zero-cached hit at the edge, though that has likely been cached on the second, non-edge layer due to my incessant requests after giving up on the browser downloads.

increasing the buffer size and forcing IPv4 help with some firewall rules on my router's side, which may or may not be of benefit to others.

- Archive directory aspect of URL: https://${SERVER}/${MASTO_DIR}/backups/dumps/${TRIPLE_LAYER_SUBDIRS}/original/
- Archive filename: archive-${FILE_DATE}-${SHA384_HASH}.zip

Command:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

Verbose output:

resolving server address: ${SERVER}:443
SSL options: 86004850
Peer verification enabled
Using OpenSSL default CA cert file and path
Verify hostname
TLSv1.3 connection established using TLS_AES_256_GCM_SHA384
Certificate subject: /CN=${SERVER}
Certificate issuer: /C=US/O=Let's Encrypt/CN=E5
requesting ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256
remote size / mtime: ${FILE_SIZE} / 1742465117
archive-${FILE_DATE}-${SHA384_HASH}.zip 96 MB 2518 kBps 40s
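(Side note, not from the original attempt: assuming this is FreeBSD's fetch(1), a roughly equivalent curl invocation might look like the line below. The flag mapping is a guess at mirroring the options above, i.e. IPv4 only, resume, retries, rather than something verified against the same endpoint.)

» curl -4 -L -C - --retry 3 -o archive-${FILE_DATE}-${SHA384_HASH}.zip "${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256"

Here -4 forces IPv4, -L follows redirects, -C - resumes a partial download, and --retry 3 retries transient failures.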

@stefano looks to be working now :)

Ugh, damn. I just discovered that the cache plugin is breaking the map display on my blog.

When I'm logged in, I can see all the POI markers on the map and the route's elevation profile, and I can download the GPX file; without logging in there's only the map with the route drawn on it.

I'll have to tinker with it again, or turn caching off entirely, since it doesn't protect the blog from the FediDDoS anyway and all the other traffic is negligible.

🎤 Drupal Developer Days Leuven 2025: Speaker Spotlight Series 🎤

Join Kristiaan Van den Eynde at #DrupalDevDays this April to learn about common caching mistakes.

💡 This session aims to inform developers about possible pitfalls, helping them avoid making common mistakes and providing some information along the way as to why these mistakes are so common and how they can mess with your site.

🎟️ Register now to secure your spot: drupalcamp.be/en/drupal-dev-da

#DDD25 #Drupal #Cache

🎤 Drupal Developer Days Leuven 2025: Speaker Spotlight Series 🎤

👉 Most people who run a decent-sized Drupal website have probably heard of Varnish.
👉 It comes with some VCL code that might seem confusing at first.
👉 There are some Drupal modules you need to install and configure to invalidate the cache. But how does it work?

Join Thijs Feryn this #DrupalDevDays to learn about Varnish and its features.

🎟️ drupalcamp.be/en/drupal-dev-da
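(A rough sketch of the invalidation mechanism the session covers, not taken from the talk itself: Drupal's purge stack typically sends tag-based BAN requests to Varnish over HTTP, and the VCL decides how to act on them. The method, header name, host, and port below are assumptions that depend on your own VCL and purger configuration.)

» curl -X BAN -H "Purge-Cache-Tags: node:42" http://my-varnish-host:6081/

Varnish records the ban and invalidates matching cached objects the next time they are looked up.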

👑 Cache is King: Smart Page Eviction with eBPF

arxiv.org/abs/2502.02750

The page cache is a central part of an OS. It reduces repeated accesses to storage by deciding which pages to retain in memory. As a result, the page cache has a significant impact on the performance of many applications. However, its one-size-fits-all eviction policy performs poorly in many workloads. While the systems community has experimented with a plethora of new and adaptive eviction policies in non-OS settings (e.g., key-value stores, CDNs), it is very difficult to implement such policies in the page cache, due to the complexity of modifying kernel code. To address these shortcomings, we design a novel eBPF-based framework for the Linux page cache, called $\texttt{cachebpf}$, that allows developers to customize the page cache without modifying the kernel. $\texttt{cachebpf}$ enables applications to customize the page cache policy for their specific needs, while also ensuring that different applications' policies do not interfere with each other and preserving the page cache's ability to share memory across different processes. We demonstrate the flexibility of $\texttt{cachebpf}$'s interface by using it to implement several eviction policies. Our evaluation shows that it is indeed beneficial for applications to customize the page cache to match their workloads' unique properties, and that they can achieve up to 70% higher throughput and 58% lower tail latency.
#linux #kernel #cache

@wyatt I think part of it might be that newer processors provide instructions to help block one process from reading the memory of another through speculation, branch prediction, and cache behavior. You could block cross-process "snooPING AS usual" on an older processor by invalidating cache on every context switch, but then you'd lose the constraint that you called "performant" (a word I'm having trouble accepting as valid).
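(If it helps to see which of those mitigations a given machine actually has in effect, on Linux the kernel reports them under sysfs; the exact vulnerability names and mitigation strings vary by CPU and kernel version.)

» grep . /sys/devices/system/cpu/vulnerabilities/*

Each line names an issue (spectre_v1, spectre_v2, meltdown, ...) and states whether a hardware or kernel mitigation is active for it.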