it's easy to remember grep options if you know what they stand for:
-i: ignore case
-v: vhat the fuck is invert???
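In fairness, both are easy to demo in two lines:

```shell
# -i ignores case; -v inverts the match (prints non-matching lines).
printf 'Needle\nhaystack\n' | grep -i needle    # prints: Needle
printf 'Needle\nhaystack\n' | grep -v Needle    # prints: haystack
```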
Ever wondered where the name "grep" came from, and why? This video explains it perfectly. Just grab a coffee, sit back and enjoy.
You know how you can understand something like Regular Expressions (Grep) well enough to write really complex statements that precisely match the nuanced text patterns you’re hoping to revise, but then 6 months later you look at this RegEx you wrote and can’t parse it at all? Like, literally no idea how it works? As in it-would-be-faster-to-recreate-it-from-scratch-than-to-understand-it-enough-to-change-it-even-slightly?
I feel like that all the time.
Arrived here in early 2025, big up to the @admin of Piaille.fr! #introduction:
Fell into the #OpenSource cauldron back in 2000; I live on #bash commands. Text files, #grep and its lovely #regex, #ansible, #git, #greasemonkey, automated tests and monitoring are your friends.
Bowed, plucked and struck strings, blown, sung or beatboxed sounds, electro or scratched sounds all move me. Nothing beats a good evening spent jamming / recording for a beatmaker / hopping on stage to fill in when an instrumentalist is missing / transcribing whole pieces onto paper the old-school way / improvising with the kids.
Involved in an #AMAP and pro #CNV
Anybody else download like 1000 TikToks and max out their phone's storage and their backup solution? No... yeah... Me neither.
But if someone did, here's a way to solve it quickly on Android devices.
Since #TikTok names all its video files as a 32-character hexadecimal value plus `.mp4`, we can use a little #grep and #regex along with #Termux to sort through the #Android #camera roll and delete all the corresponding files.
https://justinmcafee.com/posts/2025/so-you-downloaded-a-thousand-tiktoks/
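If someone did, the Termux side might look something like this sketch (the demo directory, file names, and the 32-character assumption are mine, not from the post; on the phone you'd run it from the camera roll, e.g. somewhere under /sdcard/DCIM):

```shell
# Demo directory standing in for the camera roll:
mkdir -p demo && cd demo
touch d41d8cd98f00b204e9800998ecf8427e.mp4 IMG_0001.jpg
# List only files whose name is a 32-character hex string plus .mp4:
ls | grep -E '^[0-9a-f]{32}\.mp4$'
# Once the list looks right, delete them (-r: skip rm if the list is empty):
ls | grep -E '^[0-9a-f]{32}\.mp4$' | xargs -r rm --
```

Review the listed files before running the final `rm` line.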
I've mirrored a relatively simple website (redsails.org; it's mostly text, some images) for posterity via #wget. However, I also wanted to grab snapshots of any outlinks (of which there are many, as citations/references). I couldn't find a configuration where wget would do that out of the box without endlessly, recursively spidering the whole internet, so I ended up making a kind of poor man's #ArchiveBox instead:
for i in $(cat others.txt) ; do
  dirname=$(echo "$i" | sha256sum | cut -d' ' -f 1)
  mkdir -p "$dirname"
  wget --span-hosts --page-requisites --convert-links --backup-converted \
    --adjust-extension --tries=5 --warc-file="$dirname/$dirname" \
    --execute robots=off --wait 1 --waitretry 5 --timeout 60 \
    -o "$dirname/wget-$dirname.log" --directory-prefix="$dirname/" "$i"
done
Basically, there's a list of bookmarks^W URLs in others.txt that I grabbed from the initial mirror of the website with some #grep foo. I want to make as good a mirror/snapshot of each specific URL as I can, without spidering/mirroring endlessly all over. So, I hash the URL, and kick off a specific wget job for it that will span hosts, but only for the purposes of making the specific URL as usable locally/offline as possible. I know from experience that this isn't perfect. But... it'll be good enough for my purposes. I'm also stashing a WARC file. Probably a bit overkill, but I figure it might be nice to have.
The #InDesign #GREP engine can only check a fixed number of characters in a #Lookbehind, not an arbitrary length.
For example, if you want to match closing parentheses only when they are preceded by text of arbitrary length, that's not possible.
(?<=\d{3}) (exactly three digits) works.
(?<=\d+) (one or more digits) does not.
Even if you combine alternatives with |, every option must have the same number of characters.
#Lookahead, by contrast, allows variable lengths.
GitHub - yshavit/mdq: like jq but for Markdown: find specific elements in a md doc https://github.com/yshavit/mdq #OpenSource #markdown #GitHub #query #find #grep #jq
[Archive — 2023] How I (almost) wrecked half a day's work
If you've accidentally deleted a text file, a simple grep can save your life. A story that starts very badly but ends very well, thanks to GNU/Linux!
Read the article: https://studios.ptilouk.net/superflu-riteurnz/blog/2023-02-09_recovery.html
The best-of book: https://editions.ptilouk.net/gb10ans
Support: https://ptilouk.net/#soutien
#archive #technique #grep #dev #récupération #GNUlinux
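The trick in question, roughly (the device name is illustrative, raw-device reads need root, and the phrase is whatever you remember from the lost file; demonstrated here on a small disk-image stand-in):

```shell
# On a real system you would scan the raw partition, e.g.:
#   grep -a -B 2 -A 100 'phrase you remember' /dev/sda1 > recovered.txt
# -a treats the binary stream as text; -B/-A keep surrounding context.

# Same idea on a disk-image stand-in:
printf 'garbage\000garbage\nphrase you remember\nrest of the lost file\n' > image.bin
grep -a -A 100 'phrase you remember' image.bin > recovered.txt
```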
... Oh, that reminds me: avoid #grep too, due to terrible speed issues: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254763 ; https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=223553 .
Am partial to "#ugrep(1)" https://github.com/Genivia/ugrep (port: https://www.freshports.org/textproc/ugrep/ ).
(Sure, GNU grep, "ggrep(1)", is better too if you want to use the same/similar-enough options as the native grep. So is a search via "perl(1)": perl -n -e 'm{regex} and print'.)
One day the regex library (of #FreeBSD) [cw]ould be better. One could also die while holding one's breath for that day to arrive.
2/2
@holger grep, awk and sed. Available on pretty much every distribution. Not leaking info, either.
Honestly, steep learning curves, but once you're even slightly proficient you can find all the needles in all the haystacks you want, neatly.
Add in jq for conversion to JSON for a more transportable format.
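Something like this hypothetical pipeline (the sample data is made up) shows the idea: pull a column out with awk, then wrap the raw lines into a JSON array with jq:

```shell
# Extract the first column with awk, then turn the raw lines into a
# JSON array with jq (-R reads raw strings, -s slurps them into an array).
printf 'alice 42\nbob 7\n' | awk '{print $1}' | jq -R . | jq -s .
```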
Can you even imagine life without the grep command???
#Linux #CommandLine #grep
Protip: If you use `grep -v " H " file.pdb` to get rid of the hydrogen atoms, you might end up wondering where your protein's chain H disappeared to.
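A sketch of a safer filter, assuming the standard PDB fixed-column layout (element symbol in columns 77-78, chain identifier in column 22); the two sample records here are minimal stand-ins, not real data:

```shell
# Two stand-in records, one nitrogen and one hydrogen, with the element
# symbol padded out to columns 77-78 as in a real PDB file:
printf 'ATOM%72s N\nATOM%72s H\n' '' '' > file.pdb
# Keep every record whose element field is not " H" (hydrogen),
# leaving chain identifiers in column 22 untouched.
awk 'substr($0, 77, 2) != " H"' file.pdb > no_hydrogens.pdb
```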
Using grep to find lines with uppercase text in files?
Command: grep -h '[[:upper:]]' dirlist*.txt
Results: GET, HEAD, NF, POST, VGAuthService. Powerful and efficient! #Linux #grep #CommandLineTips
Need to search for patterns in text files?
This `grep` command with a character set (`[bg]zip`) looks for lines containing 'bzip' or 'gzip' across files matching `dirlist*.txt`.
Effortless pattern matching!
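Presumably something like this (the dirlist files here are stand-ins, since the originals aren't shown):

```shell
# Stand-in input files:
printf 'bzip2\ngzip\nxz\n' > dirlist1.txt
printf 'libbzip2\ntar\n' > dirlist2.txt
# [bg]zip matches either 'bzip' or 'gzip'; -h hides filename prefixes.
grep -h '[bg]zip' dirlist*.txt
```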
You can use the `--color` option to highlight matched portions with GNU grep. But what if you need to do so with sed and awk?
https://learnbyexample.github.io/coloring-matched-portions-grep-sed-awk/
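For instance, one common approach with GNU sed (not necessarily the article's; the \xHH escapes are a GNU extension) is to wrap each match in ANSI color codes:

```shell
# Wrap every match in ANSI red; '&' stands for the matched text,
# \x1b[31m starts red output and \x1b[0m resets it.
printf 'foo bar foo\n' | sed 's/foo/\x1b[31m&\x1b[0m/g'
```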
#TIL about the #column command and I can't believe I missed this gem since... forever?
My previous ways of doing such things were #AWK(-ward) scribbles that I gathered over time... I think I can ditch a lot of them now.
Thank you so much @vkc for making me smarter <3
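For anyone else who missed it: `column -t` turns whitespace-separated rows into an aligned table (sample data made up):

```shell
# Align whitespace-separated fields into neat columns:
printf 'NAME SIZE\nreport.txt 12K\nnotes.md 3K\n' | column -t
```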