#standardml


Does *anyone* actually like Moscow ML's insistence that structures live in files with identical names (e.g., `structure Foo_Fabricator` MUST live in the file `Foo_Fabricator.sml` and MUST have a signature with identical name `signature Foo_Fabricator` which itself lives in the file `Foo_Fabricator.sig`)?

I always thought it was idiomatic Standard ML to use SCREAMING_SNAKE_CASE for signature names, which breaks Moscow ML's conventions (unless you put the signature in SCREAMING_SNAKE_CASE-sig.sml and compiled it as a toplevel unit...which kinda breaks the whole module-as-compilation-unit idea?).

And Moscow ML *hates* it when you include the signature and a structure within the *same* file, because that violates its hardcoded assumptions about the world.
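For concreteness, this is the layout Moscow ML seems to want in its structure mode, as far as I understand it (the unit name is from my gripe above; `fabricate` and the compile commands are placeholders from memory):

```sml
(* Foo_Fabricator.sig : exactly one signature, named after the file/unit. *)
signature Foo_Fabricator =
sig
  val fabricate : int -> string
end

(* Foo_Fabricator.sml : exactly one structure, again named after the
   file/unit; Moscow ML matches it against the .sig (opaquely, I believe). *)
structure Foo_Fabricator =
struct
  fun fabricate n = "widget #" ^ Int.toString n
end

(* Compiled unit by unit, with something like
     mosmlc -c Foo_Fabricator.sig
     mosmlc -c Foo_Fabricator.sml
   -- commands from memory; consult the Moscow ML owner's manual. *)
```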

And another thing...

A little gem of an opening salvo from the "Learning Standard ML" handout for the Tufts COMP 105 course:

————
For someone with a background in COMP 11 and COMP 15, the fastest and easiest way to learn Standard ML is to buy Ullman’s book and work through chapters 2, 3, 5, and 6. But many students choose not to buy Ullman—a move that saves money but costs time. You can recover some of the time by reading this guide; it enumerates the most important concepts, and it tells you where to find key information, not just in Ullman, but also in three other sources:

• Jeff Ullman’s "Elements of ML Programming" (ML’97 edition)
• Norman Ramsey’s "Programming Languages: Build, Prove, and Compare"
• Mads Tofte’s “Tips for Computer Scientists on Standard ML" (Revised)
• Bob Harper’s draft "Programming in Standard ML"

Know your sources! Mads Tofte and Bob Harper both worked with Robin Milner on the design of Standard ML, and they helped write the Definition of Standard ML. They know what they’re talking about, and they have good taste—though Tofte’s use of the ML modules is considered idiosyncratic. Norman Ramsey at least knows some functional programming. Jeff Ullman, by contrast, got his start in the theory of formal languages and parsing, then switched to databases. He may like ML, but he doesn’t understand it the way the others do.
————

I don't know any professors who teach at Tufts, but the word on the Web is that Ramsey is the author of this handout. Cheeky blighter!🤣

cs.tufts.edu/comp/105-2017f/re

If you are trying to get HOL4 working on a Raspberry Pi, you need to use MoscowML.

Poly/ML native compilation is not really supported for Raspberry Pi, and its bytecode interpreter performs worse than MoscowML's bytecode interpreter anyways.

I learned this after fruitlessly spending a few frustrating days trying to get Poly/ML to build HOL4; then I gave Mosml a shot, and it worked like a charm.

The things you learn...

The VAX-11/780 was the top-of-the-line 32-bit minicomputer when I entered the field. It occupied a large, climate-controlled room. The one I used had 256 KB of RAM. It was in continuous service from the late 1970s to the early 2000s, so it belongs in the pantheon of "great computers".

ML is universally accepted as the mother of all modern functional languages, including OCaml and Haskell. The earliest ML implementation I used was Cardelli's VAX ML.

Today, we have $0.99 32-bit microcontrollers, which are the size of a thumbnail, but can run rings round the VAX-11 in all aspects of performance and capability. There ought to be bare metal implementations of ML, OCaml, Haskell, etc., for the MCUs.

aliexpress.us/item/32568073098


I'd say this "introduction" to is the most trenchant I've ever read. And I can't find fault in its concise and incisive, if cutting at times, description of both the language and its luminaries. It was written for the COMP105 course at , and it suits its purpose. But I don't recommend it for self-study novices unfamiliar with SML.

• Tofte’s use of the ML modules is considered idiosyncratic. p.1
• Jeff Ullman got his start in the theory of formal languages and parsing, then switched to databases. He may like ML, but he doesn’t understand it. p.1
• For reasons best known to Robin Milner, Standard ML does not use the unary minus sign used in every other known language. Instead, Standard ML uses the ~ as a minus. p.2
• Jeff Ullman doesn’t understand how to program with tuples. His section 2.4.2 should be torn out of your book and shredded. p.3

And so on....🤣

cs.tufts.edu/comp/105-2020f/re
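About the ~ remark above, here's what it looks like in practice (a tiny sketch of my own, not from the handout):

```sml
val a = ~3        (* unary minus is the tilde: the value negative three *)
val b = 5 - 3     (* the hyphen is only ever binary subtraction *)
val c = Int.~ 3   (* ~ is also an ordinary Basis Library function *)

(* val d = -3 *)  (* rejected: an expression cannot begin with the
                     infix operator -  *)
```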

One thing that has piqued my interest is an idea based on Hilbert's programme, as applied to proof assistants.

Suppose I write a proof assistant in Standard ML. Then I use it to prove that some function can be added to the kernel without jeopardizing the soundness of the system. I would like to dynamically extend the running program by compiling this function and adding it to the program.

Hilbert's programme had something like this (as I understand it, they called it the epsilon substitution method), whereby a theorem of "idealized mathematics" that proves a claim about "finitary mathematics" can be "imported" back into the finitary metatheory, even if the finitary metatheory cannot prove it directly.

I think John Harrison called the scheme I've outlined "computational reflection".

It'd be neat to see a toy model of it.
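Here's roughly the toy I have in mind, as a sketch only: a bare-bones LCF-style kernel in which `thm` is abstract, plus a `reflect` entry point that admits a decision procedure's verdicts as theorems. Everything below (the `prop` type, the rules, the name `reflect`) is invented for illustration; in a real system, `reflect` would only be offered after a machine-checked proof that the procedure is sound.

```sml
(* A toy LCF-style kernel: thm values can only be built through the
   rules exported by KERNEL, so soundness is concentrated here. *)
signature KERNEL =
sig
  datatype prop = Tru | Conj of prop * prop
  type thm
  val concl   : thm -> prop
  val truth   : thm                       (* |- Tru *)
  val conjI   : thm * thm -> thm          (* |- p and |- q give |- Conj (p, q) *)
  (* The "reflection" rule: admit the verdicts of a decision procedure
     (one we have supposedly proved sound) as theorems. *)
  val reflect : (prop -> bool) -> prop -> thm option
end

structure Kernel :> KERNEL =
struct
  datatype prop = Tru | Conj of prop * prop
  datatype thm = Thm of prop
  fun concl (Thm p) = p
  val truth = Thm Tru
  fun conjI (Thm p, Thm q) = Thm (Conj (p, q))
  fun reflect decide p = if decide p then SOME (Thm p) else NONE
end

(* A decision procedure for the trivial logic above.  In the scheme I'm
   imagining, we would first prove, inside the assistant, that this
   function answers true only on provable propositions, and only then
   hand it to Kernel.reflect (ideally compiling it into the running image). *)
fun tauto (Kernel.Tru)         = true
  | tauto (Kernel.Conj (p, q)) = tauto p andalso tauto q

val prove   = Kernel.reflect tauto
val example = prove (Kernel.Conj (Kernel.Tru, Kernel.Tru))   (* SOME of a thm *)
```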

Nowadays, my favourite simply-typed language was, is, and will always be Standard ML '97.

To my knowledge, SML is the only modern language that has a formal, standardised definition with a mathematically rigorous semantics, due to Milner et al.

In the 1980s and the 1990s, when academia and industry alike were dizzy with OO, the stewards of SML exercised their judgement with considerable vigour and valour to shun all pretences of OO. This makes the language simpler and lighter, compared to FP-OO hybrid languages like OCaml, F#, and the like.

While SML is by no means a simple language compared to, say, C or Python or JavaScript, it is arguably the simplest among the descendants of ML, especially compared to modern Haskell. By "simple" I mean simple not only to use but also to teach to undergrad students. ML's immense collection of textbooks and academic papers dating back to the early 1970s unburdens the teacher.

Yet, I usually recommend OCaml to my clients who want to transition to FP, especially those clients whose "business" is creating large, robust, adaptable, maintainable software in science and engineering. My reasons are manifold, but the main one is that SML'97 is "dated", whereas OCaml is "modern" in all the venerable senses of that word.

Programming language standardisation is a double-edged blade with a stiletto point. Without standardisation, the semantics of the language is squishy. After standardisation, the language is dead, unless the standard is kept up-to-date, like C, C++, JavaScript, etc. With or without standardisation, language longevity is hazardous, like the piercing point of a stiletto.

In any case, I feel like a traitor to my bloodline.🤦‍♂️

I collated all my random thoughts and comments interleaved with the [revised] 1997 Definition of Standard ML in LaTeX, writing the Definition as a sequence of clauses, grammar rules, definitions, notational conventions, and inference rules.

As I have carved it up, there are:
- 78 clauses
- 16 notational conventions
- 81 definitions
- 10 grammars
- 189 inference rules (it's hard to get this one wrong)

...for a total of 374 items.

I have added 42 remarks, mostly observations that others have made, but there are also a few puzzles (e.g., can we present an LL(1) grammar for Standard ML?) that I have asked as an aside.
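One reason I suspect the LL(1) question has teeth: SML has a dangling-else-style wrinkle in that a match extends as far to the right as possible, so a nested case quietly captures any later branches. A small illustration of my own (not from the Definition):

```sml
fun classify x =
  case x of
      NONE   => "none"
    | SOME n => case n of
                    0 => "zero"
                  | _ => "nonzero"
(* The final "| _ => ..." belongs to the *inner* case; to add another
   branch of the outer case at that point, the inner case expression
   would have to be parenthesised. *)
```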

I still want to digest the static semantics of the Core language, since I do not feel I adequately understand it. It's quite different from what I expected (I hoped it would be like something out of Pierce's TAPL, but found something quite...different...).

Dear #LazyWeb, it seems a lot of the cool things I like about Rust come originally from a programming language called "Standard ML". The sensible choice would be to look into languages one or two steps before Rust, like Haskell and OCaml, but I'd like to play with the original. And there seem to be half a dozen Standard ML compilers to play with!

But as befits a somewhat niche and academic language dating back to 1983, all the compilers are maintained on a best-effort basis by very small teams, and they all seem to have their own quirks and shortcomings.

As somebody who'd like to learn about the language, and therefore somebody who doesn't care much about performance but would like hard-to-misuse tools, which would you recommend?

You know, I've never actually seen a proof that `newtype Void = Void Void` is empty in Haskell. And though we can write similar things in Standard ML (`datatype Void = void of Void`), I haven't seen that proven to be empty either :\

Anyone up for the challenge? I mean, it's obvious, but we should like to know it's true, too.
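For the Standard ML half, at least, the informal argument fits in a few lines. This is just a sketch, reusing the declaration from the post (note its lowercase constructor):

```sml
datatype Void = void of Void

(* Emptiness: every SML datatype value is finite, so a value of Void
   would have to contain a strictly smaller value of Void, and so on
   forever -- ruled out by induction on the size of values.  Hence
   Void has no values at all. *)

(* Ex falso: a total function out of an empty type into any type.
   The single clause is exhaustive, yet the function can never be
   applied, because there is nothing to apply it to.
   absurd : Void -> 'a *)
fun absurd (void v) = absurd v
```

The Haskell half is muddier: under laziness the only inhabitant of `newtype Void = Void Void` is bottom, so "empty" there has to mean "no non-bottom values".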

Anyone else getting tired of #Rust getting the credit for (G)ADTs?

No? Just me?

#ocaml, #haskell, #StandardML, and #scala among others had this long before Rust, and without the complexity of the borrow checker or lifetimes. Rust moves this concept of the type system as a bug-prevention system forward for systems programming, but there is a long history of good compilers (targeting application development, not systems) that did this already (and in a few cases, better!). I am tiring of the narrative that Rust is the "*only* memory-safe language"...
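For anyone who hasn't seen the pre-Rust version, this is all it takes in Standard ML; the `shape` type and `safeDiv` below are throwaway examples of mine, not from any library:

```sml
(* An algebraic data type plus exhaustive pattern matching: the compiler
   complains if a case is missing -- decades before Rust enums. *)
datatype shape
  = Circle of real             (* radius *)
  | Rect   of real * real     (* width, height *)

fun area (Circle r)    = Math.pi * r * r
  | area (Rect (w, h)) = w * h

(* option is the Basis Library's built-in "no nulls" type, akin to
   Rust's Option<T>. *)
fun safeDiv (_, 0) = NONE
  | safeDiv (n, d) = SOME (n div d)
```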

I just published "PFD: purely functional data structures in " on GitHub.

These are Elm reimplementations of the data structures presented in the book "Purely Functional Data Structures" (1999) by Prof. Okasaki. It is the only one of its kind that I am aware of. It is a tour de force of functional thinking. The book includes Standard ML and Haskell implementations.
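As a taste of what "purely functional data structure" means here, this is the book's batched FIFO queue, sketched from memory in Standard ML rather than Elm (the structure and function names are mine):

```sml
(* A batched queue: the front list holds elements in order, the rear
   list holds recent arrivals in reverse.  When the front runs dry, the
   rear is reversed in one go -- O(1) amortised per operation, and not
   a single mutation anywhere. *)
structure BatchedQueue =
struct
  type 'a queue = 'a list * 'a list

  val empty : 'a queue = ([], [])

  (* Restore the invariant: if the front is empty, so is the rear. *)
  fun checkf ([], rear) = (List.rev rear, [])
    | checkf q          = q

  fun snoc ((front, rear), x) = checkf (front, x :: rear)

  fun head (x :: _, _) = SOME x
    | head _           = NONE

  fun tail (_ :: front, rear) = SOME (checkf (front, rear))
    | tail _                  = NONE
end
```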

Okasaki is a descendant of the ML tradition. His PhD advisor at CMU during the mid 1990s was Prof. Harper who wrote "Programming in Standard ML" (2011), contributed to the definition of the language, and was a member of the ML posse, alongside Milner, Tofte, Reppy, MacQueen, et al.

I chose Elm for the following reasons:

• Elm is a pure functional language untainted by OO
• Elm does not yet have a comprehensive data structure library
• Elm evolves at a deliberate pace, without succumbing to the modern CI/CD pipeline pressure
• Elm is one of the simplest languages
• Elm is sane
• These properties make Elm a good candidate for use in education (for teaching FP, data structures, and disciplined web programming), and a comprehensive collection of data structures could be of use in undergraduate education

An unstated, but no less important, reason for my choosing Elm is Python fatigue. I currently use Python at work, and I am also expanding my "CLRS algorithms in Jupyter notebooks" project. I like Python, but many hours of Python a day is deleterious to my sanity. Elm is both the prophylactic and the cure.

Please note that both PFD and CLRS are my solo projects, and they are works in progress that grow incrementally. Both projects aim to help undergraduate students.

github.com/amenzwa/pfd


Purely Functional Data Structures (1999) by Chris Okasaki is a 💎.

All popular algorithms and data structures textbooks, including my favourite, the venerable CLRS, are decidedly imperative. The more a book leans toward the practical, the more blatantly imperative it gets.

But Okasaki's book, the only one of its kind that I'm aware of, is markedly different. It's functional—purely functional. It's written in Standard ML, my favourite language. It also includes Haskell implementations in the appendix. The writing style is typical of Okasaki: clear, concise, cogent.

I never got to teach from it, since it came out well after I escaped academia. But I heartily recommend that serious CS students study it, especially those with an unhealthy FP obsession. Experienced programmers should read it, too. And FP novices must read it.

amazon.com/Purely-Functional-D
