The world's loudest Lisp program to the rescue (blog.funcall.org)
troad 2 days ago [-]
This is a really cool story!

Perhaps a slight segue, but I recently tried to learn CL for the first time and I was genuinely surprised by all the decades of accumulated cruft (mainly masses of semi-redundant and soft-deprecated standard library functions, with bizarre names). The way people talk about Lisp, I'd expected something more elegant. I suppose I should try something like Scheme or Racket, but it's hard to find an introduction to those that isn't bone dry. (Recommendations welcome!)

I've also heard people say reading Lisp functions, inside out, ensconced (heya) in their parentheses, is somehow more comprehensible than sequential C style, but this state of enlightenment thus far eludes me. I can only speak for myself, but I definitely reason about code outside in rather than inside out.

hickelpickle 2 days ago [-]
The Little Schemer is good; some people hate it, some people love it. But it is a fairly light read that slowly teaches a little syntax at a time, questions you about your assumptions, then reveals the information as it goes on. It would be the least dry read. There is also Sketchy Scheme for a more thorough text, or even the R7RS standard; both are pretty dry but short.

What made me appreciate scheme was watching some of the SICP lectures (https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54...) and the little schemer to learn more. I also read some of the SICP along with it, though I put it down due to not having the time to work through it.

Scheme is interesting and toying with recursion is fun, but the path I mentioned above is only really enjoyable if you are looking to toy around with CS concepts and recursion. You can do a lot more in modern Scheme as well, and you can build anything out of CL. But learning the basics of Scheme/Lisp can be pretty dry if you are just looking to build something right away, like you already can in a traditional imperative language. It is interesting, though, if you want a different perspective. Even R7RS Scheme is still far from the batteries included you get with CL.

I personally found the most enjoyment using Kawa Scheme, which is JVM-based, and using it for scripting Java programs, as it has great interop. I used it in a game backend's event system to emit events during development and to script behaviors. I've also used it for configuration with a graphical terminal app: I hooked into the ASCII display/table libraries, then used Kawa to configure the tables/outputs and how to format the data.

troad 2 days ago [-]
Interesting, thank you!

I suppose what draws me to Lisp is that insight people say it gives them on programming. I already do much of my programming in functional style, so I'm trying to discover what it is about Lisp that's so beloved above and beyond that - I'm gathering it's a mix of recursion and the pleasantness of being able to get 'inside' the program, so to speak, with a REPL?

I must also admit that I tend to run into a bit of a roadblock over Lisp's apparent view that programming is, or should be, or should look like, maths. I cut my teeth on assembly, so for me programming isn't maths, but giving instructions to silicon, where that silicon is only somewhat loosely based on maths. It tends to make me bounce off Lisp resources which by Chapter 2 are trying to show the advantages of Lisp by implementing some arcane algorithm with tail-end recursion.* But I'm very open to being persuaded I'm missing the bigger picture here, hence my ongoing effort to grok Lisp.

(*Isn't tail-end recursion just an obfuscated goto?)

Tevo 2 days ago [-]
>recursion

I think one of the reasons recursion is often emphasized in relation to Lisp is that one of Lisp's core data structures, the linked list, can be defined inductively, and thus lends itself well to transformations expressed recursively (since they follow the structure of the data to the letter). But recursion in itself isn't particularly special. It is more general than loops, so it is nice to have some grasp of it and of how recursion and iteration relate to each other, and it is often easier to reason about a problem in terms of a base case and a recursive case than in terms of a loop. Still, at a higher level you will usually come to find bare recursion mostly counterproductive. You want to abstract it out, such that you can compose your data transformations out of higher-level operations which you can pick and match at will, APL-style. Think reductions, onto which you build mappings and filters and groupings and scans and whichever odd transformations one could devise, at which point recursion isn't much more than an implementation detail. This is about collections, but anything inductive would follow a similar pattern. Most functional languages will edge you towards the latter, and I find Lisp won't particularly, unless you actively seek it out (though Clojure encourages it most explicitly, if you consider that a Lisp).
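The contrast can be sketched in a few lines of Common Lisp (REDUCE and MAPCAR are standard; the SUM-SQUARES names are just for illustration):

```lisp
;; Bare recursion follows the inductive structure of the list directly:
(defun sum-squares (xs)
  (if (null xs)
      0
      (+ (* (car xs) (car xs))
         (sum-squares (cdr xs)))))

;; The same transformation composed from higher-level operations;
;; here recursion is just an implementation detail of REDUCE/MAPCAR:
(defun sum-squares* (xs)
  (reduce #'+ (mapcar (lambda (x) (* x x)) xs) :initial-value 0))

;; (sum-squares  '(1 2 3)) => 14
;; (sum-squares* '(1 2 3)) => 14
```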

>the pleasantness of being able to get 'inside' the program

Indeed, that's one of the things that makes Common Lisp in particular great (and it is something other contemporary dialects seem to miss, to varying degrees). It lets you sit within your program and sculpt it from the inside, in a Smalltalk sort of way, and the whole language is designed towards that. Pervasive late binding means redefining mostly anything takes effect pretty much immediately, without having to recompile or reload anything else that depends on it. The object system specifies things such as class redefinition, instance morphing, dependencies, and so on, such that you can start with a simple class definition, then go on to interactively add or remove slots, or play with the inheritance chain, and have all of the existing instances just do the right thing, most of the time. Many provided functions that let you poke and prod the state of your image don't make much sense outside of an interactive environment.

There is a point to be made about abstraction, maths, and giving instructions to silicon (and metaprogramming!), but I'll have to pass for now. I apologize if this is too rambly, I tend to get verbose when tired.

lispm 2 days ago [-]
> I think one of the reasons recursion is often emphasized in relation to Lisp is because one of Lisp's core data structures, the linked list, can be defined inductively

Lisp was used in computer science education to teach "recursion". We are not talking about software engineering, but learning new ways to think about programming. That can be seen in SICP, which is not a Lisp/Scheme text, but a computer science education book, teaching students ways to think, from the basics upwards.

Personally I would not use recursion everywhere in programs, unless the recursive solution is somewhat easier to think about. Typically I would use a higher-order function or some extended loop construct.

troad 2 days ago [-]
Not at all too rambly, very interesting, thank you. Your answer makes intuitive sense to me; I'll ponder over it.
pfdietz 1 days ago [-]
It's important to distinguish between Common Lisp and Scheme. The two approaches have diverged considerably, with different emphasis. The aspects you describe in your third paragraph there are more Scheme than Common Lisp.
lispm 2 days ago [-]
There are a bunch of things to learn from Lisp:

* list processing -> model data as lists and process those

* list processing applied to Lisp -> model programs as lists and process those -> EVAL and COMPILE

* EVAL, the interpreter as a Lisp program

* write programs to process programs -> code generators, macros, ...

* write programs in a more declarative way -> a code generator transforms the description into working code -> embedded domain specific language

* interactive software development -> bottom up programming, prototyping, interactive error handling, evolving programs, ...

and so on...

The pioneering things of Lisp from the late 50s / early 60s: list processing, automatic memory management (garbage collection), symbol expressions, programming with recursive procedures, higher-order procedures, interactive development with a Read-Eval-Print Loop, the EVAL interpreter for Lisp in Lisp, the compiler for Lisp in Lisp, native code generation and code loading, saving/starting program state (the "image"), macros for code transformations, embedded languages, ...

That was a lot of stuff, which has found its way into many languages and is now part of what many people use. Example: garbage collection is now naturally a part of infrastructure like .NET, or of languages like Java and JavaScript. It had its roots in Lisp, because the need arose to process dynamic lists in complex programs, getting rid of the burden of manual memory management. Lisp got a mark & sweep garbage collector. That's why we say Lisp was not invented but discovered.

Similarly, the first Lisp source interpreter. John McCarthy came up with the idea of EVAL, but thought of it only as a mathematical idea. His team picked up the idea and implemented it. The result was the first Lisp source interpreter. Alan Kay said about this: "Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were 'Maxwell's Equations of Software'!" EVAL is the E in REPL.

Then Lisp had s-expressions (symbol expressions -> nested lists of "atoms"), which could be read (R) and printed.

This is the "REP" part of the REPL. Looping it was easy, then.

People then hooked up Lisp to early terminals. In 1963 a 17-year-old kid ( https://de.wikipedia.org/wiki/L_Peter_Deutsch ) wrote a Lisp interpreter and attached it to a terminal: the interactive REPL.

A really good, but large, book to teach the larger picture of Lisp programming is PAIP, Paradigms of Artificial Intelligence Programming, Case Studies in Common Lisp by Peter Norvig ( -> https://github.com/norvig/paip-lisp ).

A beginner/mid-level book, for people with some programming experience, on the practical side is: PCL, Practical Common Lisp by Peter Seibel ( -> https://gigamonkeys.com/book/ )

Both are available online at no cost.

MarceColl 2 days ago [-]
Common Lisp is not a functional programming language by most current definitions of the term. It's as procedural as they come; libraries on top then build other paradigms.

Scheme tends to approach things in a more math-like way, while Common Lisp is less academic and more practical.

cess11 2 days ago [-]
You might already be aware, but there is a DISASSEMBLE function in the CL spec: http://clhs.lisp.se/Body/f_disass.htm

The details are implementation and platform dependent, but on e.g. SBCL someone who understands assembly could use this to dig into what the compiler does and tune their functions.
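A rough sketch of what that looks like (the ADD1 function is made up for illustration; the printed listing is left open by the standard and varies by implementation and platform):

```lisp
;; DISASSEMBLE accepts a function object or name; CLHS deliberately
;; leaves the output format implementation-dependent.
(defun add1 (x)
  (declare (type fixnum x)
           (optimize (speed 3) (safety 0)))
  (the fixnum (1+ x)))

;; On SBCL this prints the native code generated for ADD1; adding or
;; removing the declarations above visibly changes the emitted
;; instructions, which is handy for tuning hot functions.
(disassemble #'add1)
```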

I was also drawn in on the promise of insight, but I'm not so sure that's what I got out of it. What keeps me hooked is more the ease with which I can study somewhat advanced programming and computer science topics. There have been aha moments for sure, like when, many moons ago, it clicked how an object and a closure can be considered very, very similar and serve pretty much the same purpose in an application. But it's the unhinged amount of power and flexibility that keeps me interested.

Give me three days and I would most likely fail horribly at inventing a concurrency library in Java even though it's one of the languages that pays my bills, but with Common Lisp or Racket I would probably have something to show. As someone who hasn't spent any time studying these things at uni (my subjects were theology and law) I find these languages and the tooling they provide awesome. It's not uncommon that I prototype in them and then transfer parts of it back to the algolians, which these days usually have somewhat primitive or clumsy implementations of parts of the functional languages.

I think the reason why tail call optimisation crops up in introductory material is because it makes succinct recursive functions viable in practice. Without it the application would explode on sufficiently large inputs, while TCO allows streaming data of unknown, theoretically unlimited, size. Things like while and for are kind of special, somewhat limited, cases of recursion, and getting fluent with recursive functions means you can craft your own looping structures that fit the problem precisely. Though in CL you also have the LOOP macro, which is a small programming language in itself.
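For instance, a minimal Common Lisp sketch of the idea (note that the CL standard does not require tail-call elimination; Scheme does, and implementations like SBCL typically perform it under default optimization settings):

```lisp
;; The recursive call is the last thing SUM-TO does, so an
;; implementation that eliminates tail calls compiles it to a loop
;; with constant stack usage; without TCO, a large N blows the stack.
(defun sum-to (n acc)
  (if (zerop n)
      acc
      (sum-to (1- n) (+ n acc))))

;; (sum-to 1000000 0) => 500000500000

;; The equivalent with CL's LOOP macro, a small language in itself:
;; (loop for i from 1 to 1000000 sum i)
```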

sourcepluck 1 days ago [-]
"Algolian" is a lovely word. Does it come from somewhere, or was it yourself off the cuff?
cess11 1 days ago [-]
'C-like language' has irked me for decades, since C was one of the first languages I learned and most languages that expression refers to are nothing like C, so when I came across lispers referring to Algol-like or Algol-descendants I took it a step further.

A web search tells me it's already in use in Star Trek.

kqr 2 days ago [-]
> I've also heard people say reading Lisp functions, inside out, ensconced (heya) in their parentheses, is somehow more comprehensible than sequential C style, but this state of enlightenment thus far eludes me. I can only speak for myself, but I definitely reason about code outside in rather than inside out.

Based on my years tutoring university students in various programming languages throughout their courses, I suspect some of this is personal preference that's set before one starts programming.

Some people who start with C-style languages find Lisp-style languages more intuitive, while some people who start with Lisp-style languages breathe a sigh of relief when they discover C-style languages. I haven't found any predictor of this ahead of time – as far as I can tell it's just something one discovers as one tries different languages.

pjc50 2 days ago [-]
As I posted on the "cognition" lexer thread, some users prefer "left handed scissors". Just as it is not equally intuitive for everyone to write with a particular hand, it is not equally intuitive for everyone to program with a particular language.
lispm 2 days ago [-]
One thing to keep in mind when you see the language, which evolved over several decades: it has low-level (go to, ...), mid-level (macros, ...) and high-level (CLOS + MOP) elements in one language. A reason for that: the low-level parts are code-generation building blocks for the higher-level parts. Example: the SERIES library (a higher-level way to think about loops and sequences) uses macros (mid-level) to transform code into efficient loops (-> low-level): https://github.com/rtoy/cl-series

So one reason for all this functionality is: the language is its own compilation target. One is not supposed to write all the code, but we can write code which writes the lower-level code.
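A toy illustration of that layering, with a hypothetical SUM-OVER macro (mid-level) expanding a declarative description into a plain DOLIST loop (low-level):

```lisp
;; SUM-OVER lets the caller write a declarative description; the macro
;; generates the lower-level looping code so nobody writes it by hand.
(defmacro sum-over ((var list) &body body)
  (let ((acc (gensym "ACC")))
    `(let ((,acc 0))
       (dolist (,var ,list ,acc)
         (incf ,acc (progn ,@body))))))

;; (sum-over (x '(1 2 3)) (* x x)) => 14
;; Expands into a LET + DOLIST + INCF: the language compiling to itself.
```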

mark_l_watson 1 days ago [-]
I kind of like the decades of accumulated cruft and I like that Common Lisp code I wrote decades ago still works as-is. More modern conveniences like Quicklisp make everything simple and a mellow experience to use.

All that said, I also strongly recommend Racket. I just use a small part of the Racket ecosystem: compiler, REPL, Emacs bindings, and sometimes DrRacket. The library and packaging system is simple to get used to and use.

Sorry for going off topic: I need to use Python a lot for my own research and projects, and I find Python tricky. Don’t ask me how often I blow away my Python installation, and occasionally even switch between project specific virtual environments and different miniconda environments. Yuck.

Jach 2 days ago [-]
Have you seen https://stevelosh.com/blog/2018/08/a-road-to-common-lisp/ ? "Kludges" everywhere is applicable. On the other hand, having a function like "row-major-aref" that allows accessing any multi-dimensional array as if it were one dimensional is "sweeter than the honeycomb".

I still think CL code can be beautiful. Norvig's in PAIP https://github.com/norvig/paip-lisp is nice.

As for the inside-out remark, while technically you do it for even basic syntax, you don't always exactly have to, and it's very convenient to not do. Clojure has its semi-famous arrow macro that lets you write things in a more sequential style, it exists in CL too, and there's always the venerable let* binding. e.g. 3 options:

    (loop (print (eval (read))))
    (-> (read) (eval) (print) (loop))
    (loop
      (let* ((r (read))
             (e (eval r)))
        (print e)))
And even the first one isn't that bad to read. For the really annoying cases, like a lot of arithmetic, just use the infix reader macro from the 90s rather than complaining about how the quadratic formula is harder to read in prefix notation.
superdisk 2 days ago [-]
As someone else said, Scheme is the one that warrants the "pure, elegant" reputation. CL is full of crazy features and weird functions but it's because it's basically the continued lineage of the original Lisp from 1960. It can even run those old 60s vintage programs with minimal tweaks.

https://web.archive.org/web/20150217111426/http://www.inform...

If you just want to play with macros and learn what makes the Lisp thing so special I'd recommend Clojure, it's like a stripped down CL with only functional features, and it's extremely nice and ergonomic.

adonovan 2 days ago [-]
> I'd expected something more elegant.

Common Lisp is sort of the union of all dialects of Lisp, and some might say of all possible programming language paradigms. Scheme is more like the intersection of dialects, and is thus closer to the platonic ideal of Lisp. If you've never seen any dialect of Lisp before, Scheme may be a better introduction to the flavor as it's much easier to learn.

agumonkey 1 days ago [-]
I'm part of the (lisp) crowd; I always had trouble with, and anger toward, C, Java (and even Ada, for its syntactic choices), and when I got into lisps it felt like finding home. There's less of what I don't like (side effects, syntax) and more of what I like (value-oriented, composability, principled, interactive, tree/recursive thinking, the ability to customize your language more).

A 'cleaner' starting point might be clojure (lisp on top of the JVM). Rich Hickey tried to make it short and principled, leveraging interfaces for polymorphism. schemes are cool too.

Some cruft in lisp I still like, like car/cdr. In Clojure they're named first/rest, obviously more obvious, yet I miss using car and cdr to walk/deconstruct structures. There's something timeless about them.

kazinator 2 days ago [-]
How do you know the functions are "semi-redundant" and "soft-deprecated", if you've not worked in this before?
troad 2 days ago [-]
Clever question, but a boring answer - the learning resource I was using simply said so (in fact, several did!). It seems common for learning resources to somewhat apologetically explain that Common Lisp has many functions with similar names but subtly different behaviours (e.g. (=) v (eq) v (eql) v (equal) v (equalp) v (string-equal)), before telling you which ones are in vogue.
kazinator 2 days ago [-]
There is no "vogue" about it. Those functions test for different equalities. It's not the case that any of them are new versions of the others.

(eq x y) tests whether x and y are the same object. It is very fast because typically all it has to do is do a machine word comparison, and that comparison is conclusive. (When it is false, there is no more work to do to rule out sameness.)

If x and y are the same number, of the same type, eq is not required to report true, even if they are the same number that you might think fits into a machine word (though on most implementations that situation will be eq). ANSI Lisp allows for implementations in which small, identical integer values like 7 are not necessarily eq. A Lisp implementation in which all numbers are "boxed" quantities on the heap could be conforming to ANSI.

The eql function is like eq, but the same characters and same numbers of the same type are required to be equal under eql. For all other objects it is like eq. eql might be implemented in terms of doing an eq comparison, and then when that is false, doing more work to rule out sameness of numbers and characters.

Common Lisp's hash tables can use different equalities; that is where it can make a bigger difference. Objects in an eq hash table can use a very simple, fast hashing function. Whereas objects in an eql hash table have to be hashed in such a way that equal integer or floating-point values have the same hash. A value that is a pointer to a boxed bignum integer has to be dereferenced to access some of the value bits of that object.

There are some exotic functions in Lisp that you might never end up using, particularly in the list processing area like pairlis or revappend and whatnot.
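For reference, a quick tour of how the family behaves (all of these results follow the ANSI spec except EQ on numbers, which is implementation-dependent as described above):

```lisp
(eq 'foo 'foo)                   ; => T   (same symbol, same object)
(eql 1.0 1.0)                    ; => T   (same number, same type)
(eql 1 1.0)                      ; => NIL (different numeric types)
(eq    (list 1 2) (list 1 2))    ; => NIL (two distinct cons cells)
(equal (list 1 2) (list 1 2))    ; => T   (structural, recursive)
(equal  "ABC" "abc")             ; => NIL (EQUAL is case-sensitive)
(equalp "ABC" "abc")             ; => T   (EQUALP is case-insensitive)
(= 1 1.0)                        ; => T   (numeric equality across types)
```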

troad 2 days ago [-]
Yeah, I am aware of the differences. I hesitated over specifying examples, because I worried someone would try to explain them rather than engage with my overall point.

My overall point is two-fold. (1) There is nothing in the function name that would indicate the difference between, say, (eq) and (eql) and (equal), any more than you could guess what the difference between (colorise), (clrise) and (clrs) would be on sight. Lisp seems to love doing this. To me, it just seems like an obvious source of very painful bugs (works as expected 99% of the time and then it doesn't).

(2) In many (most?) day-to-day cases, the distinction isn't material, and for those the ecosystem is going to end up preferring one more than the others (which may change over time - hence, 'in vogue'). The references I've seen tend to suggest sticking to (eq) and (equal), for instance, and avoiding (eql) unless you have specific need of it.

For a language that is reputed to be elegant and beautiful, this - well, isn't that. Hence my initial surprise.

kgwgk 2 days ago [-]
> (1) There is nothing in the function name that would indicate the difference between, say, (eq) and (eql) and (equal), any more than you could guess what the difference between (colorise), (clrise) and (clrs) would be on sight. Lisp seems to love doing this. To me, it just seems like an obvious source of subtle bugs.

What alternative would you suggest?

Giving paragraph-length names to the functions? Common Lisp already has some of the more descriptive function names around and they can be inconvenient if the function is used often. (By the way, there is something that helps to remember what those functions do: shorter names are correlated with more primitive and restrictive checks.)

Having a single function that performs different checks depending on a parameter? If you don’t know which function to use you don’t know which parameter to use either.

Removing the ability to perform different kinds of checks entirely?

pfdietz 1 days ago [-]
I'd suggest changing the name from SORT to NSORT, just for consistency and bug avoidance. :) But it's water under the bridge and one can patch this (or, really, any gripe about function names) by suitable package fu, at the cost of making your code less readable to others.
kazinator 1 days ago [-]
I did exactly that in TXR Lisp, after initially following the Common Lisp naming. In the same change, I also made a function called shuffle nondestructive, and introduced nshuffle.

It's a backwards compat breakage, but there is a mechanism in the language which helps with such situations.

  txr -C 237
or setting the TXR_COMPAT environment variable to 237 or lower will restore the destructive behavior of sort and shuffle. People who don't want to fix their code can deploy that way.

I hate compatibility breaking changes, but it was a real thorn in my side how sort is destructive without any indication in its name, in a language where destructive versions of functions should be separately named.

Now the thing is that sort is not required to be destructive. It is allowed to be. Therefore code written to the language spec will not break if sort becomes pure. But we have to worry about breaking all programs, not just programs written to the spec. (Unless we are GCC maintainers; different rules apply.) You can't just say, "Oh, programs that call sort on a vector and ignore the return value, expecting the vector to be sorted, are just nonportable junk; let them break."

pfdietz 1 days ago [-]
In general, the "N" versions of functions in the standard say they "maybe" or "might" modify arguments. SORT and NSORT should have been like this, but IIRC SORT was taken from Interlisp, where it was destructive.
kgwgk 1 days ago [-]
I agree that there are some inconsistencies in the naming of functions and parameters like destructive/non-destructive variants and predicates among other things. Also strange names like car and cdr (which on the other hand allow for cadr et al.). However, even though they can be confusing, I wouldn’t put the equality functions in the same bag.
doctor_eval 2 days ago [-]
I think troad is trying to explain how they feel about learning CL and what they've discovered. They aren't criticising CL. Of course naming things over a long period of time is a complex problem, that's why they acknowledged that there is a lot of cruft from the 60s.

I've found their comments, and the constructive responses to them, interesting because I am interested in Lisp but have never learned it.

What I don't find interesting is seeing troad's observations strongly challenged, as you are doing, as if somehow by sharing their observations, they now owe us detailed solutions as well.

It's not cool.

kgwgk 2 days ago [-]
Fair enough. In fact I agree with the broad point that having similarly named functions that do different things is a source of bugs, and that CL is a huge language with less-than-perfect names. My point is that having fewer functions that do fewer things does not obviously reduce the number of bugs in the end, if those things still need to be done. Anyway, I see how a logical argument may be out of place if the thread is understood as sharing feelings rather than challenging claims.
lispm 1 days ago [-]
Lisp was developed at a time when space was more limited. Generally operators had short names. Page 147 of the Lisp I manual from 1960 has a function listing: https://bitsavers.org/pdf/mit/rle_lisp/LISP_I_Programmers_Ma...

Generally in Common Lisp we have EQ, EQL, EQUAL, EQUALP -> from the most specific to the most general. The most specific has the shortest name: https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node74.html

EQL is actually often used, since it is also the default equality test in Common Lisp. For example, the keyword TEST argument defaults to EQL. That information is slightly hidden. Example: (find 1 '(2 3 2 1 3)) is the same as (find 1 '(2 3 2 1 3) :test 'eql), since EQL is the default test. One can pass a different equality test predicate if needed.
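Concretely, with standard sequence functions (return values per the spec):

```lisp
;; FIND's default :TEST is EQL, so a structurally equal list is
;; not found (two distinct lists are never EQL):
(find '(1 2) '((0 1) (1 2)) :test #'eql)    ; => NIL
;; Passing a different predicate makes it succeed:
(find '(1 2) '((0 1) (1 2)) :test #'equal)  ; => (1 2)
;; Same idea for case-insensitive string search:
(find "foo" '("FOO" "bar") :test #'string-equal) ; => "FOO"
```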

Some background about naming and equality operators in Lisp: http://www.nhplace.com/kent/PS/EQUAL.html

Other languages for example use some of =, ==, ===, is, ... what may the difference be?

There were many different naming conventions in Lisp over the decades. One was, as mentioned above, that the more specific/primitive operators have shorter names. Later (70s/80s), Lisp got a lot more operators, and at some point operators with long names were introduced. This was not liked by everyone, since one needed to type longer names; completion was often new then (or slow).

For example, CLOS was added at the end of the 80s. Thus it uses names like UPDATE-INSTANCE-FOR-DIFFERENT-CLASS, instead of UPDINSTC ;-) .

Generally, a good Lisp IDE should make getting the documentation (and source code) for any operator easy.

kgwgk 2 days ago [-]
> (2) In many (most?) day-to-day cases, the distinction isn't material, and for those the ecosystem is going to end up preferring one more than the others (which may change over time - hence, 'in vogue'). The references I've seen tend to suggest sticking to (eq) and (equal), for instance, and avoiding (eql) unless you have specific need of it.

I guess they suggest two instead of one because in most cases the difference is material. And I imagine that they also suggest using = to compare numbers.

When the distinction doesn’t matter one could just use eq. I’m not sure that this covers most uses though.

kazinator 1 days ago [-]
If you want to test whether x is 1.0 or 1, then (= x 1) is what you want. But = is specific to numbers and blows up on non-numbers.

The family of functions whose names begin with eq are special in that they are applicable to all objects; equal will tell you that "abc" is not 1 without complaining.

kgwgk 1 days ago [-]
But - with the exception of equalp - they may fail to identify as equal numbers that are equal under =. That's why I suggested using = to compare numbers specifically.
kazinator 1 days ago [-]
> rather than engage with my overall point

So you'd want someone to engage your overall point that there are redundant and deprecated functions, while the specific ones you actually have in mind are not redundant or deprecated (and you know this)? And for that reason, it would be good to keep the specifics out of the discussion? Okay ...

> There is nothing in the function name

Yes there is: the names get longer with increasing complexity of comparison.

The common eq prefix puts them into a family.

> avoiding (eql) unless you have specific need of it.

eql is the default value of the :test argument in numerous library functions.

Rather, you should avoid eq unless you're optimizing. Rarely do you want to compare objects in such a way that (<compare> 12345 12345) might be false!

It is idiomatic, though, to use eq in code that manipulates symbols, like (if (eq arg 'foo) ...).

If one of the arguments of an equality function is a constant, it's possible to pick the strongest equality function which goes with that constant's type. All equality functions reduce to eq when symbols are compared.

> Lisp seems to love doing this.

Many operator names in Lisp are short mnemonics. It is like this in older languages, or older parts of languages. Short identifiers keep programs short.

Short, mnemonic names are part of the elegance of traditional Lisps.

It's not just old versus new; for instance in some very new languages, we see a trend of shortening define to def, or function to fn and such.

My TXR Lisp is much newer than Common Lisp, but I shortened some things. Instead of stable-sort, I have ssort. Or instead of symbol-macrolet, I have symacrolet. In Common Lisp, like in many languages, there is a "layer" of newer identifiers that are longer, often hyphenated compounds. You can tell that define-symbol-macro is newer than defun. However, someone making a new language which is inspired by an existing one can challenge those decisions, going by what identifiers are often used, rather than going by the chronological order in which they were introduced in the inspiring language. Symbol macros were introduced in Common Lisp for, I think, supporting with-slots. But symbol macros turn out to be important, deserving short names.
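A tiny sketch of why symbol macros earn their keep (DEFSTRUCT and SYMBOL-MACROLET are standard Common Lisp; the POINT/MOVE-RIGHT example is made up):

```lisp
;; SYMBOL-MACROLET makes a bare symbol expand into an accessor form;
;; WITH-SLOTS is essentially built on this mechanism.
(defstruct point x y)

(defun move-right (p dx)
  (symbol-macrolet ((x (point-x p)))
    ;; SETF of the symbol macro expands to SETF of (point-x p):
    (setf x (+ x dx)))
  p)

;; (point-x (move-right (make-point :x 1 :y 0) 5)) => 6
```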

Why ssort rather than stable-sort? Part of the reason is that I made a change of introducing nsort as the in-place sort, making sort pure. That then establishes the idea that we have a one letter modifier. The s for stable becomes another one. And in fact, I have one more: c for caching. There are all eight combinations, in the order c, s, n: in other words c?s?n?sort. Thus csnort ("cee snort") is the caching, stable, destructive sort. I don't want to punish the programmer and reader of the code with caching-stable-destructive-sort. This is not Java.

You can't cram meaning into the spelling of every identifier, by making it out of a string of English words, because that makes things comically verbose.

Languages don't do that. You can't guess what "water" means, if you're new to English, and your native language doesn't have a cognate like "wasser". You just have to learn the vocabulary word. Some words are compounds of other words, others aren't. Only some compounds have compositional meanings, bearing out the obvious guess.

taeric 1 days ago [-]
I always find it odd when I see how hard modern languages move away from mnemonic style. And I share what I feel is your amusement that some people think there is a universal guessability to some symbolic terms. It is all learned, and it is convenient when it leverages other learning. There is nothing really universal, though. As much as that would be convenient.
brabel 2 days ago [-]
CL does have some weird stuff; after all, it comes from the 1960s LISP tradition. But once you get past the basic weirdness, it's quite a wonderful language.

> I can only speak for myself, but I definitely reason about code outside in rather than inside out.

You can indent code to make it much easier to "parse", and with some macros that turn the code inside out, it's more readable than most other languages.

The CL cookbook is an excellent resource, and this page links to several other excellent resources and books you can read for free online: https://lispcookbook.github.io/cl-cookbook/

The "new docs" also present the documentation in a "modern" looking way (rather than the 90's looks of what you get if you Google around): https://lisp-docs.github.io/cl-language-reference/

About other Lisps...

The Racket Guide is definitely not "bone-dry": https://docs.racket-lang.org/guide/intro.html

It is well written and looks very beautiful to me.

On another Scheme, I find Guile docs also great: https://www.gnu.org/software/guile/manual/html_node/index.ht...

They may be a bit more "dry" but they're to the point and very readable! In fact, I think Lisp languages tend to have great documentation. The guy who wrote it is an excellent writer (he has written Racket books which are equally great) and I believe is the author of the Racket docs tool!

hcarvalhoalves 1 days ago [-]
There's cruft, but there's also decades long backwards compatibility and feature completeness.

Sitting down and doing actual work instead of fighting immature runtimes and toolchains shouldn't be remarkable, but unfortunately that fight is pervasive in this industry. In addition, nothing I've tried thus far comes even close to the experience of the CL debugger.

db48x 2 days ago [-]
The funny names all have history. They had history even at the time when Common Lisp was standardized.
troad 2 days ago [-]
No doubt! I look forward to learning it in due course, but it's not exactly penetrable for a newcomer, particularly amidst a sea of parentheses.
bmacho 1 days ago [-]
> I suppose I should try something like Scheme or Racket, but it's hard to find an introduction to those that isn't bone dry. (Recommendations welcome!)

Use it as a tool, instead of an end goal. Your end goal can be the SICP book, HtDP book, leetcode, pet project. Or literally whatever that you like doing.

anthk 2 days ago [-]
On Common Lisp: I loaded a nearly 30-year-old Eliza chatbot written in CL, and it ran almost unmodified under SBCL, with just one error to skip past:

https://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas...

       sbcl --load eliza.lsp
       (top-level)
       (hello how are you)
Do not use punctuation. Use (goodbye) to exit.

To a Unix user like me, SBCL/CL looks a bit bloaty and non-Unix, but I have to acknowledge that CL and Emacs' Elisp have a great history of compatibility and ease of use thanks to homoiconicity. In plain English: everything is handled in the same way everywhere. The syntax will be the same for every function.

mark_l_watson 2 days ago [-]
Great writeup! I am a long time user and fan of Common Lisp, and this is one of the more interesting use cases I have seen!
emptybits 1 days ago [-]
"Long time user and fan" is an understatement. Thank you, again, Mark. I was re-reading your Loving Common Lisp book just an hour ago!

I read elsewhere that you're also a Racket user. I'm curious ... aside from CL legacy code requirements, do you view Racket (the language and/or ecosystem) as a smart long term choice going forward with lisp projects?

mark_l_watson 1 days ago [-]
Thank you for the kind words!

I would choose one or the other for most of your Lisp dev. I have been just an occasional user of Racket forever, but in the last few years I have really been enjoying the language and the minimal tools I use for Racket dev. I also feel happy using Common Lisp, so it is difficult for me to make a definitive statement on preference.

Racket and LispWorks Pro have portable UI libraries which is nice. I evaluated both Racket and LispWorks Pro for making standalone apps and they are both pretty good.

varjag 2 days ago [-]
Thank you Mark! There are blessed and cursed projects out there, and this one has definitely been the former.
varjag 2 days ago [-]
Author here, if you have any questions.
db48x 2 days ago [-]
What does the evacuation alarm actually sound like? Does it reuse any of the sounds mentioned in the Tronstad study, or did you come up with your own?
varjag 2 days ago [-]
It is a bell sound, as the sister comment points out. We found that a multitude of sounds work with negligible difference in perception. The bell, however, was consistently voted the most comfortable in the post-trial questionnaire.
KennyBlanken 2 days ago [-]
db48x 1 days ago [-]
I might have guessed that there would be a youtube video! Thanks :)
guenthert 2 days ago [-]
Given that this is a safety-critical application, are condition/restarts being used? If so, what is your take on the value of those and can an example of restarts be listed? If not, have they been considered and if, can you share the reason not to use them?
varjag 2 days ago [-]
We certainly do use both. For example in communication we process socket layer conditions and remote operation results together to synthesize Evacsound's own nomenclature of conditions in distributed operation terms. They are then re-signaled and can be handled by a small set of our wrapper macros and constructs.

Our process/tasks abstraction naturally also uses conditions to handle the lifecycle.

As for restarts you can see their invocations in the last code snippet in the article.

fellerts 2 days ago [-]
What are the tunnels strung with (physical layer) that allows a 10 km+ network to work reliably?
varjag 2 days ago [-]
Lots of single mode fiber in redundant loops. In longer tunnels you'd have several technical rooms along that hold the loop ends into L3 switches. Within the loop you have emergency cabinets spaced 125m. Apart from fire extinguishers and emergency phones (often also our products!) they contain some PLCs and L2 switches that distribute signal and PoE to end point devices such as Evacsound or traffic cameras.
darnthenuggets 1 days ago [-]
Were any other languages in contention here, or was it a “use what you know” kind of situation? As much as I would also like to be paid to write lisp, I couldn’t help but notice that a lot of the scarier problems solved were reasons others had chosen/built erlang.

Great post, cool system!

varjag 1 days ago [-]
Yes it was some of that certainly. I could gauge roughly what kind of effort the problem would take, what kind of constraints it would have to run with and select among the tools I am comfortable with. I read about Erlang but never did anything with it. From what I know though it would not necessarily be better at this job than CL and learning another language on the go was not in the plans.

But isn't it the deal with all programming language choices? Ultimately the only true programming language is machine code, the rest are just abstractions for our wetware's benefit.

mtreis86 2 days ago [-]
How was working with posix threads? I've only dug into SBCL's various thread tools
varjag 2 days ago [-]
Fortunately it was uneventful, as the idiom is the same as in any other programming language that supports them. We used the bordeaux-threads package for portability across implementations.
cies 2 days ago [-]
varjag 2 days ago [-]
Sorry about that, should be up again!
justneedaname 2 days ago [-]
This reminds me of the very first project that I worked on, a warning system for rail trackside workers. There had been numerous case studies of near misses, injuries and even fatalities.

The system currently in place at the time was, unbelievably, two people (in the case of a bi-directional line) stood downline within earshot of the main crew. When they saw a train approaching they would blow whistles and wave a flag, the workers would then move out of the way until the train passed. Yeah I also couldn't believe that such an archaic system was still in use - this was in 2019 mind.

The company in charge of managing the railway lines reached out to our company and a few others to have us tender on a new design to help protect workers and reduce near misses. Our research led us to an existing system developed by a company in Switzerland which we essentially planned to modify for our national network, as there are differences in how railway lines are signalled across different regions. It consisted of units that could be placed periodically downline of the work site and would alarm when a train was approaching by use of real-time train location data.

The main issue we faced though was how to ensure an accurate reading that gave enough time to vacate the line whilst not being excessive, as research suggested workers may believe it to be a false positive if nothing approached after a couple of minutes. To understand why this is a difficult problem it first helps to understand how traffic within a railway line is managed.

The railway network is split into what are known as blocks: discrete sections of track separated by axle counters. Without a train, the two sections of track are electronically separate; when a train passes over, a circuit is completed and the train's position can now be known to that exact location at that exact time. However these readings are discrete, with the resolution of the train's position only being as good as the number of axle counters present on the line. This results in some tricky estimating of when "impact" will happen. Train speed is another metric that can be used in conjunction, but again you only know the speed read at the last axle counter; anything could have happened between the last reading and "now".

In the end our solution was to assume maximum line speed and warn when this would be within 30s of the worksite. We created a demo that worked flawlessly and the client was visibly impressed. However they then wanted a proposal, cost and everything else for the next phase within 2 weeks - so we had to pull out as we weren't able to produce it. This was a real shame for me as I look back on this project with fond memories, one of the few projects where we were essentially left to figure it out. Already in my short career (<5y) I've been fortunate enough to work on some interesting projects and gain interesting stories to tell...
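The worst-case rule described above (assume maximum line speed since the last axle counter, warn when the earliest possible arrival is within 30 seconds) can be sketched roughly like this — a hypothetical illustration, not the actual system:

```lisp
;; Earliest possible arrival time at the worksite, assuming the train
;; has been travelling at maximum line speed since its last known fix.
(defun seconds-to-worksite (distance-to-worksite-m max-speed-m/s
                            seconds-since-last-counter)
  (let ((remaining (- distance-to-worksite-m
                      (* max-speed-m/s seconds-since-last-counter))))
    ;; The train may already be past the counter gap; clamp at zero.
    (/ (max remaining 0) max-speed-m/s)))

(defun warn-p (distance-m max-speed-m/s elapsed-s)
  ;; Alarm when worst-case arrival is within the 30-second window.
  (<= (seconds-to-worksite distance-m max-speed-m/s elapsed-s) 30))
```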

worthless-trash 2 days ago [-]
I would love to read more on these topics. I keep getting told that lisp isn't "used anymore" (Even though I actively do).
nemoniac 2 days ago [-]
Is it really established wisdom that multiple inheritance might be an anti-pattern? Anyone care to elaborate?
pfdietz 2 days ago [-]
A nice pattern from Common Lisp is to inherit the parts of an object from different superclasses. Method combination means one can write methods for those superclasses and then have them automatically combined in a subclass.

Example: if one has tree nodes with various slots that represent children and you want to write a tree traversal function, you put each slot in a superclass, inherit from those superclasses in the correct order, and then write a method for each superclass that calls the child at that slot. The methods are combined in the right order automatically in a PROGN method combination.
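A minimal sketch of that pattern (class and function names are made up for illustration): each child slot lives in its own superclass, the traversal generic function uses the PROGN method combination, and the class precedence list fixes the visiting order automatically.

```lisp
;; A PROGN-combined generic function runs ALL applicable methods,
;; most-specific first, instead of just the most specific one.
(defgeneric visit-children (node fn)
  (:method-combination progn))

(defclass has-left ()  ((left  :initarg :left  :reader node-left)))
(defclass has-right () ((right :initarg :right :reader node-right)))

(defmethod visit-children progn ((node has-left) fn)
  (funcall fn (node-left node)))

(defmethod visit-children progn ((node has-right) fn)
  (funcall fn (node-right node)))

;; Inheriting from both superclasses combines the two methods
;; automatically, in class precedence order.
(defclass binary-node (has-left has-right) ())
```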

nvy 2 days ago [-]
Isn't it the ambiguity of the Diamond Problem? Suppose B and C are both children of A, and D is a child of both B and C.

If B and C both have methods foo(), which gets called when you do d.foo()?

Seems like a real footgun requiring extra effort to avoid.

phoe-krk 2 days ago [-]
In CL's solution, the order of superclasses matters and resolves the ambiguity. If D is defined as (defclass d (b c) ...) then the method specialized on B is called; with (defclass d (c b) ...) it's the one on C.
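Concretely, the diamond from the parent comment plays out like this in CL:

```lisp
(defclass a () ())
(defclass b (a) ())
(defclass c (a) ())
(defclass d (b c) ())   ; B precedes C in D's class precedence list

(defgeneric foo (x))
(defmethod foo ((x b)) 'b-method)
(defmethod foo ((x c)) 'c-method)

;; Both methods are applicable, but B is more specific for D,
;; so the B method wins. With (defclass d (c b) ()) it's C-METHOD.
(foo (make-instance 'd))  ; => B-METHOD
```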
jerf 2 days ago [-]
In this case the problem becomes that while one can define a 100% consistent, coherent order for the compiler to use, understanding what will happen when you call a method of a particular name, and what that resolution will do as the code is refactored and changed over time, exceeds what a human can reasonably be expected to keep track of.

Really, all the problems with multiple inheritance are that the humans can't handle the complexity that results. The compilers can be made to do "something" that is arguably sensible.

Jach 2 days ago [-]
Fortunately in Lisp the compiler is available at runtime!

I mean, it's just not that bad. I believe the commercial Lisp IDEs will just show you relevant info much like, say, Java IDEs, but even with a free Lisp you can still ask for it so you don't actually need to wonder what will happen as you're looking at a line. You just ask. The worst part of Lisp vs. C++ on multiple inheritance, I think, where it can be more confusing for Lisp is that Lisp will just overwrite slots (fields) sharing the same name, whereas C++ will shadow them. On the other hand methods aren't owned by individual classes in Lisp, so you get multiple dispatch by default. Lastly the presence of :before / :after / :around methods, combined with multiple dispatch, make it pretty straightforward to achieve behaviors through mixins that require pretty complex contortions otherwise in C++. (Or Java.) The behavior of those "auxiliary methods" is straightforward to reason about. All :before methods run before the most specific primary method, in most-specific-first order, and all :after methods run after the least specific primary method, in least-specific-first order.

I'm probably going to convince some people otherwise by giving some more specifics, but as a minor example, consider a silly "game object" style class. I can always ask any class (e.g. an asteroid), hey, what's your class precedence list? (closer-mop:class-precedence-list (find-class 'asteroid)) returns a list of class objects: asteroid, game-object, sprite, add-groups-mixin, cleaned-on-kill-mixin, standard-object, slot-object, and T. From the source code where the defclass is, only game-object is shown. If you look at game-object, only sprite and the two mixins are shown as an example of multiple inheritance.

I don't need to call that function to get the info either, it's readily available by calling 'describe on the class. (I think even free editors like Lem or emacs can be configured to automatically show the description of things if you hover over them, I just type ,s in vim.) The description includes the same class precedence list info, tells me the direct superclasses, any subclasses, direct slots (fields directly defined on the class), inherited slots...

If I'm wondering what could happen if I call #'kill on an asteroid before I actually call it, I can ask with the built-in 'compute-applicable-methods function or 'closer-mop:compute-applicable-methods-using-classes, and it will show me the applicable methods are firstly the primary defined method, then an :after method due to the mixin.

I can also compute the actual effective method that will be called with 'closer-mop:compute-effective-method. For something like #'kill, it shows what happens first is the primary #'kill method, then the :after method. For something like #'draw, let's say I overwrote the base implementation, now it shows there's just one method call, with the potential for the next base class method if the specialized method happens to use 'call-next-method.

So in summary, the tools exist in various forms to wrestle the complexity and make it amenable to human understanding. Just like with tools such as cross-referencing, they help understand and create bigger systems, we don't have to limit ourselves to what can easily be done with physical code printouts and hand-made indexes.
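The "just ask the image" workflow above looks roughly like this (class names are illustrative; CLASS-PRECEDENCE-LIST comes from the MOP, reached portably through the closer-mop library):

```lisp
;; Full linearized ancestry of a class, most specific first.
;; Built-in classes are already finalized, so this works directly.
(closer-mop:class-precedence-list (find-class 'integer))

;; Standard CL: which methods would run for these exact arguments,
;; in the order they would be considered?
(compute-applicable-methods #'print-object
                            (list 42 *standard-output*))

;; DESCRIBE prints the precedence list, direct superclasses,
;; direct and inherited slots, and so on, for any class.
(describe (find-class 'standard-object))
```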

pfdietz 2 days ago [-]
And sometimes more than one method is invoked, using a sophisticated method combination infrastructure.
phoe-krk 2 days ago [-]
Right, I assumed the default method combination, and also the simplest case of it with no around/before/after methods being defined... Golly, CL object system is complicated, now that I look at it from this perspective.
tmtvl 2 days ago [-]
It's simple when you want it, and powerful when you need it.
mark_undoio 2 days ago [-]
I think the implementation in C++ put it out of fashion, as later languages (e.g. Java) deliberately restricted it to avoid the complexity. The main criticism I saw was the potential for (variants of) the "diamond" where A is subclassed by B and C, then both of those are subclassed by D. Does D get two copies of A's state? It's hard to come up with an intuitive behaviour.

More recently, the move seems to be away from class based object orientation (including inheritance) entirely.

On the other side of things, I've never heard people talk about Python's multiple inheritance with the same tone used for C++ - but then there are cultural differences in the language communities too.

bitwize 2 days ago [-]
> Does D get two copies of A's state? It's hard to come up with an intuitive behaviour.

C++ gonna C++, which means the language covers all the bases because some programmer might get mad if their use case wasn't accounted for.

C++ has something called virtual inheritance, wherein if subclasses B and C inherit virtually from A, any subclasses of both B and C will get one copy of A's state. Otherwise, they will get two copies: one from B and one from C.

This solves the problem of addressing concerns of all programmers w.r.t. the diamond inheritance problem, but makes the language more complex (and triggers my CPPPTSD).

https://en.wikipedia.org/wiki/Virtual_inheritance

pfdietz 1 days ago [-]
In this vein, note that Common Lisp's inheritance would always be virtual.
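That is, a CLOS diamond always yields a single copy of the shared superclass's state, with no opt-in keyword — a small sketch with made-up names:

```lisp
(defclass a () ((state :initform 0 :accessor state)))
(defclass b (a) ())
(defclass c (a) ())
(defclass d (b c) ())   ; diamond: D reaches A via both B and C

;; D has exactly one STATE slot, shared by both inheritance paths,
;; equivalent to C++ virtual inheritance by default.
(let ((obj (make-instance 'd)))
  (incf (state obj))
  (state obj))          ; => 1, a single slot
```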
fiddlerwoaroof 2 days ago [-]
Something I’ve found interesting is that most widely-used class-based inheritance languages eventually added multiple inheritance of implementations back in: PHP added traits that can contain method implementations; Java added default implementations on interfaces; etc.
lmm 2 days ago [-]
The famous "super considered harmful" post ( https://fuhm.net/super-harmful/ ) pointed out the key problem with diamonds, and it's mainly a problem with constructors. Allowing mixins that can have method implementations but only allowing one class parent with a constructor is a pretty good spot in the design space, and is what a lot of languages have converged on.
fiddlerwoaroof 2 days ago [-]
I like CL’s solution to constructors which is basically “specialize this generic function (SHARED-INITIALIZE or INITIALIZE-INSTANCE) with an :AFTER method”. You reliably run all the initialization code for each class involved and you don’t have to remember to call CALL-NEXT-METHOD (CL’s spelling of super)

Edit: I see that post refers to Dylan, which is more like CL than python in the important ways. IMO, sleeping on CL’s object system CLOS was a huge mistake of the “Java/C++ era” of our industry.
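The idiom reads roughly like this (class names are hypothetical): every class in the hierarchy contributes an :AFTER method on INITIALIZE-INSTANCE, and all of them run, least-specific first, with no explicit super call.

```lisp
(defclass connection ()
  ((socket :accessor conn-socket)))

(defclass logged-connection (connection)
  ((log-stream :accessor conn-log)))

;; The CONNECTION :after method runs first (it is less specific),
;; then the LOGGED-CONNECTION one -- no CALL-NEXT-METHOD required.
(defmethod initialize-instance :after ((c connection) &key)
  (setf (conn-socket c) :opened))

(defmethod initialize-instance :after ((c logged-connection) &key)
  (setf (conn-log c) *standard-output*))

(make-instance 'logged-connection)  ; both :AFTER methods run automatically
```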

jolt42 2 days ago [-]
Meh. Probably a reaction to getting "burned" by it. But show me something you can't get burned by.
copx 2 days ago [-]
90s-style Java OOP showed everyone that heavy use of multiple inheritance is the worst thing since 80s-style BASIC, where every third line was a GOTO.

Imagine one class inheriting from 50 other classes through multiple inheritance..

People really used to construct classes like:

"Iron Sword inherits from Iron which inherits from Metal which inherits from Meltable (which inherits from Temperature) and Material. But of course it also inherits from Sword which inherits from Weapon and Edged. Meanwhile Weapon inherits from Equipment which inherits from Ownable and Item which.." and so on.

Basically you make every aspect and attribute of an entity a class and then create your entity's class by mushing together all those classes through multiple inheritance. The results are..not pretty.

Such code quickly becomes very hard to comprehend and maintain.

bitwize 2 days ago [-]
90s Java didn't do that because Java doesn't support multiple class inheritance.

90s C++, however, did.

Funny you should cite a game example. I once read about how the developers of StarCraft[0] ran into the same Goddamn inheritance problems I did when trying to build a custom game engine and a game with that engine. Adding behaviors via inheritance seemed like a good idea at the time (mid-late 90s), especially given all the propaganda we read from our C++ compiler manuals and such. But it turned into a situation where you either accepted multiple inheritance with all of its complexity and suck, including "which of the multiple base classes that implement 'foo' do I want when I call derived::foo()?" -- or resorting to delegates or other methods of composing behavior.

Me, for gaming, I became an ECS convert and haven't looked back. There are some pain points when writing a game in ECS style... but the advantages pay for the relatively minor pain many times over.

[0] https://www.codeofhonor.com/blog/tough-times-on-the-road-to-...

CFlingy is a particle spawner. Why does that have to be in the inheritance chain, instead of a trait you add to an object?

hprotagonist 2 days ago [-]
lll-o-lll 2 days ago [-]
Why “canonical”? From what I can see, Entity-Component-System (ECS) long pre-dates this blog series by Eric, and he doesn’t even reference the term.

I did enjoy the read however! My own programming has evolved towards data oriented design over the years.

fargle 8 hours ago [-]
wow! that is a great reference.

i've only found it submitted a few times and only with comments here: https://news.ycombinator.com/item?id=10567360.

not a lot of comments and i suspect somewhat missing the point because this submission started with part 5, which intentionally is only part of the series exposing the pros/cons, limitations, etc. of various approaches.

Jtsummers 2 days ago [-]
90s Java did not have multiple inheritance (nor does today's Java). It did have multiple interfaces, but they only carried a spec of the interface and no implementation details beyond that. C++ was the one with multiple inheritance, if you are trying to reference a popular 90s OO language.
anthk 2 days ago [-]
OOP works fine for a text adventure, such as Inform6 targeting the Z-Machine; the rooms->objects gameplay model is a perfect fit. For everything else... well... maybe just CLOS is usable enough.
cess11 2 days ago [-]
The MUD-family of games are usually built in a C-like OOP-language, LPC. I think it's rather nice.
anthk 2 days ago [-]
Under Inform6 the inheritance and OOP features are literally that: objects have attributes, and you can create in-game objects (rooms are objects too) which are instances of defined ones. Such as always-'lighted' rooms, or a furniture class made by defining an object as 'scenery' (you can't take it).

That's an elegant example of coding in Inform6, which compiles to the Z-Machine, but overall I wouldn't use OOP outside gaming.

cess11 1 days ago [-]
Rather similar, then.

I've come across GUI and a database where I thought object orientation was nice, and I'm also fond of contemporary Smalltalk-like languages. I've made peace with Java, but if I have a choice I'll be in something Lisp-like or logic programming. Racket, Elixir, Scryer, that sort of thing.

anthk 1 days ago [-]
Indeed, yes. You can create a text adventure with very little logic in place, by setting the winning flag when a few conditions match. Everything else is predefined with objects and attributes. But, OFC, some small logic is added for realism, such as a TV showing messages upon entering a room, and so on. Compared to any other language, Inform6 makes that almost like editing a config file.
mikepurvis 2 days ago [-]
Yup. No amount of generated documentation or static analysis can make up for the cognitive load required to reason about where a particular method is actually being dispatched to under those conditions.