Transcript for October 2, 2021 - Why Tacit? 

Thanks to Rodrigo Girão Serrão for providing the transcript.

00:00:00 [BT]

So I think earlier on Stephen was talking about how he likes to get all this hard work done at the beginning, so that towards the end of the day you can sort of do emails and things like that. I think this episode has been that way. We front loaded an awful lot of information.

And now we're just coasting.

00:00:20 [ST]

Are we still recording?

00:00:22 [Music Theme]

00:00:33 [CH]

Welcome to another episode of Array Cast. I'm your host Conor, and today I'm here with Bob, Rich, and Stephen, and we're going to go around and do quick introductions.

Then we have a short announcement, and then we'll hop into today's episode topics. So we'll start with Bob.

00:00:46 [BT]

I am Bob Therriault. I am a J enthusiast. I program in J, although not professionally, and I've been doing that for about 20 years. And oh boy, there's lots to talk about with J today.

00:00:57 [ST]

I'm Stephen Taylor. I've been an APL developer for decades, and these days I'm the Kx librarian.

00:01:05 [RP]

I'm Rich Park. I work for Dyalog and these days I teach APL to people sometimes.

00:01:12 [CH]

And as mentioned before, my name is Conor. I'm a professional C++ developer and a huge array programming language and paradigm advocate and, you know, fan, as those that follow me on Twitter know. So yeah, I'll throw it to Rich, who's got a short announcement, and then we'll hop into today's episode.

00:01:32 [RP]

So, the days blur for me these days, but I think in about six weeks' time, on the 8th and 9th of November, the Dyalog '21 user meeting is happening. It's going to be online again this year, as it was last year, and hopefully, with the state of affairs coming to some more normalcy, next year it will be in person again. But it's online, although I think you will have to register in advance. Hopefully by the time this goes up, if not shortly thereafter, you will be able to do so.

00:02:07 [CH]

Yeah, definitely, we'll put all the links in the show notes for those that are interested. I attended both APL Seeds and the virtual Dyalog user meeting last year and it was awesome. You get to hang out with fellow nerds and learn more about, you know, the array language paradigm and specifically Dyalog APL, but sometimes there are general talks that apply to array languages as a whole. With that said, I think we are going to hop into today's topics. So today's episode is a continuation of, I believe it was two episodes ago: about a month ago we recorded part one of the tacit programming conversation, shall we call it. And we said at the end of that that we had barely even scratched the surface, and that we were probably going to have part two, part three, and, you know, maybe it'll just become an ongoing series where we continue to talk about the pros, the cons, the differences between different languages like APL, J, BQN, k, et cetera. So I guess, as Bob alluded to, there was a little bit of news on the J reflector, or sort of email list, so maybe we can start there. I'm not sure how relevant it is specifically to tacit programming; I'll just throw it to Bob and then we'll go from there and see where we end up.

00:03:26 [BT]

A couple of episodes ago we interviewed Henry Rich, and during that interview we talked about the way J used to be. In the past there was a whole series of trains that you could do, using modifiers. Verbs work on nouns in J, so nouns would be your arguments, and conjunctions and adverbs are what other languages call operators: their arguments are verbs, so they affect verbs. So just as you would expect, a conjunction affects a verb, or an adverb affects a verb. That's how J works. But until now you've been very limited as to what you could do to tacitly define conjunctions or adverbs.

Now, in the far distant past, about 18 or 20 years ago, you actually could define those tacitly, so you wouldn't have to use an argument to change what a conjunction, or a combination of conjunctions and adverbs and verbs, did. This all sounds very confusing, and that's why they took it out. They took that out and said, OK, if you really want to do this stuff, you're going to have to do it explicitly, and explicit means you identify where the arguments come in. So in the case of a conjunction applied to verbs, u and v would be the variables that stand for its operands. You'd say how you wanted them to work, and you would actually have to explicitly put a u if it was the left operand and a v if it was the right operand. You could really control what you were doing, but you couldn't do it tacitly for adverbs or conjunctions.
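For instance, a minimal sketch of an explicit conjunction in a J session; the name `atop` here is our own illustration, and u and v in the body stand for the left and right operands:

```j
   atop =: 2 : 'u v y'     NB. explicit conjunction: apply u to the result of v
   +/ atop *: 1 2 3        NB. derived verb: sum of the squares
14
```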

So we interviewed Henry Rich. We talked about that at length, well, not at length I guess, for a couple of minutes, and he said, well, don't bother talking about it, 'cause it's not coming back.

Well, on Sunday it came back. It showed up in the most recent version of the 903 beta of J. They've been working at this for a while, and my guess is they're going to work on it for a bit longer, because suddenly all this stuff came back. I think it was Henry's email about it; Eric sent an email saying, OK, make sure you get this one downloaded, there's a lot of change, and it involves the old way of tacitly describing conjunctions and adverbs. And I was on vacation, which was really frustrating, because I didn't get back until last night my time, and I only had an iPad as access to the Internet and everything.

The problem is an iPad... although there is a version of J that you can access through iPads and through iOS, and in fact, if you're really interested in using J, that's a really quick way to get going: download the app and get it working on your iPad or your iPhone. It's actually kind of a neat little thing. Only problem is it's version 7.04, so as a result it's older, and there's a lot of new things that won't apply to it, but the bones of J are there. At least, I could have said that up until Sunday, because now, if they go forward with this, there's a huge difference as to what you're going to be able to do in J as opposed to what you could do before Sunday. Now, having said that, it's really important to note you don't have to do any of this. And this is what was frustrating: I'd heard about this stuff.

I'd never had a chance to play with it, and suddenly there's all these discussions going in different directions: we should do this, we should change this. At first Henry was really holding a firm line, saying we're not changing a thing, we're going to be doing it the way it was described. But just literally this morning there have been some ideas about tacit descriptions of conjunctions that weren't being implemented, where Henry is saying, you know what, that would probably be a really good way to implement that. It's in beta, but at least there's some thought now about where we're going with all this. But honestly, if you're just new to J, don't even bother with this stuff right now, because for one thing it's in beta and it's going to evolve a bit, and the other thing is it's really confusing. I've done J for a long time and I've got to get my head around some of this stuff.

It really changes the way the... I got on it last night and tried to do something just to check what I was doing. I described something that was pretty easy, and then I tried it in the previous version of the beta, and it just said no, that's not correct syntax. So to that point, I can make it kind of do things, but honestly I don't think I can read it yet, because my brain isn't bent in a way that says this is how these things go together. I'm used to the old way the interpreter worked, and it's changing. So, very exciting times. Though, you know, may you live in interesting times.

00:08:38 [CH]

So it sounds like we definitely have to have Henry back on to just have the expert/implementer explain it. All right, I'll throw it to Stephen and then yeah, I'll follow up with my questions.

00:08:49 [ST]

Well, I want to speak here on behalf of the easily confused, of whom I claim the position of representative. Tacit, or point free, programming will be a novel concept to a lot of our listeners, so let me start off by offering the simplest example I know, in J: the symbol sequence plus, slash, divide... sorry, percent... and then hash [ +/%# ]. Plus slash is sum, percent is division, and hash is count, or tally. Just those four symbols together are the program for the statistical mean, or average. And I would point to this as the first part of an answer to my question. My question is: why on Earth would you use tacit or point free programming? Why would you want to use it? And the first part of my answer points to that. How cool is it that you can just jam those symbols together and that does average? Something very deep in me, something very important in some part of my soul, wants to write programs that look like that, that have so little ceremony. It's just the pure functions. Now, I can provide a second answer to my own question here: why would you want to use tacit programming?
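A sketch of that in a J session, for readers following along:

```j
   mean =: +/ % #      NB. a fork: sum divided by count
   mean 1 2 3 4
2.5
```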

Yeah, I think we discussed this when we had Henry on, but if you've got a tacit program, J at any rate can invert it and provide you with a formal inverse of it. So you can use a tacit program to do under-type transformations, where you transform something, do a particular operation, and then transform it back: where you might convert a date to some other representation and then convert it back again. If you can express the transformation in tacit form, then J can provide you with the inverse after the transformation. OK, so there's my aesthetic satisfaction, and there's doing `under`s.
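A classic illustration of that, assuming nothing beyond J's under conjunction (`&.:`): the mean applied under natural log is the geometric mean, with J deriving the inverse (exponentiation) itself:

```j
   (+/ % #) &.: ^. 1 2 4 8    NB. mean, applied under natural log
2.82843                        NB. the geometric mean, 2^1.5
```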

What's the rest of the story?

Why do we want to?

Why are we interested in tacit?

00:11:12 [BT]

Well, just before I answer the question about why we would want to, 'cause I think that's an excellent question, I'll just mention that in addition to `under` being able to have something tacit do the inverse, there are some verbs you actually can't do that with. For instance, shape: if you took the shape of something and tried to invert it, well, it doesn't really make a lot of sense anymore. But in J there's a conjunction called obverse where you can define what that inverse is. So in the case of something that isn't easily inverted, you can say: I'll create this verb, and going in it's going to be, for instance, shape, and going out it's going to be, say, doubling all its results. You can actually do that in J, so you can create your own verbs, and that is an advantage of tacit as well. Again, we talked about conjunctions being tacit; this is something you can do with verbs and how conjunctions work on verbs, and this is the old-style J. It's not that hard to understand once you sort of get your head around it. But back to the first part of the question I should be answering.
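A minimal sketch of the obverse conjunction (`:.`) in J; the verbs here are our own illustration (double already has a built-in inverse, but the mechanism is the point):

```j
   f =: +: :. -:      NB. double, with halve declared as its obverse
   f 7
14
   f ^: _1 (14)       NB. the power conjunction uses the declared obverse
7
```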

Which is: why tacit? And oh man, my mind has been going a bunch of different directions the last week. Because, to me, and I haven't put this together yet because I don't know this stuff well enough, and I'm probably going to defer to Conor at this point, there are combinators, and there is category theory, and my sense is all these things are linked into tacit and what tacit allows you to do: compartmentalize the programs and use them like modules that you're working with among other things.

You can start to get relationships between these things, and prove things about them without having to know what's in them, and that's a real strong part of tacit, because I believe you can actually get results that you wouldn't think you could find via the relationships between these tacit modules. I'll defer to Conor, 'cause I may just be spooning out a huge load of malarkey, and the next thing I'm going to be doing over the next couple of weeks is trying to find out where these links are. Because I think there's a lot of excitement in what you can do with tacit and how you do tacit, because, as I said, J is just developing this new tacit conjunction and adverb stuff, and the better I understand where it lies in relationship to combinatory logic, and where it links up to things like category theory, the better a blueprint we'll have about the appropriate ways to go forward. It'll be really interesting to watch.

00:14:22 [CH]

That, well... I think Rich is going to weigh in, so I'll hold my thoughts. Rich, you go ahead.

00:14:28 [RP]

Yeah, I mean, to me some of that does sound a bit highfalutin, I've got to be honest.

00:14:35 [CH]

Shots fired. Here we go.

00:14:38 [RP]

But I'm not really against that, because I understand the language enthusiasts' and the language implementers' side; people get really excited about new features and how things connect, that sort of stuff. I'm not a hardcore pragmatist; I just end up using what's useful to me, and so it did take me a while. I know tacit from Dyalog APL operators, and then trains as they live there. And it's not something you learn very early on in APL, but eventually you sort of stumble upon it, or someone like Adám shows you, look at this cool thing, or someone like Stephen goes, hey look, you can define the mean in four glyphs, isn't that great? And it is. But for me it took a while to figure out where my sort of happy place with tacit, and especially trains, was. Because when you get a really long train, people do get thrown off by it, and they go, oh, that's kind of wild. Part of the power of the terseness of the array languages is being able to see, almost at a glance, what's happening, and once you've got a long train, maybe you start to lose that, for some people. Or maybe it's just me. I remember hearing you discuss it: maybe there's a generation of people growing up with tacit, and they're going to just get it like that and run with it, and that would be great and really fascinating to see. That's just not me personally right now. But I keep coming back to some paper, or some remark, made by, I think his name is Guy Steele, the Scheme guy? And I remember one of his criticisms of APL being that you can't extend it like a Lisp, right? If you write your own Lisp code, it looks exactly like all the other Lisp code. Now, I don't know Lisp, I've just heard this, but with APL, typically, although there are some implementations that allow it, you can't redefine the glyphs and you can't define your own glyphs. And so his criticism was something along the lines of: well, how can I then extend the language as a user in a way that feels like the core language?

And what I've found is that I've started to enjoy that with small to medium sized trains and other tacit definitions, you do get that power to extend the language in a way that feels like the original. So you end up with a small toolkit of your favorite little trains, where you don't have to pull out a whole utility library or utility function, because you can just remember, off the top of your head, these four or five glyphs, or five or seven glyphs depending on how mad you are, that do this specific thing you end up doing quite a lot. One of my favorites, in Dyalog at least (I don't know what the equivalents would be in J or k or BQN), is the split-by-delimiter train. It's a little 3-train, a fork, that takes a character vector and some delimiter, which could be comma or space or any other character you define, and it will partition that into a nested vector of character vectors. So the easy example is comma separated values. And it's just: not-equal, partition, and then right tack. It's kind of cute, it's easy to remember, and I use it a shocking amount of the time, to be honest. So that's all I wanted to jump in with at this point: my personal philosophy with trains, as it were, or at least how I use them.
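Here is that fork in a Dyalog session (output shown with `]Box on`):

```apl
      split ← ≠⊆⊢           ⍝ (delimiter ≠ text) partitions the text
      ',' split 'ab,cd,ef'
┌──┬──┬──┐
│ab│cd│ef│
└──┴──┴──┘
```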

00:18:30 [BT]

Yeah, and to me you're using them absolutely properly, because, in addition to being hard to understand when a train gets too long, the other thing is it starts to become less general and less usable. If you make them too long, they're only going to work in a specific area in a specific way, and chances are you're never going to use that again. Whereas if you use the modules, I believe it's easier to prove that a module is going to do what you want in its area, and then, putting them together, you just have to make sure they're working together properly. So that's exactly right: don't make them too long. And I'm sure we'll end up discussing hooks later on in the story, and there's other things I've thought about in that area too, but I want to hear what Conor has to say about combinators and category theory and tacit.

00:19:25 [CH]

Yeah, I'm not sure... I think category theory, you know, somewhere down the road relates, 'cause it is all about composition. But for me, combinatory logic and combinators... to answer Stephen's question specifically: it enables, bar none, unparalleled, the most beautiful code I've ever seen in my life. There are APL expressions I've seen that are just so beautiful and so expressive, in my opinion. And yes, you need to understand tacit programming and trains, or combinators as they're referred to in other languages. Combinators and trains are basically composition patterns, ways of composing algorithms and functions together in extremely elegant ways, that are not possible if you're not doing point-free or tacit programming. Otherwise you're going to need to create some sort of lambda and then use a bunch of parentheses to basically replicate the composition pattern that is given to you by these trains and combinators in array languages. Here's one small example that I tweeted out a couple of weeks ago; I think it was probably 1:00 or 2:00 in the morning.

And I find these days I have trouble going to sleep, 'cause I'll be thinking about a problem (this just happened on Sunday), solving it throughout the day in the back of my head, and then I'll realize, oh, there's some other way of solving it that's really simple, and the way you can express that in APL or J or BQN or any of these languages is just so elegant. So this one example is a very simple problem. Given a list of numbers, you want to partition it into lists of sequences increasing by one. Any time you have a difference that's either negative or greater than one, you want to start a new partition. So, very simply, if you have 1 2 3 10 11 12, that's your list of six numbers, you want to end up with a list of two sublists, 1 2 3 and 10 11 12. You could come up with more complicated examples, but it's pretty simple. And where the beauty of this really showed up was when I was trying to solve it in Haskell, because there's an algorithm in Haskell called groupBy, which basically takes a list of elements and a binary function that compares adjacent elements, and, based on the binary predicate (it's a binary function that returns a Boolean), whenever it returns false it starts a new partition. The most common use case is you call groupBy with equals, which checks whether elements are equal. So if you've got the list 1 1 2 2 3 3, it'll divide that up into three sublists: two ones, two twos, and two threes. So clearly this is the algorithm you want to reach for to solve this increasing-sequence-by-one problem, but you need a different binary operation than equals. Specifically, you need a binary operation that first takes the difference and then checks whether that difference is equal to 1 or ¯1, depending on the order you're passing things. And doing this in Haskell is nontrivial, if you try to do it with just little small sections that you're composing, because first you have a binary function that takes the difference, and then you follow that with a unary function that checks whether the difference is equal to 1. You can't compose those using the default composition operator, which is equivalent to the combinator otherwise known as the bluebird, because the bluebird composes unary functions; it doesn't compose a binary function followed by a unary function. But there are multiple ways to express this in APL, the simplest being the 3-train, or the fork, where the left tine is an array, a noun.

So if you go one equals minus.

That's it, you're done.

Because what that forms is the equivalent of... it's called an atop, I believe. Technically it's a 3-train, and atop refers to another glyph, but those are two different ways of expressing the same thing.

This is known as the B1 combinator, or the Blackbird, where basically you first apply a binary operation and then you apply a unary operation. Now, here it's not really a unary operation, because with "one equals" you're binding it; the equivalent is to go, you know, one jot equals, which is 1 bound to equals.

That forms a unary operation, and then atop, which is the rank glyph, with minus. So there are two different ways to express it: you can use bind and atop, or you can use a 3-train, which they call a fork. But what's amazing is that 3-trains actually express four different combinators. They express the Starling prime, or S' combinator, and, I believe it's called, a specialization of it known as the Bald Eagle. And then in the versions where the left tine is an array, they correspond to the D combinator and the E combinator, whose bird names are the Dove and the Eagle.

So with a 3-train, because you have the versions where one of the tines takes a noun instead of a function, and you have both the monadic and the dyadic cases, with a single juxtaposition of basically three things you have four different combinators. And on top of that you have the 2-trains, which give you a whole other two combinators. But the point is that when I was trying to solve this problem, to get back to the Haskell: I had groupBy, and then I tried to do, basically, paren, paren equals one, paren, dot, which is the B combinator, and then the subtraction in parens. So you just heard a bunch of parentheses and dots and multiple operations, and it didn't even work, because, as I mentioned, the B combinator is not what you want. You want the B1 combinator, which takes a binary function first, applies the two arguments to that, and then applies the unary operation. So you specifically want the B1 combinator.
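A Haskell sketch of what Conor is describing; `chunkBy` here is our own helper, since the standard `Data.List.groupBy` compares each element with the first element of the current group rather than with its neighbour:

```haskell
-- Group a list into runs where adjacent elements satisfy the predicate.
chunkBy :: (a -> a -> Bool) -> [a] -> [[a]]
chunkBy _ []  = []
chunkBy _ [x] = [[x]]
chunkBy p (x:y:rest)
  | p x y     = (x : g) : gs        -- x continues the current run
  | otherwise = [x] : g : gs        -- predicate failed: start a new run
  where (g:gs) = chunkBy p (y:rest)

-- The B1/blackbird pattern: two arguments into a binary function,
-- then a unary function on the result. Plain (.) (the bluebird) won't type-check here.
blackbird :: (c -> d) -> (a -> b -> c) -> a -> b -> d
blackbird f g x y = f (g x y)       -- equivalently: blackbird = (.) . (.)

increasingRuns :: [Int] -> [[Int]]
increasingRuns = chunkBy ((== 1) `blackbird` subtract)
-- subtract x y = y - x, so the predicate reads (next - prev) == 1

-- increasingRuns [1,2,3,10,11,12]  ==>  [[1,2,3],[10,11,12]]
```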

And as I mentioned, that exists in two different versions or forms in APL. It exists, and you spell it with three characters: one, equals, minus.
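Concretely, the two spellings in Dyalog, shown here on a single pair of adjacent numbers (whether you test for 1 or ¯1 depends on the argument order):

```apl
      2 (1=-) 1          ⍝ fork with a noun left tine: 1 = (2-1)
1
      2 ((1∘=)⍤-) 1      ⍝ the same, via bind (∘) and atop (⍤)
1
```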

Like Stephen mentioned, talking about no ceremony: not only is it cool, it's more than just cool.

It is so elegant and beautiful. The fact that Ken Iverson came up with this... I would die to know if he spent a certain amount of time studying combinatory logic. I think it's a crazy coincidence that the seminal text by Haskell Curry was published in 1958, right when he started working on notation, you know, the Iverson notation. Now, as this came up in the British APL webinar just last week, I asked: is there a list of when these things showed up in APL? And sure enough, Adám linked to the APL wiki: trains showed up in 2014, in Dyalog 14.0. The left and right tacks showed up in Dyalog 13.0, in 2011, three years before trains, and left and right tack correspond to the K and Ki combinators, kestrel and kite.
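In Dyalog terms, a trivial illustration of those two:

```apl
      3 ⊣ 5      ⍝ left tack: always the left argument (K, kestrel)
3
      3 ⊢ 5      ⍝ right tack: always the right argument (Ki, kite)
5
```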

And then in Dyalog 18, which was just in 2020, over the summer, or two summers ago, we got atop, over, constant, and a bunch more combinators. So basically the earliest of these, you could argue, were K and Ki in 2011. I've talked to some people that programmed in APL back in the day, and they aren't big fans, and I'm like, well, what about the trains and all these expressions? And they go, oh, we didn't have any of that; the APL we knew didn't have that stuff. So it's curious to me that APL didn't get what is probably my favorite part of the language until the last decade, and really these things showed up in J first anyway. So we've got to find some people that know, or that worked closely with, Ken.

To get the history of how this stuff did, or didn't, sneak into the language earlier. It's a huge part of what J was and of what Dyalog APL now is.

And in my opinion, APL is known as this array language, but it should also be known as the world's most powerful combinator language. And yes, to use the word Rich used, it's a little bit highfalutin to hear someone talking about how beautiful it is, et cetera.

I'm sure it sounds...

00:28:21 [RP]

I regret having used that phrase.

00:28:25 [CH]

But there's something there, where I implemented the groupBy algorithm from Haskell in APL, and the resulting solution is literally just the fork, which is one equals minus (and I called it chunkBy, 'cause that's a better name for it), and then the array on the right hand side, and you're done. In what other language is that possible, and as elegantly? And I'll stop my, whatever that was, 15 minute monologue on how in love I am with combinators. Stephen?

00:29:00 [ST]

Oh, thank you. I want to come back on the highfalutin. In the 1980s I drank a lethally poisoned Kool-Aid called software engineering, which took me away from APL programming for about 15 years, actually, and when I finally came back to it I made two discoveries.

The first was that the PC implementations of APL, Dyalog APL, were the stuff we'd barely dared to dream of back when I was originally an APL programmer: all kinds of wonderful stuff, running on machines with huge amounts of memory, and it is definitely a language designed to be run in a vast abstract space of huge memory. The second thing I discovered was that sometime around 1990, it seemed like all the smart people had gone to J.

I felt a little bit like the lame kid who's left behind at the end of the Pied Piper story, where all the children have been led under the mountain and I didn't quite manage to get in. But I struggled, and, you know, I downloaded some J and learned a little bit about tacit programming.

I said I was coming back on the highfalutin. This relates to a different fairy story, which is Jack and the Beanstalk. Jack gets magic beans, and the magic beans take him up through the clouds into a land where he finds a giant's castle, gold, a goose. It's new territory, and APL had always seemed to me like that. I started with FORTRAN, and when I got to APL I didn't have to spend my poor scarce brain CPU cycles on writing loops and could attend to other things.

Right, APL said: yeah, don't you worry about all that. We've got a new land up here; you can think about much more interesting things.

When I saw tacit, I thought: here's the next step. This is going to take me somewhere else. This is going to help me wake up from whatever I've been sleeping through. And I'm still not there. I still want to know what it is that I could wake up to and get to.

00:31:42 [CH]

Yeah, I don't know why I fell so quickly and so hard, or so deeply. I fell off some cliff and I've just been skydiving since. But so many times I'm thinking: why do I have to write this this way?

Like, there was another problem just the other day: how do you determine if two sets, or lists of numbers, are disjoint, that they don't overlap at all? A lot of languages, including APL, have a glyph for intersection, and then you basically just want to check: is that intersection empty? Haskell has both of those functions, null for checking if a list is empty and intersect for the intersection, but once again you can't use the basic dot operator for composition, because intersect takes two lists of numbers, takes two arguments, and then null takes a single one.
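The same B1/blackbird shape fixes it; a minimal sketch (the point-free spelling is just one of several):

```haskell
import Data.List (intersect)

-- disjoint xs ys = null (intersect xs ys), written point-free:
-- the unary null composed after the binary intersect (the B1 pattern)
disjoint :: Eq a => [a] -> [a] -> Bool
disjoint = (null .) . intersect

-- disjoint [1,2] [3,4]  ==>  True
-- disjoint [1,2] [2,3]  ==>  False
```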

So you might hear me and think, whoa, man, this guy sounds crazy. But these composition patterns are everywhere. At least, for me: I've been programming in C++ professionally since 2014. Arguably I didn't know what I was doing for the first three or four years, so we can sort of not count those, but I feel like I've comprehensively known what I've been doing for at least a couple of years. And we don't think about these latent, implicit composition patterns, because if we need to pass things or evaluate things in a certain order, we just use parentheses, or we evaluate something that's going to get passed as an argument to another function, and we're used to going: oh, OK, I'll call that first, store that in a variable, then pass that in. A lot of the time that's just not necessary. At the end of the day, you're just composing a bunch of operations together.

Some of them have two arguments, some of them have one, some have three, and if you know these composition patterns you can just go function, function, function, and put in the right combinators or operators.

That was another thing; let's take a small break. We started talking about conjunctions, adverbs, nouns, verbs, and I meant to do this at the beginning, but I totally got carried away with how excited I was. It's actually not as hard as it sounds. A verb, if we translate that to, you know, Python or C++ programmer speak, is just a function. A conjunction or an adverb (and maybe in a second we can get into the delineation between the two) is what's referred to as an operator in certain dialects.

That's just a higher order function: a function that takes a function and can return a function. So, very simply, an example of a verb, that is, a function, is plus: takes two numbers, adds them together. Or unique: takes a list of numbers, returns only the distinct elements, without duplicates. Those are both what we call verbs in APL and other languages, but that's just a function in Python or C++. An operator, or conjunction, or adverb, is a function that takes another function.

So for instance reduce, or scan. Those are higher order functions: they take functions, like plus or minus, and when you combine those they return you another function.

So plus reduce returns you a function that's equivalent to something called sum.
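In Dyalog APL, for instance:

```apl
      sum ← +/           ⍝ the operator / applied to the function + yields a new function
      sum 1 2 3 4
10
```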

So we have different words for this in APL, thanks to Ken Iverson, but really these are concepts that exist even in non-functional languages like C++ and Python. I don't actually know: is there a difference? I assume there's a difference between a conjunction and an adverb in J.

00:35:16 [BT]

Yeah, and the difference between a conjunction and an adverb is really simple: conjunctions take two arguments and adverbs take one.

00:35:25 [CH]

That seems... I take issue with the fact that we use "verb" for both the monadic and dyadic cases, and then, where's the symmetry in the naming for verbs?

00:35:36 [RP]

No, it's all based on a language metaphor, isn't it? An adverb applies to a verb to change some quality of it, whereas a conjunction applies between two, and helps to bind them in some meaningful way.

00:35:52 [BT]

Yeah, and the other thing is that conjunctions can combine nouns with verbs, and adverbs can take nouns as arguments.

00:36:02 [RP]

Do they do that in natural language? Does it all break down there?

00:36:06 [CH]

I mean, you could say the same thing about a verb that acts on two nouns, that it's binding them in some way, but we still only call them...

00:36:15 [RP]

Yeah, yeah, but what, again, is the natural language, the linguistic, metaphor?

00:36:23 [CH]

Oh, I see. You mean in the grammar, the world of grammar; in terms like that, what's the word?

00:36:28 [RP]

Yeah, exactly. That's where all of it comes from. It's like, APL is a language because it exhibits significant syntactic patterns and blah blah blah, like a language does, you know? And they took that to the nth degree when writing the J documentation. Which is cool, but it adds to the whole question of: if we're going to market this outside of the APL world, then we have to teach them 20 new words to do stuff they've already seen.

00:36:59 [CH]

What's the terminology that BQN uses? Do you know?

00:37:02 [RP]

BQN has these roles, doesn't it? So there are functions, but then it's also got the 1-modifiers and 2-modifiers, which is very sci-fi, matter of fact.

00:37:11 [BT]

Yeah, and 1-modifiers, I think, are the equivalent of what an adverb would be in J, and 2-modifiers, which take two operands, would be the conjunction.

00:37:19 [RP]

Right, so at least it says what it does on the tin, right? Or whatever the phrase is.

00:37:23 [BT]

Some highfalutin phrase.

By the way, as somebody who's in Western Canada, I dub you, Richard, correct in your usage of the term 'highfalutin'.

00:37:38 [RP]

I'm glad, because that would have been even more embarrassing than using the phrase: using it incorrectly, isn't it?

00:37:44 [BT]

No, you nailed it. Stephen?

00:37:46 [ST]

I've got I've got a question really about the language metaphor.

I know that Ken Iverson was deeply committed to having a strong isomorphism between English language grammar and the terms in which he described J. And I feel for that. But that particularly came into play when he formulated J, and that was when I was introduced for the first time to the notion of using nouns, adjectives, and adverbs as terms in computer science. And they continue, I think, to confuse people from other computer science disciplines to this day.

Now, in my long life I've noticed that Canadians of a certain generation have a much better formal education in English grammar than anybody I know in England. Canadians of roughly my own age can spot a misplaced modifier and all kinds of grammatical constructs; if I recognize errors like this, it's only from an informal knowledge of the syntax, I don't know these things formally. So this question came up practically a few years ago, when I started working for Kx, and it was pressed upon me that most of the people using q didn't actually know what an adverb was in English, let alone in q. We did some informal polls around the office and found quite a lot of support for this point of view. So we came to the conclusion that a metaphor which had been valuable for a certain generation of North Americans, or perhaps only ever for Canadians of a certain age, in explaining the relationship between these computer science terms in terms of a metaphor from English grammar, had outlived its usefulness.

And despite a reasonable amount of quite reasonable resistance, we mandated that adverbs were to be called something else. We eventually settled on the word iterators, and that operators would denote things like plus and minus and divide, the same way they do elsewhere, and that verbs would become known as operators or keywords. And all of these would be called functions, and so forth. We basically redefined the terminology for talking about q to align it with the way people talk about the same things in other computer science fields.

And I wonder what you guys think about that?

00:40:54 [RP]

Honestly, 50/50. Massively of two minds. I think it makes a lot of sense; it's a bit of a shame. But I'm also completely spoiled, aren't I, you know, living in a Dyalog world, getting used to all those words. Sorry Bob, what were you going to say?

00:41:18 [BT]

Well, I was just going to say, one of the questions might be: why have a difference between an adverb that only takes one operand and a conjunction that takes two, and give them two different names? It's because the language is interpreted as it's parsed, and it will parse differently, of course, if you're taking one operand or two. If you just say, oh, we have this thing that operates on verbs or nouns, suddenly everything dissolves into this complicated mess where you don't know whether you're taking one argument or two. So you do have separate names for a reason, so that you can interpret the language properly. As to whether you call it a 1-modifier or a 2-modifier or an adverb or a conjunction, my sense is there's a lot of truth in what Stephen says.

My grade 8 self had to learn to diagram sentences, parts of speech and all, grammatically, and if I wasn't able to do that, I wasn't going to see grade 9. So I had to learn to do it. I think my younger son can do it as well, so I think it's probably still part of the education. And it becomes, to me, sometimes a question of whether you focus on the spoken language or on the written language. Because in spoken language, of course, there are all sorts of things we do that are non-grammatical. I've probably broken a dozen rules since I started talking, but that's just the nature of speech and language.

00:42:50 [CH]

Yeah, I would say... Stephen, you mentioned the terminology, and it seems to be slightly different across J, APL, different dialects of APL, k, q, and now BQN as well.

As Eric mentioned, the proliferation, or the fracturing, of these sets of terminologies I think is just awful. But one of my goals is to build a bridge to the rest of the programming world, which is massive compared to the little island that we're on.

It's awesome, we're having a party, lots of treasure, but I want to build a bridge, because this island doesn't need to be tiny; everyone can be partying here. And I would also say that I think there's extreme value in notation as a tool of thought, and in combinators, and, you know, verbs or functions and higher order functions, whatever you want to call them, being a single glyph or a digraph. It has immense value in the playfulness and the malleability of the language and the way you can explore things so quickly. Your brain starts to change the way it solves and sees different areas of the island, different ways to solve problems. There's immense value in having a concise language.

It seems silly: why would one keystroke versus 12 make a big difference? But it does make a big difference.

You know, the fact that flipping the order you pass arguments, via commute as they call it in APL, or reflexive or passive in J, is a single glyph: it's a big deal, and that's great for notation and whatnot, combinators, all that stuff.

Fantastic. But when it comes to the names of these things, the terminology, I don't see where the value is. And that's coming from someone from the imperative world, where we already have predefined terms for this. I guess if the rest of the world weren't different, it wouldn't necessarily be a barrier, but even going from APL to J, a lot of the time the names they come up with for things in J, even once I've learned them...

It's a barrier. Like, unique is called nub, and I think there are a couple of other languages, like Haskell, that also have that, but I still don't even know why it's called nub. Does that mean something? Can someone answer that question?

00:45:18 [RP]

That's like some kind of archaic term for something or other, I don't know.

00:45:21 [CH]

Yeah, where does it come from? I assume Ken put so much thought into this stuff, but at a certain point... it's a wonderful short word, straight out of the American Heritage Dictionary, which means exactly what you want it to mean.

00:45:37 [ST]

And there's a good tradition, quite outside computing, of reviving and reinvigorating useful words, and keeping them alive. Nub is a great word.

00:45:49 [CH]

So what's the definition?

00:45:53 [ST]

If unique means all the unique items out of the list, that's what nub means in everyday life. "The nub of the matter is..." is an expression you might come across.

00:46:04 [RP]

Right, right? It's archaic, Conor, yeah. It's old nonsense.

00:46:12 [CH]

Yeah, but so that's... The thing is, Bob stepped away quickly, potentially to get a dictionary, or hopefully something not on fire. But, like, the passive and reflexive names for, you know, commute, or swap...

00:46:26 [RP]

Swap's good: swap and selfie.

00:46:27 [CH]

Yeah, well, now... So those are the APL terms, which is great. But with reflexive and passive, I understand what's being attempted there.

But even though I know it, putting it in the passive voice: that's not really what we're doing; it's a stretch. And that's not even... you can be a little bit generous and say, OK, that's fine, but when you get to the combinators, there's absolutely zero parallel for composition patterns in the English language. So then we just start making stuff up, like atop.

What is that? Oh, it's a word that... or maybe atop is a word, I don't know, "atop a mountain", but like...

00:47:06 [RP]

Yeah, that's literally what it is, yeah?

00:47:08 [CH]

But how does that have anything to do with applying a binary operation before...

00:47:11 [RP]

The left function is applied atop the result of the other function; that's kind of the idea. There is an explanation for basically all of these things, but I am inclined to agree. That's why I said I like swap and I like selfie. I'm actually less in favor of commute, because it's technically correct and all that, but it is less familiar and intuitive.
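For example, in Dyalog, minus applied atop the result of divide:

```apl
      3 (-⍤÷) 4          ⍝ negate applied atop 3÷4
¯0.75
```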

I mean, again, there's also this massive bias towards people who speak English, right, in all of this terminology. So that's kind of not fair to the international community, but they're kind of cursed to live in that world, just because that's how programming has developed. So I feel a bit bad about that in general, but I can't change history.

00:47:58 [ST]

Oh, it's a bias not just towards people who speak English, but towards people who can diagram sentences.

00:48:05 [BT]

Well, and I find, honestly, a lot of times when I'm programming in J, especially since I've come on to the podcast and started talking about J, I have to go back and look up these names, because that's not what I... I don't name them. I've got the symbol; I know what it does.

And that's what I use. I don't think about the name so much, until maybe I'm specifically reading about something I need to learn, or I'm trying to explain it to somebody else, and then it really helps to use the same terminology. I guess we're sort of in the Tower of Babel: we all speak different languages and mean roughly the same sort of thing, but there are little nuances between them. So I don't think it's quite as simple as just using the same terms for J as for APL, because there are differences in how the languages are structured. One of the things Eric talked about in the last episode was stranding, and that is a difference between the two. If you start to mix up that kind of notation, you're going to have to learn that there's a difference there. The two languages are just different in that way.

00:49:18 [CH]

Yeah, I was. I was thinking in the back of my head that I was trying to remember what they call index in J. And I think it's from.

00:49:23 [RP]

Yeah, I think that's true

00:49:24 [CH]

And just, so many times I stumble across something and I'm like: yeah, come on, I can see what you're doing there, but come on.

Alright, well, I feel like we've been sort of wandering through the tacit programming forest. I mean, at the tail end of the last tacit episode we were talking about how we didn't even get to any of the Dyalog 18 operators, and we didn't talk about the dyadic hook. I know we're getting close to the hour mark; do we even want to open any of those cans of worms? Or is there something that's a bite sized topic that we can tackle, or should we just ask another open-ended... Stephen? He's got an idea.

00:50:14 [ST]

Well, after the end of that session I was reflecting that for people listening who haven't actually experienced or tried any tacit or point free programming, it could sound pretty confusing. So I did a short blog post with an example, in q and in Dyalog APL, of a very small piece of tacit programming, so you can get the feel of it. It's basically a range function: it takes two integers and generates all the numbers between them, and you wind up at the bottom with a tacit Dyalog expression for it, which is a down arrow, a little circle, and an iota. And that does it. So you can find that on 5jt.com, and I think we put the link in the show notes for the last one, but we can put it in again.
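Read literally, that spelling is `↓∘⍳`; a sketch of one plausible reading of it, assuming ⎕IO←1 (see the post on 5jt.com for the exact definition):

```apl
      range ← ↓∘⍳        ⍝ α ↓ (⍳ ω): drop the first α of 1..ω
      3 range 7
4 5 6 7
```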

00:51:11 [CH]

Yeah, definitely. Honestly, these things are easiest for me to understand... well, honestly, everything in life is easiest for me to understand if I just see an example. Especially when the names of these things... I mean, atop, OK, sure, that's great, but really what it is is a composition pattern. If you just see that pattern a couple of times, you go, oh, OK. And especially if you run into it: that's where it makes the most sense for me, when I'm programming and I know exactly what I need, but it's not in the tools I have at my disposal. And then you go ask on some Discord forum, and they say, oh, it's just this, and, oh my goodness, that's exactly what I was looking for.

00:51:51 [BT]

When I went on to my second career in education, one of the things I learned about was different ways of learning, and I think the way of learning that fits best with the array languages is called bricolage, where you build something and you play with it. We've often talked about Lego bricks and things like that. There's so much in this, and I guess, if we were to talk about why tacit is useful: if you were just given a toy and it existed as it was, it would be great, but that's all you could do with that toy, the function it was. But when you're given a toy that you can build things with, like Lego, or, in the old days, Meccano, if you remember that, or Lincoln Logs, all those things where you'd have the little bits of it and you'd put them together and build something. And then you could take it apart and build something else.

Well, that's to me what tacit is. You don't want to have the thing pre-built, so that you can't take it apart. The idea is that the parts are the tacit parts, and you can take those tacit parts and name them and put them together. And because you're working in a virtual environment, you can have as many of them as you want; you're not going to run out of a certain type of brick at some point, because you just make another one. And that's what's kind of exciting about being able to build these things tacitly: you're not including the things that you're working on, you're just including, essentially, the connections. That's what puts bricks together. At least, that's the way I always think of tacit.

00:53:24 [CH]

That definitely resonates with me a lot. I mean, any language with a REPL: that's the way I prefer to develop. Even with C++, we have a website called [inaudible] that the community uses, and it is basically, you just type in this little sort of window, and every time you stop typing it starts compiling in the background.

So it's not a REPL, but it has the feel of a REPL: you're incrementally coding, you're seeing your result, coding, seeing your result. As opposed to... I know some folks that will just think for, like, three days, and then they'll code for a day, and then on the fifth day they just hit the compile button and it works.

I'm not that good. I need to see that I got the idea wrong the first five times, incrementally, and then...

Oh yeah, OK, that didn't work, that didn't work, that didn't work. And then on the sixth time: oh, OK, that's a good idea. And then...

00:54:18 [RP]

We are Dijkstra's coding bums, aren't we? He talks about this, yeah. Anyway: we don't actually think, we just smash on the keyboard until it works.

00:54:30 [CH]

I mean, I wouldn't say it's smashing. It's just, like, a...

00:54:35 [RP]

I'm being harsh. I'm being harsh. I work the same way, Conor, to be honest.

00:54:39 [CH]

Yeah, I feel like it's the rare person that can go and sit down ahead of time and, you know, think of the elegant solution and then just sort of type it slowly and have it work. Even for problem solving, for me, that's why I spend days, weeks, sometimes even years just playing with little toy problems in the back of my head, because it's never the first solution that... the only time the first solution is actually the best one is where there's prior art, like, oh, I've solved that problem in a different form, and so I've already spent the time thinking about the solution. It's like the classic quip about Feynman: he always had these beautiful moments where he would go up to a chalkboard and write something and everyone would be astonished. But in his, whatever, autobiography or biography, it was found out that, oh no, he had just been storing this stuff up, caching it and thinking about it for years, waiting for the perfect moment to show his proof for the first time and amaze people. It was never that he was doing it in the moment; he had practiced and trained for decades, and that led to these moments. So yeah, even the overnight success...

Yeah, there's some quote about that. Stephen, you should know, you're the one with all the quotes and the aphorisms: something about how overnight success is, you know...

00:56:05 [ST]

Twenty years to overnight success.

00:56:07 [CH]

Yeah, exactly.

00:56:08 [RP]

It's like he's just been...

00:56:08 [CH]

There we go, see? Perfect.

00:56:09 [RP]

Storing them up this whole time.

00:56:13 [ST]

I have the same strategy. It's a substitute for serious thought.

00:56:17 [CH]

Still, the effect at the end of the day is wonderful. I always wish I had aphorisms and witticisms I could just rattle off the top of my head, but typically it's just like: I'm pretty sure there's an old-guy quote about someone who said something about death and life; insert that here.

00:56:31 [ST]

Actually, I feel like I'd be busted here. I keep an XML file of them, and you can find the whole lot on my website.

00:56:42 [RP]

You know, the hardest part of that, by far, is that it's an XML file.

00:56:42 [ST]

It's almost.

00:56:49 [RP]

Storing quotes is fine.

It's... upgrade your format.

00:56:53 [ST]

Oh, what would you recommend?

00:56:55 [RP]

I actually have none. I don't really know. All I've heard is criticisms of XML. Probably JSON nowadays? I don't know.

00:57:01 [ST]

Yeah, I was thinking of mapping the whole lot into YAML as being much more human readable, but, you know, my operating system's got an XSLT transform which turns the whole lot into Markdown and dumps it into my website every time I edit the XML.

00:57:07 [CH]

Oh yeah.

00:57:21 [ST]

So it's like, I'm not sure what I'd do with it.

00:57:24 [RP]

So it's funny you say that, and I don't think this will make it into the final edit.

00:57:28 [RP]

No, all the notes I make these days (not your bit, what I'm saying now) are just pure markdown files; everything's markdown from the get-go, and if I need to render it as something nice, there are loads of tools to do that for me, and I make lists all nice. Everything else is ASCII, or APL glyphs if I need to make little comments or notes. It's just, yeah.

00:57:52 [BT]

And actually, keeping everything in the same thing: that's, I think, what we're facing in the information technology industry, that everybody's coming up with new things on a daily basis. And so that's where, when Conor talks about building bridges, those can be more permanent things that link the way you're thinking about something to the way other people might be doing something. Even though the flavors or the details may change, if you can anchor the bridge to a structure that's pretty solid, you have a chance to bring ideas across the bridge, as opposed to the details and all the implementation, which can be really kind of screwed up between parts of the bridge.

00:58:39 [CH]

Yeah, I wish, when bridges are being built, or ideas are being taken from one language to another, one community to another, that that stuff was more explicitly called out, because a lot of the time you don't know. You know, there's a really well known compile-time metaprogramming library that was written by an individual by the name of Louis Dionne.

A really, really young, bright guy in C++, and it got into the Boost libraries; it's called Boost.Hana.

But very few people know that it is essentially just an implementation of the Haskell Prelude, the Haskell standard library, in C++, in a metaprogramming context. If you're familiar with Haskell, you will recognize a ton of the function names, 'cause they're borrowed, but if you're not familiar, you'll never know. Although I'm not actually sure; maybe the documentation points out the connection. But a lot of the time, yeah, these languages... That's why I love Elixir: because José Valim, the creator of that language, is constantly saying, you know, I explicitly took this one thing from Clojure, and we took the syntax from Ruby, and we took the concurrency model from Erlang. He's never shy about explicitly mentioning where the ideas came from, and that can lead to cross pollination and whatnot. And some communities just do a way better job than others. Swift is my classic example, because they're out of Apple. I think all of the developers, the people working on Swift, do a great job; whenever they give talks, they call all that out. But the sort of upper-level management would prefer to spin it as: guess what we invented. It's like, no, you didn't invent that; it's based on a couple of decades of work. Anyway.

01:00:39 [ST]

Well, and...

01:00:40 [BT]

The invention part is sort of obfuscation: you're trying to hide what somebody else has done. I think, in your first example, bringing something across from Haskell to C++, you're just importing something. But the building of the bridge, I think, is a significant metaphor, because other people can travel back and forth on a bridge. Whereas if you just bring something in and dump it, they can use it, but that's not a bridge: you've imported, but you haven't built a bridge. A bridge is a different thing to build, and I think one of the big challenges of building bridges is when you're building to an area you don't know. You don't know where the footings are on the other side of the bridge, and you're trying to figure that out; you have to learn both areas and then figure out if there's a bridge across, and what things go in and what things come out. And to me, as much as it's all highfalutin, that's where the link to category theory comes in, because I think we're talking about different categories and seeing how they fit together. That's what you're trying to do when you're building a bridge.

01:01:42 [CH]

Yeah, there was just an episode of the Lex Fridman podcast; he had on, I'm going to forget his name, [Travis] Oliphant, I believe, the creator of Anaconda, SciPy, NumPy, a bunch of data science libraries and ecosystem stuff in the Python world. And for a brief five minutes he mentions APL and the influence it ended up having on the early stages of what became NumPy, which is basically an array DSL in Python. But in the few minutes that he talks about APL, you can tell that he only very peripherally knows about APL, and a few of the things he said were incorrect. And that's the thing: when you're trying to build a bridge or talk about something, it's very hard to be an expert in everything. So inevitably, when you end up talking about other communities, about what's on the other side of the bridge... yeah, bridge building is difficult, because you need to know what's on both sides. And the language, you know: how do you communicate to the people on the other side of the bridge so that they get excited about the stuff that's on your side?

Without just sounding like, not a crazy person, but, "what is this guy going on about, talking about birds?" Probably a certain number of people are just thinking that, and I haven't even mentioned... people wondering, where did the birds come from? Why is it a bluebird, a blackbird? It comes from a book that was written by a mathematician.

01:03:10 [BT]

"To mock a mockingbird"

01:03:13 [CH]

Yeah, "To mock a mockingbird". What are you going to say Rich?

01:03:13 [RP]

Thought you were going to go on about literally where the birds come from.

01:03:15 [BT]

Eggs

01:03:17 [RP]

It's a different topic entirely.

01:03:20 [CH]

Yeah, where where do babies come from? Where do storks come from?

01:03:24 [RP]

Was there eggs?

Was it eggs?

Are you sure?

01:03:26 [BT]

You asked where birds come from. If you'd asked where eggs come from, they come from birds, right?

01:03:33 [CH]

All right, we're in dangerous territory; this podcast is fully going off the rails here.

01:03:38 [BT]

Earlier on, Stephen was talking about how he likes to get all this hard work done at the beginning, so that towards the end of the day you can sort of do emails and things like that. I think this episode has been that way. We front loaded an awful lot of information. And now we're just coasting.

01:03:57 [ST]

Are we still recording?

01:03:59 [BT]

That will be the opening.

01:04:04 [CH]

All right, any final words before we wrap up this episode?

01:04:09 [BT]

Just that I hope Adám is happy that he will probably get a chance to represent his opinions on dyadic hooks, 'cause I'm looking forward to that; we did not get to dyadic hooks.

01:04:19 [CH]

Oh yeah, we didn't even get to dyadic hooks.

01:04:22 [RP]

I'm mildly relieved that I managed to skirt past that, 'cause I feel like I might have done it a disservice. It's not that complicated, and the point is fairly simple, but you know how Adám's passionate about this stuff. I don't want to misrepresent him further.

01:04:37 [CH]

Alright, so look forward to part three of this tacit programming conversation, where we pick up where we left off and Adám tells us how he feels about dyadic hooks. Alright, thanks everyone for listening. Happy array programming.

01:04:53 [ST]

Happy array programming.

01:04:54 [BT]

Happy array programming.

01:04:55 [RP]

Happy array programming.

01:04:56 [RP]

Do we say this now?

01:04:57 [BT]

We get different and better closes every time.