Transcript

Transcript prepared by Bob Therriault
Show Notes

Due to technical issues, this transcript does not include timestamps. We apologize for any inconvenience.

[Lynn Sutherland] The power for me is the expressiveness of the language, how tightly and concisely you can express very complicated computations, nested arrays. Because of nested arrays, you could have functions inside data, inside data inside functions, in a nested, multidimensional way.

[MUSIC PLAYING]

[Conor Hoekstra] Welcome to another episode of ArrayCast. My name is Conor. And today with us, we have a special guest who we will introduce in a moment. But first, we'll go around and do brief introductions with our four panelists. We'll start with Stephen, then go to Bob, then to Marshall, then to Adám.

[Stephen Taylor] I'm Stephen Taylor. I'm an APL and q enthusiast. Yeah, this episode, I'm going to be an enthusiast.

[Bob Therriault] Well, I'm always an enthusiast for J, so I will join Stephen and I am Bob Therriault, a J enthusiast.

[Marshall Lochbaum] I'm Marshall Lochbaum. I'm actually a Singeli enthusiast, but a few years ago, I made BQN, and I still work on it from time to time.

[Adám Brudzewsky] I'm Adám Brudzewsky. I remain enthusiastic about APL.

[CH] And as mentioned before, my name's Conor. I am a polyglot programmer with an enthusiasm as well for all the array languages. All right, so we have a few announcements to go through. I think we'll go to Adám first, who's got two, and then I'll spin the bottle for our last two announcements, and then we'll get to our conversation with today's guest.

[AB] Okay, so the 2023 APL Problem Solving Competition has concluded and the winners have been chosen. [01] Congratulations to them. We'll leave a link to information about that. And, well, speaking of winning -- bread-winning -- Dyalog is seeking a system administrator/enterprise architect. So this isn't really an APL programmer role, but it's in the general area. It's at Dyalog Limited, of course. If you're interested, have a look. We'll include a link.

[CH] Awesome. Let's go to Marshall next.

[ML] All right. I have an announcement, which probably doesn't concern too many people, but maybe some people have run into this. CBQN, which is the C implementation of BQN, has been licensed under the GNU General Public License, GPL version 3, [02] which is a pretty strict open source license: if you make something that incorporates CBQN, you have to license everything under the GPL and so on. Dzaima has led the effort to change it to a more permissive license, and we got all the contributors to agree to this as well. So we're switching it to the Mozilla Public License, which allows you to do things like incorporate it into another project without necessarily licensing that under the MPL, as well as the AGPL, which is pretty similar. So yeah, if you've run into any licensing issues where you want to incorporate CBQN into some other open project, it may be easier for you now.

[CH] Awesome. And with that, we will go to our last announcement from Stephen.

[ST] I just want to say a big thank you to the people who stepped forward to help me work on my book. You may remember I'm working on a book on vector programming in q, on how to retrain your brain from that one-potato, two-potato looping approach through to working with vectors. And a number of people have reached out, and I'm working with them at the moment, one-on-one, on tutorials. We may get to a group later on. I'm learning a lot about what the barriers are and what kind of help people need. Having a lot of fun working our way through some amusing problems in q. And I've got bandwidth for a few more people if there's anybody else interested. [03]

[CH] Awesome, so as always, links for everything will be in our show notes if you wanna check those out. And without further ado, today's guest is Lynn Sutherland. Lynn has been, for I believe the past nine-plus years, an industrial technology advisor for the National Research Council of Canada. But on top of that, over the past few decades, she has held a plethora of titles. I will rattle through, I think, all of them: chief operating officer, founder, director, executive director, vice president, president, board member, and advisor to a ton of companies, and I think we'll let her highlight the ones she wants to. So she's been in the industry and is a veteran of what she's been working on. We reached out to Lynn, I believe a few months ago, to see if she wanted to come on and talk about the NIAL programming language. That's N-I-A-L, to be unambiguous, and I recently learned it stands for Nested Interactive Array Language. It came out of Queen's University, which is where I believe Lynn encountered the language and did her undergrad before going on to do, I believe, her master's somewhere else. And so we've brought her on. But before we talk about Nial, I'll throw it over to you, Lynn, and you can give us a brief summary of the immense experience that you've had over the past decades. And yeah, fill in any of the sort of spaces that I've left out here. Over to you.

[LS] Okay, thanks, Conor. Hello, everybody. First, I'll say the way I got introduced to this group is I stumbled across the Discord channel talking about APL-like languages. [04] I saw that there was a stream talking about Nial, and I couldn't believe anybody was aware of this language that we worked on in the 1980s. But it's an awesome language, and that's what I'll talk about the most. But you asked me for my history from there. So just going way back, starting there, I was a research assistant at Queen's, et cetera, et cetera. We'll get into the details of that. We started a company to make this language popular, or try to sell it, or all of that. That was in the early '80s. It was interesting times, and it didn't completely succeed. So then I moved to Calgary, Alberta, where I currently reside. I started working here at the Alberta Research Council doing applied research, and actually quickly moved into artificial intelligence and robotics. And now remember, this was in the late '80s, and it's all coming back now. Even that's coming back around. For a time I did some policy work for the government of Alberta about increasing the number of people in information and communications technology. For about seven years, I was recruiting professors and grad students in key areas into Alberta, including artificial intelligence, quantum computing, high-performance computing, wireless, et cetera. And at one point in that journey, I ran the high-performance network in Alberta, the research network. And then from there, we spun off the first cloud computing company in Canada, somewhere around 2008. And at that time, the term cloud wasn't even known. Or it was.

[ML] There were clouds.

[LS] There were clouds, yes. Clouds were not exactly known, and they might have been called grids in the high-performance computing world. People were starting to call it the cloud at that time, but it was mostly only known in the academic community. But some of the people that I was working with, we decided to spin off the first cloud computing company. That was around 2008. There was a macro, worldwide financial meltdown around 2008-2009, and so that impacted tech companies at the time. More recently, I've worked for the National Research Council as an IRAP industrial technology advisor, which means I coach and mentor and fund technology-based companies in Canada. And I work primarily with software companies, SaaS companies, AI companies in Calgary and in Canada. That's the story.

[CH] Is that -- because I'm familiar with the MaRS institute, which is sort of based or co-located with the University of Toronto -- is there a similar program that you work closely with in Alberta, or is the Research Council of Canada kind of different from that?

[LS] Well, every province, especially the big four -- Ontario, Quebec, British Columbia, and then Alberta -- every city or university will have a technology hub like MaRS. MaRS is sort of the biggest one in Canada, associated with the University of Toronto. In Alberta, our innovation program -- I did work for them when they were called the Alberta Research Council, but now they've rebranded to Alberta Innovates. And then in Calgary, there's a tech hub called Platform. There are technology incubators all over the world, in every city, close to every university. Any jurisdiction that cares about tech will have these technology incubators, accelerators, programs. Whether it's the federal government, provincial governments, or even municipal governments who care about growing their tech sector, they will have facilities and programs in these areas.

[CH] That's very cool. And what was the name of it? Was it VR Storm? Was that the name of the first cloud company?

[LS] The cloud company that we started? Yeah, we have got lots of rabbit holes if we want to go down some of them. VR Storm. At the time, we were just talking about virtual machines. It was primarily the ability to run lots of virtual machines. We were getting some chips and some processors that had more than one core, more processing power. We could virtualize a lot of things. So the VR -- it wasn't for virtual reality. It was just a storm of virtual machines.

[CH] Interesting. Well, maybe we'll come back to that at a certain point, because yeah, we don't want to rabbit hole for 50 minutes and then everyone's going to be like, "What about Nial?" So maybe we'll skip back to your time at Queen's, and you can tell us the story of how you ended up stumbling into Nial and array languages in general, as I'm sure you have some familiarity with the ecosystem of APL and J and whatnot. At least you can tell us to what degree.

[LS] I haven't been following the area of array languages, but I will pick up a bit more now that I know that there are people who are actually interested in this area, because it's an incredibly powerful programming paradigm. I was doing math and computer science at Queen's. [05] And this was, say, 1980. To give it context, that's 44 years ago. That's a long time ago. I was an undergrad, finishing third year, wanted to stay in Kingston -- beautiful town in Canada, little city -- and went around asking the professors for jobs and got hired as a research assistant on the Nial project. Back then, the language didn't exist, and the founders came from the APL community. So let me get into that a bit: who the characters back then were, or who some of the players were. Okay, so the lead professor at Queen's was Mike Jenkins. And this is where I don't know how much people know about Nial, so I'm going to give some of the details. So Mike Jenkins was the lead professor at Queen's. Then Trenchard More. Trenchard More was a mathematician who worked for IBM, but came from Princeton; his background was from Princeton. He's the founder of array theory. And so Trenchard was key. It was pretty much Mike and Trenchard who took the concepts from the foundational APL programming paradigms, merged them with Trenchard's array theory, and designed this Nial language from scratch. Now Nial stands for Nested Interactive Array Language. It didn't have a name at the time, the same way the cloud didn't have a name for a while there. This language didn't have a name, but eventually we called it Nial, also the name of a Norse legendary god, the god of truth and justice and law, although spelled with a J. And there were in fact researchers from primarily Denmark -- APL was very hot in Scandinavia at the time. So there were APL enthusiasts from Scandinavia, there was Trenchard from IBM -- IBM Cambridge, Boston, so essentially on the MIT campus there -- and the group at Queen's.
So we had a research group at Queen's, and we were designing a new language, basically from scratch. We wanted all the power of the functions and the operators of APL, but we didn't want the funny characters, so we gave them all reasonable names. And we also wanted procedural programming abilities, because that's the way a lot of people think. So we wanted to combine, you know, looping and conditionals with the power of operators. Anyways, we were building this from scratch. And let me describe the computational power back at that time, the tools that we had. We would program on a VAX minicomputer. [06] And we'd run some tests on that. We also had the first IBM PCs ever in Canada for the researchers, which had 64K of memory originally, then 256K, and you could only talk to 64K at a time. We would compile our code on the VAX, maybe run some tests, and then we'd cross-compile it and put it on the IBM PCs, 'cause our research was kind of being funded, or supported at least, by IBM. We'd have to do magic to get it to run on the PC, but we were running this powerful programming language on VAX 750s and 780s and IBM XTs -- the first version, which didn't even have a hard drive. So the power for me is the expressiveness of the language, how tightly and concisely you can express very complicated computations, nested arrays. You can operate without a loop -- you can use an each, we call it the each operator -- and express parallelism with one word. Because of nested arrays, you could have functions inside data, inside data, inside functions, in a nested, multidimensional way. [07] So we were building this, we were writing user manuals, we were porting it.
You know, once we had a pretty stable version, we started porting it to multiple different platforms, because there was a Unix boom going on at the time. The telcos -- AT&T was being broken up -- and computers were pretty new and networks were pretty new at the time. There was a lot going on. I've got to pause there because I just rambled on for a while.
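Nial's each is a single word that does what an explicit loop would. Here is a rough Python analogy of the idea, using plain lists to stand in for nested arrays; the `each` function below is an illustrative sketch, not actual Nial syntax:

```python
# A rough analogy to Nial's "each" operator: apply an operation to
# every item of an array with one word instead of an explicit loop.
def each(f, xs):
    """Apply f to every item of the list xs."""
    return [f(x) for x in xs]

print(each(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]

# Nested arrays: data can contain data to any depth, and because
# functions are first-class values here, they can sit inside data too.
nested = [1, [2, 3], [each, [4, 5]]]
fn, args = nested[2]        # pull a function back out of the data
print(fn(lambda x: x + 1, args))  # [5, 6]
```

The each application is also what she means by expressing parallelism in one word: the items are independent, so an implementation is free to process them in parallel.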

[BT] I've got a question, Lynn. At that point, Nial was interpreted, right? It wasn't compiled, is that right?

[LS] Nial is always interpreted. Yes, it's an interpreted language, but we had to build the interpreter, and the interpreter was compiled. The Nial interpreter is written in C, and you would get a compiler that would compile for a Unix platform, or a cross-compiler to compile for the PC platform, or you'd use the compiler of whatever machine you had. Back then, there were a lot of Unix machines popping up -- Sun Microsystems started around that time, Amdahl, there was Silicon Graphics, there was the start of lots of different Unix machine companies -- and they'd each have their own compiler. Even though Unix is sort of standard, you'd have to compile on each machine. But yeah, Nial itself is an interpreted language.

[BT] And it was a nested array language, right? That was part of Trenchard More's theory of arrays -- it was nested arrays. So I believe even scalars are considered arrays. Is that correct?

[LS] Yes. Everything is an array. Let me describe Trenchard's work a little bit. It's foundational, and I hope it doesn't disappear. I have hard copies of all of his reports and have read all of them. They're all from the '70s and '80s. And it's a whole field of mathematics, which involves these nested arrays. So there are vectors, which have one dimension; arrays can have any number of dimensions; and even scalars are sort of considered to have zero dimensions. And any array object can contain other array objects, and even functions and operators are arrays that have a value and can be contained inside arrays. This is where I don't know what the modern line is, how modern--

[ML] So maybe I can give some context, APL context, to that. So Trenchard More is definitely recognized in APL's history. Jim Brown, [08] I would say, is the big name associated with this nested theory, which is what the APL implementations now almost all use. But Trenchard More is definitely recognized as contributing a whole lot to it. And I would say his name is probably even more prominent in the Nial programming language. So the function and operator thing is what I'm pretty interested in, because APLs definitely don't -- well, they almost definitely don't do that. There's some weird mechanism in Dyalog, particularly, where you can take an array of namespaces and then you take a function out of each namespace, and then you get this thing that's sort of an array of functions, but it's not very easy to work with. So Dyalog doesn't really say, oh, you can make an array of functions; that's not what it is. But yeah, the nested theory, as far as I know, they're exactly the same. So they share the idea that the enclose of a simple scalar, like a number, is the same thing: if you keep enclosing it, it just stays the same array, and you can disclose it as well, and it's still the same. And yeah, that's the major feature. There's another branch, in like SHARP APL, that doesn't have this. That enclosing thing is often called floating, and the other model is grounded, which I don't know if we really need to get into, 'cause that's veering pretty far away from Nial. But so yeah, this array theory was definitely very influential.
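The floating model Marshall describes can be contrasted in a toy sketch. Everything here (the `Box` class, `enclose`, `disclose`) is hypothetical Python for illustration, not any APL's actual implementation:

```python
class Box:
    """A minimal stand-in for an enclosed (nested) array value."""
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, Box) and self.value == other.value

def enclose(x):
    # Floating model: enclosing a simple scalar is the identity,
    # so no amount of enclosing adds depth to a lone number.
    if isinstance(x, (int, float, complex, str)):
        return x
    return Box(x)

def disclose(x):
    # Disclosing a simple scalar is likewise the identity.
    return x.value if isinstance(x, Box) else x

assert enclose(enclose(enclose(5))) == 5      # stays the same "array"
assert disclose(5) == 5
assert disclose(enclose([1, 2])) == [1, 2]    # non-scalars do gain a level
```

In a grounded model, by contrast, enclose wraps even a simple scalar, so repeated encloses keep adding depth.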

[LS] If you're thinking about it as math -- array theory as a mathematical domain -- you want all kinds of operators to work interchangeably, things like associativity and commutativity, the basic things that you want in a math world. You want reciprocal functions, like you want to have an inverse function so you can untransform things back. And you want the math to hold in every case, because then it makes what the language does, and how to use it, even more simple. So we worked on a lot of that, the theoretical part. We had lots of debates about what to do in certain conditions, what it meant mathematically, what would hold most mathematically consistent. And we wanted to balance that: we wanted to hold true to the math, but also to make it a very easy-to-use programming language. So we did our best at that time to combine those two things -- the mathematical pureness of what we were doing, where everything is an array, whether it has zero dimensions or any n dimensions, and anything can hold anything; that's some of the fundamental part of the math behind the language -- with making it a very, very expressive and usable language, which I think we did. And in the 40 years that have passed, there's been divergence of different array languages. I can see that from what I'm uncovering now about the state of the world. Wouldn't it be nice, I'll just say that, if there was one array language that everybody could agree on?

[ML] And they all picked mine.

[LS] And we could promote the power of this kind of programming to the world. It would be great. I'm still a big, big fan, and I think mentally in arrays, but I do very little programming these days and very little language promotion.

[CH] So you mentioned that they -- Mike, and I'm not sure if Trenchard was a part of that -- started a company. Was that company started before you started your work? And I think you said it was the summer after your third year at Queen's. Or was that after? And what's the story behind the attempt at kind of commercializing and promoting the Nial language?

[LS] There's two things that went on in that time. Let's give it a start year, say 1980, though it started before that, in the late '70s. So starting 1980, the language didn't exist yet; we were writing the interpreter, et cetera. Then, sometime around that time, a number of things happened. A company to commercialize this was created, and that was called Nial Systems Limited. Also at the same time, another thing that happened that was very significant: by the time we had a full version that we could port to different computers and things, it was the company that started doing that. We started trying to make it commercially available. At some point, after I graduated, I moved from being a researcher to becoming the only employee -- the programmer, the bookkeeper, whatever, everything.

[CH] You're reporting HR complaints to yourself.

[LS] So I was the employee. But also around that time -- and the year was '83 to '84 -- the Queen's Nial group went on sabbatical at MIT, on campus at MIT, and IBM Cambridge was right across the street. And lots of other researchers came and worked on the project. And I did some more of the project work, whether it was the theory work or the research work, the academic work. But at the same time, we were trying to port it to as many new Unix machines as possible and get paying users. That's what you do when you're a company. And we did, at the time, get some. It was insurance companies -- people who used APL at the time, who wanted, you know, a language that ran on any of the new Unix machines. And you could pretty easily, one for one, take APL code and rewrite it into Nial code that had all the same or very, very similar operators, and even more power. So we were making a few sales here and there. I have a box of all my Nial stuff from that time. I keep things, I collect things, and I have one full box of Nial content from that time. Trenchard was giving a course at MIT on array theory, and I have the notes from the grad course that I took at MIT at the time. I know this is an audio recording, but I'll show the other people who are on the call here. There was a magazine called Computer Language at the time. [09] So there were enthusiasts talking about computer languages -- computing was pretty new back then. Now there's just lots of people who talk about different programming languages and paradigms and things, but it was in a different way back then. It seemed pretty new and exciting. And the other thing I want to mention is something else I uncovered in the box that I went through. My office mate at Queen's, and another person who came down to MIT with us, was Carl McCrosky. He went on to become a professor.
And I have his PhD thesis, and it was on creating a processor, a microprocessor -- a Nial-specific microprocessor. I just think these languages express parallelism so well. They're so powerful. Now I think, wouldn't it be neat if there was an array processor that could run arrays?

[CH] Kind of sounds like a GPU.

[ML] It sounds like a modern CPU, to be honest. You may not know this: one of Intel's latest instruction sets, AVX-512, has instructions that are just taken straight from APL, including the names. It's got compress and expand.
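For listeners who don't know those APL primitives: compress keeps the elements a boolean mask selects, and expand scatters elements back out according to a mask (AVX-512 has the VPCOMPRESS/VPEXPAND instruction families for this). Python happens to ship the first as `itertools.compress`; the `expand` below is a hand-rolled illustration, not a library call:

```python
from itertools import compress

data = [10, 20, 30, 40, 50]
mask = [1, 0, 1, 1, 0]

# Compress: keep the items where the mask is 1 (APL: mask/data).
kept = list(compress(data, mask))
print(kept)  # [10, 30, 40]

# Expand: place items at the 1-positions of the mask, filling
# elsewhere (APL: mask\items). Hand-rolled for illustration.
def expand(mask, items, fill=0):
    it = iter(items)
    return [next(it) if m else fill for m in mask]

print(expand(mask, kept))  # [10, 0, 30, 40, 0]
```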

[LS] Yeah, it's great that these capabilities are becoming available at the chip level. Although if you have to program at the chip level, yikes, I don't want to do that. Here's another aside and rabbit hole that I won't go into in a deep way: now that we have large language models, generative AI, all of the transformer architectures and things like that, that obviously uses massive parallelism. And I don't know how they express that. Although there was a time when I did parallelize neural networks, because obviously they're parallelizable, but that was also 35 or 40 years ago.

[CH] I mean, I don't know in great detail, but I do know that OpenAI -- which, [10] for listeners that aren't aware, is the company behind ChatGPT, which you could argue popularized this massive wave that's going through the industry right now -- released a programming model called Triton. Not to be confused with NVIDIA's Triton; it's OpenAI's Triton. And it is a kind of dialect where you write in Python, but it exposes to you operators that are basically similar to the APL array style, where you are performing, you know, some kind of binary scalar operation on a matrix and a vector and stuff like that. So you can play around with it. It's obviously far away from the convenience of array languages, but if you do play around with it, it does have some stuff where you can see the shadows of array programming and array languages like APL influencing it. Which, I mean, you can say that about NumPy and all the popular frameworks -- insert your language here. They all are kind of inspired, at some point if you follow the historical lineage back, by APL or a language like that.

[LS] Nice. Yeah, thanks. I'm learning about things like that through this conversation here. So, Conor, you were talking about starting the company at that time, so I know it's, let's say, '83, '84-ish. I'll just describe a few things at that time. One, the Apple Macintosh was released in '84. So this is putting the computational power in context. The very first Mac showed up at the MIT bookstore or library, and everybody would go and go, "Ooooh, aah." And it too, at the time, didn't have a hard drive. All it had was a floppy disk. There were APL conferences; I recall going to APL conferences. Actually, I recall APL 84 was in Helsinki. [11] But eventually -- it didn't take too long, two years or so -- we realized we weren't going to have a lot of paying customers immediately. So we went our separate ways. When I moved to Calgary, I still kept in touch with the Queen's group for a long time; I still do a little bit. That was essentially the end of my involvement directly. And we open sourced it. Queen's had given the rights to the language to our company, Nial Systems Limited. We decided to wind down the company, and we open sourced the code.

[CH] So the business model was similar -- and Adám can correct me if I'm wrong, if I'm misspeaking about Dyalog Limited [12] -- you're selling the executable, or however you're releasing the interpreter. So it's closed source at the time. And, you know, for reasons, that was a challenge. And after a couple or several years, you decided to just open source the code and wind down the project. So the listener can't see, but Lynn is nodding her head. So do you know if at the time there was any alternative? Because I know there's a handful of companies that have tried to build their company around a language. Julia is an example of a language that did that, and they've been successful to a certain extent. Clojure is a Lisp that also did that. And so there's different business models: there are some similar to Dyalog Limited -- and KX has their q programming language -- where they are basically selling the language and you have to pay some kind of fee, and there's others, like Julia and Clojure, that give the language away for free and then try to build some kind of consulting company around it. It's definitely a very challenging thing to build a company around a language, especially these days in the world of open source. At the time, though, in the mid '80s, were other alternatives considered, or was this kind of the one attempt, and then it wasn't proving successful enough? Because it does sound like you did have some customers, and then you just decided to wind it down.

[LS] At the time, the dominant software revenue model was licensing an executable, and then perhaps adding on consulting or monthly maintenance fees or things like that. But there was an open source movement at the time still -- this was Unix before Linux, but there was open source Unix, and there were companies that would support open source Unix. Red Hat, say, their revenue model being supporting an open source operating system. But we would sell the executable for whatever operating system, whatever hardware you were running. It was an on-prem executable, a license for an executable. In my thinking about the evolution of programming languages, this was pre-Java, and even Sun Microsystems is gone now and things like that. So I think it's very difficult to sell a programming language, if not impossible. You can enhance open source tools -- programming tools, programming environments, support, expertise. This leads into what I do now, which is coaching software companies, say startup companies or technology companies. Although you want to have a technology that is a platform -- and a language is a platform, you can do anything with it -- it's very difficult to sell a generic platform. And especially these days, everything's open source. People expect to get their code for free, almost. And in fact, the openness of code these days is what's helping the capabilities of code, and humanity, kind of scale or grow exponentially, because it's all available to everybody, I think. But the business models around code, or selling languages, are a big challenge, very challenging.

[CH] Yeah, I was just checking, and at least according to rankings that I've compiled, I think all 20 of the top 20 programming languages are free. The most popular one that is paid for is MATLAB, [13] which I just actually looked up, and coincidentally it got its start in '84. So right around the time that Nial was being worked on as well. Stephen, I think you wanted to ask something.

[ST] Yeah, thanks. I'm very interested in this period of the early '80s, because that's pretty much when APL usage went into a very steady decline. The business model operating before that, at any rate, was, from what I saw, primarily time-sharing revenue. People would pay for the processing, because before PCs, APL was pretty much the way in which you could get any kind of personal computing done. APL and BASIC -- we used to beat the pants off BASIC programmers. You were mentioning earlier, Lynn, that APL programs could be pretty easily translated into Nial. And a major obstacle for getting APL onto the early PCs -- which is where people wanted to move their personal computing, because they really didn't want to be paying those time-sharing bills -- was that you just couldn't do the character set. And I'm wondering what your experience was, because you'd have this big reservoir at the time of APL customers who would be wanting to get off the time-sharing services. I wonder if you've got any memories you can share of that.

[LS] In retrospect, it would probably have made sense to just sell to APL users and have, whether it's a converter or just instructions, a way to convert your APL to Nial -- to go at it that way. I don't think we really did that. It was APL users who were our early adopters of the Nial language, but I think there might have been too many shifts going on at the time. I don't know. It's not factions, but there were leaders and leaderships and thoughts and different directions happening even within APL at the time. So I don't know how big the market was; selling a language on a new platform could have been too disruptive at the time. It could have been us not marketing well at the time. I'm sure that had a lot to do with it. We had no idea what we were doing. We were starting a company trying to sell a language that we thought everybody would buy. That's what a lot of entrepreneurs think: if I build it, my customers will just flock to my fabulous product. I don't know what dynamics led to us not particularly succeeding. But with APL declining, we knew that a barrier to APL moving onto more commodity hardware like personal computers was the character set. That was very foreign for making programming accessible to normal people, people who couldn't think in character sets or whatever. That was a huge barrier to APL's transition to the personal computer generation, I guess. We had a solution to that, but we didn't really know how to sell it or who would want it. We didn't really have a market for that at the time.

[BT] I guess the other functionality that you sort of introduced with Nial, and I think you mentioned early on, is that you had procedural aspects of it too. So if you were making that transition, it was a little easier to ease yourself into the array mindset.

[LS] Definitely, you could take procedural languages and move them into the interpreter. Like, we were an interpreter. We were also sort of like a piped language: you could pipe things into the interpreter and have things come out, like Java ultimately. So I think Java was the thing that ultimately succeeded from the work that was being done in the general area. So if you wanted to take an algorithm that was written procedurally, you could translate that pretty easily into Nial also, because we used if-then statements and assignments in a syntax similar to procedural languages. And then you could just wrap it and make it a function if you wanted to, and more powerful than could be communicated at the time possibly.

[BT] Well, and I guess the other thing, as you were mentioning, was the fact that you could take your operators and put them into arrays, which even today, I know it can be done in BQN, Marshall's language, but in a lot of APLs it's not something that's done easily. J sort of does something like that with its gerunds, [14] but that's a really awkward approach to trying to do anything in any kind of capacity. So Nial, I think, probably had the jump on a lot of other languages with that. Was there anything else that you're aware of that was doing first-class, higher-order functions or operators at that time?

[LS] And actually, I can't imagine if you put, like if you had an array of operators, what you would do with that, but you--

[ML] Oh, I use it all the time.

[LS] Show us a great example, Marshall.

[ML] Yeah, so one thing is if you have, if you're going to make a choice and you have a number of different things that you might do, you might take an array of functions where each function does a different action and then choose from that array and then evaluate the function you get.
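The pattern Marshall describes, keeping an array of functions, choosing one, and then evaluating the choice, can be sketched outside of an array language too. Here is a minimal Python analogue; the function names (`halve`, `negate`, `square`, `dispatch`) are invented for illustration, not anything from Nial or BQN:

```python
# Three possible "actions", each a unary function.
def halve(x):
    return x / 2

def negate(x):
    return -x

def square(x):
    return x * x

# The array of functions: functions are ordinary values in the array.
actions = [halve, negate, square]

def dispatch(i, x):
    # Choose a function from the array by index, then evaluate it on x.
    return actions[i](x)
```

So `dispatch(2, 4)` selects `square` and applies it, giving 16; the selection and the application are two separate steps, just as in the array-of-functions idiom described above.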

[LS] I haven't thought of programming in this way in a long time. The main thing was the mathematical consistency of being able to have reversible operations. So for almost every function, we wanted to have an inverse function, which makes it really nice and mathematically sound. So that was one of the unique things. The other was being on a regular keyboard, but having the powerful functions and operators. And then having operators, functions, and atomic data as fundamental first-order objects that you could manipulate in the same way. They were the same thing all the time. They were just a nested array. Everything is a nested array. Having all of that made it incredibly powerful.

[ST] I must say, I hadn't realized until I did the Nial tutorial [15] just how many features Nial shared with q or k. Some examples: projection of left arguments, so "five plus" is a unary function in both languages. The simple thing of defining multi-line functions by as many lines as you indent; just the indent does it. Direct access to the interpreter. Nial's phrases, the same as k symbols. Built-in enumerations. Even down to using the term valence for the number of arguments of a function and for the number of dimensions of an array, which is something that q does. But in q and k, of course, you can actually apply arrays, essentially through implicit indexing. And you've got each and each both, the use of the at symbol for functional indexing. I was getting quite shivers running through the tutorial. I also wanted to mention, I don't know, Conor, how closely you've looked at this, but I know your fondness for combinators.

And I was most impressed at the way that you can use the atlas notation and strand notation in combination to get the effect of a combinator. [16] So I guess the iconic example would be that you could take the two operators, sum and tally, and put those into a list, which Nial calls an atlas, which is kind of fun, because if you think of a function as a map, then a list of functions is an atlas. So enclosed in square brackets: sum, comma, tally. You juxtapose that to a vector, and you get its sum and its count. And then to the left of that, you write slash for divide. So if you take slash and then that atlas in strand notation, without any extra syntactic definitions you've effectively got point-free programming. It's a composition, and that blew me away.
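The average definition Stephen is describing, in Nial `average is / [sum,tally]`, can be simulated in Python to make the mechanics concrete. This is a rough analogue of the semantics, not Nial syntax; `atlas` and `divide` are invented helper names:

```python
# An "atlas" [f, g, ...] applied to x yields the list (f x, g x, ...).
def atlas(*fns):
    return lambda x: [f(x) for f in fns]

# A binary function written prefix consumes the resulting pair.
def divide(pair):
    a, b = pair
    return a / b

tally = len  # Nial's tally: the number of items in a list

def average(xs):
    # The analogue of: / [sum, tally] xs
    return divide(atlas(sum, tally)(xs))
```

Applied to `[1, 2, 3, 4]`, the atlas produces the pair `[10, 4]`, and the prefix division turns that into 2.5, which is exactly the point-free composition being discussed.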

[CH] Yeah, I remember looking at atlases, and they were kind of an ad hoc way of building up combinators based on the order of operations, but it's a different approach to what BQN and APL and J do by providing you basically symbols for different composition patterns. You can kind of build up, if I recall, a subset of those. You can't do everything, but you can if the composition pattern can be built up linearly, 'cause that's basically what an atlas is. It's just a comma-delimited set of functions in brackets. Yeah, so Adám's putting in the chat: average is slash, bracket, sum comma tally, end bracket. Which essentially, if I recall correctly: sum and tally are both unary functions; they each take a single argument. Those two functions will be individually applied to whatever list you give it. So you'll get back the sum of the numbers and then the number of elements in the list. And then the division is basically a prefix application of a binary function. So you can probably do, not everything, definitely a subset, close to everything, but some of it will end up being more verbose. Like the psi combinator, [17] which is what is called over in, I believe, both BQN and APL, is the application of a unary function to two different... actually, I guess you could just do that. That would be the same thing. No, yeah, so I was correct. I'm confusing myself. The psi combinator is the application of a unary operation to two different arguments. I don't think you can spell that with an atlas, 'cause you need to apply the same unary operation to two different arguments and then pass the results of those to a binary function. Whereas an atlas, I don't know, can you... I don't think you can pass an atlas... I'm actually, I have no idea. Maybe you can do this. Can you pass it a left and right argument and then have one function apply to both?

[LS] You have to give me an example, but we have, there's some, there's each both. So we have some operators that would do that kind of thing. There would probably be some way that you could do that, but I can't off the top of my head.

[ML] Well, maybe you'd want to make an atlas of two-- of the same function twice.

[LS] Yeah. There's different ways. You would never get an error in the-- you could make it do something. It would do something for you.

[CH] I mean, each both does sound right. Does sound-- >>Yeah, each both-- >>Correct, yeah.

[LS] There's each left, each right, each, and each both. Those are sort of the four different ways of describing what to do in some circumstances. Like whether you, yeah, whether you distribute it; like each left would distribute the leftmost thing over everything on the right.

[ML] Well, these also made it into k. Actually, Stephen might've mentioned these. So I do wonder if Arthur studied Nial a bit, or if he was reading the same sources as the authors or something like that, 'cause there's a lot in common.

[ST] Oh, it's quite unnecessary. Canadians are just like telepathically linked. (both laughing)

[CH] Well, speaking of the similarities between q and Nial: when, Lynn, you mentioned that there was built-in parallelism in the each operator, the very first thought in my head was, huh, that's interesting. There's one other language that I know of that has a parallel each operator, which is q. q has both each and peach. [18] The P, I believe, stands for parallel. So it's not built-in parallelism in their each operator, but they have a parallel version of the each operator called peach. And I'm not sure if you can recall if there were two different ones, or if there's just one each and it just automatically parallelizes based on some kind of heuristics. But that is a very interesting artifact, that there are, at least that I know of, two languages that have this sort of parallel operator built into the language, and that's q and Nial, for all I know. And I know J has added support for threading, and APL has some support for this kind of stuff, but it's not built into the each or mapping primitive, if you will.

[LS] Yeah, in the early '80s, there was no such thing as parallel computers, really. We had concepts of parallelism and we started conceptualizing parallel algorithms, and when can you parallelize things. And actually, I mean, there was theory about what is parallelizable and what isn't, but we weren't doing parallel code. But the each operator implied that you could do it. It was peach. Our each was peach. But we weren't parallelizing the each. But conceptually, each was parallel.
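The each/peach split described above, a sequential map versus a semantically identical parallel map, can be mimicked in Python. The names `each` and `peach` here are borrowed from q purely for illustration; this is not a real library API, just a sketch of the idea:

```python
from concurrent.futures import ThreadPoolExecutor

def each(f, xs):
    # Sequential "each": apply f to every item in order.
    return list(map(f, xs))

def peach(f, xs):
    # Parallel "each": same contract, same ordering of results,
    # but items may be processed concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(f, xs))
```

The key property Lynn is pointing at is that each is conceptually parallel: because the applications are independent, `each` and `peach` must return identical results, so parallelism is purely an implementation choice.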

[CH] Well, we are past the hour mark, but I wanna make sure that everyone has an opportunity to ask any last questions before we let Lynn go. I mean, I or Bob, go ahead.

[BT] Oh, well, just one quick question. Um, I believe k and q allow for multiadic, like more than two arguments. Was Nial dyadic, like you could have two arguments, but could you have three arguments without doing a little bit of acrobatics? Or was it pretty much two or one argument?

[LS] No, you can have, um, as many arguments to a function as you want. I don't know how you would compose the operators. Some of the built-in operators were kind of two-argument type things.

[ML] Yeah, it's one of these big limitations of the APL syntax, which is great in its own way, but if you step outside of it, you kind of notice that a lot of the limitations were not necessarily arbitrary, but they really were pretty constraining, in order to be able to use all that infix syntax. But BQN also adopts that model though, right? It's just unary or binary.

[ML] Yeah.

[CH] And I have thought that there are these features of q, and I guess maybe k as well, seeing as q is just wordified k, that I think get talked about much less often on this podcast, and I'm sure in other array language conversations, because it's not really that interesting from a notation-as-a-tool-of-thought kind of point of view: the number of arguments that a function can take. You know, APL, BQN, J, they just take one, they take two, that's the way it is. And if you want to take more, typically the pattern that I've seen is you bundle all your arguments up into an array and then you just destructure it on the first line and name them. And I wonder sometimes if part of the reason that q has found its niche and been very successful in its sort of space is because that is one of the many things that delineates it from other array languages: you can use it in a way that is very similar to other, more popular programming languages that people are familiar with, and you aren't limited in the number of arguments when you're creating a function; you can have however many you want. Even in their anonymous functions, I'm not sure what they call them; dfns is what they call them in APL; they have x and y as the first and second argument, but they also have z built in. Not that that makes a huge difference, oh, your anonymous functions can have one more argument, but it is something that doesn't really come up much in, oh, let's compare the languages and why would you prefer one over the other. And it definitely sounds like if folks are looking for an array language that doesn't use Unicode symbols, you know, typically the one that people get pointed towards is q, but it's, well, not open source. It is available to a certain extent.
You can get a license that I believe is 12-month limited or something like that and has limitations on the number of cores, but you can play around with it if you're not doing commercial things. But if you are looking for an alternative to that that is more open source, where you can go and play around with the code, it sounds like Nial might be a language worth checking out. I checked at the beginning of this episode: nial-array-language.org. [19] It's got a website, it's got documentation. And as you mentioned earlier, Lynn, how large the community is is unknown, but there definitely is a channel on a Discord that people frequent with some frequency. So yeah, I guess maybe that's a good spot to leave it. I'm not sure, Lynn, if there's anything else, or we'll let Marshall say.

[ML] If I can point out one of the big differences: Nial being, you know, Trenchard More's array theory coming into that, Nial still has the multidimensional arrays of APL. So if you've tried q and k, but multidimensional arrays are very important to you, you know, Nial also does that. So that's a pretty big thing.

[CH] And Adám, you were about to mention one last thing too.

[AB] Now, there's something we haven't talked about at all, which is that as far as I can tell, Nial is left to right, whereas all the major array languages are commonly called right to left: functions have long right scope in APL, J, k, q, and BQN, and in Nial they have long left scope. [20] In Nial, 10 plus 1 times 2 gives 22, because 10 plus 1 is 11, times 2 is 22. In all the languages that we regularly deal with, it would be 1 times 2, which is 2, plus 10 is 12. So that gives a very different feeling, I think at least, in the way you think about expressions.
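With equal precedence everywhere, the only question is which way the chain associates. A toy sketch in Python (nothing like Nial's or APL's actual parsers, just the two folds over a flat token list) makes Adám's example concrete:

```python
import operator

OPS = {'plus': operator.add, 'times': operator.mul}

def eval_left(tokens):
    # Long left scope (Nial-style): ((10 plus 1) times 2)
    acc = tokens[0]
    for op, val in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, val)
    return acc

def eval_right(tokens):
    # Long right scope (APL-style): (10 plus (1 times 2))
    acc = tokens[-1]
    for op, val in zip(tokens[-2::-2], tokens[-3::-2]):
        acc = OPS[op](val, acc)
    return acc
```

Both folds use one uniform rule with no precedence table; on `10 plus 1 times 2` the left fold gives 22 and the right fold gives 12, exactly the contrast described above.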

[LS] Yeah, I don't know what to comment on about that. I don't know how it compares to a lot of other languages, but I think it would be trying to be more similar to traditional programming languages, or the order in which you read procedural programming languages. I'm not sure why that decision was made, or why exactly it goes that way as opposed to the others.

[BT] But it's the hierarchy, right? It doesn't have a different hierarchy. It just goes left to right as it's working through its answers, as opposed to right to left. Is that right?

[ML] Well, or as opposed to having precedence, right? As in exponentiation, then multiplication, then addition.

[BT] Which I think, in a lot of math, is what you're trying to get away from in these languages: that you can consistently just take a series of symbols and process them, and not worry about all the hierarchies of PEMDAS or BEDMAS, depending on what part of the world you're from. Mathematical notation goes both ways, whereas here there's equal precedence.

[AB] If you've got division, A divided by B divided by C divided by D, then most people get all confused, but the rules state you go from left to right; and if you have A to the power of B to the power of C to the power of D, then the rules state you go from right to left. So that's, of course, the worst of both worlds. One rule, of course, is better than multiple rules and not having a hierarchy; especially when you have so many different functions, it would be unbearable to remember the order. Just look at the precedence table for something as simple as C, and then look at the precedence table for something like JavaScript. [21] It's hopeless. So hopeless that some things can't even be in the same statement without parentheses. The interpreters will just say, "Nah, you've got to parenthesize that."
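Adám's point about conventional notation mixing both directions can be checked directly in a mainstream language. Python, for instance, keeps both conventions side by side:

```python
# Division associates left to right: parsed as (8 / 4) / 2, not 8 / (4 / 2).
left_division = 8 / 4 / 2

# Exponentiation associates right to left: parsed as 2 ** (3 ** 2),
# not (2 ** 3) ** 2.
right_power = 2 ** 3 ** 2
```

So `8 / 4 / 2` is 1.0 while `2 ** 3 ** 2` is 512; one chain folds left, the other folds right, which is exactly the "worst of both worlds" mix of rules the array languages avoid by fixing a single direction.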

[BT] You've got to tell me what you mean.

[AB] Yeah, well, it's interesting, because the precedence order is given in the table, but they still complain and say you have to put parentheses. But I'm curious to know, if it's written down anywhere at all, why the change from APL. I mean, I've very much considered that maybe APL should have gone the other way. But that means there are some interesting aspects that I haven't really thought through fully. I don't know if anybody has any ideas about this. Prefix functions, unary functions, monadic if you want, but everything goes from left to right. That's interesting.

[CH] Yeah, that was the thing that I was just juggling in my head. You could compare, let's get even more esoteric, you know, you could compare Nial to Smalltalk, which is left to right, but there's still a difference, in that for unary functions the argument is on the left. So you say the argument and then you pass it a message. For a very simple example, if you want to take the length of a list of numbers in Smalltalk, you'd write out your list, you know, one comma two comma three, and then call a message. I don't actually know what they call it in Smalltalk, we'll pretend it's size, but your sentence is, you know, one comma two comma three in brackets, space, the message size, period. So technically you're passing your single argument on the left, and then your binary functions are infix. Nial, from my understanding, is left to right, but it still has its unary functions prefix, not postfix, meaning that if you're calling the length, which I believe is tally or count in Nial, can't remember which one; tally is the length; it wouldn't be one, two, three space tally, that would be incorrect; it would be tally space one, two, three. So even though it's left to right, in the case where you have a function that takes a single argument, a monadic or unary function, it's still prefix. So it's very similar to Smalltalk in the way that it reads left to right, except in that one case, which I'm not sure if that ends up having some kind of byproduct effect that you have to keep track of, but.

[AB] Must have a profound effect though, because that means if you have a function name, or any function really, and you want to apply it to the result of an expression, you must parenthesize.

[LS] Well, and that's what I'm thinking, but this is just my thinking right now: to be consistent with mathematical notation, so f of x in brackets. So it's just reading things that way, like a function applied to x. That's how you would express things mathematically, and how most languages express things, and that's how Nial expresses things. So usually the function is on the left and the thing the function is applied to is on the right. That's what I would think the reasoning would be, but I can't exactly recall, and I don't think I'll find anything in my notes on that.

[ST] I have to confess that the effect of combining that with strand notation and with the names of APL primitives is an absolute bear trap for an old APL programmer like me. I kind of, because I've got decades of experience of expecting the right argument of reshape to be everything on the right.

[AB] But this is interesting; then there must be some kind of look ahead, being that functions are also kind of first class. I'm a bit confused. Like, if you write opposite, that's the negation, so opposite, average, and then something that we're taking the average of, then I'm confused as to how far to the right we look. It's not that as soon as we see a unary function we apply it to what's on its immediate right, because that would be the opposite of average. Which doesn't make much sense, but could be somehow convenient; but it's not. So because there's another function name, we keep going to the right. Which essentially means, if I understand it right, if I write a long sequence of function name, function name, function name, function name, function name, and then some data on the right, then we're back to normal APL, running from right to left.

[ML] Or just for the unary functions.

[AB] Well, that's the only way you can really have function names after each other, right?

[ML] Oh, yeah, yeah.

[AB] So any time you have function name, function name, function name in a sequence followed by some data at the end, then we're running from right to left. Doesn't matter if they're unary or binary, because if you have a binary and we're writing it like that, that means they're taking pairs as in the-

[ML] Well, I mean, that's not how a binary function is written.

[AB] What?

[ML] Well, a binary function has its arguments on both sides, so neither of those function.

[AB] No, no, not in, yeah, if I understand right, a binary function can just take a two element array as right argument.

[ML] If you don't give it a left argument, it will assume that the elements of the right argument are to be its arguments. Is that not correct?

[ST] That's correct, you can write three slash four or you can write slash three four.

[ML] I would still call that unary syntax.

[AB] Okay, so if you're using unary syntax.

[ML] Sorry to split hairs.

[AB] So that means if you're writing function name, function name, function name, function name, and so on, and then some data at the end, no matter what the nature of these functions is, everything runs from right to left. But if you're writing infix, then you run from left to right. So the language isn't really one or the other; depending on how you use the functions, it can run right to left or left to right. And that certainly makes my head spin.

[ST] Yeah, so if you had such a chain, if you imagine it, and one of the unaries in that chain was slash three joined with whatever being computed to the right. No, sorry, my head's spinning again.

[LAUGHTER]

[LS] Maybe APL... so I don't program in APL. I could not get past the symbols. A value of APL programmers and Nial programmers is elegance and simplicity, elegance in math and notation, but I'm also a proponent of clarity. Putting brackets in doesn't offend me. So to show the precedence order that you want things to be executed in explicitly, I don't mind putting brackets in; there are definitely, you know, brackets to force bindings. But there is implicit precedence, and I don't know which is called left to right and which is called right to left, but there are implicit brackets in the notation, true, and we were trying to make them as simple and consistent and usable as possible; that would be some of the design reasoning behind that. If you really want to be explicit about the order you want things to be done in, put the brackets in. But I like the example in the chat that we put up; people on the call can't see this, but it's: average is slash, and then kind of in macro functional brackets, sum comma tally. That just allows average to be applied to, presumably, some data on the right.

[AB] So here's a fun example I just came up with playing with Nial; it's an outdated version, but it exists on Try It Online; we'll leave a link to that. I have the expression plus times, and then open paren 10 20 close paren, open paren 1 2 close paren. It gives 50. What just happened? If I understand things correctly, we're trying to apply plus; it needs two arguments, can't do that, keep going to the right. Then we're doing multiplication; it has nothing on the left, so we keep going to the right. Then we find two arguments to multiplication, which are 10 20 and 1 2. So multiply those two with each other, using array multiplication: 10 times 1 is 10, 20 times 2 is 40. So now we've got 10 40, and then we do the plus: 10 plus 40 is 50.

[LS] Correct, yeah.
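The evaluation Adám just walked through, `plus times (10 20) (1 2)` giving 50, can be mimicked step by step in Python. The definitions here are illustrative stand-ins for Nial's plus and times as used prefix, not Nial itself:

```python
def times(a, b):
    # Array multiplication, elementwise: times (10 20) (1 2) -> (10 40)
    return [x * y for x, y in zip(a, b)]

def plus(pair):
    # Plus applied prefix to the two-element result: 10 + 40 -> 50
    a, b = pair
    return a + b

result = plus(times([10, 20], [1, 2]))
```

Reading left to right, plus has to wait for times to consume the two vectors on its right; only then does the pending plus fire on the pair that comes back, which is why the answer is 50 rather than anything involving 10 plus 20.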

[AB] And even though Nial at first seemed very familiar to me as an APLer, this all of a sudden looks very alien, almost like some stack-based thing.

[CH] So wait, are binary functions prefix?

[AB] No, they're both.

[CH] They're both. OK, so if you change that to 10 20 times 1 2, does it still work? Because the result of 10 20 is a single array. And I thought plus-- So you can take the times and put it in between the 10 20 and the 1 2, and that still works?

[AB] Now, well, yes, because now you have a left argument on the left of times. So now it's plus. And then we can't continue.

[ML] Can't we?

[AB] We're stuck. We need two arguments, I think. I think. I don't know.

[ML] Shouldn't it call the plus on 10 20?

[AB] Oh, yeah, no, it would. Yeah, of course it would. Yeah, so it would say-- I'm sorry. Yes. I'm confused. You see how confused I get. Yeah, so plus 10, 20 times 1, 2 means the sum of 10 and 20 times 1, 2. No.

[ML] That's not right either. I have to say from the small examples, this isn't messing me up. Yeah, it gives 30.

[AB] Hold on, so why does it give 30? Plus 10 20 times 1 2.

[ML] So we can go from the left, plus 10 20, immediately evaluate that, that's 30, times one two.

[AB] Oh, it's that write is also a function. All right. That's a little weird. So it does the write, and then it multiplies. Oh, so we wrote out the result. OK, I just need parentheses.

[CH] Can someone paste it to the chat? [22] Because we're-- I mean, actually, maybe it's better that we're just explaining through it. So just to recap, folks: the first expression that we had was plus, space, times, space, in parentheses 10 20, and then in parentheses 1 2. And the result of that was a binary multiplication between the two lists, which does element-wise multiplication, and that gives us, what, 10 times 1 is 10 and then 20 times 2 is 40, and then the plus does an addition on those two, which gives us 50. If we move the multiplication in between the two arrays, then that reads out plus 10 20 times 1 2, and that gives us 30. And yeah, so the problem is that, oh-- Well, yeah, the most important thing is that that's not the whole expression. The expression is write plus 10 20 times 1 2.

[AB] So we write out the result of 10 and 20 summed, and then after we write out the result, 30, we keep multiplying by 1 2, which does give us 30 60, but that is never written.

[CH] Oh, I see, I see, I see.

[AB] That's why I have to parenthesize on the right. I keep forgetting that if I use a unary function, like write, then I must parenthesize everything on the right, because it has short scope.

[ML] Yes.

[CH] Right, so I assume you're doing this on, like, TIO or something, versus in the REPL, 'cause in the REPL there wouldn't be too much confusion here.

[AB] Yeah, you wouldn't need this write function.

(laughing)

[CH] We were wrapping up, and then someone asked something about precedence, and here we are 20 minutes later, and Lynn's going, "What did I sign up for?" (laughing)

[AB] Nevertheless, assignments are on the left. So we had our example: average is the division of sum and tally. But that means is must be special. Is must have long right scope, I guess. I don't know. What is is?

[ML] What is is?

[AB] It's defining a function. It's just notation. So it's-- It's not a function.

[LS] It's not a function.

[AB] So it's assigning the function that's on the right-hand side, the div sum tally. It's just making the word average... assigning a function of any number of variables, but two would work the best. Or no, any array, like any array. Yeah, it's just defining average. And in my opinion, it shouldn't be a function, because the thing on the left is not a value that you want to pass in. It's a name that you want to define, so it's a different... Neither is it a phrase, symbol thing. But that means is, in a sense, has long right scope. It takes from here into the rest of the expression. Now, I haven't made any experiments, and we should probably stop with this, but can I do an inline assignment, capture the result of is, and keep doing computation?

[LS] I don't know.

[BT] It's really interesting, though, when you just take something like that and do it a different way, how much it twists your mind. You realize the patterns that you've actually got so ingrained that you really have to take a big step back and then think about it in a very different way. I mean, I just look at this and say: what my head's currently going through is a lot of the same stuff that somebody first encountering an array language feels.

[AB] That's a good point. It's humbling, this thing.

[BT] It's just like, it is humbling, but it's also really kind of neat. I mean, I'm looking at it going, wow, and, you know, I'm bouncing all the things that I do know about it off in my head at the same time. I guess, you know, something that Rob Pike mentioned: not everybody is built that way, to go, "That's kind of cool." They just go, "That's a mess, I'm out of here," right?

[CH] It is cause for concern, though, if all of us are staring at this going, "wait, what, what," for 10 minutes. If that's the beginner experience...

[ML] The 10 minutes is nothing, come on.

[CH] How long do you expect people to--

[ML] Learn a new language in 10 minutes?

[AB] When you tell somebody, "Hey, go and take a look at APL," do you want them to give up after 10 minutes of staring and thinking this is line noise?

[CH] But I mean, if their first 10 minutes is being completely surprised by what they think should be quite simple, I could see that that is...

[BT] That's the nature of building out the tutorials and the introduction material, right? You want to surprise them in a positive way, not surprise them in a very negative, confusing way. You do want this element of surprise, because that's important. But just, in my own head, going through that process was really instructive. It was quite neat. I don't know how it's going to translate over audio.

[ML] I promise that writing Nial with letters is a lot better than listening.

[CH] All right. So go ahead, Lynn.

[LS] So, is it Adám? Are you running these on a Nial interpreter somewhere? My kind of question is, does anybody know? And I don't. I think I know what the answer is: no. But I downloaded the source code a few months ago, and I compiled it on my local computer, and I can run a local version. And I'm just wondering if anyone knows, is there a version out there in the cloud on the Nial site? I'll go look on the Nial site, but I think the answer is no, there isn't an interpreter out there where somebody can just plug in these computations and see what happens. So you have to have your own version. So that makes it limited to those of us who can download the code and compile it and run it on our local machines. So if I, or somebody, wanted to make Nial more available to people, having a public version where people could just type in these expressions might be useful. That's one thing, so.

[ST] Lynn, that's already done.

[AB] Yeah, but it's not the same thing though. TIO runs a script.

[ST] What Lynn is asking, and that's referenced on the website.

[AB] What Lynn is asking for, if I understand right, is an interactive one, right? Where you get the same experience you would get offline, where you can write something, get the results, write something else that uses that. Like we know from pretty much every array language.

[ST] Yeah, there's one of those offered, or a link to it offered on the website, plus downloads for Mac, Linux, and Windows.

[LS] But I just, yeah, sort of a portal somewhere where you can type in the Nial code and see what happens; if that's already out there, ready, so you can do that, I'm going to look into that. And my comment on all of these different mysteries about what's going on here, what's the parsing structure, what's the operator precedence, what does all this mean: it's the same as when you're communicating in human language; sometimes you have to be more explicit about what you want, or people will not understand what you are asking of them. This language allows a lot of surprising things because you have to think in nested arrays, and what we're uncovering here is that there is definitely some implicit precedence; whether it's left to right, right to left, or a combination of both, it's in there, it's in the design of the language. I think our design principle would be what looks most like math, like mathematical notation, and the way that people would interpret math for the most part. And sometimes you do want to use parentheses in math. Anyways, as you see, lots of combinations of pluses and multiplications and arrays can lead to very mysterious behavior. But there is some consistency there, believe me. Or trust me, there is.

[CH] Yeah. In case the listener missed it, I think the thing that was most confusing was that we weren't parenthesizing the expression, and Adám was running it on Try It Online, TIO, which is the closest thing to an online Try APL or BQNpad, but it's not exactly the same thing; you don't get the same experience. You have to wrap your expression in, or precede it with, a write. So we were all expecting "AB" and we only saw "A": the expression really was yielding "AB", but write was only taking the first thing that was evaluated. I think it actually does make sense, and it's an exploration of a space that J and BQN and APL can't do, because they overload all of their Unicode symbols or ASCII digraphs to have both a monadic and a dyadic definition. But when you are giving names to your operations, and they all have a fixed arity, you now have the ability to say, "Well, we know what the arity of this function is. It takes two arguments." If you want to put it prefix or infix, you can do both, which leads you to be able to do something like the average function. If all you have is atlases and you want to define an average function using an atlas, and you don't give the language the ability to put a binary function prefix, you then have to do something different with atlases: basically say that whenever you have an atlas of length 3 where the arity of each function follows the pattern 1 2 1, you get the equivalent of the pattern that forms average. But because we can do both prefix and infix, you can just say, "Oh, form an atlas." The atlas is going to give you two things, because it has two unary functions, and then you just prefix that with a division, and that forms your pattern.
So I can see why it's a little bit more confusing from a beginner's point of view: why do we have two different ways to position a binary function? But if you look at average and think about how you form it, you basically need something like that. Or an alternative, which arguably, you know, I actually think is an interesting space: forming a list of functions, looking at the arity of each function with your interpreter, and then forming a different pattern. That's a totally cool idea. Very complicated, very hard to parse from a user's point of view. But the point is, I don't think there's zero motivation behind why the folks that designed the interpreter did it this way. I'm pretty sure that at the time, they said, "Oh, this gives us property Y," which is nice, or nicer than the alternative. It just isn't always immediately obvious.
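[Editor's note: the atlas-based average Conor describes can be sketched in Nial itself. An atlas like [sum,tally] applies each operation to the same argument and collects the two results; because a two-argument operation such as divide can also be written prefix, it consumes that pair directly. This is a sketch based on the classic Q'Nial tutorial example; exact operation names may vary between Nial versions.]

```
     average is divide [sum,tally]
     average 1 2 3 4
2.5
```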

[AB] Yeah, I think this is pretty fascinating, and I'm intrigued by it. I don't feel like I need this kind of thing in my day-to-day APL programming. But certainly when I write JavaScript, how I wish there were a way to use these prefix functions, the ones that take parenthesized arguments, infix instead. Just do what Nial is doing: let me put one thing on the left and another on the right, and do everything from left to right, because that is how the language works. That would be so wonderful.

[BT] And with that, I will do my plug.

[CH] Yeah, 30 minutes later.

[BT] Well, if people want to get in touch with us, and they want to leave messages, or responses, or questions, you can contact us at Contact AT ArrayCast DOT Com [23] and we welcome your input. This has been fascinating. I think, again, with Adám's last question, we went down into something we weren't expecting, but I think it's really interesting. And it shows how much a lot of these languages still matter. Nial's still there, though it's kind of drifted off, and it still has a lot to offer if you're interested in these kinds of possibilities.

[CH] And I will follow up on what Bob said. Thank you so much, Lynn, for taking your time to share your history and your experience. I'm not sure how many folks out there have the knowledge that you do of the history of this project, so I feel extremely privileged to have been able to get you on the podcast and to pick your brain about this stuff, because I love finding out about the history of this stuff. I walked through the Queen's campus one time, and I think I was looking for a little plaque, an homage to Iverson, because I think Iverson did his undergrad there, but I couldn't find anything. But then to find out that there was, maybe not a plaque saying Iverson went here, but this history of a pocket of the array language community, and not just a pocket at the time: it sounded like there was some massive research and collaboration between MIT and Queen's, and definitely an interesting piece of history that I'm glad we got on the podcast, so it doesn't get lost in the annals of time. And yeah, just thank you so much for taking your time. This has been a blast having you on to chat about all this.

[LS] Yeah, absolutely. I took this opportunity because I wanted this to be recorded too. It's a period of time and a history that is not alive for many people, but it's interesting for those of us on this call and listening to the podcast. So thanks for having me.

[CH] No, absolutely. And with that, I think we will say Happy Array programming.

[ALL] Happy Array programming.

[MUSIC]