Transcript

Transcript prepared by Bob Therriault, Igor Kim and Sanjay Cherian

Show Notes

00:00:00 [Kai Schmidt]

That's a good place for me to thank the Uiua community and everybody on the Discord for taking part in it and for using the language and being excited about it and contributing actual code and pointing out bugs and coming up with design ideas. Just thanks, everybody, there. It's really been great.

00:00:31 [Conor Hoekstra]

Welcome to another episode of ArrayCast. I'm your host, Conor, and today with us we have a special returning guest who we will introduce briefly in a moment, but first we're going to go around and do introductions. We'll start with Bob, then go to Stephen, and then go to Marshall.

00:00:45 [Bob Therriault]

I am a J enthusiast, and I'm looking forward to finding out about the special guest.

00:00:53 [Stephen Taylor]

I'm Stephen Taylor, I'm an APL and q enthusiast on my good days. Sometimes I just like them. And I too am looking forward to finding out about our special guest.

00:01:05 [Marshall Lochbaum]

I'm Marshall Lochbaum, I'm a Singeli enthusiast. I've worked with J. I'm known for BQN, and we all actually know who the guest is.

00:01:16 [CH]

I was thinking the exact same thing. We've already interviewed this person, so we're going to be learning about updates since the last visit. But as mentioned before, my name's Conor, host of the podcast, massive Array language fan. And with that, we'll get into the announcements for today. I think we've got three of them from three different people. So we'll go to Stephen with the first announcement, then to Bob, and then I'll finish with the third one.

00:01:38 [ST]

So the Iverson College meeting for 2024 [01] will be in August at Trinity Hall in Cambridge. It's a very small event, just 25 places, by invitation only. The first round of invitations is going out soon, but not everybody invited will be able to come. So if you're still interested, if you'd like to be considered for an invitation, please do write to warden@iversoncollege.com and say why you'd like to be invited.

00:02:09 [BT]

Oh, I hope I'm invited.

00:02:10 [ST]

Well, there will be people from the ArrayCast at the event.

00:02:16 [CH]

And we might even record an ArrayCast episode while we're there for the folks that are there, I guess.

00:02:22 [ST]

Sure, you'd be delighted now.

00:02:23 [BT]

Again, I hope I'm invited. Anyway, I've heard, and well, I shouldn't say, well, I have heard that there's a new ProgLangCast out and they're talking about the J language and I'm not on it. Apparently Adám's on it, but he's actually, honestly, they made the right choice there because I think he's better qualified to talk about a lot of this stuff in J. But I will also say that they did reach out to me before they recorded this and asked me if I could be on it and I couldn't make it that week, this past weekend, actually, I think. And so I think the next one I'm going to be on, and you'll find out how useless I am in terms of programming, actual programming, but I'll have some things to say and I think we'll have a pretty good time. But there's a new episode out about J, and then I think on an upcoming one I will be on, if you want more of me. I don't know.

00:03:16 [CH]

And to clarify, because I think Bob's going off the information that I provided him with. And also, on a slight tangent, I'm not sure it was intentional to release it on Monday night to give me more than 60 minutes to get through most of it, but I finished 90% of it because we had a bit more time. And Adám is mentioned. He's not actually on the episode. It's just they referred to the correspondence that they had with Adám. But yes, looking forward to episode eight. I'm not even sure if we mentioned episode six of ProgLangCast was on F#, but maybe we'll throw a link in the show notes because we love all languages. We are predisposed to like the array languages. But anyways, last but not least, the final announcement is on behalf of Adám, who's not currently here: the APLseeds 2024 videos are now online. The conference happened, I think it was in the last couple weeks. It was on a Wednesday. And the videos have been uploaded to YouTube in a very timely fashion. So I've seen I think one and a half of them. They've been pretty good. I'm not sure I'm going to get through all of them. But we will leave a link in the show notes for any of those folks that were not able to attend in person either due to time zone issues or any other issues and are interested in watching them. With those announcements, oh, Marshall's going to add something here.

00:04:34 [ML]

What I saw from APLseeds, somebody posted it in the BQN forum, was that in one of the presentations there was a graph that ordered programming languages by, I think it was, apparent strangeness and usefulness. And so of course, the point is that APL is up at the top right. It looks really weird, but it's useful. And it's there with like Haskell and some others. And then at the bottom right, both BQN and Uiua were considered less useful than assembly, as well as OCaml for some reason.

00:05:02 [CH]

What? What?

00:05:04 [ML]

I'm not sure all the content at this conference is trustworthy.

00:05:09 [CH]

Do you recall which talk this was because I got a reply.

00:05:12 [ML]

It was, it was the QuAPL quantum APL thing.

00:05:16 [CH]

Holy smokes. All right. The shots have been fired, folks. Thank you, Marshall. You were the one that first put Uiua on our radars. And you've now put this, what sounds like a blasphemous graphic, a blasphemous chart, on my radar. We will deal with this presentation and this chart in a future episode once I have reviewed it and contacted said individual that has put this together, because that's two of my three favorite languages right there. And we can't let this stand.

00:05:46 [ML]

So apparently you need to learn OCaml, right?

00:05:48 [CH]

I mean, OCaml is right up there with Haskell. Arguably, OCaml is more useful than Haskell.

00:05:54 [ML]

They had the same apparent strangeness, but Haskell was up at the top and OCaml was at the bottom.

00:05:59 [CH]

I'm very curious about this. We might have a whole episode. We might get this person on and just talk about this one graphic for the entire episode. Anyways, we need to loop in the guest now, because we're referring to Uiua. You've probably read the title of this podcast episode, so there's no surprise coming here. Long awaited returning guest, Kai Schmidt, the creator of the Uiua language. I went and checked. It was September 29th, was the first time, episode 63. So this is roughly, I think over half a year ago. And I think Uiua had just kind of been publicly launched at that point. And there has been a flurry of activity over the last half year plus. And we've actually been talking about getting Kai on for like the last two to three months, but we have just had such a long queue of guests that we already had sort of reached out to. But now, finally, the wait is over. We're here today to talk to Kai again about everything that has happened in the last half year plus. Just on like a little side note, I knew or I saw at a certain point that you started pushing out releases on GitHub with like the little git tags. And for some reason, I had thought that those git tags existed before you came on the podcast, but I think it might have happened shortly after. So I went to them to check, you know, what number of releases happened since we were on the podcast. But all of them have happened since the last time you were on the podcast, which is a full 29 of them, folks. I went and like started reviewing the notes. He bobbled his head, meaning that a lot of them were bug fixes and stuff. So it's not official releases, but I went and looked at them. We're probably not going to get to all of the updates. So I won't even attempt to summarize the interesting things that caught my eye as I perused the list. I will just throw it over to you and let you give sort of an update, maybe introduce yourself again for folks that, you know, maybe started listening since then.
Highly recommend. We'll put it in the show notes, a link to episode 62 or 63, sorry. If you haven't paused this episode, go back and listen to that one. Probably we're not going to cover everything we covered in that episode. Anyways, over to you, Kai. Super excited to have this conversation because it's been long awaited at this point.

00:08:08 [KS]

Yeah. So for anybody who hasn't heard from me before, my name's Kai. I'm the creator of Uiua. I'm a long time listener of this podcast as well, from way before I started working on it. I guess where I should start is so right after we recorded that original episode I was on.

00:08:25 [ML]

Before Uiua released.

00:08:26 [KS]

Yes, yes. That's important. So we recorded on a Tuesday. The episode was released on like a Friday night, Saturday morning. But on the day, on that Tuesday, Conor released a video on his YouTube channel about it. And I, it's okay. I didn't, but I didn't know that was coming. And so suddenly I got a huge influx of interest, like out of the blue. I was like, oh, okay. I guess I have to manage a community now, which is, it's good. It's good. But since then, it's been kind of a wild ride. The amount of people that are just getting involved with Uiua. And it's been great. And part of the, it's been really important, I think, in the development of the language, having so many people trying things out and thinking about it and giving their input on like, oh, I think this should work this way. Oh, this, here's a bug. And I don't know, that's been kind of the biggest change in my life, I think, is managing the community and just being able to interact with so many people who take such an interest in this thing I've put my heart and soul into. So there's that. If we want to talk about changes, and yes, you said there's a lot. At the beginning, I was changing a lot more because when it was first available, it had just been stuff that I thought should go in the language, right? I didn't really have, I hadn't used it to make anything real. So I didn't know necessarily the quote unquote correct way that certain things should work. And it wasn't until I got lots of people working on it and using it, that it became apparent. I don't know, I know that Marshall, you've talked about, by the time BQN was public, you had most of the stuff ironed out, right?

00:10:12 [ML]

Yeah. Well, I mean, it wasn't an experimental language. It was meant to consolidate, that is, the APL family up till now, with some outside ideas that are very well accepted.

00:10:24 [KS]

Yeah. So I don't know how I could have done a complete language with just myself, right? Or even with just a small group. I think it's important. I think for Uiua especially, it's been very important to have that input from lots of people. And so that's why, as you guys have noted on the show before, the language has been through a lot of very quick and maybe very drastic changes. [02] In the past couple of months, the number of breaking changes has decreased considerably as, I don't know, we're asymptotically approaching something that I would call stable. That won't be for a while now. But I think that the core ideas of the language are kind of solidifying. So I mean, if we just want to talk about some of the big ones: when I first came on here, Uiua had first class functions, functions that you could put in arrays and pass around as values. That is no longer the case. We still have higher order functions, as all array languages do, in the form of modifiers. We still have anonymous functions that you just write in line. But a lot of the design of Uiua has moved toward having well-defined, what are called, stack signatures. So every function has to have a well-defined number of arguments and a well-defined number of outputs. And because of that, there's problems when you start getting functions as values, functions that can be different at runtime. Removing them also simplified the interpreter and the compiler a lot. But functions can no longer be put in arrays as their own thing. A lot of people weren't happy about that. A lot of people really like first class functions, but I think it was the correct move. And so now Uiua just has boxes, just like J, basically. And like J's, they're mostly opaque things. They're not like APL's or BQN's nested arrays. Yeah. Any questions about that before I keep going?
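
The box described here, a scalar that holds an array of any shape, can be sketched in plain Python. `Box` and `un_box` are illustrative names for this sketch, not Uiua's actual API:

```python
class Box:
    """A minimal sketch of a J-style box as described above: a scalar
    value that holds an array of any type or shape, and is otherwise
    opaque. Illustrative only; not Uiua's implementation."""
    def __init__(self, contents):
        self.contents = contents

    def __repr__(self):
        return f"Box({self.contents!r})"

def un_box(box):
    # the "unbox" operation: recover the held array
    return box.contents

# boxes let differently shaped arrays sit side by side in one flat array
mixed = [Box([1, 2, 3]), Box([[1, 2], [3, 4]]), Box("hi")]
print(un_box(mixed[0]))  # [1, 2, 3]
```

The point of the sketch is that the container is flat: each element is a scalar box, regardless of what shape lives inside it.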

00:12:24 [BT]

Well, you were saying they're like J's boxes. Are they sort of more like J's gerunds that are sort of a...

00:12:30 [KS]

No, no. So they are just boxes. They are

00:12:34 [ML]

Like you can't put a function in a box, can you? Or is there a

00:12:37 [KS]

You cannot. They are just a scalar that holds an array of any type or shape. And there's a few -- there's various ways of manipulating them, whether it's in loops or as arguments or whatever, to try to make things a little bit more ergonomic.

00:12:53 [BT]

And for the people that were missing the first class functions, how are they using them? Is there a way around them? Or is it something that you just think

00:13:01 [KS]

So the problem with the first class functions was that because everything needs to have a well-defined stack signature, you can't -- well, you probably could to an extent, but because of like the halting problem and things, I don't think you can statically analyze every single possible way that a function that's a value could be used. And so you had to annotate in the code: I'm calling a function here that's a first class function, that's a value, and you have to annotate the signature in line there at the call site, which is both ugly to read and kind of annoying to write as a programmer. So that was a big part of it. The other part was that it majorly simplified the language to remove that as well.
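
The static-analysis point can be made concrete: stack signatures compose into stack signatures, which is what makes whole-program inference tractable. A minimal Python sketch, with a signature represented as an (arguments, outputs) pair; this is an editor's illustration of the general idea, not Uiua's actual inference algorithm:

```python
def compose(f, g):
    """Compose two stack signatures: f runs first, then g.
    A signature is (args, outputs). Because two well-defined
    signatures always combine into another well-defined signature,
    a whole program's signature can be inferred statically."""
    f_args, f_outs = f
    g_args, g_outs = g
    # values g needs beyond what f produced must come from below f's inputs
    shortfall = max(0, g_args - f_outs)
    return (f_args + shortfall, g_outs + max(0, f_outs - g_args))

# a 1-arg/1-out function followed by a 2-arg/1-out function
# needs 2 values in total and leaves 1 on the stack
print(compose((1, 1), (2, 1)))  # (2, 1)
```

A function whose signature depends on a runtime value breaks this composition, which is the problem with first class functions that Kai describes.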

00:13:50 [BT]

I guess if you have to declare all that stuff, it's like a declaration really sitting in the middle of all that.

00:13:55 [KS]

Yeah, yeah. In the middle of your regular code in a way that you really didn't have to do for much else. And so more and more, the design of Uiua has moved toward basically your signature should always be inferable for the most part. The one edge case is recursion. I don't think it's generally actually possible, like logically, to infer a recursive signature. So the one time you have to annotate it is when you have a recursive function. Another design that I've -- I don't know if I've come up with, but that I've integrated into a lot of stuff is -- so like in the other array languages, you have this concept of the rank modifier, a modifier that you pass a number to, or in some cases a list of numbers, that defines the rank at which a function operates. So if you have a big multidimensional array, you want -- oh, I only want to operate on the elements of this array or the vectors that are subcomponents of this array or the matrices or whatever. I did implement that at one point in Uiua. And it was called level because the word rank was already taken. But it functioned more or less how it does in like APL or J or BQN. But then there's also other modifiers that would require that. And the two other cases of iteration that I'm thinking of are table, which is like your modifier that applies a function over combinations of pieces from two arrays. Or because we can have functions that take more than two values, it could be combinations of any number of arrays, right? 3, 4, 5, and it would get exponentially higher. It's that and it's also fold. So fold being your, like, stateful reduction sort of generalization. And that can also work on different ranks of different arrays. And so I had these three modifiers that handled these cases. And you did pass a rank or list of ranks just like you would in the other languages. I didn't really like that a whole lot. I don't know.
There's something about -- and you still have to do it in some cases. But there's something about writing numbers next to the functions.

00:16:10 [ML]

Yeah, it just doesn't... it's not what you mean.

00:16:13 [KS]

Yeah, it feels -- I don't like how it feels. And so the way that I've now moved to -- because you have to be able to do that stuff, right? You can't not have an alternative. And so I have this concept of fixing. And so there's this very simple function. It's called fix. And it adds a length one axis to the front of an array's shape. So if you have the list 1, 2, 3, it turns it into a matrix that just still has 1, 2, 3. It doesn't change the data at all. It just changes the shape. And then various modifiers, like for example, rows, which is the typical iteration modifier, if they see a shape mismatch, but then one of your arrays has one row, because that's what fix does, that array will be repeated in your loop. So if you fix one array and don't fix the other, the rows of one array will be used while the other one will be repeated over. And this extends to any number of arguments. So you can have rows. You can call the rows modifier with a function that takes, say, four arguments. And you can fix some of them and not fix others. And the fixed ones will be repeated, and the others will be iterated over. And so it's a general way of combining arrays in loops in whichever way you might want. Is this a one-to-one mapping? Because it sounds very similar to broadcasting as it exists in Python and Julia. Or is this like a superset or subset of it? I don't really... So I'm not really familiar. I've heard the word broadcast a lot. I'm not really as familiar with the semantics of what that means.
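
The fix/rows behavior Kai describes can be sketched in plain Python, with lists of rows standing in for arrays. The `rows` function and the length-1 convention here are illustrative stand-ins for Uiua's actual modifiers:

```python
def rows(f, *arrays):
    """Sketch of the fix/rows idea (not Uiua's implementation):
    an argument "fixed" to a single row is repeated on every
    iteration, while the other arguments are iterated row by row."""
    length = max(len(a) for a in arrays)
    expanded = []
    for a in arrays:
        if len(a) == 1:           # a "fixed" array: repeat its one row
            expanded.append(a * length)
        elif len(a) == length:    # iterate over its rows normally
            expanded.append(a)
        else:
            raise ValueError("row count mismatch")
    return [f(*row) for row in zip(*expanded)]

# the [10] argument acts like a fixed array and is repeated
print(rows(lambda x, y: x + y, [1, 2, 3], [10]))  # [11, 12, 13]
```

This extends to any number of arguments, as described above: fix some, iterate the rest.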

00:17:45 [ML]

I think it's different in each language. So definitely the pairing of an axis of length one with another that doesn't have to have length one is the same.

00:17:55 [KS]

It's basically -- I mean, if you're writing like C, right, it's just looping over one array while using another entire array inside.

00:18:06 [CH]

Yeah, it sounds like it's very similar.

00:18:08 [KS]

It's something you have to do basically constantly in all code you ever write. And it's necessary to be able to do that.

00:18:14 [CH]

And so I think in J and APL and the -- I guess BQN is not an OG array language. But when you have a mismatched matrix with a vector, that's just not going to work. But in languages that support broadcasting, broadcasting is a type of rank polymorphism that basically when you have these mismatches, it'll find a way to extend the shapes by cycling the rows. So then you end up with a matrix kind of behavior. And a lot of times it's very convenient, but at certain times it'll do something that you weren't expecting it to do. And it's just like a different

00:18:50 [ML]

Yeah, well, so for the rank mismatches, J and BQN and -- I don't think any APL has ever done it, but J and BQN at least have leading axis agreement. [03] So they can pair basically no axis with an axis, assuming that all the axes go at the end, the actual ones that exist. So the Julia NumPy concept is pairing length one with any length. And oh, this is also what Tali Beynon mentioned is the same as pairing one function with a constant function if you consider the array as a function of its indices. So that's an interesting connection.

00:19:31 [KS]

The stuff you talk about, Conor, where sometimes it's really useful and sometimes it's not. But I have considered things like, oh, it'll know what to do based on the length of each dimension and things. But in general, I've tried to keep a principle that every primitive function in the language, given the same rank of inputs, inputs of given ranks, will always produce an output of -- well, a consistently ranked output, right? And this is because -- I don't know. The rank of your output being dependent on the length of one axis or another can lead to some really weird

00:20:12 [ML]

Well, it's also the same thing as stack signatures, actually.

00:20:20 [KS]

What do you mean?

00:20:22 [ML]

Because, yeah, the stack signature is the number of arguments that a function takes in. Well, and there's no real correlation to the output. But the rank of an array is the number of indices it takes to get to an element.

00:20:29 [KS]

Yes. So the idea is that you wouldn't have to write weird conditions and stuff like, oh, if it was this input, if it was this rank of input, then do this with the output.

00:20:39 [CH]

Yeah, yeah.

00:20:40 [KS]

It just makes the language easier to use and more consistent.

00:20:43 [CH]

The weirdest behavior that I've seen that really surprised me was that in Julia, and I think it might be the same thing in NumPy in Python, when you have two equally lengthed rank one vectors or arrays, so just two lists of numbers, and you want to do an outer product. Because in Julia and NumPy, or I think in Julia at least, the broadcasting operator, so you have your binary infix operations, and then just by suffixing it with a period, that is a way to turn your binary operation that is scalar or whatever element-wise into a broadcasting operation. But I think if you do that, and you have your equally length lists, it just does element-wise still. And in order to make that do an outer product, you have to shift or add an index. So basically what you're doing with fix. But the way you do that is very cumbersome. You have to change the shape. And then it's because you've changed the shape of this that it'll do the broadcasting that you want, and you get outer product. And in my mind, I'm just like, why don't you just have an outer product higher order function and don't try to make these two things all under the broadcasting umbrella? But maybe to Julia and NumPy folks, it makes sense to them.

00:21:59 [ML]

I think J made that same mistake, and I guess we can talk about Uiua, in considering outer product to be a kind of rank. And actually something I found recently was Jim Brown talking about his development of APL2. He was kind of going between a model of array nesting as sort of keeping the thing that's nested totally contained and having each nested thing be a single element versus a model that works with cells. And Iverson and Falkoff were really pushing on him to go with the cell model. And one thing he said was, they actually, IBM's sort of prototype of APL worked with cells sort of. And one thing he said was the outer product in particular was really not useful because it was no longer associative when you model it with rank in general. Because if you have a function where the output, that's not a scalar function, where the output has a rank that's different from the input, like with a general outer product, I mean, it's not that every outer product is associative, but if the function being taken, if the function whose outer product you use is associative, then the outer product is also associative. Because what it does on the axes is just concatenate them all together. But if you apply the function with rank, what you're doing is taking the function's output, which might have axes, and sticking those axes onto the end as well. So if you try, I think Jim Brown gave the example of a catenate outer product, that kind of mixes the axes that the function uses together with the axes that the outer product uses and catenate outer product no longer really makes sense. And you can probably find an example in J of this just not working. Where in BQN, I know I've been able to do outer products to construct like a table of a bunch of possibilities that might be like three dimensional or something. There's an example in the documentation. So, yeah, for that reason, I think that trying to model outer product with rank, even if you use rank, just doesn't really work.

00:24:11 [KS]

Yeah. So I've also changed things with outer product in particular. I used to have a table modifier, which called the function on every combination of elements, and a cross modifier, what I called cross, which did combinations of rows. And I decided that actually, it's very rare that you want every combination of elements of two higher dimensional arrays. So I just removed cross; table just operates on rows now. So doing the equivalent of outer product, the table will now no longer concatenate the shapes. It concatenates the lengths as the first part of the new shape.

00:24:49 [CH]

So that was where I went when, Marshall, you were explaining that, I was squinting my eyes trying to follow. And I was thinking in my head, this doesn't totally seem like it applies when you're just doing the basic, like, multiplication table outer product. Like it sounds like you're referring to the more general cases where you can just have arbitrarily ranked arrays.

00:25:12 [ML]

Yeah. And I think I use these a fair amount. Like one example that I've done a lot, this is kind of only for recreational programming. But if you're looking at, if you want, for example, the sum of the digits of numbers up to a thousand, say, you could do this, of course, by just generating all the numbers and getting the digits of each one. But what's much faster is to do a three dimensional outer product where you have three arrays going from zero to nine and you just do an outer product sum of all of them. So you get a three dimensional table that kind of retains the digit structure and it has the sum of each number's digits. So if you ravel it, you'll get just the sums all in order. And I feel like I use that structure a fair amount for, you know, other things where you have like several different possibilities and you're choosing all combinations of those options.
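
Marshall's digit-sum trick can be reproduced in plain Python, with nested comprehensions standing in for the three-dimensional outer product of 0..9 with addition:

```python
# A 10x10x10 table whose entry [h][t][u] is h + t + u: the outer
# product "sum" of three copies of the digits 0..9. Raveling it in
# order gives the digit sum of every number from 0 to 999, because
# the index h*100 + t*10 + u is exactly the number whose digits
# are h, t, u.
digits = range(10)
table = [[[h + t + u for u in digits] for t in digits] for h in digits]
ravel = [x for plane in table for row in plane for x in row]

print(ravel[123])  # 6, i.e. 1 + 2 + 3
print(ravel[907])  # 16, i.e. 9 + 0 + 7
```

The shape of the result retains the digit structure, which is the point Marshall makes: the outer product's axes concatenate, so the ravel order lines up with the numbers themselves.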

00:26:05 [CH]

Yeah, to the listener that was kind of confused as I was. And we'll leave a link in the show notes to what is the cells term?

00:26:12 [ML]

Well, I talk about the, or I'm not sure I actually got to it, but there's a section on this in my outer product talk from LambdaConf. So that's probably the best thing to link to.

00:26:22 [CH]

Is that the one from 2019? Not like an upcoming thing? Yeah.

00:26:26 [ML]

That's yeah. Yeah. No upcoming talk scheduled. [04]

00:26:28 [CH]

And I think, yeah, hang on, the point is just, if you only live in the outer product or table world... also, we haven't mentioned that outer product and table are loosely the same thing. I mean, in BQN there's differences in the implementations clearly, because Kai's talking about that. I think you've said you've limited it to work on just the rank one vectors.

00:26:50 [KS]

No, sorry, sorry. The rows or the major cells.

00:26:54 [CH]

The major cells. But yeah, I think APL originally called it outer product and then J renamed it table and BQN and we have borrowed that name. So it's all under the same umbrella.

00:27:04 [ML]

Yeah. But J's version is done. It's just like a cover for a particular rank. It'll actually take the function rank from the function it's using, which I also have issues with. But so it's modeled using rank. And for that reason, I never really found it useful when I was programming in J. But then when I learned a bit about APL, I found that the outer product was a really useful tool for a lot of things.

00:27:30 [CH]

Interesting. So it's actually definitely still all under the same umbrella, but the differences are bigger than one would imagine just from looking at it at surface level, simple examples.

00:27:42 [KS]

So that's fix [chuckles]. And fix is something you'll find, like, ubiquitously in Uiua code. You basically need it all over the place because you just have to combine things, right? Another big thing: I've embraced inverses as a way of doing a lot of different things. So that's the normal inverse, just like do the opposite of what this function does, or do a function that would conceptually undo this one. And also "under". So that's the same as the under from J and BQN, where you do one function; you do another; and then you do something that undoes them. I have a generalized system: basically two functions work with ... [sentence left incomplete]. The inverting modifier is called "un" (just U-N, "un"). And that's so you can write something like "unbox", and that's the modifier that unboxes something. So there's no dedicated unbox function. Basically anything that will work with "un" (if you have two functions that work with it), you can put them together inside the "un" and they'll work with it. Same thing with under. There's a general system of composing invertible functions so that they work in lots of powerful ways. And there's also lots of extensions; there's lots of ways of achieving things that are best done by leveraging inverses. One example is [the] modifier called "both". This will call the same function on two sets of values. If it's a monadic function, "both" will call that function on the top value on the stack and the second one on the stack. If it's a dyadic function, it'll call the function on the top two values and the second two values. Then people will just be like: "oh, I want one that does it on all three sets of values on the stack". I could add that; it'd be trivial to add that. But then, where does it end? You have your "both" and your, I don't know, your "three" and your "four". So the way that you would do that is we have the array notation, right?
You wrap something in brackets and any code that's inside those brackets runs as it normally would and puts the outputs of that code into an array. And conceptually, that is invertible, right? Let's say my function just does nothing with, say, three values, right? It's just an identity function that takes in three values and outputs three values. If I wrap that in brackets, that'll put those three values from the stack into an array. Conceptually, you can invert that and say, take three values out of an array and put them back on the stack. So that's a way of like spilling your array values out. And the array has to have the correct number of values, otherwise it'll error. But then what you can do is you can say "under that". So "under" putting all my things in an array; do something with the array; and then it'll reverse that and put the things back out on the stack. That's how you do your generalized "call the same function on some number of values". And it all has to be static, so the number of values that you're doing this on has to be known at compile time. And there's tons of things like this.

00:31:00 [KS]

Another one: if you have just the number 1 sitting in your code, that's a thing that pushes 1 onto the stack. Well, if you ask yourself: what if we do "un" 1? What is the inverse of pushing 1 to the stack? Well, it's popping 1 from the stack, right? But what if it's not 1 on the stack? What if it's 2? And then you say "un" 1? Well, you can't pop 1 from the stack. So it should error because there's no 1 there. That is sort of like a pattern match, right? If you say: "oh, if I have a 1 on the stack, and I say 'un' 1, that should just succeed and remove the 1; but if there's a 2 there, it errors". And it's in this way that I discovered: hey, "un" creates a system that you can use for pattern matching. And it's funny, I came up with this before your guys' most recent episode ... [sentence left incomplete].

00:31:56 [ML]

Where we talked about pattern matching

00:31:57 [KS]

With Conor McCarthy about pattern matching in q. [05] And I was like: well, that's so funny, because I just added this to Uiua too. And so you can do pattern matching just matching a certain value; you can use it to extract expected values from arrays; and you can put them in like these try-block things. So you have these branches, just like pattern matching in functional kinds of languages, and it creates this cool system. Then immediately people start suggesting: "oh, we should be able to pattern match this way and this way and this way". And I say: maybe. Maybe we'll get there [chuckles]. Uiua has format strings, right? This is your C#-style string formatting. Basically, you put an underscore in a certain kind of string, and it formats the values on the stack into that string, just for convenience and output and stuff. And you can "un" that, and that'll extract values that match a certain pattern. And it's kind of like a poor man's regex. There's all these different things; it's neat when you think about what is the conceptual inverse of this operation. Sometimes it's well defined; sometimes it's not well defined and there's a bunch of things it could be, so you just don't pick one. Sometimes there's a bunch of things it could be, but there's one that would be really useful if we had it. One example is "join", so joining two arrays together. So I've defined "un"-join to be: split the head and tail of the array. Split off the first row and then also give the tail on the stack. There's just lots of things like this that make this a really powerful system, and the fact that you can chain all these functions together and compose the inverses and stuff turns out to be really powerful. Yeah.
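
Both inverses described here, "un" of a constant acting as a pattern match, and the chosen "un" of join splitting head from tail, can be sketched in a few lines of Python. The names `un_match` and `un_join` are illustrative, not Uiua's API:

```python
def un_match(value, stack):
    """Sketch of "un" on a constant: the inverse of pushing a value is
    popping it, and it fails (pattern-match style) if something else
    is on top of the stack."""
    top = stack.pop()
    if top != value:
        raise ValueError(f"pattern match failed: expected {value!r}, got {top!r}")
    return stack

def un_join(arr):
    """The chosen inverse of join: split off the first row and also
    give the tail, as described above."""
    if not arr:
        raise ValueError("cannot un-join an empty array")
    return arr[0], arr[1:]

print(un_match(1, [5, 1]))   # [5]  -- the 1 matched and was removed
head, tail = un_join([1, 2, 3])
print(head, tail)            # 1 [2, 3]
# un_match(1, [5, 2]) would raise: there is no 1 on top to "un-push"
```

Note that un-join round-trips with join, which is what makes it a sensible choice among the many possible inverses Kai mentions.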

00:33:43 [BT]

So when you're building your stack, say you're using under, would you put a function on a function and then put under? And then would it be the order that you saw the functions that would create it?

00:33:56 [KS]

The way it's written in the code, if you were to do "under f g" (is what you'd write), it would call f, call g, then do the under of f. In many cases, the function you get when you call "un" f is different than the inverse you get when you do under f. That's because a lot of the time under has to store some context to be able to properly invert something. This is kind of like BQN's structural under, right? In that it has to know something about the original input. So if you want to, say, multiply the value at some index in an array by a number, you would do "under pick that-index times", right? And so, to be able to put the new value back into the original array, it has to keep track of that original array even though it's not on the stack anymore. There's tons of things like this where it has to properly track context.
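The context-tracking idea can be sketched in Python (hypothetical names, much simpler than Uiua's machinery): to apply a function "under pick", the forward step must remember the original array and index so the result can be written back.

```python
# Illustrative sketch of "under pick": pick an element, apply the
# function, then use the remembered context (original array + index)
# to put the transformed value back.

def under_pick(array, index, f):
    picked = array[index]      # forward part of "pick"
    new_value = f(picked)      # the function applied "under" pick
    result = list(array)       # context: the original array is kept
    result[index] = new_value  # inverse part: write the value back
    return result

under_pick([10, 20, 30], 1, lambda x: x * 5)  # -> [10, 100, 30]
```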

00:34:56 [BT]

Whereas if it was just "un" it's just going to do the reverse of what you'd expect. It doesn't care. Yeah.

00:35:01 [KS]

Yeah, so anything that works with "un" automatically works with "under", just because you don't need any context, and then there are special implementations for the stuff that wouldn't.

00:35:09 [BT]

And when you're using "un" for pattern matching, is it efficient to use "un" as a filtering mechanism or are there other ways to do that?

00:35:18 [KS]

I mean, it depends on what kind of filtering you're doing. The main way would be like you would do [in] other array languages, right? Where you just make your mask array and then you "keep" what we want to keep; it's [like] replicate. Yeah. [06]
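The mask-then-keep pattern Kai refers to, sketched in Python (a stand-in for the array-language primitive, not Uiua's code):

```python
# The standard array-language filtering pattern: build a Boolean mask,
# then "keep" (replicate/compress) the data by the mask.

def keep(mask, values):
    """Keep each element whose mask entry is truthy."""
    return [v for m, v in zip(mask, values) if m]

data = [3, 1, 4, 1, 5, 9]
mask = [1 if v > 2 else 0 for v in data]  # mark what we want to keep
keep(mask, data)  # -> [3, 4, 5, 9]
```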

00:35:29 [CH]

I was just gonna say it's worth mentioning too, because you've said this very (I don't want to say) nonchalantly, and then we've kind of been like: "yeah, this is great; this is great". But I think it was an ADSP episode, probably when we were driving in Italy or something on the highway, and I did a mini "why array languages are so awesome". And this feature always makes the list. So right now I'm just like: "yep, this is great; I completely agree". But this is something that doesn't really exist in other languages, and this "un" and under in Uiua exists as "inverse" and "under", you know? Slightly differently, but the same idea. BQN and J have "inverse" and "obverse". I'm not very familiar with ... [sentence left incomplete].

00:36:14 [BT]

We've got "under"; "obverse" I think is if you've got an error, or is that "adverse"? I can't remember. There's different ones that, if there's an error, trigger something else. It's a conjunction that will allow you an option if something happens that shouldn't happen. But the thing that Henry is starting to play with a bit more is starting to edge towards BQN's structural under, but we talked about that on the episode that he was on.

00:36:42 [CH]

And in fact, in J the "each" is actually just a ... [sentence left incomplete].

00:36:46 [ML]

"Under open".

00:36:47 [BT]

It's a cover name for "unbox whatever you're going to do and then box again". Yeah.

00:36:53 [KS]

I actually just have that exact operation: "iterate over all the rows, unbox them, do your thing, then rebox them". In Uiua it's called "inventory", because you're managing your inventory of boxes.

00:37:04 [CH]

Yeah, yeah, that's cute. And on another tangent off this little remark: if it doesn't sound like we're crazy excited about this, it's just because at this point, you know, we're all used to "under". But I'm trying to remark that this is just an amazing ... [sentence left incomplete].

00:37:15 [ML]

Well, I mean, no. I have to say, applying the idea of inverse to functions that take multiple inputs and can have multiple outputs, I think that's really new. The only thing that I can really compare it to is Prolog, but the way Prolog deals with this is having functions be completely unordered; they don't have a distinction between input and output. So when you have multiple outputs then you're going from taking something that can invert ... [sentence left incomplete]. Like, "under" in APL only works one-to-one, because you can only consider one input when you're inverting. So in Uiua, you're going from one-to-one to many-to-many, which I guess you could do with functions of lists in APL, but there's no implementation that does anything useful with that. So this is a really different model.

00:38:07 [KS]

Yeah, I do have a policy for "un": all "un" implementations that are in the interpreter itself have to have the property that the inverted function has the opposite signature to the original function. So if a function takes two arguments and gives one output, the inverted function has to take one argument and give two outputs. One example of this would be "couple", right? So "couple" takes two arrays and makes them the rows of a new array. "Un couple" takes a length-two array and puts those two rows on the stack. Stuff like that.
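The signature rule can be shown concretely in Python (lists for arrays, illustrative names): "couple" is two-in-one-out, so its inverse must be one-in-two-out.

```python
# Sketch of the opposite-signature policy for inverses:
# couple: 2 arguments -> 1 output; un couple: 1 argument -> 2 outputs.

def couple(a, b):
    """Two arrays in, one out: make them the rows of a new array."""
    return [a, b]

def un_couple(array):
    """One array in, two out: a length-2 array becomes its two rows."""
    if len(array) != 2:
        raise ValueError("un couple needs a length-2 array")
    return array[0], array[1]

a, b = un_couple(couple([1, 2], [3, 4]))  # round-trips: a=[1,2], b=[3,4]
```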

00:38:41 [CH]

Yeah, so this sounds even a step more powerful than what is possible in the ... [sentence left incomplete].

00:38:49 [KS]

It turns out to be very powerful in ways ... [sentence left incomplete]. Like, people post stuff. They'll post links to the UiuaPad, like: "oh, here's this thing". And I'm like: "whoa, that works? That's crazy!" [chuckles]. Because when you have it composable, you get these emergent properties of various invertible behaviors and things.

00:39:08 [ML]

Yeah, structural under felt like that a lot at times.

00:39:10 [KS]

Yes.

00:39:10 [CH]

Yeah. I was going to say that that's the telltale sign of a really impressive feature: it ends up getting used in ways that you never imagined when you initially designed it.

00:39:20 [KS]

Yeah. And Conor, what you were saying [that] this is not something that most other types of programming languages have, since I've been exploring it more, I've really started to feel it. So even in code in the compiler and the interpreter there's times where I'm like: "oh, do this thing and then run the function and then reset it back to how it was". I'm like: "why doesn't Rust have under; that'd be so nice" [laughs].

00:39:44 [CH]

Oh yeah.

00:39:45 [KS]

And you start to see it everywhere. And I get why it's not in a lot of other languages because it's kind of a complex feature that can be solved in other ways, but it's so nice to have.

00:39:56 [CH]

Well, I'm now like three tangents deep in my head. But just the other day I was taking a closer look at NumPy and CuPy, and very quickly discovered that there are no generic scans (and I think there might not even be generic folds in NumPy). The two scans that you get in NumPy are called cumsum and cumprod, and then for everything else, if you want to do something generically, you have to go "numpy.frompyfunc()", pass it your binary operation, and then call .accumulate on it. And those things aren't optimized. You get a fast plus-scan and a fast product-scan, and then everything else is just garbage. And this is not even table stakes. It's like the first thing, folds and scans, in array languages. And we're way past that; we're talking about unders and stuff. So when you go to these other languages and other libraries, I'm just like: "how is it possible that they don't even have generic [scans]?". And I know Python's its own thing and dynamically typed and whatnot.
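For contrast, a generic scan over an arbitrary binary operation is a one-liner with Python's standard library; what NumPy lacks (per Conor's point) is not the capability but optimized native code for anything beyond a few cases.

```python
# A generic scan over any binary operation, using only the stdlib.
from itertools import accumulate
from operator import add

list(accumulate([1, 2, 3, 4], add))     # plus-scan -> [1, 3, 6, 10]
list(accumulate([3, 1, 4, 1, 5], max))  # max-scan  -> [3, 3, 4, 4, 5]

# The NumPy escape hatch Conor mentions builds a ufunc from a Python
# function and accumulates with it, but it runs element by element at
# Python speed rather than as optimized native code.
```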

00:40:59 [KS]

I think it depends on the angle that you come from, right? If you're coming from a data science, linear algebra [angle] and oh, you learned Python to do your math stuff. You're not coming at it from the angle of the array language developer who knows about all these useful ways of ... [sentence left incomplete]. Who transforms their data all the time that way. If you're used to writing Python, you think about it in a certain way. It makes sense.

00:41:22 [CH]

Yeah, it's true. But yeah, when you end up wanting something that exists in other languages that's so convenient ... [sentence left incomplete]. And just to pop off a couple things that I have in my head: [for] Dyalog APL, "under" is on the horizon. So this is just further motivation for [laughs] the folks at Dyalog that are listening to this in the future. I'm sure Adám will be listening later. Then the final one, just a tangential thought, is: how convenient is it ... [question left incomplete]? I mean, it could have been different, the way that English evolved, but "un" is a prefix to these words, and all of your higher order functions are prefix in your language, whereas in [Kai agrees] almost all the other languages they're suffixes: in APL and J and BQN. Anyways, it's a very nice coincidence that you're designing a stack array language where your higher order functions are prefix and "un" is a prefix to these words.

00:42:14 [KS]

Well, I don't think I would have called it "un" if it wasn't. It was originally called inverse. And you'd say "inverse box". And then I was like: "wait, inverse box could just be the normal way of doing it; oh, you could type 'un' box, you could type 'un' where; you could type all these things". And I just thought it was neat.

00:42:35 [CH]

And so yeah, it's a happy coincidence that definitely makes you smile [chuckles] when you realize what it's spelling out.

00:42:43 [KS]

Yeah, you mentioned briefly the optimization. I mean, I could talk a long time about the implementation of an array language, because I found it to be a very non-trivial thing, while the design decision ... [sentence left incomplete]. I think you actually reached out to us not that long ago, Conor, about a potential third podcast for you about array ... [sentence left incomplete].

00:43:04 [CH]

Technically, it's the fourth one. I've got a less well known running podcast that is just for fun. And this is still on my list of things to do. I mean, I think this is the first time the listeners are hearing about it. Was it back in February? Yeah, I pinged a few people and said: "that's it!". I can't remember what caused it. But I had some conversation.

00:43:26 [KS]

I think it was after the Kap episode.

00:43:28 [CH]

Yeah, that's right, with Elias or Elias (I apologize for the pronunciation [chuckles]). But we had that, and then every once in a while my brain lights up and I'm like: "what are we doing with only two array podcasts out there?". Because technically, Adám and Rich have one that's kind of on hiatus right now that was called "APL Notation as a Tool of Thought" or something like that. I was like: "we need to have a third one". And then originally, I was going to call it ArrayCast. Not ArrayCast! (Originally I was going to steal the name of this podcast for my other podcast [laughs]). No, I was going to call it "Array Talk". And then I think I decided I was going to change it to "Tacit Talk". This will be the motivation. It was supposed to be done in February, then it was supposed to be done in March; I promise, in April we will launch this. I'm thinking for the first few episodes I'll just do one-on-one interviews. We actually haven't gotten to this. I've interrupted your point, Kai. I'll hand the mic back to you. But we haven't gotten to it. I was thinking, if I do it after this episode of ArrayCast, then we can just focus entirely on your combinators page, which hasn't been brought up. You've done something, I don't know what you're doing. You've invented a bunch of birds and a bunch of things. [Kai says "ah yeah"]. I was floating through this and I thought it was cute that you reference the combinatory logic site and the table there. But then I got down to the bottom. Anyways, we might save that for the fourth podcast that's happening. Anyways, I totally interrupted you; back to you [laughs]. Bob wants to say something.

00:44:50 [BT]

What I want to say is on top of introducing another podcast that you're doing or going to do ...

00:44:58 [CH]

Yeah, that doesn't exist.

00:44:59 [BT]

... and mistakenly stealing the name of the podcast, which I'm assuming wasn't something that was in the back of your mind (it was just like a not a Freudian slip or something). But on top of all that, you just promo-ed what you'd like to talk to Kai about on your first episode. What is going on here? [Conor laughs]

00:45:15 [ML]

I just really like how all of the terminology people use to talk about this podcast discussion is stack based. [Others laugh]

00:45:25 [KS]

Yeah, well, yeah, I mean, you get topics, you push them on, you pop them off. So real briefly, yes, the combinators page, I had fun writing that. The idea was to list every combinator that Uiua can express in one or fewer glyphs. So some of them take zero glyphs, like you just write the functions you're combinating and they're just there. So you'll see those listed as like: "oh, it's just nothing".

00:45:50 [CH]

Ooh, "combinating", is that a verb? Is "combinate" a verb?

00:45:54 [KS]

I've just coined it. And then the other ones are various combinators. And the reason that there's so many is because the way that they work depends on the valence of the ... [sentence left incomplete]. I also coined this word, I think: "adicity".

00:46:08 [ML]

No, you didn't.

00:46:09 [KS]

No, I didn't? Monadic, dyadic. Okay, that's fine. But it depends on the number of arguments that the functions take, and so a combinator [07] that combines the functions one way becomes different combinators for different numbers of arguments. I made up birds. I tried to find birds that had common names with both Eastern and Western variants for the combinators that correspond to mirrored versions of the same operation.

00:46:40 [CH]

It's like a dream come true. When I stumbled across Marshall's "BQN for birdwatchers" page, I don't know what year that was, but it probably was one of the highlights of the year. And I was trying to introduce bird names at one point. I think there was an eagle, but then I realized that the E-prime combinator was the wrong name, and it was already named, and so I ended up calling it the Phi One or Phi or something. I made up "pheasant". Anyways, now we've got multiple people in the world making up bird names for combinators. It's a golden age, folks. Raymond Smullyan.

00:47:16 [KS]

If you read Marshall's page, you'll see a little sentence that says there's something wrong with these people.

00:47:21 [ML]

Well, yeah, the whole thing is written in a mocking tone, I might say [laughs].

00:47:27 [CH]

[Laughs] Is that pun intended? To mock a mockingbird? Holy smokes.

00:47:30 [ML]

No, no, because that would make me crazy [everyone laughs]. What are you talking about?

00:47:37 [KS]

Okay. So that's the thing about the combinators. We're talking about ... [sentence left incomplete].

00:47:40 [BT]

Actually one more question about the combinators. How do you see people using them? There was a discussion that came up recently on the APL farm. It's one thing to define them, and it's another thing to say, how can they be used to make better programs?

00:47:55 [KS]

Ah. I mean, I think I even wrote [on] that page: it's not necessary to read this page to be able to write Uiua code, right? But in Uiua you have to use combinators, right? Because there are no local variables, you have to use them. And we could push that onto the stack; my whole philosophy about tacit and stuff is its own separate section, but I'll get to that. I do want to talk about optimizations and performance and implementation briefly. I think part of the reason I've been able to iterate so quickly and add new features so quickly is that, implementation-wise, Uiua's arrays are very simple, right? Every array is the same. Every type of array, whether it's characters or numbers or complex numbers or boxes or whatever, it's all the same generic array. In the Rust code, it is a generic type called Array T; you just put whatever type you want in there for T. So APL and J and BQN have these optimizations for, like, bit arrays and different integers and stuff. For the time being, I only have three number types. But because the code is the same for every algorithm, I write the implementation one time, right? And then in the code it instantiates, for example (what's a good example?) join, right? If you're joining two arrays together, there's only one implementation of join. There's not one for every type of array that you could have. And so there's the actual implementation function, and then there's another function that says: "okay, if it's a number array, call the number version of join; if it's a character array, call the character version". But it's all the same implementation, because of the way that generic code works. So that's made it really easy to iterate.

00:49:49 [ML]

What about mixed types? What if you concatenate a number array to a complex array? Can that happen?

00:49:54 [KS]

Ah, there is a system. I have a function that does various casts and things. So if you try to append a number to a complex array, it will turn the real-number array into a complex one with a zero imaginary part, and just concatenate them. So if the conversion is valid, it will do that. There are also byte arrays, and they're mostly for when you have Boolean results. I don't have bit Booleans, right? I just have the smallest thing that you could have that would still be generic, which is bytes. You also get them when you read in files and things, just to save space. Those will get promoted to floating point numbers and things too. This system is very simple. It also lends itself to certain optimizations very well. It does have implications for, well, yes, the performance of the code, but also the performance of compilation, of running the Rust compiler to compile all that, and the size of the binary and things. So there are trade-offs, but I think it's a decent way. And I think it also probably makes the code pretty approachable for people who would want to contribute. But yeah, I don't know the overhead of adding new array types in other array languages. I've tried to look at the BQN source (like the CBQN source), and it's a lot.
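The promotion rule Kai describes can be sketched in Python (illustrative, not Uiua's actual code): a valid widening cast is applied to the narrower array, then the two are concatenated.

```python
# Sketch of promote-then-join: joining a real-number array to a
# complex array promotes the reals to complex with zero imaginary part.

def promote_and_join(numbers, complexes):
    promoted = [complex(n, 0) for n in numbers]  # valid widening cast
    return promoted + complexes

promote_and_join([1, 2], [3 + 4j])  # -> [(1+0j), (2+0j), (3+4j)]
```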

00:51:13 [ML]

Yeah, well, I mean, new element types like 8-bit, 16-bit integers and so on, that's not really a problem because they're all handled the same way. What we've had problems with are the bit arrays because of the alignment and also slices. And we don't even support bit array slices because instead of just having a pointer to the start and a pointer to the end or a byte index or whatever, you then have to specify the alignment as well. So it's not really the different numeric types and so on, so much as completely different representations.

00:51:56 [KS]

Bringing up the slices, that's another thing about the way that Uiua's arrays are implemented. I remember the semi-recent episodes you guys had talking to Henry Rich about the new J version,[08] where he was talking about implementation details, and how certain functions either return or operate on a virtual array (I think he called it that, or something), which is like a slice of the memory. Uiua's arrays don't actually make a distinction for this; every array is encoded as a conceptual slice into some backing buffer. And this is another way that all the code that runs on something is the same code. So for example, if you call the rows modifier and call a function on every row of an array, it doesn't create new arrays. It has to pass an array to that function, right? But it just slices up the original one and uses reference counting to maintain the memory and stuff. And for example, if you did un couple, right? If you separate a length-two array into both of its rows, you don't actually create any new arrays. Or rather, you don't copy any data; you just create two arrays that have the same backing buffer, and they just have different offsets in their representation. This minimizes the number of copies that you end up doing, and it's good for performance. Another one of the good performance things that I think works correctly is that, because everything's super generic and because the Rust compiler aggressively inlines and optimizes things, I'm fairly certain (I've looked through the generated assembly and stuff) that I get things like SIMD optimization, I think mostly on mathematical operations.
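The "every array is a slice of a backing buffer" idea can be sketched in Python (hypothetical, heavily simplified; Uiua's real version is reference-counted in Rust): un-coupling copies no data, it just makes two views with different offsets into the same buffer.

```python
# Simplified sketch: arrays as (buffer, offset, length) views, so
# splitting an array shares the backing storage instead of copying.

class ArrayView:
    def __init__(self, buffer, offset, length):
        self.buffer = buffer   # shared backing storage
        self.offset = offset   # where this view starts
        self.length = length

    def to_list(self):
        return self.buffer[self.offset:self.offset + self.length]

def un_couple(view):
    """Split a view into two halves without copying any data."""
    half = view.length // 2
    return (ArrayView(view.buffer, view.offset, half),
            ArrayView(view.buffer, view.offset + half, half))

a = ArrayView([1, 2, 3, 4], 0, 4)
top, bottom = un_couple(a)   # same buffer object, different offsets
# top.to_list() == [1, 2], bottom.to_list() == [3, 4]
```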

00:53:50 [ML]

On simple stuff [chuckles].

00:53:51 [KS]

On simple stuff, yes. But if you're adding two big arrays of numbers, you should be getting SIMD at some level, right? Which is good.

00:54:02 [ML]

Well, so what the compilers often have an issue with is overflow checking. I guess you're just using big enough numeric types that you generally don't have that, right?

00:54:09 [KS]

It's using F64s. So yeah, IEEE doubles.

00:54:13 [ML]

Yeah, yeah. So those don't overflow. But if you have an optimized 32-bit integer type, then I don't know of a way to get the overflow checking to vectorize. So.

00:54:27 [KS]

I've considered adding an integer type. And I think if I were to do that, I would define all those operations as either overflowing or saturating [Marshall agrees], 'cause I want those optimizations, right? But I'm not sure about all that yet. But yeah, so that's what I had to say about optimization and performance. And oh, sorry, lastly: the approach I've taken to trying to make Uiua fast is basically picking all the super common cases of pervasive operations, but also various looping modifiers, like reduce and scan and table. And all the simple cases are hand-implemented, usually with some kind of macros or something in the code, but hand-implemented to be basically as fast as they can be. There's a whole page on the site about which things are optimized, but as long as you stick to those kinds of operations, I think in general your Uiua code can run pretty fast.

00:55:26 [ML]

Yeah, and Conor was talking about how NumPy doesn't have optimized versions of all the scans. It's really surprising how few cases ever come up in a performance-sensitive context. So for scans, it's like: sum; the products you really don't need. I mean, it needs to be sort of fast, but if there's a long product, it's just gonna overflow anyway, so why would anyone use that? And then you need max and min, and then there are a lot of Boolean scans that you want.

00:55:54 [KS]

Yeah, one that my community has actually discovered is useful and interesting is scan not-equals, which is cool.

00:56:02 [ML]

Oh yeah, that's used all the time in APL.

00:56:04 [KS]

I had never encountered it before. It didn't seem obvious to me, but then you try it out and you're like: wait, this does some interesting stuff with the Booleans.
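The not-equals scan is a running XOR over a Boolean vector; its classic use (the one Marshall alludes to from APL) is marking "inside a quoted string", since each quote toggles the state. A Python sketch:

```python
# Not-equals scan (running XOR): each quote character flips the
# in-string state, marking regions between quotes.
from itertools import accumulate
from operator import xor

text = 'ab"cd"e'
quotes = [1 if c == '"' else 0 for c in text]  # [0,0,1,0,0,1,0]
inside = list(accumulate(quotes, xor))         # [0,0,1,1,1,0,0]
# 1s run from the opening quote through the character before the close
```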

00:56:12 [ML]

I had to tell Geoff Langdale too that he didn't invent it. So you're not alone.

00:56:17 [KS]

I guess lastly, of the big features that I've added that I'd like to cover, is macros. So Uiua has an entire system of, well, two entire systems of macros.

00:56:31 [CH]

One was formerly known as custom modifiers. Actually, when you brought up the higher order functions, I was like: oh yeah, we should make sure we mention that, 'cause I'm not sure if that was new since [last time].

00:56:41 [KS]

Everything is new since.

00:56:45 [CH]

And I was looking for custom modifiers. And I remember that I couldn't find that for the longest time, 'cause I wasn't searching the right keyword. And then I couldn't find it this time, and I was like: is literally the same thing happening to me again? I was pretty sure they were called custom modifiers. And then I went to the GitHub source and checked there, 'cause it's easier to search there for keywords. And sure enough, I found the little line that said custom modifiers have been renamed to macros. So anyways, back over to you for the update on macros.

00:57:09 [KS]

So I have two systems of macros. They're loosely modeled off the two systems of macros that Rust has. Basically you have your stack macros, which are meant to be easier to write. They're for all your higher-order function needs, right? If you wanna write a function that calls a function, you write a stack macro. It has a little special syntax, but it's meant to be not that hard to write. It's also less general, but it's hygienic, meaning it doesn't use variables and things that it shouldn't use when it refers to certain names.

00:57:44 [ML]

What are variables?

00:57:45 [KS]

Yeah, sorry: bindings, it doesn't use certain bindings. Any names that you refer to, it won't ever use the wrong ones. And that's just for your simple one-off "I need this function that calls a function, here it is", or a function that moves these functions around. And they're called stack macros because in the compiler they use a special operand stack. So there's a very limited set of, basically, your duplicate, your flip, your very simplistic stack operations that can be used on this operand stack. So however many functions your stack macro takes, you can move them around in the code that they're gonna go into. It's kind of hard to explain with words, but you can take a look at the tutorial if you're interested. That's the simple one. The other type is called array macros. And these are much more general. They pass the functions that are passed to them to a normal Uiua function as an array of strings, or an array of boxed strings, right? And it lets you arbitrarily manipulate and generate Uiua code, using Uiua code, at compile time. So they're extremely powerful. They're a good deal harder to write, because inherently the operation you're doing is more complicated, but it lets you write some really interesting stuff. And people have definitely tried abusing them and things. I saw someone wrote a whole JSON parser with them. You could theoretically write object systems and stuff. There's lots of interesting stuff. It's a whole topic, but I just wanted to touch on that. That's a thing that you can do.
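A loose Python analogy for the array-macro idea (hypothetical and far simpler than Uiua's: code comes in as strings, new code goes out, and it is then compiled before anything runs):

```python
# Sketch: a "macro" that receives the code of its operand as a string
# and returns new code, which is then compiled ("eval" stands in for
# Uiua's compile step) and used like any other function.

def twice_macro(operand_code):
    """Generate code that applies the operand function twice."""
    return f"lambda x: ({operand_code})(({operand_code})(x))"

generated = twice_macro("lambda n: n + 1")
add_two = eval(generated)   # "compile time" for this sketch
add_two(5)                  # -> 7
```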

00:59:25 [CH]

Yeah, I love custom modifiers. And it's one of my favorite parts of BQN, especially that it changes color. You know, there are small things that some people say don't mean anything, have no value, but the act of putting a prefix underscore and then a suffix underscore in BQN, and it changes the color of the name of the function: I think that's such a nice thing. And it's similar to something I've talked about on this podcast before: when you're typing the prefix of the name of one of the glyphs in Uiua, it changes color. It lets you know that you've hit the minimum prefix needed to match uniquely to the glyph. It's such a small thing, but it's such a nice programming experience. And a similar thing happens when you're writing these, I guess they're called macros now, so I should start referring to them as macros. But when you're writing a macro, the number of exclamation marks changes the color of your function. I can't remember if I did that in a YouTube video, but I might've been recording something one time, and maybe I scrapped that version and re-recorded. But at one point I was hitting exclamation marks, and I realized live, while I was recording this for a YouTube video, I hit it a second time and a third time, and I was like: ooh, it changes colors. I literally almost said the name of the language; I was like: ooh, it's fun to play with, 'cause I was hitting them and backspace. Anyways, it's just a very small thing. Does it change anything? No, but it's a nice touch, and a very useful thing, especially if you're coming from functional languages, to be able to write this kind of code.

01:01:04 [KS]

Yeah, I've tried to make the online editor pretty good. Writing a correctly working editor as a contenteditable div on a webpage is very hard. I don't know how BQNPad works so well, 'cause there are so many weird browser compatibility things, and getting the cursor to go to the right place, and a bunch of considerations. But people seem to like it a lot. I made it really easy to share links and things, and it's embedded, of course, in all the tutorials. I have spent a lot [of time on it]. [09]

01:01:39 [ML]

Yeah, the embedding is really impressive.

01:01:41 [KS]

Yeah, thanks. I mean, it's all WebAssembly and stuff, right? But I have spent a good amount of time also building out the LSP implementation. So when I'm actually writing code on my own, I sometimes do it in the pad, but mostly I just do it in VS Code, like I would anything else. And the LSP implementation is actually far more powerful than the pad is, right? Because you get all this UI experience for free from your editor. So you've got things like your completions. It can tell you the signature of something inline as a hint. It can tell you the value that a line of code generates as a hint that you can mouse over, and it shows you your whole data. There's lots of neat features like that that I've tried to build into the LSP to make it nice to write in your editor. And so I have official support for VS Code, but people have made plugins for Vim and Emacs too. That's something I've paid attention to.

01:02:38 [CH]

I can't remember, too, if I ever mentioned this on the podcast, but at one point, I think I said in a livestream or YouTube video, while I was doing some Uiua stuff, that when I formatted it or ran it, it would destroy my alignment of the comments by the hash. And then the next time I went to it, it did that automatically: even if you hadn't aligned them, it would align them for you. I'm not sure if that's still the case. But anyways, I wasn't sure if that was just a coincidence or if someone saw that.

01:03:09 [KS]

I might have seen that and been like, oh, it would be nice if these aligned.

01:03:12 [CH]

It's such a small thing, but I'm always doing that. I don't think BQN or BQNPad does that. If folks are listening and you want to implement that in BQNPad ... The only other place I've seen that, other than formatters for more popular languages like C++, I should say, is, I think, the RIDE editor, [which] does have [something like that], but their thing destroys all spacing. So if you've got spaces around the assignment operator, it'll destroy that, which irritates me, so I don't typically use it. But anyways, so many small nice things have been added that make it so pleasant to use.

01:03:48 [KS]

The Uiua formatter is its own whole design space, right? Of what the formatter should manipulate, and how you write the parser to support that. I also have, I don't know ... you write comments in Uiua with the octothorpe, the pound sign. If you write two of those in a row, it becomes what's called an output comment. The formatter will turn the comment into the pretty-printed version of the value that the code above it leaves on the stack.

01:04:20 [BT]

Are you kidding?

01:04:21 [KS]

Try it out.

01:04:22 [CH]

That is amazing. That's amazing. I literally -- my jaw just hit the ground -- because every single time I've probably ever made a BQN video, I code my little function. I start off with just the identity assigned to the name of the function, and then I'll put in a couple of tests. And the comment next to the test that I always put there is what you expect it to be at the end of the day. And usually what I do is I run it one by one, copy and paste, and put it after the octothorpe, as you call it. I try to avoid that word just because people get lost in the sauce of what you're talking about.

01:04:52 [ML]

It's the hashtag. That's what it is now.

01:04:54 [KS]

It is the hashtag, yeah.

01:04:56 [CH]

That's what the normies call it. And anyways, I had no idea that Uiua did that. And I'm going to, this is motivation to go make one of the 17 YouTube videos that I haven't gotten around to making in the last couple months.

01:05:10 [KS]

It's meant to be like a Jupyter notebook type experience when you write those in.

01:05:16 [ML]

So yeah, the real constraint of that type of editor is that you only ever run one program, so it's not a session. And I guess this kind of works around it a bit.

01:05:25 [KS]

Yeah, I mean, you can put them in functions that you call multiple times, too, and it'll just list the additional results in a list. I don't know, I wanted a cool experience for that. I don't use it as much anymore because you get hints with the LSP of like, here's the value of this function, here's the value of this. Yeah. Lastly, I guess, of things I'll say, unless you guys have more questions about stuff, there's experimental FFI. And so we've started working on the first thing that we're working on bindings for is RayLib,[10] just like BQN. People have already made some prototype little tiny games.

01:06:02 [ML]

Yeah, we've done some episodes about that.

01:06:04 [KS]

With Uiua, which is cool.

01:06:05 [CH]

Well, as we're mentioning things that we want to make sure we include, because I've not checked the time, but odds are we've blown by the hour mark, as per usual. Definitely one of the things that I came across when I was reviewing the different release notes, and I didn't read them in detail, but I did notice a huge section on an experimental, and maybe this got yanked already, but experimentation with HashMaps. Is that the-

01:06:32 [KS]

Oh, this did not get yanked. This got stabilized.

01:06:34 [CH]

Oh, but you haven't mentioned you're adding HashMaps?

01:06:35 [KS]

I totally forgot.

01:06:39 [CH]

It's like you're both the Clojure and the Forth of array languages, and I say that because Clojure is a Lisp that added hash maps as a fundamental part of the language. Anyway, I don't blame you. 29 releases, although I don't know how many of them you consider major in terms of features, but there's probably at least five or ten of them.

01:07:01 [KS]

Yes, we're currently on version 0.10. I just released it a few days ago, and I've been putting out bug-fix patches for it. There are some releases before that that are also pretty changeful, but 0.10 is what we're on now. Yes, hash maps. So the way that hash maps work is that they are just normal arrays. The normal array part is the values of the hash map. So if you take a hash map and do a multiplication or normal math or whatever on it, it'll apply to all the values of the hash map as it would normally. The keys of the hash map are stored as metadata on the array. And then there are hash map functions that work with this key metadata. So you can get the value at a key, insert a key-value pair, or remove a key from the hash map. And those keys can also be manipulated by some operations like, say, reverse. You can reverse or rotate your array to rotate the keys as well. Things like that. There are cases where you do need your O(1) insertion and lookup in a data structure keyed by value. This is meant to fill that need. It took a long time to get right. That implementation of the keys being metadata was not the original implementation. It was this other thing. But that's what I've settled on.
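[For reference, a hedged sketch of the hash-map primitives Kai mentions, as they existed around 0.10 -- `map` builds a map from a list of keys and a list of values, and `get`, `insert`, and `remove` work through the key metadata; exact display formatting may vary:]

```uiua
m ← map 1_2_3 [10 20 30] # keys 1 2 3 paired with values 10 20 30
get 2 m                  # look up the value stored at key 2
insert 4 40 m            # add the pair 4 → 40
×10 m                    # ordinary math applies to the values; keys ride along as metadata
```

[This is what "they are just normal arrays" means in practice: the last line is plain multiplication on the value array, with the keys untouched.]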

01:08:28 [CH]

I also feel bad now calling Uiua the Clojure as well as the Forth of the array languages, when q has had dictionaries since, I don't know, probably day zero. And k as well, I should mention. So I guess we've got two array languages now that support hash tables. So how have you found -- has there been a ton of response to it? Because it is kind of -- unless you're a q programmer --

01:08:52 [KS]

I mean, people have started using them in stuff, right? One of the big ones, the JSON parser that someone wrote, obviously uses them for objects. I don't know. For the most part, I try to encourage a certain style. A lot of the cases that are use cases for hash maps in most languages can be handled by the array primitives, like group and classify and things like that. It's only when you're accumulating key-value pairs that you need to work with as they come that you need the hash map. There are various times you need that. But a lot of the time, you can get away with just primitives, just your basic array operations.

01:09:32 [CH]

Interesting.

01:09:34 [KS]

For a long time, I wanted to see if I could just not add hash maps and just do those array primitives. But I think Marshall would probably know more than me about how you solve some of those problems, since I stole classify and group.

01:09:45 [ML]

Yeah. I mean, really, the uses of dictionaries as in k you can often handle pretty well in BQN. But what you don't get is -- for some things, you really want a mutable dictionary, and BQN's arrays are immutable, as in every array language. So I felt like, you know, there were enough differences accumulated. Arrays are multidimensional, and hash maps usually aren't -- I mean, you could make a multidimensional hash map, but nobody really wants that. They're unordered. And then you have this mutability. So I decided, you know, we'll just handle this in a completely different way. BQN has the traditional hash map object, as you'd see in Java or whatever.

01:10:35 [KS]

I mean, so the Uiua hash maps, because they are just metadata on the original array, they maintain the order. So if you iterate over the arrays rows, you'll get them in the same order that they were in.

01:10:45 [ML]

Yeah. And I mean, for the BQN hash maps, we decided they would have an underlying order. Maybe we'll make another, unordered, hash map type. But the main interface doesn't expose that order.

01:10:58 [KS]

That's just a CBQN library, right? The BQN hash maps are a CBQN library, aren't they?

01:11:05 [ML]

Yeah. Well, no, they're not a library. There was a library -- I wrote a library -- and then we did a built-in implementation, which is, you know, ten times faster. Well, it depends on what you're doing. But it's usually a few times faster.

01:11:20 [KS]

I think that's the last big feature. I'm sure I'll suddenly think of one.

01:11:25 [CH]

I was just thinking, to try and wind things down, I could try and recap. We've covered so much. We'll see if I can actually remember every topic, large topic we've talked about. So we started with the dropping of first class functions. We went to the fixed primitive and the similarity with broadcasting. We then went to un and under. And then after that, I'm not sure if that was when we talked about a little bit of the implementation and optimization.

01:11:50 [KS]

Yeah. And then macros and then the hash maps.

01:11:52 [CH]

And then macros and then the hash maps. And then also we mentioned that there's experimental work on FFI. And that, probably, folks, is only scratching the surface.

01:12:04 [ML]

Yeah. Well, I can say one of the bigger things to me that I've noticed is the introduction of these function packs. I think there was something like that when we did our episode, but it's very much matured and grown into a consistent part of the language. So this is a syntax where you can write multiple functions together.

01:12:19 [KS]

Yeah, I can speak briefly on that. A function pack is a purely syntactic construct. There are certain modifiers like fork, which calls two different functions on the same set of values, or bracket, which calls two different functions on different sets of values. And there are, I think, a couple of other ones, like try. By default, those are dyadic modifiers: they take two functions. But there's the function pack syntax, where you write your modifier and then parentheses, and then you separate any number of functions with vertical bars. With fork, for example, that'll call not just two functions on the same set of values, but n functions on the same set of values. It's just a syntactic thing. It keeps you from having to write fork over and over and over again to hit the number that you want. And it's also nice for the formatter: when you have a lot of those, you get a nice big line of vertical bars coming down. It's just a syntactic convenience. But I think it's nice.
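[A hedged illustration of the pack syntax described above, using fork (⊃). The pack form desugars to repeated forks, so both lines should leave the same three results on the stack:]

```uiua
⊃(+|-|×) 3 5 # one fork with a three-function pack
⊃+⊃-× 3 5    # the same thing written with fork twice
```

[This is the "desugaring to regular syntax" that comes up a few exchanges later: the pack is notation, not a new evaluation rule.]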

01:13:22 [ML]

Well, I don't know if I'd say that, like, if the alternative is having to write fork multiple times, it seems like.

01:13:29 [KS]

The alternative is you write fork, fork, fork, fork, fork, and then function, function, function, function, function.

01:13:34 [ML]

Yeah, I mean, I guess you can say it could all be decompiled down to regular syntax. I hadn't realized that. But it doesn't really feel the same.

01:13:45 [KS]

I totally forgot about the compiling thing. Uiua is now fully ahead-of-time compiled -- not to machine code, to its own bytecode, right? But when I was originally on, it used this weird model of compile a line, run the line, compile a line, run the line. Now it just compiles everything ahead of time, which is what makes things like macros possible. And that also means you can compile your whole bytecode assembly, package it up, and share it and send it. Not that that format's stable, but...

01:14:17 [CH]

How does this compare to BQN, Marshall? Because honestly, I think APL and J are purely interpreted, but you have a bytecode as well.

01:14:26 [ML]

Yeah, well, APL -- or Dyalog -- has a compiler that sort of exists, but it's not used all the time. It's an optional thing you can call.

01:14:38 [KS]

I didn't know you could compile APL since its grammar isn't...

01:14:42 [ML]

Well, yeah, that's part of the issue. You can't really.

01:14:46 [KS]

Yeah. So because Uiua's grammar is context-free, that makes it possible.

01:14:49 [ML]

So BQN also compiles everything ahead of time to bytecode. Pretty much every K implementation does too. And this is, languages like Python and Ruby or whatever have been compiling to bytecode since forever. So this is not anything groundbreaking.

01:15:05 [KS]

And my bytecode is very, very simple. There's not much to say about it. I would like to, as a final conclusion, just talk about what my goals are with Uiua. Mainly, I would like to personally be able to use Uiua for most of my projects. Anything that isn't mega performance-sensitive -- like, for example, the Uiua interpreter itself, or intense graphics code -- I would like to be able to use Uiua for most of the code I write, eventually. That's the approach I'm taking. That speaks to -- I'm trying to say that I'm in it for the long haul, I guess. And that I'm going to keep going until, well, one, we reach stability. And then beyond that.

01:15:53 [CH]

Yeah, this is awesome to hear. I mean, it's very exciting too. We didn't talk about the FFI much, but if Raylib bindings are happening... I've always wanted, and I've probably mentioned this five times on the podcast, to do a longer series of "Conor actually does something nontrivial with array languages" instead of just solving LeetCode problems. Which is -- folks, listen, they tell me I can't run downhill races, that they don't count. They definitely do. And doing LeetCode problems is all you do with it? That also counts. But I would like to do something less trivial, and preferably that would be making a game. And my favorite game to make, which I've already done in Python -- so I sort of have a, whatever, a control -- is Scrabble. And it's kind of a good fit, and it's kind of a bad fit. It's a good fit in that it's a tile-based game. And it's a bad fit in that one of the data structures you use is a directed acyclic word graph, or some kind of suffix trie, for storing the words so that you can do a word search of possibilities very quickly. So it would be, I think, a great test, in that there are parts of it that would be perfect for an array language, and parts that would be challenging: to see how far you can go with an array language, or whether you hit a wall when you need to implement a data structure that is node-based rather than array-based. Anyways, I would love to do it in more than just one language. And if we've got two sort of more modern graphics libraries that are going to be usable from these languages, that definitely sounds like I should just do it at some point. And probably you could do it with J. I think Dyalog APL would be the hardest. Or maybe q would be the hardest, because I don't think q actually has any... Although there is TorQ, T-O-R-Q, and I know that's an application that has, like, a GUI, but I'm not sure if they do the GUI stuff in a different language and then it's just tied to q on the back end.
Anyways, very exciting. I promise I will do this in Uiua, BQN, maybe J, maybe other array languages. And maybe we can just get like a whole group of people. Maybe I can get, what is it, TangentStorm? He's always doing stuff with the J language. So if I said I'll do BQN and Uiua, maybe I could just get other people to do the other array languages. And now that I'm thinking about it, why don't I just get other people to do all the work? This is fantastic. Bob, you've got your hand up.

01:18:10 [BT]

I think you should start up another podcast. I think that's, you know, you could call it whatever you want and, you know, drift off and do your games.

01:18:18 [KS]

Conor's programming corner.

01:18:20 [CH]

Yeah. Maybe I'll send a tweet out asking for suggestions on the name -- ArrayTalk, TacitTalk. I'll get GPT-4 to suggest a couple, and we'll definitely make that happen. And I think it's a perfect note to end on, too, because at one point I tweeted out -- when did I tweet it? January 25th, 2024 -- a ranking of open-source array languages, asterisk, by GitHub stars. And at the time, Rob Pike's Ivy was number one and Uiua was number two. And I think since then -- I just did a small check -- Uiua has surpassed Rob Pike's Ivy, with 1,337 GitHub stars as of, like, 40 minutes ago when I generated this thing. And I think BQN is about to pass Jelly. I think they're 15 stars or something behind. So it's now Uiua number one, Ivy number two, Jelly number three, BQN number four. And technically BQN is a little bit hard to measure, because there are two different repositories: there's the one that backs the website, and then there's CBQN. So if you add those up, I think technically it becomes third, or maybe even second. I haven't done the math. But it's exciting that, if you look at the graph, Uiua is just a -- whoo -- straight rocket to the top compared to the other ones.

01:19:38 [KS]

I think, I think that's a good, that's a good place for me to, to thank the Uiua community and everybody on the Discord for just, I don't know, for taking part in it and for using the language and being excited about it and contributing actual code and pointing out bugs and coming up with design ideas. Just thanks everybody there. It's, it's really been great.

01:19:59 [ML]

All you Uiua-boos out there.

01:20:00 [KS]

Oh my God.

01:20:03 [CH]

Is that what they're called?

01:20:07 [KS]

No, no, no. I prefer Ui-wins. Oh my God, the Uiua-boos.

01:20:11 [CH]

The Uiua-boos. Is that like a reference to some show that I don't know about? Like Honey, Honey Boo Boo or in the, in the past?

01:20:19 [ML]

A weeaboo is just anybody who's into anime and other Japanese culture. So, a Uiua-boo. Yeah. It's a very small modification.

01:20:26 [CH]

Weeaboo ah.

01:20:30 [KS]

That is not my preferred ...

01:20:32 [ML]

Well, there you have it.

01:20:37 [CH]

Oh, what's going to be the cold open? It's up to Bob, but it's a contender.

01:20:43 [BT]

Who knows? It may well be your announcement of your new podcast. We'll give that right off the top. Anyway.

01:20:50 [CH]

Oh yeah. The one, the one that doesn't exist yet.

01:20:53 [BT]

As we're talking about communities and thanking communities, thank you to our community who listens to us. If you'd like to get in touch with us, you can reach us at contact@ArrayCast.com, and we look forward to your input. And also a shout-out to our transcribers, Sanjay and Igor, who help convert this to a text format, which is very useful to a lot of people. So I think that wraps it up, and I look forward to seeing how many podcasts Conor can get running in the next year. This will be fascinating.

01:21:22 [CH]

Yeah, yeah. It's, you know, three's not enough. We got to get four. And finally, thank you to Kai for taking your time.

01:21:28 [KS]

Yeah, it's been great.

01:21:29 [CH]

And, you know, as I mentioned at the beginning, it's been a long-awaited interview, I think by all of us, because we've been seeing the releases come out. Usually we have, you know, Henry on once a year for J, but at the rate that you're going, I feel like if we had waited a year, it would have been a four-hour podcast. And we've said it live that we're not going to do four hours, even though a few of you have requested it.

01:21:50 [BT]

Run two episodes back to back.

01:21:53 [KS]

It's definitely slowing down. But yeah, the development is, like I said, stabilizing, and breaking changes are slowing down. But one day we'll hit that stability.

01:22:07 [CH]

And when and when that happens, we'll have you on and probably we'll have you on before then anyways, because I definitely know, like I said, we've all been looking forward to it. And typically when we're looking forward to it, our listeners are as well.

01:22:18 [KS]

So I mean, like you guys, I could talk about this stuff for hours.

01:22:23 [CH]

Yeah. All right. Well, with that, I think we'll say once again, thank you to Kai. And we'll finish with our typical sign off, which is happy array programming.

01:22:32 [ALL]

Happy array programming!