Transcript

Transcript prepared by Bob Therriault, Igor Kim and Sanjay Cherian.
Show Notes

00:00:00 [Adam Paszke]

I think it's really exciting because ultimately, you know, compute is underlying a lot of science these days, right? And so if you are able to like build those tools, you're contributing a little bit to so many like different fields.

00:00:23 [Conor Hoekstra]

Welcome to another episode of ArrayCast. My name is Conor and today with us, we have three panelists and a special guest. We're going to go around and do brief introductions. First off the panelists, we'll start with Stephen, then go to Bob and then go to Marshall.

00:00:36 [Stephen Taylor]

Hello, I'm Stephen Taylor, an APL and q programmer.

00:00:40 [Bob Therriault]

I'm Bob Therriault. I am a J enthusiast and I am not the host this week.

00:00:45 [Marshall Lochbaum]

I'm Marshall Lochbaum, former J programmer and Dyalog developer, and now I'm a BQN programmer and developer.

00:00:51 [CH]

And as mentioned before, my name is Conor. I am, I guess, a professional C++ developer, but I consider myself a polyglot and I'm a huge fan of the array languages. And I thought you did a fantastic job, Bob, standing in. And I think that means I can retire at any point, and I'll know that the show will be in good hands, with a host waiting in the wings to swoop in and take my spot.

00:01:12 [BT]

Yeah, I'm not the swooping kind of guy, but yeah, since you were on the beach, we just soldiered on and I think it was okay.

00:01:23 [CH]

I thought it was a great episode. And I have to say, there is something nice about being able to listen to my own podcast without having to have been there. You know, it's a different listening experience. So I think we have two announcements, both from Bob. So we'll do those and then we will get into introducing our guest and chatting with him.

00:01:40 [BT]

Okay, the first announcement is that Eric Iverson has been working. [01] If you ever wanted to put an instance of J up on AWS, up on the cloud, and run it from there, there are instructions on how to do that. He's put together a post with all the steps you need to follow, and then you can run J with whatever size machine you wish to pay for through AWS. And I imagine there are other cloud services you could use as well, but he's done it for AWS. And that's kind of neat, that you can expand your computer far beyond what your interface might be. And the second one, to the surprise of many who attended KXCon, KXCon has put up their videos. And I say surprise because on Reddit, I actually found a link to the KX videos. And this morning when I was talking to Conor, I said, "All the videos are up." He went, "I haven't seen them. There's another video set, but it's probably just old." I said, "No, it's the video set." He looked at it and sure enough it is. For some reason, they've put them on a specific link within their site and not on YouTube, which is their choice, but we will include the link in our show notes as always. If you wish to see the videos from KXCon: on a previous episode we, of course, did a review of KXCon, and now you can look at the videos and see how accurate the review was.

00:03:03 [CH]

Yeah, this is exciting, because I definitely have a handful of folks that have asked to know when the links have gone live, and I thought they weren't, but I guess they are, and so you can all go watch them now if you're listening to this podcast and have been waiting for those videos. They are online, they're just not on YouTube. I think if you put them on YouTube, more people will discover them, because then the YouTube algorithm will start, you know, dishing them out to, I don't know, the small number of folks that Google KX and q and APL enough to let YouTube know that they're interested. But as long as they're online, that's 90% of the work right there. So, alright, with that out of the way, we are going to introduce our guest. I didn't ask how to pronounce his last name, so we're going to give it a shot and he can correct me if I'm wrong: Adam Paszke. I probably got that wrong, but we'll hear in a sec. He is most famously, I think, the creator of one of the most popular data science libraries, and just Python libraries in general: PyTorch. [02] And that has gone on to be absorbed, or consumed, by Facebook. And so now Facebook does a lot of the maintaining and developing of that library. So we're gonna talk to him a lot about that, but also, more interestingly, he has been working on the Dex programming language, which I'm sure a few of our listeners, I definitely know one of them, has heard of. The white paper went online, I think, a couple years ago, and the research on that is taking place mostly at Google Brain, if not entirely at Google Brain. And that's where he currently works. He spent time, I think, doing multiple internships, mostly at Facebook, one at Google, and I think actually you also did an internship at NVIDIA, if I'm not mistaken. So I'll throw it over to Adam.
He can correct the pronunciation of his last name if I totally messed that up, and I guess go back as far as you want and tell us sort of your history and how you got into building a library that was quasi-inspired by array languages but has gone on to sort of massive popularity.

00:04:56 [AP]

Yeah, so thanks Conor for the introduction. I think the pronunciation was pretty good, so there's no need to nitpick that. If we're going to go back in time, kind of, you know, back to where it all started, I think that really the most important place was the first year of my undergrad. And actually, array languages were completely not on my mind. So, what happened was I was actually sitting with some of my friends in a lecture and the lecture was pretty boring. It wasn't a computer science lecture or anything. It was more like organizational stuff. And we ended up talking, and one of them pointed me to Coursera. And I thought that it was cool, because it was actually still kind of early days, but Coursera had all those courses where you could just try out different things that you would normally only get at a university. My university was very theoretical, and they put a lot of emphasis on first teaching you the basics. So, just a lot of linear algebra, a lot of calculus. We hardly had any computer science courses really in the first year, but I was really excited to try it out. So that was kind of an exciting thing for me. And this is where I found Andrew Ng's course on machine learning. And, you know, computer science was always kind of cool to me. I had, you know, dabbled in various attempts at programming and whatnot throughout the years. I don't think I was very good at it, you know, before that, but I always found it kind of interesting. But I distinctly remember this one time when I was reading some, like, computer science, I don't know, like PC magazines, right? Like nothing sort of scientific or anything. And I remember reading this article, that was almost when I was a kid, or a teenager, about some people who were, I don't know, making self-driving cars or whatever. I thought that was really cool, but this was way beyond my comprehension at that point, right?
But I found this course and this is, I think, how it got started. So, initially, I actually did want to be kind of a machine learning researcher, but as it turned out, that was also not something that I was great at. But it did point me in that direction. I mean, especially at that time, right? Like that was still kind of early in my undergrad and the field was not quite as hot as it is today. That was like 2014, I think. So the big breakthrough on ImageNet was, I don't know, maybe two years before. So it was nowhere near the hype and attention it's getting today, and nowhere near as competitive. But still, I mean, it's kind of useful to have more background to be able to actually do scientific research, especially in a somewhat mathematically involved field at times. So yeah, and I did find a library that was Torch 7 in Lua. [03] And I wanted to learn machine learning. I played a bunch with it, but then I just ended up finding myself sending more and more pull requests to it. And I guess that's how the journey started.

00:08:02 [CH]

So the initial basis of PyTorch was in Lua?

00:08:06 [AP]

Oh, I mean, Lua is... that was Torch 7, right? Like Torch itself is like a sequence of libraries that is... I honestly don't know when the first versions were published. You can still find old web pages with, like, goofy pictures of the authors or whatever. I can try to dig that up. But yeah, I started working on the seventh version, and the seventh version I think was the first one where they started using Lua as the sort of driver, the scripting language that would be driving sort of the array compute, right? Like Torch before that, I think it might have been in Lisp, it might have been in like C or C++. I don't remember the details, but they changed the language that they embedded the sort of array library in throughout the years.

00:08:58 [CH]

Interesting. And so I assume when you started contributing, that was when the Python front end started being built out? Or what's the story behind PyTorch as it exists today? Because clearly that, or at least if I'm not mistaken, that is the most popular way that it's consumed today.

00:09:15 [AP]

Yeah. No, no. So back then the Python front end was not something that was contemplated in that community. There were definitely Python-based solutions that were fairly popular at that time. Theano was definitely one very big and very popular library at the time. But Torch had a lot of weight behind it. It was the library used at Facebook AI Research at that time. And so, you know, it had some engineering support through this. And, you know, it also had kind of an open source community, from DeepMind actually, which back then was also using Torch 7. If you watch the AlphaGo documentary, there are sometimes pictures of computers, and if you look at the screens, it's actually Lua code. And, you know, there are snippets of Torch 7 in there. So yeah, DeepMind was actually also a big force behind it. But once DeepMind got acquired by Google, you know, after some time, they ended up switching from Torch 7 to TensorFlow, which was published a little after I started working on it. And that was kind of a big blow to the community, I think. And also Torch 7 had a bit of a bad rep online because, you know, it used Lua, and people were just less familiar with Lua. And I do agree that Lua is really nice. LuaJIT, by the way, is an amazing interpreter. Lua has a really, really good just-in-time compiler and can run way faster than Python can. But as a scripting language, Python is a little more convenient. Ultimately, the Python-based tools were growing a lot faster than the Torch 7 community was, I think. And that was kind of siphoning the attention away from it. At that point, one of the things we were doing is we were actually separating the C bits away from the Lua bits. So we built a pure C library for doing the array math and then separated the bindings, because at some point it had gotten entangled.
Once we had that, now that we had a pure C library, I started reading about the CPython API and, you know, writing initial bindings. And at the same time, there's actually a funny story: I applied to Google for an internship. And it was pretty late. I think I did end up passing the interviews. And after that, once you pass the interviews, it's not guaranteed that you do an internship. Somebody from Google actually has to pick you as their intern. And actually, nobody picked me. But I knew Soumith, who worked at Facebook. He knew me from my open source contributions, so he knew I could do useful stuff. And I was like, "Hey, I passed the interviews. Maybe you know somebody at Google who could use my help. And perhaps I could learn something from them." And he was like, "Well, you can just come work with us." And so I came. And we looked together through the Python prototypes I had, and ended up working on it together, also with Sam and Greg and a few other people in New York.

00:12:36 [CH]

Interesting. I had no idea about the Lua interface. That was, I guess, the primary interface, you're saying, for scripting and consuming it. And then, yeah, I looked it up: the back end was written in C and C++, whether that was the case at the time or C++ was added later. So there was this sort of competing, well, not competing historically, but at one point there was Lua, and then you started working on a Python interface. And what was the motivation behind Google sort of shelving that and then switching to TensorFlow? Because I don't live in the machine learning or data science world specifically. But I have heard that there are these sort of not competing interests, but I think PyTorch is used more by industry, if I'm not mistaken, and TensorFlow was used more by academia. There's some kind of competition over which one's more popular. If you Google "should I use PyTorch or should I use TensorFlow?", you'll get a bunch of Reddit posts and Stack Overflows of people debating which is better for which. If you have anything to say or comments, obviously, you know, you're quite familiar, probably, with the communities. I'd be super interested to hear your thoughts on that.

00:13:43 [AP]

So actually Google itself, like Brain, for example, I don't think they ever used Torch as their primary software. Google had, you know, its own early versions, like DistBelief, and I think some other systems, which in the end transformed into TensorFlow, as far as I know. I'm also not super up to date on the details of, you know, the history of TensorFlow. It was actually DeepMind, which, you know, started off as a startup that used Torch 7 and got acquired by Google. And so that was the blow. Well, it wasn't using PyTorch, because PyTorch didn't exist yet. It was using Lua Torch. Yeah. And ultimately, yeah, when PyTorch got started, I mean, the whole idea was that, you know, we get to redo everything. We get to bring it closer to what people want, I think, when they, you know, want to do their machine learning research systems. Since pretty much every other solution that was popular in those days was in Python, we were like, fine, we just have to make the jump. But at that point, since we would be breaking all or most of backwards compatibility, we figured that we could also redesign a few bits and pieces of the library that, at that point, we knew better. It's also kind of funny, because I think most of the machine learning libraries kind of ended up rediscovering a lot of things that were kind of obvious if you ended up reading programming languages papers. There's a lot of research in how to build languages and systems like that. But in the machine learning community, the development of those tools, in many cases, I think, was actually driven by people who didn't have the background in programming languages and so on.
So we were actually redoing a lot of programming language work without knowing it had been done previously. For example, automatic differentiation. [04] That's a technique that's way older than anything we've done. But we had kind of been rediscovering it, because we didn't know the field existed back at that time. We ended up rediscovering a bunch of similar things just for the purpose of, you know, making a good library. Once we actually started talking to those communities, this is, I think, where the, you know, development really took off. But at least initially it was not obvious to us.

00:16:08 [BT]

So Adam, when you rediscover something like that, something that already exists but you're not aware of it, are there things that you learned, when you were talking to the community later on, that they didn't know about? Were there actually new things revealed because there were fresh eyes on it?

00:16:23 [AP]

I think there were definitely some new parts. I mean, the core idea and the core algorithms, I think, have all been known. But, you know, in many cases, when you build systems like that, the devil is kind of in the details. And so it's not like everything that we have done, and every single problem we have had, would have been found in the papers from the 70s. There were actually a bunch of engineering challenges about how to run those algorithms online, for example. A lot of the AD literature was doing transformations on programs ahead of time, at compile time, as in: you read one program and you spit out another source file that can be compiled. I think in Lisp, there might have been more runtime systems, but ultimately we had to build a really low-overhead runtime that would, online, be recording what kind of computation has been performed, and then could transform it to derive the derivative and so on. So there was just a bunch of thinking about efficient data structures, like memory management. Then, you know, GPUs entered the picture, and that's something that hadn't really been thought of before, and then memory management is even more important than it ever was. And you want to differentiate at a different level. You actually don't want to differentiate, for example, the implementation of the scalar code. You actually want to differentiate whole operations oftentimes. So there's a lot of things like that.
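As a concrete picture of the low-overhead runtime that "records computation online," here is a minimal, hypothetical sketch of tape-based reverse-mode AD in Python. This is not PyTorch's actual implementation; `Var`, `tape`, and `backprop` are invented names for illustration. Each arithmetic operation appends a backward closure to a global tape as it executes, and replaying the tape in reverse accumulates gradients.

```python
# Minimal tape-based reverse-mode AD sketch (illustrative, not PyTorch internals).
tape = []  # global record of backward closures, appended in execution order

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():  # d(x*y)/dx = y, d(x*y)/dy = x
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
        tape.append(backward)  # record the op as it runs
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():  # d(x+y)/dx = d(x+y)/dy = 1
            self.grad += out.grad
            other.grad += out.grad
        tape.append(backward)
        return out

def backprop(output):
    output.grad = 1.0
    for backward in reversed(tape):  # replay the tape backwards
        backward()

# f(x, y) = x*y + x at (3, 4): df/dx = y + 1 = 5, df/dy = x = 3
x, y = Var(3.0), Var(4.0)
z = x * y + x
backprop(z)
```

The engineering challenges Adam mentions show up exactly here: making the recording cheap, freeing intermediate buffers (crucial once GPUs enter the picture), and recording whole array operations rather than scalar arithmetic.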

00:17:50 [BT]

So the context has kind of changed the environment. Things like GPUs weren't available when these early papers were written, and now there were reasons to expand in directions that wouldn't have been available to them.

00:18:02 [AP]

Yeah, I mean, I think that honestly a lot of the big problems in AD have been solved, but I think there is still quite a bit of a niche at the intersection of AD and parallel execution, and especially when executed on dense vector machines like GPUs or TPUs. [05] I think there's a lot of challenges there. I think we still don't have a great way to differentiate a lot of the low-level code that people are writing for those systems. We have been doing this successfully, but we have been doing it at the level of array programming languages, I would say. Which is nice, because if you're programming with arrays, array programming languages map very well to GPUs and TPUs: you have those big computational blocks that have a lot of, you know, embarrassing parallelism inside and map really well to the hardware. And if you want to differentiate that program, then you can pretty much take the stock algorithm from the 70s for differentiating sequential programs, which back then was mainly talking about scalar operations. But you can apply it here: you can say that, you know, a reverse-mode derivative of a matrix multiply is like two other matrix multiplies, right? And if you just apply this algorithm, then essentially you take an array program and you transform it into an array program. And so in the program you output, even though all the steps you have differentiated end up being sequentialized in the way the array program is executed, the operators inside the program still have enough parallelism to be able to saturate and make really good use of the hardware we have available today.
But if you wanted to have a lower-level programming language, one which, you know, exposes more control, where you can kind of open things up and do, you know, more branchy things... I don't know, say you have a ray tracer, right? Like, this is something that's not as easy to write; this is not just a bunch of matrix multiplies. Then generating a parallel program that computes the derivative of, you know, something like that, and still makes really good use of the hardware, I think is a lot more difficult.
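The "two other matrix multiplies" rule is easy to write down. Here is a small NumPy sketch (the function name `matmul_vjp` is mine, not from any particular library): for C = A @ B and an incoming cotangent dC = dL/dC, the reverse-mode derivatives are themselves matrix multiplies, so the differentiated array program is still an array program with the same bulk parallelism.

```python
import numpy as np

def matmul_vjp(A, B, dC):
    """Given C = A @ B and the cotangent dC = dL/dC, return dL/dA and dL/dB."""
    dA = dC @ B.T   # one matrix multiply...
    dB = A.T @ dC   # ...and another
    return dA, dB

# Sanity check against a finite difference of L = sum(A @ B).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
dC = np.ones((3, 2))                 # cotangent of L = sum(C)
dA, dB = matmul_vjp(A, B, dC)

eps = 1e-6
E = np.zeros_like(A); E[0, 0] = eps  # perturb a single entry of A
fd = (((A + E) @ B).sum() - (A @ B).sum()) / eps
assert abs(fd - dA[0, 0]) < 1e-3     # matches the analytic gradient
```

The point of the passage above is visible in the shapes: the backward pass is the same kind of bulk, hardware-friendly operation as the forward pass.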

00:20:16 [CH]

Speaking of array languages: you kind of just mentioned, you know, that they weren't on your mind when you entered this field. You started with Torch 7 and then, you know, ended up at Facebook, where you were working on the Python bindings to the C library that you had isolated. You were just aiming at the, you know, how-to-become-a-self-driving-car-engineer, machine learning space. So at what point did...

00:20:45 [AP]

At that point, it's not like self-driving cars were my ambition. I just remember, you know, thinking they were cool sometime before. At that point, I thought machine learning was cool. [06] And, you know, it was different than a bunch of other programming, right? It could do things we couldn't do with, like, standard programming languages.

00:21:04 [CH]

That's definitely true. All right. So, inspiring you to go in that direction, at what point did sort of array languages as a, I guess, sort of topic first hop onto your radar? Do you even remember that? Maybe it was something where just at some point in time you realized that these libraries, PyTorch, et cetera, and now definitely Dex, are adjacent to this world.

00:21:26 [AP]

I think it was about three or four years ago. So that was like two or three years into the PyTorch project. I mean, building out PyTorch and, you know, supporting and building the community around it, that was a lot of work. And at that point, it was kind of a mix of engineering challenges and partly research around those programming languages and APIs for computing. But after that time, I have barely worked on PyTorch; I have barely done anything on it in the last three years or so. And this is, I think, roughly the moment where I just decided to actually go deeper into the research part of what we had been building, like actually learning the principles of those programming languages and automatic differentiation and parallelism and so on. This was a bit of a motivation for me to actually stop. I think that isolating myself a little from the project was helpful, because as I was working on it, there was just a constant stream of things that we wanted to build. And that's really important, right? That was also really rewarding. People were excited and using it. But I think once you actually stop fixing the issues at hand and so on, you get to step back, see the bigger picture, and actually learn about it and think a bit harder about what you're doing and so on. And I think this is roughly also the point where I started thinking more about, I don't know... when you say array language, the field of programming languages just kind of jumps to my mind, which is maybe why I make this association. Obviously, we were computing with arrays a long time before this. And I was aware that the libraries we were building are kind of domain-specific languages that are just embedded in Python.
So I was definitely aware of that, but it was never something I necessarily, I think, pursued as a strictly research field. Or, you know, at that time we hadn't yet connected with, like, the other experts in those fields, I think.

00:23:39 [CH]

So I guess this is the natural place to transition to asking you about, you know, the last three years or so working on Dex. But before I do that, I just want to pause to make sure that Bob or Stephen or Marshall don't have any questions that we should ask now, about the sort of PyTorch phase of Adam's history, before we move on to talking about Dex?

00:24:00 [BT]

Well, I'm intrigued by the fact that you were at PyTorch and you were talking about trying to build that community. Because I imagine a certain amount of it, there'll be a lot of energy generated by the new tools that you're creating and the uses of them, and that giving people power to do things that they might not have been able to do before. But there's always work to be done creating a community and getting that buzz happening, for lack of a better term. Did you have anything to do with that? Or was it more just you were head down making this stuff work? Were you responding to people in the community? Did you see that develop? How did you feel that? Because I think in a lot of ways, the array languages have yet to undergo that kind of a popularization. And to hear what it was like for PyTorch might be useful.

00:24:46 [AP]

Yeah, it's a really good question. I had a bit to do with that. I think Soumith ultimately was sort of the main person responsible for driving the community effort. But I did a bit of that. I mean, I did a bunch of talks. I was kind of trying to be active on Twitter. And we had a Slack channel for, like, initial users... I think one really good thing we did was we had a sort of alpha process, where we were inviting people to try out the library. And also we were kind of lucky, because I think a lot of people were looking for new tools at that point. Like, the tools they were using, they were not necessarily happy with them. And so it was actually kind of easy to recruit alpha testers, I think. And so there was a group of people who had access to very early versions before it was public. And we iterated and talked a lot with them, and sort of integrated their feedback back into the library. And after that, we published it. Also, there was a time where pretty much everything I did was I just spent time on forums. We had a public-facing forum. I was answering most of the questions initially. Now a lot of other people from the community, there are legendary community members these days who have pretty much answered every post out there. You know, initially, sort of to give it momentum, I was also doing a bit of that. So it was really a mixed bag. It was kind of a mix of outreach and trying to support people who were trying to get onboarded. But I think that our job was to a large degree made easier by the fact that people were looking for new tools. And also, you know, PyTorch has those roots in sort of the research community, and the research community is a nice place to start with new tools, I think, because researchers switch projects, they start new ideas every few months or years. And at that point, they often can just start their code base from scratch.
So it's not like proprietary commercial systems that have been written and then will be running for the next hundred years in an almost unmodified form, except for some patches that are being integrated. Those people are used to code churn. And obviously, if you create more code churn for them because you're breaking APIs and so on, they will still get unhappy. But I think that a lot of research projects live on fairly short timeframes, and that's a really good place to start popularizing your tool.

00:27:29 [BT]

Yeah, actually, because you mentioned that, that's exactly what I was thinking: with a lot of the array languages, they are embedded in industry, and industries that don't want to see very much change, because they're actually using those tools very effectively in very powerful ways. But it actually inhibits the amount of... Well, it's harder for you to adapt to other uses, especially if they're proprietary, because your customers are saying, "We don't really... We want everything backwards compatible. It's got to work."

00:28:01 [AP]

Yeah, exactly. And for research, for a lot of use cases, it's actually probably okay for them to pin a particular library version. And overall, for the community, I think it's worth more if you can deliver cutting-edge features that will enable more interesting research. That's way more important than trying to maintain backwards compatibility at all costs. Of course, PyTorch has grown up since then, and this has changed a lot. Now it is used in industry as well. And at this point, it actually has to be a lot more stable than it was back then. But if you're building a purely research tool, then you can actually afford to have a bit more of a loose contract, I think, with your users. And at times, they're often easier to onboard thanks to that.

00:28:49 [BT]

And I guess now I'm as intrigued as Conor was, I'd love to hear more about Dex.

00:28:55 [AP]

So Dex actually, so Dex was... [08] there's also a funny story, because now at Google I'm on a team where... so autodiff in PyTorch is actually exposed through a module called torch.autograd. And back then, as I said, we actually didn't know, I think, that automatic differentiation, or, you know, many people shorten it to autodiff, exists as a field. We knew about Autograd, and that's kind of what we associated with the name. But it turns out that Autograd actually was just a library built by Matt Johnson and Dougal Maclaurin and a few other people, I think Ryan Adams. And they're essentially the people that I'm working with today. So I have kind of gone full circle. I have built upon the things that they have built, and now I'm working with them, trying to build more things. So Dex is... I think before we get to Dex, I think the first project they started building in that space after they joined Google was actually JAX. And so JAX was an evolution of Autograd. JAX was meant to keep things like automatic differentiation very close to the heart of the project, but then also add better support for running on accelerators, and other transforms you can apply to your program. Automatic differentiation is kind of a transform, right? You take a mathematical function, apply the transform, and this gives you back another mathematical function that does something else, or an implementation of that function. But there are other things, like, I don't know, vectorizing, right? And by vectorizing, I just mean applying the function over a bunch of inputs, which you can do, in a C-like language or in Python, just by running the original thing in a for loop, but that would be inefficient. It's way better to just run bigger array ops. That will let us way more efficiently map to something like a GPU or a TPU. So that was the project, but that was still in Python.
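A small sketch of the "vectorizing" transform described here: take a per-example function and run it over a whole batch as one big array op instead of a Python loop. In JAX this transform is `jax.vmap`; the NumPy version below is hand-written just to show the before and after, and the function names are illustrative.

```python
import numpy as np

def predict(w, x):
    """Per-example computation: one dot product and a nonlinearity."""
    return np.tanh(w @ x)

def predict_vectorized(w, xs):
    """The same function mapped over the leading axis of xs, but expressed
    as a single matrix-vector product instead of a Python loop."""
    return np.tanh(xs @ w)

w = np.array([0.5, -1.0, 2.0])
xs = np.random.default_rng(0).standard_normal((100, 3))

looped = np.array([predict(w, x) for x in xs])  # 100 tiny ops in Python
batched = predict_vectorized(w, xs)             # one big array op
assert np.allclose(looped, batched)
```

The two versions compute the same thing, but the batched one is a single bulk operation of exactly the kind that maps well to a GPU or TPU.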
And then I think at some point, Dougal got very interested in programming languages too. And so I think there the idea was like, well, what if we could free ourselves from the shackles of Python here? Not necessarily thinking about, "Hey, let's build the next tool that will be better for this user group or that user group." It's more like, "Hey, if I was to build a tool just for myself, what would I do?" And so he had a prototype when I interned at Google. It was still pretty early, but it already could do a lot of things. And I thought it was really cool. I thought it was a really cool idea, basically, to explore: hey, if we were to build a language from first principles, kind of having the experience of, you know, me building PyTorch, him building Autograd, and, you know, both working a lot on JAX as well, what would we do? How would we design the language? And so that was kind of the idea behind the effort. So I joined Google after that. I think the project was already a year in at that point, or maybe two years in, and we have been sort of having fun with it a little bit. Again, this is not necessarily built as something where we expect immediate adoption. Also, programming languages, I think, are notoriously difficult to build, and it takes forever to actually make something that a lot of people will find useful. But, you know, I think there are a lot of research ideas that we have managed to explore. And actually, a lot of the research ideas that we have developed for Dex have also been upstreamed to JAX. And so we kind of use it as a research vehicle, but along the way, we integrate the advancements we make back into the systems that are, you know, used a bit more widely.

00:35:56 [CH]

Interesting. I did not know that. So it's less about always expressing your Dex program in terms of loops and indices. It's more that you have two options. You can do it with the array style, if you will, whatever that looks like in Dex. Or if you find that cumbersome or inconvenient, you can switch back to sort of for loops and indices, if that's a better fit for the problem that you're trying to solve at that point in time. [09]

00:36:23 [AP]

Yeah, I mean, the core language is based around for loops, and ultimately it decomposes to scalar compute. But if you look through the built-in library, there is map, there is reduce, there is, I don't know, fold. All of the sort of higher-order combinators you would normally find in combinator-based languages, they are there, and they are generally implemented in terms of the for loop, because we have found that actually just having the for loop is enough. Which is a bit different than the for loop you would know from C, for example, right? Because the loops are generally not executed for their side effects, the loops are generally executed...

00:36:53 [ML]

Yes, it's more of a for each.

00:36:55 [AP]

The for loop is more like a map; it's just syntactic sugar for a map. It basically gathers the result of evaluating the body over a bunch of input arguments, right? Over some finite domain. But then if you add effects to this... even though you have side effects, if you use something like an effect system, you can still put enough control on it so that you're still able to, for example, parallelize those loops, even though they might have side effects. And based on that, you can build things like reduce or parallel scans. And then there are some effects, you know, that will necessarily force sequentialization of your code. So essentially, we're not taking this away. I think all of the combinators that you would normally find in, I don't know, Futhark, [10] for example, you will also find in Dex. And if that's the programming model that suits you, then go for it. But if for some reason you would find a for loop simpler to think about, that's also available.
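As a rough sketch of this "for is really a map" idea, in Python rather than Dex syntax (the names `for_`, `map_`, and `reduce_` are hypothetical, chosen just for this illustration):

```python
# A Dex-style `for`: it evaluates a body at every index of a finite
# index set and gathers the results, rather than running for effects.

def for_(index_set, body):
    """Evaluate `body` at each index and collect the results."""
    return [body(i) for i in index_set]

# Higher-order combinators can then be defined on top of `for_`:
def map_(f, xs):
    return for_(range(len(xs)), lambda i: f(xs[i]))

def reduce_(op, init, xs):
    """A reduction over xs; with an associative `op` a compiler
    could parallelize this, per the effect-system discussion above."""
    acc = init
    for x in xs:
        acc = op(acc, x)
    return acc

xs = [1, 2, 3, 4]
print(for_(range(3), lambda i: i + 1))       # [1, 2, 3]
print(map_(lambda x: x * x, xs))             # [1, 4, 9, 16]
print(reduce_(lambda a, b: a + b, 0, xs))    # 10
```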

00:37:54 [ML]

Yeah. And so maybe the distinction to point out is, I mean, all these APL derivatives have loops in some way. A lot of them have regular if and for and while. The difference is that if you use those, then you're giving up your arrays in the process. So then you have something that's acting in an interpreted language on one value at a time, which is pretty slow. So what Dex is trying to do, I guess, is give you the loops, but keep the compiled speed with it.

00:38:26 [AP]

Yeah, exactly, because all the array code decomposes into loops. So if we can optimize those loops well, we can optimize well both the code that's written in point-free style and code that is written in this pointful style, where you decompose arrays back into their elements and then perform transformations on those elements.

00:38:46 [CH]

So does this mean that you have the same behavior as Futhark? Because you said there are some rank polymorphic facilities, but it's not built in to everything. So when you are doing some sort of scalar operation across a rank two array, like a matrix, does that mean that you're calling some form of map twice, I think is how Futhark ends up doing it? Is it something like that? If you're trying to stay-- if you're a J programmer and you're trying to avoid the indices and the loops as much as possible, do you end up doing something like that?

00:39:20 [AP]

No. If you have two arrays, you can literally just say array plus array. But that's because plus is actually an overloaded operation. We essentially adopted type classes from Haskell, and you can think of type classes as being kind of a type-driven code synthesis tool in the case of something like plus. So for plus, you define the base case by saying: I can add two floats, here's an implementation. And then you have an inductive definition, which is: if I have an array of some size, or of some index type, and I know how to add its elements, then I know how to add two such arrays, right? I will just have a for loop that adds together the results. And just given those two things, if you have a 5D array, then if you say, you know, 5D array X and 5D array Y, if you say X plus Y, then the Dex compiler will use type class resolution to essentially compose the array instance four times and then terminate the code synthesis in the scalar case, and it will sort of synthesize the definition of rank-polymorphic plus for you. So a bunch of operators actually do behave in a rank polymorphic way, but we achieve it through type classes.
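A minimal Python sketch of this inductive definition (Dex resolves the instances statically through type classes; here the same base case / inductive case pair is expressed as run-time recursion, with nested lists standing in for arrays):

```python
# Inductive "add": a scalar base case plus one inductive case that
# says "if I can add elements, I can add arrays, via a for loop".

def add(x, y):
    # Base case: two scalars.
    if isinstance(x, (int, float)):
        return x + y
    # Inductive case: add arrays element by element.
    return [add(a, b) for a, b in zip(x, y)]

# The same two rules cover any rank:
print(add(1.0, 2.0))                                 # 3.0
print(add([1, 2], [10, 20]))                         # [11, 22]
print(add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))       # [[6, 8], [10, 12]]
```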

00:40:34 [CH]

I always forget, because Haskell [11] is where I know the word type classes from, and they have both parametric and ad hoc polymorphism. [12] Ad hoc polymorphism is the one that corresponds to type classes. Is that accurate? I can never remember which is which.

00:40:49 [AP]

So type classes are a restrictive form of ad hoc polymorphism, yeah. They still have to be uniform. It's not the sort of, let's say, holy grail of ad hoc polymorphism; that, I would say, would be closer to templates in C++, right, where you can really do pretty crazy, you know, pattern matching on your static arguments. Type classes still have to be structured. But they do enable you to, yeah, essentially specify an implementation per type, according to some constraints. For example, the type signature of the function has to be uniform across all types. When you say this will be a plus on type A, then plus will always have the same signature: it takes two As and returns another A. Or equality will always take two As and return a Boolean. Which, for example, NumPy doesn't satisfy. Equality in Python does not actually satisfy that predicate. If you compare two numbers, you get back a Boolean. If you compare two arrays, you get back an array of Booleans. So the shape of the type of the function actually varies in a language like Python. And in C++ you could also do it this way. But if you use type classes, there's always a single type scheme that applies to all the instances of the function. Which then enables better handling of polymorphic functions: you can avoid a bunch of specialization and so on, but this is kind of getting deep into the implementation concerns.
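A small Python illustration of this uniformity constraint (the names `eq_float` and `eq_list` are hypothetical; this mimics the type-class scheme rather than any real Dex or Haskell API):

```python
# A type-class-style Eq: every instance must fit the same scheme,
# (A, A) -> bool -- the return type never changes with the input.

def eq_float(x: float, y: float) -> bool:
    return x == y

def eq_list(eq_elem):
    """Inductive instance: two lists are equal iff they have the same
    length and all elements are equal -- still one single bool."""
    def eq(xs, ys) -> bool:
        return len(xs) == len(ys) and all(eq_elem(a, b) for a, b in zip(xs, ys))
    return eq

eq_vec = eq_list(eq_float)
print(eq_vec([1.0, 2.0], [1.0, 2.0]))   # True
print(eq_vec([1.0, 2.0], [1.0, 3.0]))   # False

# NumPy-style ==, by contrast, returns a bool for two numbers but an
# array of bools for two arrays, so its "type scheme" varies with the
# input and it could not be an instance of this uniform Eq.
```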

00:42:18 [ML]

Well, so can you still do broadcasting, or what we'd call in array languages, well, conformability or leading-axis agreement, with plus? So if you have arrays of different ranks, can it add them together according to some rule?

00:42:33 [AP]

Yeah, that is a very good question, and the answer is no. You have to, because plus always takes two As and returns something, both sides have to have the same type, and arrays of different rank have different types in Dex.

00:42:45 [ML]

Okay, yeah, that makes sense.

00:42:46 [AP]

So you have to manually say it. But you don't have to specify the shape of the broadcast, because that can be inferred through type inference. You would have to say, you know, if I have a 3D and a 5D array, then I will have to say broadcast x plus y, and that will work. The compiler will infer what extent of the broadcast is necessary to make the program type check, but you have to explicitly indicate that the broadcast is supposed to happen.
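A toy Python version of this "explicit broadcast, inferred shape" idea (the helpers `rank`, `shape`, `broadcast`, and `elementwise_add` are hypothetical, not Dex syntax; nested lists stand in for arrays):

```python
def rank(x):
    """Rank of a nested-list 'array'."""
    r = 0
    while isinstance(x, list):
        r += 1
        x = x[0]
    return r

def shape(x):
    """Shape of a nested-list 'array'."""
    s = []
    while isinstance(x, list):
        s.append(len(x))
        x = x[0]
    return s

def broadcast(x, target_shape):
    """Replicate x along new leading axes up to target_shape's rank.
    The extent is inferred from the target, but the caller has to ask
    for the broadcast explicitly, as in the Dex discussion above."""
    for n in reversed(target_shape[:len(target_shape) - rank(x)]):
        x = [x for _ in range(n)]
    return x

def elementwise_add(x, y):
    if isinstance(x, list):
        return [elementwise_add(a, b) for a, b in zip(x, y)]
    return x + y

x = [1, 2, 3]                       # rank 1
y = [[10, 20, 30], [40, 50, 60]]    # rank 2
# Explicit broadcast, then a same-type add:
z = elementwise_add(broadcast(x, shape(y)), y)
print(z)  # [[11, 22, 33], [41, 52, 63]]
```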

00:43:12 [ML]

Yeah. And which argument needs to do it.

00:43:14 [AP]

And which argument, yeah. Which to some degree, I think also is reasonable. I mean, broadcasting errors are pretty common and they can sort of silently, you know.

00:43:25 [ML]

Yeah, definitely. We play fast and loose in the APL world.

00:43:29 [AP]

Yeah, exactly. So it is slightly less convenient if you're really used to it, but in the end, I think it's not the end of the world.

00:43:37 [CH]

So I guess another thing to comment that we haven't really, I mean, kind of implicitly we've referred to this, but the fact that, you know, type classes have been mentioned is that Dex is actually, if I'm not mistaken, primarily implemented in Haskell. Is there anything that...

00:43:52 [AP]

Yeah, it is implemented entirely in Haskell.

00:43:53 [CH]

Entirely in Haskell. Is there anything like worth commenting other than, you know, mentioning that for our listeners that might also be sort of functional programming fans? Is there anything sort of behind that decision or is it just a, you know, a great language for doing research in?

00:44:09 [AP]

I think Haskell, or functional languages, are just pretty good tools for building compilers. So ultimately, this is a big reason. The type system, I think, is very useful. And we have, in fact, even written short papers about how we use the type system to, for example, make sure we don't use our substitutions incorrectly in the compiler, which is a really, really nasty bug to try to track down when you have a miscompile. So I think it's just been convenient. And also, as a project that only has, let's say, one or two people working on it... because I also don't work on Dex full time, actually. Dougal works on Dex full time; I spend some of my time on it. So for a project this small, I think Haskell also provides a lot of power. It's also a fairly terse language; in a fairly terse program, you can do a lot. I think it's just a good choice for things like that. But ultimately, I think this is kind of a bit of a social factor. Ultimately, the language in which the compiler for another language is implemented doesn't play a significant role in, you know, how the target language behaves, or, I don't know, the quality of it, in many cases.

00:45:38 [CH]

So that means we have two languages: Futhark, which I think Troels would definitely still categorize as a research language, and that's out of the University of Copenhagen; and we've got Dex, which is out of Google Research. And both are functional array languages. Futhark is definitely targeting accelerated compute. And I know that, once again, I can't delineate what was from the white paper and what was from the tutorial online when I went through it, but I know that CUDA [13] was mentioned and acceleration was mentioned. So what is the story at the moment for, you know, writing your Dex code and then being able to target different, you know, accelerated compute? Has that been implemented, or is that sort of an on-the-horizon roadmap goal?

00:46:23 [AP]

That has been implemented, and then the support for it has disappeared in some version of the compiler. So we have actually regressed in that regard. Yeah, I mean, we have been essentially building the language very carefully so that, you know, it remains possible to compile down to accelerators. We have worked a bunch with them, and we know that pretty much all of modern scientific computing happens on accelerators. If you look at all the supercomputers, most of the flops really come from accelerators, right? And I mean, people even have GPUs in their computers, or, you know, Colab or whatever. You just have lots of opportunities to use accelerators, and they're really good at doing math, so we might as well use them. And so as we're designing the language, we're consciously pushing it in a way which will not prevent us from compiling to accelerators. Having said this, it's actually really difficult. It's not that easy to generate very performant accelerator code, especially from a language that gives you this much flexibility. As we were building it, ultimately I think we realized that there are two paths we could take. One of them is: keep iterating on the language, keep doing more programming languages research, and try to expand what we can do with it. And the other one is: stop where we are, build a really good backend, and focus our effort there. And for some time, we tried to do the second thing, but ultimately, given the particular skill set we currently have and the actual things we want to explore, I don't think we're done with designing the language. So we have built this accelerator backend as kind of a prototype, just to make sure that we're not wrong that it can be targeted to accelerators.
We have confirmed that in fact it can, but at the same time, generating really good accelerator code in all cases is very, very difficult. So we have decided to basically exclude that as an overhead on the implementation of the new language features that we want to try out. We still carefully design the language so that at some point we can come back to the accelerator thread, but it doesn't drag us down in implementation complexity, essentially. As I said, it's a small project; you kind of have to pick your battles, right? So it's kind of intentionally limiting the scope. Futhark, for example, has a really good, I think, GPU backend. They are definitely way better these days at compiling code for GPU, whereas Dex was mediocre, I think, at compiling to GPU at one point, because it was more to prove a point that we can do it, partly to ourselves, partly to others. But at this point, we're not trying to make it a language that will necessarily be used by everybody. So we have decided to cut the scope down a little bit, just to be able to explore other things.

00:49:35 [BT]

Is that kind of the same situation as you were talking about, the difference between research and industry, if you try and tie it to a back end, that starts to limit your flexibility in doing other things. Is that the same sort of idea?

00:49:48 [AP]

It might be partly that, but it might just be simpler than this, I think. As in, it's just more code to maintain. And as you add new features, you have to keep them working through all the backends, perhaps, because otherwise I'm not sure if it's that useful to keep the other backend around. So yeah, it's essentially a hard constraint on language design, the fact that we want to be able to support something like this down the line, but it doesn't have to be an implementation constraint at this point. In fact, we have tried to find some initial users for Dex, and if we had a group of people who wanted to actually use it today, a clear problem where they would be excited and want to drive it in, then perhaps we would have invested more in that. But in fact, in the last few years, the Python tools have evolved so much that actually, they are really good, I think. If we were doing this a few years back, it might actually have been easier. But these days, it will be very difficult to rip Python from people's hands, because they actually do like it. And I do think they like it for a good reason, right? So to some degree, we have kind of accepted that... It's not like we could replace JAX with Dex right now and the users would be happy, right? So in that regard, it might just be better to keep iterating, and once we actually see a clear shot for... [sentence left incomplete]. There are definitely language enthusiasts, and they will try it, and that's really cool, but it will not be a widely adopted language in the industry. So for now, we scope it down to research, plus some potential avenues for short-term impact that we're still exploring, but that's still kind of an open question.

00:51:44 [BT]

It seems to me that you're not targeting a wider audience until you find an application, in which case that wider audience finds you, which is what strikes me as happening with PyTorch.

00:51:58 [AP]

Yeah, to some degree, I think that's true. The motivation for building this language was, to a large degree, an idealized language for ourselves. I think there is a fair number of people who resonate with that. In fact, I know that there are people who implement their prototypes in Dex just for the sake of having better clarity in their code, and the type system assisting them in the design of their experiments. But ultimately, they then just re-implement them in JAX, to be able to take advantage of cutting-edge compilers that are staffed by a large number of people and can deliver essentially peak performance on the accelerators available today. It's sadly perhaps not the best workflow, but it would take a lot of work for us, I think, to match all of those established compilers and backends in a reasonable amount of time. Especially since those backends are still being developed and still being improved. So it's like a moving goalpost we would have to catch up to.

00:53:00 [ML]

These backends, they don't support the kind of loops that Dex does, right? So you can't just compile Dex to the backend either.

00:53:08 [AP]

Well, actually, we did have an experiment of compiling Dex through XLA [14] (so the compiler that is sort of backing JAX). And you can kind of do it. I mean, essentially what you do is you can take Dex and perform like extreme loop fission, where you look at all the loops and you break it up into like perfect loop nests that only have like a single primitive, (like I don't know, a single add or a single multiply or something like that inside). And then it looks like just array code, right? So there are code transforms you can do like this, but it's kind of funny because by doing this, you're kind of erasing a lot of information you had about, for example, which of those array ops could have been fused to achieve better performance, which the backend will then have to rediscover perhaps. So it's a bit funny. And also, I don't think that Dex is a lot more dynamic than ... [sentence left incomplete]. Not all Dex programs would lower this way, I think. But there is a large subset of Dex that we could lower this way. So it is definitely a possibility. And in fact, I even talked to Troels at some point about using Futhark as like a sort of language we could lower to.
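A tiny Python illustration of the loop fission being described (a toy example, not actual Dex or XLA output): a fused loop with two primitives in its body is split into perfect loop nests with one primitive each, which then look like plain array ops, at the cost of hiding the fusion opportunity from the backend.

```python
a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]

# Fused, Dex-style form: one loop, two primitives (multiply, add).
fused = [a[i] * b[i] + c[i] for i in range(3)]

# After "extreme loop fission": each loop contains a single primitive,
# i.e. it reads like a sequence of whole-array operations.
t = [a[i] * b[i] for i in range(3)]           # just a multiply
fissioned = [t[i] + c[i] for i in range(3)]   # just an add

print(fused)      # [11, 18, 27]
print(fissioned)  # [11, 18, 27]
# Same result -- but the backend must now rediscover that the two
# loops could have been fused into one to avoid materializing t.
```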

00:54:14 [ML]

Yeah, I was wondering about that too.

00:54:15 [AP]

Yeah, I mean, it will be interesting, I think. It's just that, you know, time constraints.

00:54:20 [ML]

Yeah, and I mean, anytime you're working between languages, there are all these little mismatches that you have to work around and that adds up.

00:54:26 [AP]

I mean, Dex and Futhark are not so far apart, luckily.

00:54:31 [ML]

Yeah, well, that's why they'd be little mismatches [chuckles]

00:54:33 [AP]

But yeah, they definitely would be little mismatches.

00:54:36 [CH]

And I was going to mention earlier too, you made a small comment about people liking Python. Anecdotally, I was just at a conference for the last three days, and one of the speakers brought her nine-year-old son, who apparently is a programming whiz and installed a dual-boot Windows and Ubuntu setup on his own laptop. And I was just like: "what the heck", you know? [At] nine, I don't think I'd even touched a computer; I guess different ages. But I asked him what his favorite programming language was, and he said Python without missing a beat. There are lots of discussions inside NVIDIA about the popularity of Python and meeting people where they're at. And yes, it may not be the easiest language to accelerate, which is NVIDIA's concern, obviously, with their GPUs. But it's a lot easier to ... like, we have whole teams of folks that are basically building accelerated versions of Python libraries.

00:55:26 [AP]

Yep

00:55:27 [CH]

Because it's easier for us to build that and then give it to them than it is for us to build something else and say: "Hey, we built this new shiny thing! Come check it out." And there are going to be people that are interested in that, and they're going to get more of a performance win than from accelerating the Python code. But the majority of users (and there's millions and millions of them) just want to stay in Python-land. For a long time I lived in this ideal utopia of: "if we build it, they will come, and it will be better". But a ton of folks are perfectly happy where they are in Python. They just want things to go faster; they don't want to switch, which I think is important to acknowledge. So yeah, you mentioning that just made me think: "yes, even though I wish [chuckles] the world we lived in wasn't dominated by Python, it is the world we live in". So ...

00:56:18 [AP]

I mean, you know, we're social creatures, right? As in, the choice of a programming language does not boil down just to technical factors. It also boils down to how easy it will be for other people to reuse your code, right? A lot of projects are built by larger teams, and ultimately, you have to make sure everybody's on the same page. And so, I think there's a bunch of factors that actually make Python a fairly good choice, and they're not always technical factors. But I have to say, I've been very impressed by how far people have pushed Python and continue to push Python. The latest development perhaps being something like Triton, [15] which is essentially a Python-embedded DSL. People keep building those DSLs, and in fact people keep finding them useful to some degree. So, as far as we can push this, I think it's kind of beautiful that we can do all this with a language that hasn't even been designed for that, right? At the same time, it is kind of frustrating if you're trying to build a new language that perhaps has some technical edge. But ultimately, people will not switch, because the technical benefits they'll get from using a different language are oftentimes just unlikely to outweigh those community effects and the extra friction generated during the transition period.

00:57:40 [CH]

Yeah. We'll leave a link, because if you haven't heard of it, Triton can be ambiguous on the internet, but I assume you're referring to the OpenAI ...

00:57:48 [AP]

If you Google "OpenAI Triton", you will find the right one. But don't Google "NVIDIA Triton". Even though it compiles for NVIDIA GPUs.

00:58:01 [CH]

[chuckles] Yeah, naming is hard. But, if you don't mind sharing, you said you sort of have a theory for why Python ... it's not all necessarily technical reasons for why Python's done so well. Do you mind giving us your thoughts on that, if you want to share?

00:58:13 [AP]

I mean, I find Python to just be a really convenient scripting language. Out of all the languages where I was trying to put together quick scripts to do a thing X, Python is actually, I think, one of the more convenient ones. And the thing X varies significantly across domains; it's not just like, "I want to compute this matrix multiply. I want to compute that particular mathematical function." There are probably better domain-specific languages for that. But if you want to: "here, download some web pages and process them". Here: "process some text data set". Here: "do a bit of array compute". Here: "do some image processing". Here: "just do some shell scripting", almost. There are only a few languages where you can do all that in a single place, where you have libraries that will help you with all of that, and that also have some friendly syntax. Silly things like list comprehensions, right? [They] already can go a really long way. So I think it's just surprisingly versatile. And then the insane community effects are probably a big factor here.

00:59:24 [CH]

There was a quote that I heard once that was: "Python is not the best language for anything, but it's the second best language for everything".

00:59:34 [AP]

Yeah, exactly.

00:59:35 [CH]

Which kind of sums up what you just said there. And it is ... I don't know how many times I've been trying to write some bash script with the limited bash that I knew at one point, and I'll be 30 minutes into it, and then I'm like: "why am I trying to do this in bash?" And then three minutes later, I went to Stack Overflow, basically found the exact thing I needed, changed a couple of things, and had my script working. So yeah, Python is amazing for that kind of stuff, for sure.

00:59:59 [AP]

Yeah. And you know, ultimately I think it might be a similar situation as with PyTorch, right? You start out by writing those little throwaway scripts, but many of those throwaway scripts will become research projects, and many of those research projects will gradually keep growing, and at some point they will become production projects. And at that point you have a lot of code that's in Python, so you might as well ... [sentence left incomplete]. Actually, this is also a big part in JAX. I guess we design a lot of APIs. I think Matt Johnson (who's one of the creators as well) really likes to keep things demo-able: to be able to have a Colab or IPython notebook where you can showcase something cool in a few lines that is not something extremely specialized. Not like: "Oh, I just made an API for this one exact model and I can just run through and it classifies images," right? But at the base level: "I have transformed this function and parallelized it over eight devices and also computed gradients and also vectorized it over a bunch of examples", something like that. And it still takes five lines of code. I think this demo-ability, just being able to write fairly short snippets that already do a lot, is very important. And Python definitely has that. And once you have the seed written down, then you end up building the whole infrastructure around your program. But being able to plant those seeds is, I think, an important part of starting new projects, perhaps. And Python actually is really good at that.

01:01:32 [CH]

One other small anecdote: it's hard to predict trends, but there's a website called languish.com [17] (or it might have a different URL). It ranks every language/file format on GitHub based on a few different metrics. And Python is, I think, number one by a long shot, but it also had a massive 2% jump, 'cause it sort of keeps track of market share, so it's a percentage. And a ton of languages went down over the last three months, and Python jumped up 2%, which is a pretty big jump. And you know, it's hard to tell if that'll just reverse over the next three months, or if you can read into that, but in the back of my head, I was thinking: I wonder if the fact that all these LLMs are now being popularized ... [sentence left incomplete]. It's like there's this snowball effect. There's so much Python code out there, and there are so many people that know Python, and, uh, it might be the easiest for LLMs to consume and spit out. So at the point in time where LLMs became popular, Python was the lingua franca of the world. Fast forward two years from now: Python is going to be number one by, like, you know, 20%, not because of any other reason [than] that it was the number one and easiest language to get started with when the LLM inflection point happened. It's a small theory. We'll see, maybe it's completely wrong, but, uh, yeah.

01:02:53 [AP]

No, I mean, it's, it's sensible, right? I mean, in the short term, I could believe that, you know, LLMs can have a "rich get richer" effect on programming languages, but ultimately if you take LLMs to their limit, then does the programming language matter? If you had like LLMs that are really good at generating code, we can just stop the language wars; we will just be programming in natural language, right? So, uh [chuckles] ...

01:03:18 [CH]

The new rung in the abstraction ladder is not going to be code. It's going to be: "computer, please do X" [chuckles]. And it's like: "no, no, you misunderstood; please do X prime". And thank you! Okay, we're done. Everyone can go home now. [chuckles]

01:03:28 [AP]

Uh, yeah, I mean, we're still not exactly there, but maybe in 50 years. There was a fun discussion in our team [about] how funny it is that naming conventions from Fortran (which had some character limits or whatever) are still influencing the code we write today. And maybe, similarly to how we look back and kind of laugh that people were using six-character function names, people will be laughing at how we used to actually program in anything other than just the language we speak, right?

01:04:02 [CH]

It's going to be: what are we going to do, you know? [laughs]

01:04:06 [CH]

All right, so I think we are a tiny bit over the hour mark. But I'll sort of pause before we give the plug (if people want to get involved or if they want to download things). Are there any final questions, maybe, from the other panelists, before we start to officially wind things down?

01:04:25 [BT]

I just think it's fascinating to talk to you inside the process of this development, as it's happening, as it's growing. It's not a perspective that you get from outside very much at all. I think to some extent in the array languages, we get that because we're small communities and people know each other within the community, but to see it happen on a bigger scale and how that works on a bigger scale, I think it's really important.

01:04:49 [AP]

Yeah. I mean, I think it's really exciting, because ultimately compute is underlying a lot of science these days, right? And so if you are able to build those tools, at least you can feel like you're contributing a little bit to so many different fields, where people end up using those tools in very cool ways that you had no idea about.

01:05:13 [CH]

So, yeah, absolutely awesome. I guess, yeah, the final question we should ask is (number one) if people want to go download this and take it for a spin on their local computer and (number two), if there's someone listening out there that has some application that requires some direction that currently maybe Dex isn't focused on and they want to get involved in building that out or contributing, where should folks go to do both of those things?

01:05:39 [AP]

Uh, so also one like very important thing to me is that pretty much all of the work we do is actually on GitHub. [18] So if you want to, I don't know, try out JAX, there's a google/jax repository. If you want to try out Dex, confusingly, it's google-research/dex-lang. So as it's a bit smaller project, it also gets a less straightforward name.

01:06:03 [ML]

Smaller project, larger URL.

01:06:05 [AP]

Yeah, exactly. There's an inverse relationship there. So yeah, there's a readme that walks you through. We don't have pre-built packages. Dex is kind of scrappy, and there's probably a high bar to actually build it. You do need to have the Haskell ecosystem. But as we said, we're not expecting a lot of people to rush in. But if you're a language enthusiast and want to try it out, do go for it. The language is actually undergoing a bit of a syntax redesign right now, so that may be creating some confusion at times. But yeah, it could be interesting. And if you have any interesting ideas for what's difficult, I think, in languages like Python, or what would be a really good application for something like Dex, then I would be very interested to hear more about that. I should also say, for Dex there is a longer ICFP paper (from two years ago, I think, at this point), which actually explains more about the design of the language. So that's also one good place, because there is a workshop paper, I think, which is like two or three pages long, and that is a very short rundown of what it means. And then there's the longer, 20-something-page paper that we have written that describes it in more detail.

01:07:29 [CH]

Awesome. Well, luckily, the length of the URLs does not matter in the show notes [everyone chuckles] so if folks don't want to go type that stuff in, they can just go directly to the show notes and we'll make sure to include all of that: the jax repo, the dex-lang repo, and both of the papers (the workshop paper and the ICFP paper). I just went to the GitHub and I can't tell if there are two different ways to build it, but one of the ways is with Nix [19] (the Haskell folks out there will all know what Nix is), which is definitely pretty cool. And I will be attempting to build this on my own machine to see how close I can get the Dex code looking to either q or Futhark and figuring out what the differences are. But this conversation has been super awesome. And I'll echo what Bob said: super cool to talk to you in the midst of building this, having come from PyTorch and then gotten more interested in the programming language side of things. And it sounds like you're kind of doing your dream job at Google Research, which is awesome. Maybe in the future, if Dex continues to grow, we'll be able to have you back on, whether that's to talk about Dex or maybe the next thing (as you said, researchers famously go from working on one project to another, whether it's every couple of years or every couple of months). Hopefully Dex will continue to grow and we'll be able to have you on and chat in the future about what's changed and how people are using it, hopefully not just in academia but maybe in industry at some point in time.

01:08:58 [AP]

Yeah, I would be happy to. There's also one more project. Actually in the past few months, I have not been working on Dex so much. There's one other project that right now is actually internal, but will become public very soon.

01:09:14 [CH]

Ooooh!

01:09:15 [AP]

So I can't talk about it yet, but if you want to then you can ... [sentence left incomplete]. I guess at some point maybe I will post on Twitter or something. So if you're interested.

01:09:22 [CH]

We can, yeah, have a conversation about that. And depending on when you're listening to this (I will keep my ear to the ground for when this comes out), we will link it in the show notes. [20] If you're listening to this on the Saturday or Sunday (the weekend) it drops, the link probably won't be there, but it will be whenever this comes out. Probably we'll look for it on Twitter. Are there plans to maybe give a talk or present this at some kind of academic conference at some point, or is it just being released online for the moment?

01:09:48 [AP]

This one is actually probably less academic and sort of more applied. So I'm not sure. I mean, we might write a paper about this at some point, but yeah, definitely, also happy to talk about it sort of in less academic terms.

01:10:03 [CH]

And yes, tell your friends folks, because you're hearing it ... [sentence left incomplete]. You're not actually hearing it here first, but you're hearing that you're going to hear about it on Twitter first [others laugh], which is kind of breaking news, you know. It's not actual news, but we're getting a sneak peek, you know, at what will be released. So tell your friends, ArrayCast is the podcast to listen to. And I guess we'll throw it to Bob for folks that want to reach out to us and ask questions, give comments if they have them.

01:10:33 [BT]

If you have comments, if you have questions, the way to get in touch with us is contact@arraycast.com. [21] And as mentioned many times during this show, show notes are available. So if you want to get to the links that we've discussed, go to the show notes on the website; or, depending on your podcatcher of choice, there'll probably be an option for show notes there and you can just click to open up those links. So there will be easy ways to get to it, but if you can't find something: contact@arraycast.com. We'll do what we can to get the information for you. And certainly in a future ArrayCast, as Adam releases this new preview of coming attractions, we will let people know about that as well.

01:11:17 [CH]

Awesome. Once again, we'll say thank you so much for taking the time, Adam. And yeah, we'll be keeping our ear to the ground to see what you have to release on the socials, on the Twitters, if it still exists. I'm sure it'll be some platform, depending on how the social media wars play out.

01:11:33 [AP]

Send out emails. [chuckles]

01:11:35 [CH]

Yeah, yeah [laughs]. We'll be going backwards in time just to emails. Yeah.

01:11:37 [AP]

Yeah. Thanks for having me.

01:11:38 [CH]

All right. I guess with that, we will say Happy Array Programming!

01:11:41 [everyone else]

Happy Array Programming!

[music]