Transcript

Many thanks to Adám Brudzewsky for producing this transcription
[ ] reference numbers refer to Show Notes

00:00:00 [JK]

Programming in APL requires a certain kind of abstract geometric intuition, and some people don't like that. So, David Shaw is a super, super smart guy, but he said, "it doesn't appeal to me; I'm more of an algebra, you know, one potato, two potato, you know, looping kind of guy."

 

00:00:27 [CH]

Welcome to another episode of Array Cast. My name is Conor. I'll be your host, and today we have a super exciting guest that I can't wait to start interviewing and talking to. But before we do that, we're going to do brief introductions: I'll throw it first to Stephen, then we'll go to Bob, who also has an announcement, and then we will introduce our guest.

 

00:00:44 [ST]

I'm Stephen Taylor. I code in APL and Q and these days I'm the KX librarian.

 

00:00:51 [BT]

And I'm Bob Therriault. I'm a J enthusiast. I'm working right now with the J Wiki group; we're trying to revamp the J Wiki, and it's been a lot of fun and it's exciting. And if anybody is interested in starting with something from the ground up, that's a good place to start.

 

00:01:06 [BT]

And also on the J front, for announcements: we're working on our new beta (Henry Rich is working on that), and it involves threads and the whole concept of threading in J. If you're interested in that kind of thing, it's really a good idea to get on the J Forums (we'll include links for that [1]) and start talking to people if you have experience with that, because they're starting to work through all the different paradigms you can use for implementing threading and blocking, and all these different devices, figuring out how that might work as a primitive for J. So all kinds of interesting stuff going on, and that's it in the J world.

 

00:01:43 [CH]

Awesome. So yeah, as always, the links will be in the show notes that you can access. And, as mentioned before, my name is Conor and I'm not an APL/J/K developer; I code in C++ professionally, but I am a huge fan, as listeners of this podcast will know, of both array languages and combinators.

 

00:02:03 [CH]

And that brings us to introducing our guest today. So, if you've been listening regularly, you will know that two weeks ago we interviewed Stevan Apter [2]. And if you couldn't tell, I absolutely loved that conversation. Basically, after recording that I went and watched, I think, five or six different talks. And one of those talks was (let me get the exact title of it; it will be in the show notes)… the name of the YouTube video is "Joel Kaplan: An Idiosyncratic Look at the History of Array Processing Languages" [3], which I believe was given roughly five or six years ago, so that'll be back around 2016. And I went and watched this talk because Stevan Apter recommended we bring on Joel, and he linked us this talk, which I immediately went and watched. And this talk was absolutely fantastic. It's a perspective on the history of array languages from Joel's point of view.

 

00:02:57 [CH]

So, if you don't know, Joel Kaplan is our guest today. A brief biography of him: he worked, I believe, for over 10 years at Morgan Stanley [4], where he is known for many things, one of them being the hiring of Arthur Whitney [5], and hopefully we'll get to hear the story of interviewing him and any fun stories there. He then went on to work at UBS, so also in the finance industry, and then ended his career as the President of 1010data [6], which is the company that Stevan was working for as well.

 

00:03:27 [CH]

So, I'll stop there. We'll throw it to you, Joel. Feel free to take us back to whatever point in your career or life that you want, and tell us the story of how you ended up where you are today, having had a very long and storied career with array languages.

 

00:03:40 [JK]

Well, not too long ago.

 

00:03:41 [JK]

Actually, you know, since I've been involved with, you know, array languages for a long time, somebody pointed out that my name, Joel Kaplan, has all the, you know, major vector languages in it: you know, APL, A, K, you know, so… But it's interesting, because I collided with APL way back, and it had a profound impact on my career. I had transferred to the University of Miami from the University of Florida. At the University of Florida, I had taken one programming course. I was in mechanical engineering, and it was a FORTRAN course on a VM machine, CP-67. That's all that I had had as an experience, and I really liked the programming.

 

00:04:34 [JK]

At the University of Miami, I had a colleague, a fraternity brother, a friend of mine, who came up with this idea called Doggy Match: a computerized mating service for dogs. Now, you have to understand, this is happening in 1968. And we were using the PL/I account that we had as students, and then we ran into this very, very colorful guy. A guy named Brooks White. And I really got to tell you about this, 'cause this guy was really cool.

 

00:05:08 [JK]

So, Brooks White eventually wrote a processor so that APL could talk to other systems. And you know this idea of predicting elections? The first election ever predicted was predicted at CBS using APL. I don't know if you know that. And it was Brooks White who did that. He was also kind of like a radical, you know, very rogue… He didn't look crazy; he'd, you know, wear, you know, button-down shirts and everything like that, but he was really quite a weird guy, and he started, with another guy, an APL time-sharing service on an IBM/360 Model 30. They were looking for customers, so me and Bruce Cousins, my colleague, said to them, you know, "look, we're starting this business called Doggy Match, and you've got a time-sharing service here, so we can go to dog shows. And if you want, you can participate in any profits that we make, because we don't have the money to pay you for the time-sharing service." And so they went along with it.

 

00:06:27 [JK]

And so, this whole thing started with Doggy Match. I mean, it was quite popular for a time; my partner ended up on "What's My Line?", and it's amazing how quickly they guessed, you know, what he was doing… And we were interviewed all over the world, 'cause it's kind of funny. I mean, I could read it to you, but I'm not gonna waste time, but, you know, there's this article by a guy at the Miami News who interviewed us for the newspaper.

 

00:06:55 [JK]

Alright, so that's how I got involved with APL. The good thing was I had a Docutel acoustic coupler. It was a huge thing. It looked like the baggage people carry today, with little wheels and things like that. So, there was this huge machine that had an acoustic coupler, and you could, you know, dial in, and that's how you did the time sharing. And I had APL to do programming. I had a 30K workspace.

 

00:07:28 [JK]

And I could do all my assignments and things like that. I mean, at that point I never looked back. I had lost interest in FORTRAN and all that other kind of stuff. So that's how I had connected with APL, and it helped me through all, you know, of my career. Then I ended up at IBM, and I became a systems engineer, and I don't know if you know… Look, there is so much to talk about, and I just don't want to make this longer…

 

00:07:54 [CH]

Yeah, feel free to take… I mean, sometimes these stories… Like I said on the last interview, this is better than a Marvel movie (and I just saw the new Marvel movie; it was great!), but listening to these stories about how you stumbled onto APL because you were building a doggy mating business, and the person that you reached out to for computer time just happened to be using APL, like, that's… No one these days has stories like that.

 

00:08:20 [JK]  

Yeah, I guess.

 

00:08:22 [CH]

Like everyone knows about Python from an early age, there's no like, "oh, I wanted to start a business, and this was the way to do it."

 

00:08:26 [JK]  

Let's see, so let's talk a little bit about IBM, because a lot of people don't realize how important APL was to IBM. Today, you go to IBM, you barely even hear about it. In the late '60s and through the '70s, IBM had hundreds of mainframes dedicated to running APL, and the reason was anything that had to do with financial planning, or figuring out how you're going to lay out a, you know, a run of chips or whatever it was, required analytics and things like that. And you know, in APL… Yeah, actually, let me back off:

 

00:09:08 [JK]

There are a lot of things that people think are really, really new hot stuff, and it's really old crap. I mean, think about it: time sharing is like the Internet. I mean, in APL you even had a thing called )MSG. You could talk to people; you could send them messages. You didn't need to have operators and all kinds of people, you know, screwing around with your life; you could just do the work yourself. And you had access to the computer, and it was almost like a personal computer. So, by definition, smart people at IBM, people who really wanted to get their jobs done, for whom computing was only a means to an end, started using APL, because the only other stuff was so bureaucratic, with so many steps you had to go through and so on, that, you know, they didn't want to do it. So you had unbelievable…

 

00:10:00 [JK]

IBM's APL conferences (internal ones) were bigger than the APL conferences outside of IBM, which obviously were highly attended by, you know, IBM professionals. And there were products that we developed, for example, APL-DI. Has anybody heard about APL-DI? [7] It's called APL Data Interface. It's really basically what 1010data is doing: you had a COBOL program that loaded data and sort of inverted it, you know, tipped it on its side. You had records, and it flipped them on their side. And then you could do analysis. You could do select. You could do sort. You could do tabulations, et cetera, et cetera.
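[Ed.: the "tipped on its side" inversion Joel describes can be sketched in Python; the field names and data below are invented for illustration, and APL-DI itself was a COBOL loader feeding APL, not this code:]

```python
# A sketch of the column inversion behind APL-DI (and later 1010data):
# row-oriented records are flipped into one array per column, so selects,
# sorts, and tabulations become whole-array operations.

records = [
    {"breed": "poodle", "weight": 20},
    {"breed": "collie", "weight": 25},
    {"breed": "poodle", "weight": 22},
]

# Invert: one list (column) per field name.
columns = {key: [r[key] for r in records] for key in records[0]}
print(columns["breed"])   # ['poodle', 'collie', 'poodle']

# A "select" is now a boolean mask applied across columns.
mask = [b == "poodle" for b in columns["breed"]]
weights = [w for w, m in zip(columns["weight"], mask) if m]
print(weights)            # [20, 22]
```
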

 

00:10:37 [JK] 

So, people used this everywhere to do analysis. And people also forget all the… OK: what the systems engineers did is they had to configure hardware that was quite complicated. Very expensive hardware that, at that time, was mostly being rented out. And you had to configure it, and all the configurators that these systems engineers at, you know, the branch offices used were all written in APL, because they could be updated quickly, and they were easy to work with. So, like I'm saying, you can't imagine the importance of APL in those years at IBM.

 

00:11:24 [CH]

How long did that last? If I'm guessing, this was late '60s into the '70s, like…?

 

00:11:30 [JK]

Yes, maybe into the early '80s.

 

00:11:33 [BT]

So, Joel, what caused them to drift away from it? It seems to me that would be pretty central to what they were doing. You've got all these engineers working with these, sort of, fundamental-level applications in APL. Why do you think it suddenly drifted away? Was IBM concerned about the engineers having too much independence? Or what do you think it was?

 

00:11:57 [JK]

I don't think APL appeals to programmers. I mean, to some it does. I mean, you know, you have Brooks, [8] you know, who used APL to define, you know, the /360 architecture and things like that. But by and large, programmers aren't in the mode of, you know, doing applications or understanding things. They're into their own stuff; they want to invent all the things themselves. There are a lot of people who don't like it. It's also a cognitive thing. You know who David Shaw [9] is? Have you ever heard of D.E. Shaw?

 

00:12:37 [CH] 

Yeah, the hedge fund guy.

 

00:12:39 [JK]

Yeah, yeah. Well, David Shaw worked in my group. And I remember that I came to him. And by the way, he's a big guy in… You know, he made many advances in how to do parallel processing; he was at Columbia Prescott [???], but he also had a company that built compilers to try to, you know, produce code that ran in parallel. And I remember I was talking to him. I said to him, "you know, what do you think of APL?" You know, because he was in our shop there; this is the APTG (Analytic Primary Trading Group), which, you know, traded like 2% of the New York Stock Exchange in the '80s.

 

00:13:21 [CH]

Was this at Morgan Stanley?

 

00:13:23 [JK]

Yeah, Morgan Stanley. So, you know, he came into our group, and he said to me, "you know, I'm not a geometric kind of guy. I don't view the world geometrically." And of course, arrays and things like that all very much, you know, appeal to geometry. I mean, if you look at a lot of the transformations, a lot of the operators in APL, they're really transformations; in other words, you rotate. You can imagine in your mind… you can actually imagine implementing all the APL operators, the primitives, and you can actually make a physical implementation, right?

 

00:14:07 [JK]

I mean, you could imagine, you know, indexing or rotating or transposing or catenating… All those kinds of things are very, uh, you know, "structural", if you will. So, it's very geometric, and programming in APL requires a certain kind of abstract geometric intuition, and some people don't like that. So, David Shaw is a super, super smart guy, but he said, "it doesn't appeal to me; I'm more of an algebra, you know, one potato, two potato, you know, looping kind of guy." And that's just the way it is, you know. And let me tell you, he's a very bright guy. He certainly made a lot of money after he left, you know, Morgan Stanley, so I think a lot of it is also cognitive.
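[Ed.: for readers unfamiliar with these primitives, here is a rough Python sketch of the "structural" operations Joel names; in APL each is a single character (⌽, ⍉, and , respectively), and these function names are ours, not APL's:]

```python
# Python sketches of three of the geometric/structural APL primitives
# Joel mentions: rotate, transpose, and catenate.

def rotate(v, n):
    """Cyclic left rotation of a vector, like APL's n⌽v."""
    n %= len(v)
    return v[n:] + v[:n]

def transpose(m):
    """Transpose of a matrix, like APL's ⍉m."""
    return [list(row) for row in zip(*m)]

def catenate(a, b):
    """Joining two arrays end to end, like APL's a,b."""
    return a + b

print(rotate([1, 2, 3, 4], 1))      # [2, 3, 4, 1]
print(transpose([[1, 2], [3, 4]]))  # [[1, 3], [2, 4]]
print(catenate([1, 2], [3, 4]))     # [1, 2, 3, 4]
```

Each of these moves data around without doing any arithmetic, which is what makes them easy to picture physically.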

 

00:14:58 [CH]

So, first of all, I think that's an amazing quote, "the one potato, two potato". "For-looping guy" is fantastic too. As soon as I said "the hedge fund guy", I realized I hadn't done him justice, 'cause, I mean, I've read a bunch of books so I know who D.E. Shaw is, but for those that don't: "a lot of money" is, like… I could just Google his net worth; it's like 8 billion. So D.E. Shaw is, like, one of the most successful hedge funds of all time.

 

00:15:21 [CH]

Famously, Bezos worked there at one point, and decided to quit when he was 30, and then went and made a small company to sell books online, and we all know how that went. So yes. And so, David Shaw worked in your group at one point, basically, and then went on to do D.E. Shaw later?

 

00:15:38 [JK]

Yes.

 

00:15:39 [CH]

Wow.

 

00:15:40 [JK]

Also, by the way, when I was a principal… you know, eventually I became a managing director at Morgan Stanley. But Bezos [10] was also at Morgan Stanley at the same time as I was.

 

00:15:54 [CH]

Bezos worked at Morgan Stanley as well?

 

00:15:56 [JK]

Yeah.

 

00:15:57 [CH]

Wow, I had no idea. And so then, I guess, did they work together, or how did Bezos end up at D.E. Shaw?

 

00:16:03 [JK]

I don't… I don't know. I don't think so.

 

00:16:07 [CH]

It's just a coincidence that they were both at the two different places. Wow, OK, so that's crazy. Go ahead, Stephen!

 

00:16:14 [ST]

Maybe the same question, or at any rate the same answer, as the question before: for all its importance at IBM, providing personal computing before personal computers existed, as far as I know APL was never part of the solutions that IBM salesmen took to propose to their customers. The solutions were of course mainframes, but the software in those solutions was PL/I and their databases. APL never seemed to be part of that. Was that just the same cognitive thing you're describing, or something else? Did IBM think they knew something about APL we didn't?

 

00:17:00 [JK]

Well, they did try to sell it, you know, and they were fairly successful at it, along with other companies like DEC [11]. It was basically APL as, again, a time-sharing thing. It was almost ahead of its time, right? It requires, basically… like the big invention… I think that APL is one of the first languages, if not the first, that was implemented as an interpreter, and I think that's of tremendous value. Space over six, you know what I'm talking about? Space over three, you know what I'm talking about?

 

00:17:32 [JK]

You key in something. Imagine that you're on a, you know, on a terminal, and in those days it was all just with paper. You say 1 + 1, and then the computer shifts over to the right [correction: left — ed.], and it says 2. You know what I'm saying? This interactive idea was, you know, very powerful. It's kind of like what the web is really like on some level; there's an interactivity kind of thing.

 

00:17:55 [JK]

So, most of the systems that IBM was selling were, like, you know, big operating-system kind of stuff, and applications for, you know, accounting and bookkeeping and all that kind of stuff. But they did try to do it, and I'll also, you know, point out: the first personal computer was the IBM 5100 [12]. And that one had APL on it. And you had a switch; you could go to BASIC or APL. And it had just a tape; it didn't even have a disk.

 

00:18:25 [JK]

I loved that machine. I got one. Somebody gave it to me in the branch office, the banking branch office, because they wanted me to work on my own time to build an application that would keep track of all the, you know, witnesses and things like that at the IBM antitrust thing. And that's how I got involved with the 5100. So, they did try to sell it.

 

00:18:51 [JK]

I know, this may be a good point: Bill Gates, [13] right? This is going to blow your mind if you don't know this. Bill Gates goes off and basically, he starts by building a BASIC interpreter. But before the BASIC interpreter, he actually built an APL interpreter [14]. 'Cause he thought it was great. He loved APL, I mean, you know? So, you gotta give Gates credit for that. But what happened was it didn't get any traction. And being a good businessman, he dropped it. Then IBM came along, and they gave him the opportunity to build them an operating system and so forth.

 

00:19:35 [JK]

I had a conversation with Bill Gates. He came to visit Morgan Stanley. And this is how, you know, this is how I found out. OK, why did he come to Morgan Stanley? Because Morgan Stanley brought them public. And, you know, he's always looking for big customers, to try to see where he can sell his stuff, and to kind of understand places that use technology. So, he came, you know, sort of on a visit, to Morgan Stanley.

 

00:20:08 [JK]

And it was during the festival of Sukkot, so I wasn't able to go to a meeting where he spent a lot of time with the technology guys. But then he had one more meeting, and I showed up to that one, and there weren't too many guys there. And I really gave him a hard time. I said, "why should I get stuck with a system that's controlled by you, and, you know, requires your software, isn't open or anything like that? I mean, I'd rather…" This is, by the way, in the early '80s, and I said to him, "why should I do that? Why should I commit myself to you, you know?" And he gave me some great answers, like, you know, his stuff is always hardware compatible, blah blah blah (not anymore, but it used to be then), and so we get to talking about what we do, you know: how do you do things? I said, "well, you know, we wrote our own APL here, you know, called A, and, you know, we use it for trading and for analysis because it gives you a lot of capability." He says, "yeah, but wait, uh… APL? But that's too slow!" And I said, "not ours, right? Not our interpreter." That's the one Arthur, you know, wrote for me. For, you know, for my group. So, all of a sudden he got really attentive, you know. And he said, "so you're telling me that you use APL in production form and everything like that?" I said, "yeah, yeah."

 

00:21:37 [JK]

And so he pointed to one of his guys, you know, some technical guy that was with him (I think it was a combination, you know, bodyguard/technical guy, 'cause he had a little thing in the ear), and he said, "you have to follow up. I want to know what these guys are doing." I mean, he never did it. I really didn't care, 'cause I was not in the business of trying to sell anything. I mean, we were just trading money for Morgan Stanley at that time. But that's when I started looking into it, and Bill Gates actually tried to build an APL interpreter. Imagine if that had, you know, gotten traction.

 

00:22:07 [CH]

How did you find out that he had actually written one? 'Cause I've heard rumors of these stories, but I've never heard that he'd actually written an interpreter.

 

00:22:13 [JK]

Yeah, I did some research; I looked it up on the web, and there's some letter, I wish I had it, where he's talking about it in pretty glowing terms. So anyway, I mean, look, it's gotta be a small thing in his life, but nevertheless, you know, it's like you don't expect it. By the way, another thing that killed it (not killed it, but had a big impact on APL) is the fact that in the early days, if you wanted to get the APL character set, you had to have hardware. You had to do something that was hardware-related in your system, and, you know, it was a real big pain in the butt to get the APL character set. Yeah, I know, 'cause I had to do it too. But then, of course, very quickly thereafter everything became software, so, you know, you think, "what's the problem?" But, you know, it was already slowing you down.

 

00:23:01 [CH]

You know, when I first started falling down the APL rabbit hole a couple years ago, I had come across this story, after talking to Bob Bernecky, [15] that Bill Gates had come by IP Sharp Associates back in the early '80s as well; he was basically doing this tour of companies around North America trying to, you know, get ideas for the PC that he was going to build. And then I'd heard that story, and then I also stumbled across a photo of an IBM 5100, and zoomed in and saw, like, the APL/BASIC toggle switch. And ever since, I've always thought that there's, like, a parallel universe where, instead of putting BASIC on the personal computer, some APL variant was put on, and instead of everything looking like BASIC, everything looked a lot more like APL or something.

 

00:23:45 [CH]

I'm not sure if you have thoughts on… 'cause you mentioned earlier, too, that the geometric or shape thinking that is sort of required for APL, a lot of people don't like that, or that's not the way their brain works. So, do you think that there is actually a parallel universe? Or is there just too much of a barrier to entry in terms of the type of thinking that it requires to be successful with APL?

 

00:24:06 [JK]

Let me just talk about myself. For example, I basically use two languages when I program: I use C and I use APL. I mean, OK, I don't use APL anymore, for many reasons, but primarily because, you know, we committed to that kind of technology, you know, in 1010data and so on. But basically, APL for me is the high level. That's where you get a lot of leverage. But in my experience, the kind of systems that we built required that they be high-performance, high-speed kind of stuff, so performance becomes critical.

 

00:24:50 [JK]

I know so many people have taken the approach of trying to build APL compilers and things like that, but for me, if you have a good interface between C, let's say, or any language actually, then whenever I get into a situation where either I'm not smart enough or it just is a one potato, two potato problem, then, for the very small amount of code that's required (which may actually dominate from a CPU-cycles point of view), I just dive in and write a small piece of code in C, and I interact, you know, and I just call it as a function from…

 

00:25:24 [JK]

So, I mean, I'm capable of, you know, thinking in both ways. Some people can do one, and some people can do both. And some people can do none. But, you know, I think you have all different kinds. For example, Arthur Whitney… Clearly, I mean, you know, he spends most of his time implementing stuff in, you know, in C. So, he's probably one of the greatest programmers, if not the best programmer, that I've ever met.

 

00:25:56 [JK]

And yet he's a great, you know, array guy. I mean, you know… I gotta tell you about how we hired him. So, he comes from, you know, from IP Sharp; we interview him. I'm interviewing the guy, and we have a problem we're looking at. We have a problem of trying to determine nearest neighbors. [16] You know what that is? It's sort of like an abstract measure to know if something is like something else. And we're struggling to try to do this in APL. In A at that time… no, no, that was APL, 'cause he hadn't even been hired; he only did that afterwards. And we posed the problem, and we had been struggling, some pretty decent APL programmers struggling with this thing. I kid you not: his eyes went kind of like to the left, like looking out. And in like 12 seconds, he went up to the board, and he wrote six APL characters. And it was a solution to our nearest neighbors problem. So, I looked at the guy and said, "we're going to hire this guy." So, you…
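[Ed.: Arthur's six APL characters are not recorded in the transcript. As a rough illustration of the problem itself, here is a brute-force nearest-neighbor search in Python, using squared Euclidean distance and made-up points:]

```python
# Illustrative only: brute-force nearest-neighbor search. Given a set of
# points and a query point, return the point closest to the query.

def nearest_neighbor(points, query):
    """Return the point in `points` closest to `query` (Euclidean)."""
    def dist2(p):
        # Squared distance; avoids a sqrt, which doesn't change the min.
        return sum((a - b) ** 2 for a, b in zip(p, query))
    return min(points, key=dist2)

pts = [(0, 0), (3, 4), (1, 1)]
print(nearest_neighbor(pts, (2, 2)))  # (1, 1)
```

In an array language, the distance computation and the minimum-reduction each collapse to a primitive or two, which is what makes a solution this short plausible.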

 

00:27:10 [CH]

You didn't send him for five more interviews or…?

 

00:27:12 [JK]

No, no, no. I mean, he talked to a bunch of other guys, but everyone had the same impression. So anyway, I think you have different minds. But again, there are people that have a difficult time. I have… my biggest problem is recursion. I know how to do it, but it doesn't come naturally to me, you know? I mean, programs that are purely recursive are, you know, hard for me. So, you know, you have difficulties. So, I don't know, I mean.

 

00:27:37 [CH]

Stephen?

 

00:27:38 [ST]

Joel, I've got a question I've been chewing on for some time. I'm wondering if you can help get me some insight into it. In 1978, when I was working for IP Sharp Associates, I was over in Toronto, and arranged to go cycling with Bob Bernecky, because I knew he rode a bicycle. I was a young sprog, maybe in my 20s, and I thought it was a bit of a cheek, me asking Bob to take me cycling in Toronto, but he agreed to do it.

 

00:28:09 [ST]

However, on the date, something came up, and he sent me out with some other young guy who, at the time, I thought was working with the Zoo, IP Sharp's systems programming team. Well, that young guy took me cycling through Toronto in the most terrifying fashion; we went the wrong way up one-way streets, we went with his girlfriend, but the two of us were, like, struggling to keep up with him. That was, of course, Arthur.

 

00:28:36 [ST]

A couple of years later, a few years later, in '83, Chris Sanderson, who was running IP Sharp Associates in Australia, wanted to port the Sharp APL interpreter onto a Hewlett-Packard minicomputer, and for some reason I told him he should talk to Arthur. And for the life of me, I can't work out what it was I thought I knew about this young man that would make him a good candidate for that.

 

00:29:02 [ST]

You decided to interview him a year or two later at Morgan Stanley. Why did you interview him? What did you know?

 

00:29:10 [JK]

Well, he came with an incredible reputation. Also, you know, I don't know if you know, he was like the bad child at the APL conferences, you know, the APL developers' conferences; he's really a funny guy. Let me give an example. He worked on a generalization of the axis operator [17], which is a big deal in APL. And he figured out how you can define like 20 different primitives in terms of the axis operator, right? So, you'd think that he'd be invested in it. But he'll drop an idea if it doesn't work. I mean, it's interesting how he's not attached, necessarily, to any particular thing, because later on, you know, when he went and did K, he dropped the idea altogether.

 

00:30:05 [JK]

I mean, I've seen him go back; he has tremendous fondness for APL. You know, things that he dropped and ignored or changed or, you know, didn't do, or whatever. And then he comes back, and he says, "oh, that's pretty good", and he'll adopt it. You know, I mean, he has a bizarre mind.

 

00:30:22 [JK]

I was asking him, "what are the most important… what was the most important guidance in the development of K?" And you know what he told me? Now, can you try to guess what it was he said?

 

00:30:35 [CH]

Starting from scratch each time, I have no idea.

 

00:30:38 [JK]

Well, he always starts from scratch. He throws away all the code, so it's from scratch anyway, but he doesn't always throw away the ideas. No, no. His major driving force for the design of the K interpreter was the QWERTY keyboard [18].

 

00:30:55 [CH]

Really‽

 

00:30:59 [JK]

Yes. In other words… you might call it an arbitrary constraint. I mean, it's the opposite of what Apter was telling you, and I like it too, all those characters and everything like that. And Iverson loved that, because he's a mathematician kind of guy. But he said that he wanted to make sure that whatever his language was, it fit on a regular keyboard.

 

00:31:19 [JK]

And that particular set of constraints led him to figure out what primitives he should have and so on. I mean, it's mind-boggling, but he was really serious. Yeah, he's told me some things that are, like, totally mind-boggling. I remember when we were doing A. [19] A was the language that we developed at Morgan Stanley, which is, you know, basically APL but an extension of it, that actually had one-level dictionaries. But anyway, I remember one of the guys was saying, "well, how are we going to debug this interpreter?", right? 'Cause, I mean, you know… And by the way, Arthur's code is the cleanest code that I've ever seen. I'm not saying it doesn't have bugs, but I am saying it's seriously, seriously good code, even when it first goes out.

 

00:32:06 [JK]

He looked at it (the guy who asked, his name is David Wise), and he says, "I'll tell you what; I'll put in any feature you want me to put in this thing, but for every new feature you want me to put in, you gotta tell me which feature goes out." So, he was like…

 

00:32:22 [JK]

And then another time the guy said, "well, how are we going to debug it?" And he said, "oh, if I have to debug this…" (and I never understood this until much later) "…you really want me to? You want this thing to be bug-free? Well, then I'll have to spend a couple of months reading the code." Can you imagine? In other words, think about it: usually, you know, you'll run something and you test it, and if it breaks, you know where to go or whatever. But for Arthur, the way his mind works, if you really wanted the thing debugged, he had to go and read the code very, very carefully. Not execute the code. Read the code. It seemed to me like an absurd idea when I heard it.

 

00:33:10 [JK]

It was only maybe 20 years later… For example, I was doing this parallel processing thing with many tasks, which is not usually what I used to do, but I got into that, and I had a really hard problem, and it was really impossible to test, 'cause there are so many conditions that, you know, come in, and there's all this time shift and everything like that. And I found myself, number one, thinking that you'd better design it right from scratch. And really, it's a matter of reading the code and understanding that problems can't really arise, because if your code is loose, you'll never even be able to debug it. And it's only 20 years later that I understood it. It seemed absurd when he first said it to me, but I'm saying so. Interesting kind of guy.

 

00:33:55 [JK]

Also, another thing about the design of K: [20] Arthur's design goal was that the slowest primitive and the fastest primitive have to live within one order of magnitude in speed. Right? So, in other words, you can't have operations that are powerful but impractical from a CPU point of view.

 

00:34:17 [JK]

And I'll tell you another example where this showed up, you know, immensely. There was a company called Analogic that was building what today is called a GPU. [21] This is happening in the '80s, right? Maybe… no, the '80s. And so, they, you know, so they really… It's got parallel stuff in it and whatever, and the guy, the owner of the company, got interested in APL because APL, you know, has a lot of ideas that, you know, the operators work, sort of, like, you know, all in a bunch of things as opposed to just individual, you know one potato, two potato stuff, and they built an APL interpreter. You can look it up "Analogic", [22] OK?

 

00:35:07 [JK]

It was astoundingly fast on the operations that parallelized well. But it was an impractical machine, because every once in a while you had to resort to an operator that didn't parallelize, and what would happen is the thing would take 500 times… 1000 times longer to do it, so it really wasn't even practical. So, for Arthur — and he uses, you know, techniques to make sure that the language is fast all along, so you can really be expressive. Otherwise, it's not practical. So, I mean, I've been going on for a long time and maybe you have…

 

00:35:46 [CH]

No, no, so it's amazing that this came up organically, because when I was watching your History of Array Languages talk, the one from 2016, you mentioned at one point this test Analogic system, and I tried to Google for it and couldn't really find it, and I even went and asked — because I work at NVIDIA; we're, you know, one of the biggest GPU manufacturers in the world — asked a couple folks that have been around for a long time: had they heard of this test Analogic interpreter? And none of them had heard of it, but they said that if it was earlier than, like, you know, the '90s, then, you know, GPU programming was in its infancy, so it wouldn't be surprising if an attempt at something like this, where you were accelerating the whole language, really didn't go far, because of exactly what you're saying. You know, you're going to get some little micro pieces of it that are blazingly fast, but as a system it's not going to work well. And I'm curious if, like, there's been any other company that has tried to do that, you know, since the advent of CUDA programming or, you know, sort of, modern GPUs, but I haven't been able to track down any kind of startup or company that is attempting that, because it seems like now, in the 21st century, that would be something that could be successful. If done correctly. Bob?

 

00:36:56 [BT]

Yeah, well, I'm actually going back to what you were talking about, Joel, with the understanding of the programming and having to read it to say "I could debug it." That actually reminds me of something that Aaron Hsu had said, or that I've heard Aaron Hsu talk about in his Co-dfns compiler, [23] which is that he finds now that he understands it so tightly, he can look at it and make very quick changes to it, because he's almost internalized the process so tightly that he can see exactly what he needs to do, and he doesn't have all these side effects rippling through. It's a level of understanding of a program that I don't think most people who write or program actually get to, because they do get into that loop of "I'll try this. Oh, it didn't work. I'll try this. It didn't work." As long as you're working in that area, you're never really going to understand what you're doing. It's like writing and rewriting literature. I think you… You could get to a very good piece of literature, but if you're one of those people that really, really, really understands what's going on, you're like Shakespeare: You can put words together in such a way that it just really works and it wouldn't be put together any other way. Do you think that's sort of what's going on with these — I want to say times-10, but it's probably times-1000 or times-100,000 — programmers, where they're actually working at a different level of understanding of what the programs are doing?

 

00:38:18 [JK]

Yeah, for example, the smartest guy. I mean, like I said, the best programmer that I know… I mean, there's another guy, by the way — you may want to interview him. His name is Adam Jacobs. He works at 1010data. But anyways. Arthur, every time he writes a new language, it gets smaller. And it does more, and it's faster. And if you look at his code, it's bizarre: He'll have, like, the, you know, the source for K, right? It's 140K, the whole interpreter. And that one included a GUI. That one had a GUI.

 

00:39:07 [JK]

Basically, he has a.c, b.c, c.c, d.c — these, you know, just a few files. And every one of these, you know, fits on a screen, very easily on a screen. And he writes C — he has these macros. So, he strings a lot of, you know, a lot of C things in a row. So, it's completely, totally dense. You understand what I'm saying? There isn't one blank anywhere. And it's a.c, b.c, c.c, and so on. It's maybe ten of those, and that's it.

 

00:39:58 [JK]

I can't tell you the number of times that I got into trouble, and he never ever wants to show anybody his code — not because he doesn't want to, you know; it's because it scares the shit out of people.

 

00:40:09 [JK]

There was a guy from I.P. Sharp — I can't remember his name. He made a big — he was a pretty big shot at Morgan Stanley too — and I showed him K. He's an APL programmer too. And he loved what he saw, and then says, "can I take a look at the code?" And Arthur says, "I don't really wanna show him." I said, "Come on, let's show it tomorrow." And the guy sees it. And when he saw this code, he says this is unmaintainable and impossible to, you know, to debug, and Arthur turned the machine off and looked at him and says, "you're a pinhead."

 

00:40:40 [JK]

And I actually lost my job at Morgan Stanley eventually afterwards, because I pissed this guy off. But anyway, my point is that it's counterintuitive. What he's trying to do is exactly what you're talking about. He kind of has it all in his mind. Whereas, you know… We have code that's all over the place, with pointers this way or whatever, and it's just… It's hard to maintain. And this is for the smartest programmer; he can deal with complexity, right? Because he's smart. But he makes it — he keeps it — as simple as possible. And that's a lesson, you know.

 

00:41:13 [JK]

By the way, Apter's code, you, well, you've seen some of it, because you looked at SNL [NSL — ed.]. [24] I mean, he's got a similar kind of stuff. But you know, it's counter intuitive. All these things are just totally counter intuitive.

 

00:41:24 [BT]

Arthur was at a conference I was at in 2014 in Toronto, a J Conference, and he spoke at that. And one of the questions that came up is: have you… have you really reached the level where this can't get any faster and more compact? His answer was, "well, no, because the optimum is no code and infinitely fast."

 

00:41:49 [JK]

Exactly!

 

00:41:50 [BT]

And I was just this… and everybody kind of went, "well, I guess if you're working in these areas that is the optimum and you will never reach the optimum until you can do that." He was quite happy with where he was in terms of what he'd come up with, but his target is not the usual target of what most people are aiming for.

 

00:42:11 [JK]

That's true.

 

00:42:11 [CH]

I wonder if you have thoughts, 'cause seeing as we're talking about code density and Arthur's style: at the beginning of your talk, the one that I mentioned a couple of times now, you quote, I think, Ken Iverson, quoting Babbage, and the three-word quote — it's a longer quote, but the three words that are bold — is "brevity facilitates reasoning." And I've had this ongoing, you know, battle in my head of "does the tersity, you know, the terseness of these symbols, and the fact that you can spell things so succinctly — is it important?" And, like, even very recently, I, you know, had a reviewer of the thesis that I was writing, and I was talking about the expressive power that comes with being able to spell something so succinctly, and their remark was, like, you know, "I don't see why this is being remarked on a couple of different times — like, you know, the fact that you can spell with a small number of characters is important." And I just… I wonder if you have thoughts, and I feel like this is something we should be asking all our guests: Like, does the fact that it can be spelt with so few characters — is that really, like, a feature, an important feature of the array languages, or is it something that potentially we over-index on?

 

00:43:24 [JK]

Yes, I think it is very important. In my story there, you know, my talk that you saw, I covered that. I mean, it goes back. See, most programming languages, a lot of them are kind of very new, but since this is based on mathematical notation, it has a long history, right? Going back to, you know, al-Khwarizmi, [25] you know, showing you how to do, you know, algorithms for multiplication and so on, and, you know, especially with Leibniz. Leibniz thought that the purpose of having a formal logical system was to make it possible to think.

 

00:44:04 [JK]

I mean, I have a quote — I don't… I mean, from that paper; let me see if I can find it. OK, so this is Leibniz, [26] right? He was, you know, the inventor of the calculus, refined the binary numbering system, a prolific inventor in the field of mechanical calculators. Leibniz's calculus ratiocinator, which resembles symbolic logic, can be viewed as a way of making such calculations feasible. By the way, that system that was in his machine that he built — that same plan was utilized hundreds of years later. It was still in use. "Leibniz thought symbols were important for human understanding. He attached so much importance to the development of good notations that he attributed all his discoveries in mathematics to this. His notation for calculus is an example of his skill in this regard." This is a quote from, you know, a Wikipedia article.

 

00:45:01 [JK]

So, what I'm saying is, you have an enormously long history. You know, you look today at the standard model of physics. It fits on a page, right? Now, is it easy? Not necessarily. But in a certain sense, the brevity helps. So, you know… Also understand, I mean… K or APL is a complete language, right? So, if you want to write loops, you can write loops. You can express anything you want as dumbly as you like. You have a choice. But I think that there's a lot to be said for that, and it's just basically how mathematics is done. People invent mathematics. I mean, if you look at the proofs from Newton… Newton, who invented calculus — when he's selling his idea, it's all using geometry; really complicated stuff, because they didn't have the… people didn't understand. So, I think there's an evolution, right? And that's why I say that the APL language starts way, way, way far back. It inherits all of that, you know, increasingly general notation that mathematics developed. So yes, it's very important.

 

00:46:25 [BT]

It's really interesting 'cause you talk about Leibniz and you talk about Newton and that whole dispute over who invented calculus, and I've never thought of it this way. But you're right, Newton had this whole way of thinking about what the calculus was and this whole way of doing it and understanding it, but found it very hard to explain it to other people, and at the same time, and I think it's pretty much, acknowledged that they both invented it independently, but Leibniz did use a better notation and Leibniz' notation was used in large… it's not in all areas, but in most areas.

 

00:47:02 [JK]

It's the one we use today.

 

00:47:03 [BT]

And as a result, Leibniz could do the same thing as this other guy, you know, a hundred miles or hundreds of miles away, on the other side of the English Channel. But he was doing it with notation, and as a result he got almost equal credit for inventing it. I think, actually, it depends on what country you're in, because I think in some countries Leibniz is acknowledged as the inventor, and in a lot of countries Newton is the inventor of the calculus.

 

00:47:27 [BT]

But it, the notation, is really, really a powerful thing when it comes to these complicated ideas: the simpler you can make it — and I think notation helps that — the simpler you can make it to understand, the more likely it is to be able to be adopted.

 

00:47:42 [JK]

Yeah, I definitely believe that. Now, some of these concepts are hard. So, when you look at one line of APL or K code, some of them are very deep, but I mean, the problem that you're trying to solve is pretty deep too.

 

00:47:56 [CH]

I wonder if there's a history of there being resistance or friction to introducing these notations, you know, back hundreds of years ago when they were being introduced, or, you know, how long it took. Because it seems like, from my experience, you'll show people an APL or J or K expression and the reaction you get is just like, "well, that's just… how could anyone read that‽" And really, it's just a language that you learn, and each of those primitives represents an algorithmic operation or some sort of transformation. And when I hear you talking about, you know, the calculus notation, it's this sort of agreed-upon thing — whether it's the best form of the notation, you know, that's another question to be answered, but the notation itself is helpful. It subordinates detail and, you know, makes things pop that are important.

 

00:48:50 [CH]

But yeah, it's lost on me. There's… Somewhere here there is like a beautiful argument for the spelling of these languages, but I definitely feel like I haven't found it yet.

 

00:49:01 [BT]

And with the calculus, the thing is that the notation was ahead of the actual understanding of it, because it was, you know, hundreds of years later, when they actually got into the analytics of it, that they found how it actually did work and what infinitesimals were. But they were leaning on the notation to get them through the calculations before they understood what the calculations actually were, and it was in — I think it was in — the late 1800s/early 1900s that this all came together and they said, "oh, that's how we can do this."

 

00:49:33 [BT]

Up until that time they were just using it. They didn't really have the fundamental — down to the very most basic fundamental levels — understanding. They were using it 'cause it worked and 'cause the notation worked, more than they understood exactly what was going on. So, in the case of talking about Arthur — "I'll take a couple months to read it" — that's where calculus ended up. It took not a couple of months but a couple of hundred years to really get it to the point of "I know exactly what's going on," but in the meantime, I could actually use it. And if I've written it properly, foundationally, if my notation is good, it's going to work for me.

 

00:50:10 [JK]

Yeah, I mean, the problem that you have is the same problem why people don't go into math or whatever. I mean, some of them have facility, but it takes effort, right? Let me ask you this question: Is French a better language for communicating than Portuguese or English or Chinese? I mean, you know, whatever. I'm just saying, if you know it already, it's a means to an end. You're not even focused there. Most people aren't focused; they don't care about, you know, whether it's better or not, even if it is better; "if that's what I know, it's what I know." You know what I'm saying? In other words, there's a big step function to get over, and by the way, that has to do with the marketing of 1010data, 'cause I mean, 1010data in many ways reflects, you know, the vector idea. You know what I'm saying? So, it's a very powerful paradigm, but it requires a certain amount of effort to get into it, and that, you know, makes it difficult, because some people won't accept it because they're doing whatever they're doing, you know what I'm saying? So that's a psychological issue.

 

00:51:20 [CH]

Yeah, maybe we can talk… 'cause we've, I think, spent most of the time talking about the '80s and your time at Morgan Stanley. And I'm not sure if you want to tell the story — you briefly mentioned that you were fired by someone that Arthur called a pinhead, or something happened there. And then, I know, you transitioned to UBS and then on to 1010data later. Do you want to fill in some of the details there?

I'm sure there are some more interesting stories and things that happened along the way, up until, yeah, what you were most recently working on?

 

00:51:47 [JK]

OK, so quickly, how I got out — how I left Morgan Stanley. Basically, you always need a rabbi, and organizations are pyramidal in structure, and so at the end, you know, the higher up you go, the less room there is up there, and my rabbi lost, and some other rabbis won. And this guy's rabbi won. And so, there wasn't a lot of room there. So, in a certain sense…

I mean, I'm exaggerating, you know, because we were politically on different, you know, sides of the aisle.

 

00:52:21 [JK]

So, then I left Morgan Stanley. By the way, Morgan Stanley treated me very well. And I was actually introduced by, you know, a partner, I can't remember his name right now, to a guy in UBS, who was you know, very high up there, maybe second in command — he handled all of trading for the Union Bank of Switzerland [27] and what they were trying to do, is they were trying to become like, you know, Goldman Sachs or Morgan Stanley. And they didn't… They were not as advanced as those places, and they wanted to really leapfrog and get there. And so, he interviewed me to sort of build a platform for them so they could do their trading and catch up with, you know, to the kind of stuff that Morgan Stanley, Goldman Sachs, you know and other, you know, major brokerage houses could do.

 

00:53:15 [JK]

And I said to him, "why don't you buy some firm, you know? I mean, it might be cheaper if you just bought, you know, Bear Stearns or…" — I don't remember what other name I gave him. But he decided that he didn't want to do it, so we got the job of basically building the platform for fixed income. All of the technology. And so, I ended up, you know, running it — which is something that I also did at Morgan Stanley, by the way, after I did the trading — basically running fixed income research and technology and building it from scratch. And for that situation we hired Arthur, so that we'd have, you know, his full attention, and, you know, to make sure the K platform really worked.

 

00:53:58 [JK]

So again, all his products really are tested in the fire of reality. You know what I'm saying? And so that's what happened, and then eventually, you know — I don't want to get into the details, but that whole thing fell apart; it's mostly politics — and at that point we decided that we were done. You know, I was done with working for somebody else, although that had proved quite lucrative.

 

00:54:21 [JK]

Just give me a second, 'cause I have some notes here that might be… Yeah, so in 1999, Sandy Steier, who's, you know, one of the originators of 1010data, and myself, and a guy named Peter Muller, [28] who's no longer associated with the company — he's an old friend of mine — ended up in Palo Alto at Arthur's house. And what we were doing was coming up with the following idea, which is basically 1010data. You now have the web. It's in its early stages, but why should everybody have to build a whole system in order to do analysis? I mean, just give us your data, we'll load it up, and all you have to worry about is doing the analysis and doing your queries and things like that.

 

00:55:21 [JK]

So that was the idea. So basically, what we came up with was that the cloud was the place to do it. That it was SaaS, meaning that you're not buying hardware on your own and installing all this stuff, and then you could do the analytics yourself. But the idea that we had there was something like a Yahoo for data. You have to understand, Google didn't even exist. So, you know, Yahoo was the thing, and people would go to Yahoo or maybe — what's that "Vista", what do you call it?

 

00:55:49 [CH]

AltaVista.

 

00:55:49 [JK]

AltaVista or whatever. But we would do the same thing for actual analysis of data where numbers are involved and, you know, things like that and that's how the idea of 1010data came and that's where we started. And so, the way that Arthur played into this thing is that we contracted to use his, you know, the K platform. And that was the beginning of 1010data.

 

00:56:16 [JK]

Yeah, and eventually, you know, that worked, and, I mean, we sold 1010data about five years ago to the Newhouse people for half a billion dollars, and it worked out pretty well.

 

00:56:27 [CH]

You sold it for 500 million?

 

00:56:29 [JK]

Yes.

 

00:56:31 [CH]

And "that worked out well"? It's a very understated… It sounds like it worked out VERY well.

 

00:56:37 [JK]

By the way, I just thought of something. Another reason why K or APL or all those guys didn't do so well is because the people that were developing the stuff wanted to make money. They didn't want to give it away. For example, Dyalog APL. [29] It's a great product. But people were selling it. They didn't go the open-source route. I mean, Arthur could have gone the open-source route. But, you know, Arthur's doing very well. He's in, you know, the Turks and Caicos Islands right now. You know where he is, right? He's hanging out. He's, you know, in the Bahamas someplace, working on Shakti [30] or whatever. I'm just saying, he's got a lot of money. So it worked out well for him. It has worked out well for me. But if, you know, people had, at the right time, in the right place, you know, made this stuff free or whatever, maybe it would have taken off more, you know what I'm saying?

 

00:57:36 [JK]

I mean, and… But then on the other hand, this… But so, what, you know‽ I mean, yes, I have this great desire and I do… I mean, there's a certain way which, you know, I feel like almost like a profiteer for K and APL and a lot of stuff, but well, we didn't go that route and that did slow things down. You do have to understand that.

 

00:57:57 [CH]

Yeah. Do you have, well, so do you have thoughts? Because I know this is, I mean, I think it's inspiring for the APL story that… Like I've heard that Arthur sold K4 and Q for $100 million, which arguably makes it the most expensive program in the world because it was 50 kilobytes of code. So, I heard from someone once that it was like per kilobyte of code, it was the most expensive piece of software ever sold, which is, you know… Obviously they were selling KX, the whole company, which came with customers and stuff, so it wasn't just the, you know, the K program and Q.

 

00:58:31 [CH]

But that's… So that's two businesses, though: KX — K and what became Q — that got sold, obviously very successfully, and now 1010data. I guess there's two questions. Do you think the world would be different today if everything was open source? 'cause technically J is open source; there is an open-source APL that does exist, so it's not like there are zero open-source array languages. And that's the thing. I've heard these stories of people building successful companies on top of array languages. This is the first time I'm hearing of the 1010data one, and it took me a couple months to find out about the KX one. And even when you go search for that, the articles that come up are basically, like, the news briefings from some, you know… It's not like this was on the front page of the Wall Street Journal, or even on, like, you know, page 26 of section 4. These stories aren't really widely reported on, and I wonder if that's intentional or… Because it seems like there's definitely a ton of people out there that have built very successful careers on top of these languages, but it's not as widely known, at least to someone who's only gotten into this world in the last couple of years.

 

00:59:51 [JK]

Yeah, I gotta tell you, it's always been a, you know, a source of mystification for me. I don't… See, especially when we're doing 1010data — 'cause you're trying to sell it, right? I mean, we're trying to, you know, we're trying to get a lot, you know, more people involved or into it, or you know, scale it or whatever. And it's always been very hard. I think a lot of it has to do with psychology. I'm not, you know… It requires a certain kind of person; you know what I mean?

 

01:00:21 [JK]

I don't know. Like, for example, I mean, Whitney can be very dynamic when he talks — if he wants to, if you can get him to talk. But, I mean, for example, I had dinner with Steve Jobs, [31] right? And I don't want to get into all the details of how that ended up, but this guy, you know, of blessed memory — he has a charisma. You understand? He has a charisma. The way he behaves, the way he acts, even the way he dressed. He was so well dressed. I mean, you know, the total mix, you know? I don't know, maybe it's just luck. You know, it requires something, for example, that I don't have, you know.

 

01:01:10 [JK]

I don't know. I mean, why something works so well — you know, why something takes off or why something doesn't. Maybe, you know, 100 years from now, somebody will rediscover, you know, arrays, and think that they invented them. Which is fine, and maybe they'll be happier, luckier, you know, and will be able to promulgate it more. I don't know.

 

01:01:30 [BT]

Yeah, you were talking about the fact that, you know, open source might have helped it along, and I think sometimes you could look at it and go, "Yeah, maybe that did hold it back", but I think actually J [32] does prove the point that I don't think it's open source that holds it back, because J is open source. But — and this is the big but — when you want to use it, you have to get thinking at a different level. You have to dig in deeper. You have to stop and actually… It takes effort to program in J. Even now, when I'm programming in J, it takes effort for me to do it. It's not something that I go, "oh, I'll just dash this off." I'm not that kind of a programmer. I'm not that smart.

 

01:02:13 [BT]

But in the process of using the language, it allows me to go through and refine my thoughts to the point that they do become clear, and what I end up with in the end, I'm usually very happy with, because it does that. But if you were to say to somebody, "you know what, I can extend your life, say, 20 years; you can live 20–30 years longer. Would you like to do that?" "Oh yeah, OK!" "OK, so now what you need to do is restrict your diet, you need to exercise every day. You need to do all these things, and if you do that, you've got a very good chance. And if you, you know, have good risk assessment, chances are you're going to live 30 years longer." And most people know that, because that's common knowledge; if you are healthy and you exercise, you'll live longer. Most people, or many people, won't do that. Because that's a lot of work. And why would you do it that way?

 

01:03:10 [BT]

And I think that becomes the challenge with some of these languages, is, it is work to think this way even when you know them. You have to do the work. You have to be the kind of person that goes, "You know what? I'm going to work at this and that's OK. It's not easy, that's OK, I'm just going to work at it." And I think that's not a common, as you say, psychological trait within people. I think the people who are willing to go, "oh, this is really intimidating, or this is really difficult, but I'm going to stick to it." They're not that common, and usually, and quite often, as you say, when you talk about the sales of these languages, people who can do this, they are quite successful.

 

01:03:51 [JK]

They're also… I think it requires a certain amount of intelligence — a lot of intelligence, you know. You see, you don't have that many people. See, us guys, we like languages in general, right? I mean, I wanted to get in — maybe there's not enough time, but I mean, like, you know — like J: I mean, J reminded me… By the way, I knew Iverson very well. [33] I had a really personal relationship with Kenneth, 'cause I sold a lot of products. I was in the first class of IBM salesmen that sold software only. And what IBM was doing is trying to sell, you know, the APL family, because it drags a lot of iron with it. So, you know, I interacted with Iverson all the time, you know? So, we were good friends, and I also know all his kids and his wife.

 

01:04:37 [JK]  

When he was at IBM, he was a little bit constrained, and he couldn't do all the things he wanted to do with the language. And APL was a smaller, simpler language, even though it's pretty big. When he went to J, he just went wild, you know? And you see all the things that got put in there, and it's hard. I mean, for some people it's difficult. There's a lot to know.

 

01:04:58 [JK]

And see, Arthur Whitney, number one, didn't have that luxury. And the reason why they like him in finance, and why people are willing to learn that stuff, is because it was unbelievably fast and unbelievably good at rendering those kinds of algorithms. So, he got a good foothold in there. Those people are all very smart — and trust me, the guys on Wall Street are not stupid — and he gets that, you know, he gets that, and so he emphasizes more simplicity and speed. I don't know, I mean, I really don't know what the benchmarks are. I don't know what the fastest APL today is and how it compares with K or Q — and what I'm saying here goes, you know, equally for Q. And I don't know about J. I mean, a lot of people add things that are interesting from a language point of view, but not necessarily from a performance point of view, you know. And, by the way, I don't want to criticize, 'cause I don't know J. And it could very well be that it's really fast and really great and really scales and everything like that. You have, on the one hand, expressiveness and, on the other, you know, speed.

 

01:06:04 [CH]

Yeah, I think that's kind of what I think is the golden path for array languages: basically trying to follow what Arthur did, in that, you know, it's a somewhat esoteric language. There's a certain barrier to entry in terms of the way you have to think about it, and the way you spell it, but if you can come up with a blazingly fast array language where… Potentially, it's even limited; you take away some of the functionality that exists in APL and J — you know, nested vectors and stuff — because that won't map as well to accelerated hardware. But if you can find some kind of language that is, you know, an order of magnitude or two orders of magnitude faster than everything out there, and it runs on accelerated hardware — that's going to be something that, like… There's going to be a set of people that can't ignore that, 'cause that's exactly what they need. They don't care what it looks like. If it's faster, they'll want to use it. Which is why, like, I'm convinced there's some secret company out there that we just, like… We'll have to keep interviewing people, and then one day we're gonna find out, "Oh yeah, I'm sitting sipping my Mai Tai in Hawaii or whatever. I sold the company for however much money, and just no one ever heard of it." Because it sounds like there are more and more of these companies, as we just discovered.

 

01:07:25 [ST]

So, this is interesting, 'cause we're distinguishing two things we like about the languages. One is the speed. As you say, people have paid big money to use K simply because of the speed. And as you say, Conor, "don't care what it looks like." In fact, I remember Janet Lustgarten once saying to me, "All our customers, all our KX customers, are people for whom everything else failed." But nobody wants this weird maverick technology in their stack. The only reason they've got it is 'cause nothing else will do the job. That's pretty harsh. Pretty self-deprecating. And then there's people who actually like it. So when I sit down to write a piece of Q code — I think the last job I did, I was hacking sound file metadata to import a bunch of stuff into Apple Music; I published a blog post on that last week [34] — I was thinking about what Bob was saying a few minutes ago about, "yeah, this stuff is hard, but it'll work good." I'm thinking, as I start that, I've got a sense of anticipation. I'm a little excited about what I'm going to do. I don't quite know what the, well, the end result's gonna be. I'm pretty confident I'm going to get this stuff into my Apple Music library. But what's the script gonna look like? Well, what I'm hoping is it's gonna look kinda cool. I don't know what the code's… I haven't solved the problems. I don't really know what problems I've got to solve until I've been all the way through the data, but I'm hoping that by the time I finish I'm going to have something I want to show people and say, "That's pretty cool. Can you believe I've got the job done with just that little code? Can you believe that the code is this readable? It's this clear what's going on." Something like, "it's exemplary!" Yeah, so there's an aesthetic thing, and it's completely separate from the speed. I don't know, in that job, I didn't need it fast at all; it's pure delight.

 

01:09:38 [JK]

Well, but, you know, there's two issues here. A person might go into the speed. But there's speed in two ways. Speed #1 is that it executes quickly, right? But speed #2 is how quickly can you implement something? And that's where the language, the power of the language, comes in — and that should not be under-emphasized. That's really, really important, right? Because you can do very complicated things and maintain them — you know, I'm talking about problem-level complexity; it's irreducible, right? You can write that very well if you have a language that doesn't force you into all this minutiae and stuff that you can't, you know, deal with. So, there's a speed to development and there's a speed to execution.

 

01:10:27 [CH]

Yeah, it's interesting that you say that, 'cause I literally got this question in my defense last week — because I use the word "power" in my thesis, I don't know how many times, referring to these languages, and one of the comments I got was, like, "what do you mean by 'power'?" And I gave an answer during the defense — which was OK — but I thought about it afterwards and I realized that power is an overloaded term, and you just described it: there's different types of power, and I think the version I was using was more the speed at which you can write something. And also how much you can say — it sort of ties into expressiveness — the power of the glyphs is that they're rank polymorphic and that you can do so much on so many different structures with, like, so few keystrokes. But at the same time there's also another power in that, like, if you're using a K or, you know, an accelerated APL, it's also extremely fast. And there's an article online with an interview with Arthur, and one of the questions is about, like, "oh, it's an interpreted language. Isn't that slow?" And then it's like, "well, no, that's a myth that, like, just because it's interpreted, it needs to be a slow language."

 

01:11:34 [CH]

So, it's interesting that you commented exactly on something I was thinking about last week, is like, "what does power mean?" And it actually means multiple things and all of them really apply to array languages.

 

01:11:45 [JK]

Yup.

 

01:11:47 [CH]

All right, so we are well past the hour mark, but I feel like I want to keep going, and you said we don't have enough time. I mean, I'm not sure. I sort of kind of want to ask. I mean, this is definitely the episode where the most famous people have come up, by far. We had D.E. Shaw first, then Jeff Bezos, then Bill Gates, then Steve Jobs. Are there any others, just in case you've got a Pokémon collection of well-known people that we should…?

 

01:12:13 [JK]  

OK, oh, here's a good one: Alan J. Perlis. [35] He's the first Turing Award winner. He used APL to teach computer science at U of Penn — at Yale, excuse me. That guy was a great guy. I mean, he was one of the inventors of ALGOL, and he became a real, great APL guy.

 

01:12:34 [CH]

Did you end up having, like, conversations, or do you just know about his impact? 'Cause I know…

 

01:12:37 [JK]

No, I spoke to him, yeah.

 

01:12:39 [CH]

Wow, what was that like?

 

01:12:40 [JK]

He's great. I mean, you know, what he did, he said, "look, it's really cool: I can teach a guy APL in a week, and then, well, instead of just talking about how you would implement an assembler language, right? I now have an implementation language." Remember, that's where you get the power to implement something quickly, right? And so, he used APL. He taught them APL, and then he taught them all the concepts of computing. And he implemented these things — what a macro language looks like, etc., etc. And that's what he was talking about. He thought it was a great educational thing, and while he was there, obviously, APL was a big deal. But, you know, again, I don't know why these things sometimes don't stick.

 

01:13:25 [JK]

Other people that are interesting, OK? Benoit Mandelbrot, [36] of blessed memory.

 

01:13:30 [ST]

What's the connection?

 

01:13:31 [JK]

With Benoit Mandelbrot? OK, so we had… This is at Morgan Stanley, and we're doing trading. And he was, you know, he consulted with us. We were looking at, you know, attractors and things like that, and, you know, fractal things, and he, you know, was helping us out, and he thought that was a good idea. I mean… also… There's this thing called the Society of Quantitative Analysts, [37] which is an organization of people that are quantitative — a very old organization in finance — and, you know, I was on the board there, and so we used to go and set up things. So, I set up a trip for all the finance people to go and take a look at what was going on at IBM that might have an impact on this. And, you know, Mandelbrot was there, and he gave a talk.

 

01:14:41 [JK]

There's another guy named Shlomo [Shmuel — Ed.] Winograd, [38] who invented the FFT — the fast Fourier transform — at Yorktown Heights. [39] [Winograd devised fast algorithms for computing the Fourier transform; the FFT itself is usually credited to Cooley and Tukey — Ed.]

 

01:14:49 [JK]

There was John Cocke, [40] who was the guy who invented the RISC architecture, you know, at IBM, and he loved APL. I remember him saying he thought that's the way the architecture should be utilized. In other words, big arrays and vectors and things like that is the way that, I think, would map most nicely onto the RISC architecture.

 

01:15:23 [JK]

And there was Leon Cooper, [41] who was a Nobel laureate who helped us with some neural net type stuff that we were using in trading at our place. What's interesting is not the technique; what's interesting is when you get a real smart guy like Leon Cooper, who's, you know, a scientist, right? And when they apply their brain to trying to understand something. You know, he was amazing. I mean, we gave him a problem where we gave him some spreads. We didn't tell him what it was, 'cause we didn't want to, you know — we wanted to find out when spreads in stocks, you know, diverge, because maybe somebody got internal information or whatever, and you will get killed on those, 'cause we were just betting on the spreads: you know, when they get big, they'll shrink, and when they're small, they get big, you know. So, we gave him the data. It's amazing what he was able to figure out, because he's a good scientist. Just looking at the data, he figured it out.

 

01:16:16 [JK]

Oh, another guy, great guy: Philip Wolfe. [42] He was the guy who invented quadratic programming. And quadratic programming is, you know, this whole idea of, you know — trying to remember the guy's name now — yeah, he was at Yorktown Heights. This whole idea of portfolio analysis, you know, risk versus return — risk/return.

 

01:16:41 [CH]

Sharpe ratio? [43]

 

01:16:43 [JK]

Yes, Sharpe. Yeah, but I'm trying to remember the guy who brought it. There was a guy. Another name, another guy. God! He's the guy who invented this idea on Wall Street. [Probably Harry Markowitz, of portfolio theory — Ed.] But anyway, the whole point is that it's all based on being able to do a quadratic program. It's the extension of linear programming, and that was Philip Wolfe. And Philip Wolfe loved APL too, when he showed us.

 

01:17:05 [JK]

We were having trouble. This is an interesting problem. We wrote the algorithm for, you know, quadratic programming, and it was unstable. Depending on what numbers you gave it, it would, you know, just take for… you know, it wouldn't converge. And I remember he came by. He looked at our code and he said to us, "what are you using for zero?" So, we said, "we're using zero." He says, "well, that's naive. Zero is too small." And then the point is: what value do you use for it? And actually, this is more like an art. I mean, obviously it has to do with chaos and everything like that. But he gave us a number, and as soon as we plugged in that number, you know, it started to work. So, it was great. I don't know if these guys are still alive, you know. So anyway, those are other guys.
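Wolfe's advice amounts to testing convergence against a problem-scaled tolerance rather than against exact zero. A minimal sketch of that idea in Python — the tolerance value here is illustrative; the episode doesn't say what number Wolfe actually gave:

```python
def converged(step_size, scale=1.0, tol=1e-8):
    # A naive test like step_size == 0 almost never fires in floating point.
    # Wolfe's point: compare against a tolerance scaled to the problem,
    # not against exact zero. tol is an illustrative value, not his number.
    return abs(step_size) <= tol * scale

print(converged(1e-12))         # small step: treated as "zero enough"
print(converged(1e-12, tol=0))  # exact-zero test: never triggers
```

With `tol=0` the loop would keep iterating forever on tiny residual steps, which matches the instability Joel describes.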

 

01:17:55 [JK]

And, oh, Irwin Godowsky Burgess [? — ed.], another guy. He was an IBM vice president — corporate vice president, not a VP; I mean, I'm talking about a real VP, you know? And we were having trouble with APL — wasn't going fast enough. This was when we were still doing APL on the mainframe. And he actually had something done in the architecture — changed the hardware — so that we could get more bang for the buck. On the IBM, you know, the thing that came after the /370, I don't remember what it was. [390 — Ed.] He's a very interesting guy too. And of course, Ken Iverson. I mean, we all know Ken, you know, so…

 

01:18:35 [CH]

It's surprising how many — I want to say "famous people" — but just, like, well-known people in computer science secretly, or maybe not even secretly, had a fondness for APL. Like, you just added a bunch of names to my list, but I also know, like, Alan Kay [44] — Smalltalk — he was a big APL fan. Alexander Stepanov, [45] who was very important in the C++ community — he was a big APL fan. And then just all these, you know… I can't remember if I just mentioned Bill Gates. It's odd that… there seems to be this trend of a lot of folks that had this fondness for APL, but it was just either, like, a side thing, or it never became, you know, their main tool that they were working with. And I wonder… yeah, it's just curious. Like, the more and more we do these interviews, and the more people we talk to… I used to think of APL as this kind of, you know, esoteric… I've read the "10 programming languages that are dead" lists, and, you know, it's on that list and all that. But then, sure enough, it's not just, like, a small collection of people that are interested in it. Like, you know, we just discovered today that Bill Gates not only was an APLer — I'd heard that he had a little manual in his desk drawer, but I hadn't heard that he implemented, like… That's more than the…

 

01:19:59 [JK]

Yeah, he did an implementation of APL.

 

01:20:01 [CH]

That's more than just being, like, "oh, I find this interesting" — like, to take the time and actually… And there's other folks that have done the same thing. Rob Pike, [46] who worked on the Go language, and his two other co-creators, Rob [Robert Griesemer — Ed.] and Ken Thompson — they all did little APL interpreters at one point, and yeah, it's very, very curious that this seems to be a trend amongst, at some level of infamy or fame, these, you know, these computer scientists. I mean, who hasn't heard of Bill Gates?

 

01:20:37 [JK]

Right?

 

01:20:38 [ST]

Not so much a closed-off little niche, more like the way we think these days of a fungal network in the forest.

 

01:20:46 [CH]

Yeah.

 

01:20:48 [JK]

Yeah, right, right exactly.

 

01:20:50 [CH]

All right, well, Bob, Stephen, do you have any final questions? Bob?

 

01:20:53 [BT]

You've had this wealth of experience. You've met all these people, Joel. If you were to talk to somebody starting out, either just in working with array languages, or just generally if they're programmers or interested in this kind of stuff — would you have any advice to them about, in your experience, what's the best way to start out with this stuff and have a fulfilling experience with it?

 

01:21:19 [JK]

Well, it depends on when you get a person. You know what I mean. Unfortunately, there's an ocean of chaos, and most of it is nonsense. There's like millions and millions of people that are programming. People that are using their cell phones in some way. They're kind of doing something like programming. So, a lot of them, their minds are already invested in something else, so it's hard. It's hard to find somebody that, you know, has, you know, beginner's mind.

 

01:21:52 [JK]

So, it's tough. You know, we hired two guys into 1010data who came to array languages on their own. We put an ad on Reddit — there's a Reddit thing on array things or whatever — and these were people that came to it on their own. They said, "Gee, this is a good idea. This is an interesting thing", you know. So, it's hard to say, you know? I guess if a person loves languages, right. For example, we didn't talk about Lisp. [48] I think Lisp is pretty orthogonal to all these things. And I have a certain respect for it, and there are certain things that are easier to do in it — you know, it's really different. I mean, FORTRAN and COBOL and Scala, and, you know, I don't know, all these languages, they're all more or less the same, you know? But there are some different ideas, and people that like languages would be interested at least in looking at it, you know?

 

01:23:01 [JK]

I mean, now you can get stuff. You can get a version of K, and get APL, things like that, that are, you know, free, right? So, you know, point it out to them. I don't know. It's hard. I, you know, I wish… you see all this experience that I've had — it mystifies me. I still can't figure out what makes a person like something or not like something, you know. Sorry, I can't do better.

 

01:23:28 [CH]

Well, I think what we need to do is we need to get a T-shirt that says, "one potato, two potato — for-loops" and then, like, "not me!"

 

01:23:39 [JK]

Hehehe, that's great. Well yeah, but now you're in the area of marketing, you know, and, you know, it… sure! Hey look, I mean, if you get a clever enough person that gets something going, people will go, you know, they go nuts. You know, it's just that when you go nuts and you hit, you know, you hit a language, it's hard as you said, it takes effort, right? And you know, it's the slow thinking, fast thinking kind of thing. Most people just want, kind of, stuff to happen easily. I mean, the idea of putting some effort is like a lost art.

 

01:24:07 [CH]

Yeah, I think potentially part of the marketing issue, too, is that for the folks that are in the category of, you know, they fall in love with this paradigm — of which I consider myself a member — that rabbit hole that you fall into is very, very deep, and, like, it's not a gradual thing. It happened so quickly that there was a couple months where, like, I didn't understand trains and point-free. But then, once you get, like, the snowball rolling, it starts to roll so quickly, and the hole is so deep, that very quickly the videos that I was posting on YouTube before, [49] that were approachable — you know, there was, like, two or three of those videos — and now I'm just like, "well, check out this, like, combinator point-free blah blah blah stuff. It's awesome. I'm not really going to get into explaining it, 'cause I'm just having so much fun with this."

 

01:24:54 [CH]

Like, we went from one order of magnitude, you know, ability to think in my head, to two orders of magnitude, and you've now become that professor that isn't really that great at teaching the material, because they're too far removed from "what was it like to be confused?" Which is why I think there's some videos that Rodrigo, [50] who's an employee at Dyalog Ltd., and he's really focusing on sort of introductory, and, you know, why you might… I think even Richard Park, he had a recent video at APL Seeds [51] about, like, "worth learning", or sorry… I can't remember the title. Something about "worth learning and even better to, like, master it" ["Easy to Learn - Worth Mastering" — Ed.], and he showed just some simple things, like outer products. Like, I remember the first time seeing an outer product, [52] like, I had never seen that in another programming language, and I knew a bunch of programming languages. And yeah, so I think, yeah, I am also mystified, but, like, there's gotta be a reason, and, well, as a group, we'll figure it out. And T-shirts, I think, are the starting point. We need some potato GIFs. And yeah, something. There's something there.

 

01:25:57 [JK]

So, one of the things about APL — because it's a mathematical thing, and all these things generalize — is that what you don't know doesn't hurt you. If you didn't know there were vectors and you said 1+1 or a←1 — or, you know, in K you have do and, you know, while, that kind of stuff — you could be programming and never know that you were doing an array language, and then later on you learn an idea. You don't have to know everything in order to do something, right? That's one of the advantages of a clean, well-thought-out something that is exploiting this whole tradition of mathematics. That's an advantage.
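Joel's "what you don't know doesn't hurt you" is the scalar-to-array generalization at the heart of array languages: code written as if only scalars existed keeps working when the operands become arrays. A rough sketch of that idea in NumPy terms (an illustration, not the APL/K he is describing):

```python
import numpy as np

# Written without knowing arrays exist at all:
print(1 + 1)

# The same + generalizes, unchanged, once you learn operands can be arrays:
a = np.array([1, 2, 3])
print(a + 1)                       # scalar extends across the vector
print(a + np.array([10, 20, 30]))  # element-wise between two vectors
```

Nothing about the original `1 + 1` had to be rewritten; the later knowledge only adds to what the same notation can do.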

 

01:26:33 [CH]

Yeah, even to this day, yeah, there's…

 

01:26:35 [JK]

Yes.

 

01:26:36 [CH]  

… a percentage of each of K, J, BQN, [52] and APL that I don't know. There are operators that I've never touched, and in BQN especially, 'cause it's the newest, there is, I think, a whole four or five operators that have been introduced in that language that I have not… I don't… I have no idea what they do. But, like, that doesn't affect your ability. It probably means that you're spelling three or four more characters in some cases to solve some problem. But, like, that just means that down the road I'm going to stumble on this thing and be like, "Oh my goodness, I've been doing this with three or four characters, and that exact thing, that I needed, existed there." And yeah, that's what I was talking about with the deep rabbit hole: you get started, and as you go deeper and deeper, you just realize, "wow, this is even more and more powerful."

 

01:27:19 [JK]

And for example, inner product, [54] right? Just take +.× — think of all the things that you can do with +.×, for which you have operators. For example, you want to sort something. If you have a permutation, right? You can do it with an inner product. Because, you know, by putting zeros and ones in a matrix, you can make that matrix do a sort. You can make that matrix give you any permutation you want, right? By putting ones and zeros. So, it's interesting how all these ideas, you know, connect to each other. I mean, I think Apter was talking about that: how many things make sense, you know, in the language. I mean, you know, it's just fascinating!
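Joel's permutation-matrix point can be sketched in NumPy, where APL's +.× inner product on a 0-1 matrix and a vector is the ordinary matrix product. This is only an illustration of the idea, not code from the episode:

```python
import numpy as np

x = np.array([30, 10, 20])
perm = np.argsort(x)        # the "grade": indices that would sort x

# A 0-1 matrix whose row i has a single 1 in column perm[i],
# so row i of the product picks out the i-th smallest element.
P = np.zeros((3, 3), dtype=int)
P[np.arange(3), perm] = 1

# APL's P +.× x is P @ x here: the inner product performs the sort.
print(P @ x)
```

Any permutation, not just sorting, can be encoded the same way by choosing where the ones go — which is exactly the connection Joel is pointing at.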

 

01:28:07 [BT]

Yeah, I think it was Adám as well, when Stevan Apter was talking about that, who was saying that at some conferences his dad was at, people would be walking around trying to figure out ways of using inner product with different primitives, and come up with what they could do, yeah.

 

01:28:24 [JK]

Exactly! When he said that, I said to myself, "Yeah." But I'm saying, just think about it: if you can do indexing effectively — indexing is an inner product. I don't know. There may be some deep thing in there, I don't know. Also, the way that a lot of proofs are done in APL: well, you start with nothing. You know, the concept of an empty something is a fundamental idea. You can derive, you know, +/⍳0, right? You can do that by some proof that involves starting with, you know… I don't want to get into it. It's too late. Anyways. But look, if we want to talk, we talk again.
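The +/⍳0 Joel mentions is a plus-reduction over the empty vector, which yields the identity element of + — the "start with nothing" convention that makes such proofs go through. Python's reductions follow the same convention:

```python
import math

# Reductions over empty data return the operation's identity element.
print(sum([]))        # 0 is the identity of +   (APL: +/⍳0)
print(math.prod([]))  # 1 is the identity of ×   (APL: ×/⍳0)
```

This is why folding an empty structure is well-defined rather than an error: the result is whatever value leaves the operation unchanged.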

 

01:29:05 [JK]

By the way, people that you should talk to: Adam Jacobs. Seriously, you should talk to Adam Jacobs. And you should talk to, maybe, John Earnest. OK? Yeah, he's chicken farming out in, I don't know, Washington state, but he is young — he's a young man. He's somebody who came to this on his own.

 

01:29:27 [CH]

Interesting. Stephen, you have a last comment or a question?

 

01:29:30 [ST]

T-shirt story.

 

01:29:31 [CH]

OK.

 

01:29:33 [ST]

It's 1976. I was on the top deck of the number 13 bus in London, and I was wearing an IP Sharp Associates T-shirt. As I stood up to go down, the back of my T-shirt becomes visible — no one can see it when I'm sitting down. But on the back of the T-shirt is the riffle shuffle expression; [54] it exactly describes how you take a deck of cards, cut it in two and shhhh… so they are all interleaved. And as I descended the stairs of the bus, this voice behind me in an American accent says, "look, Martha, another APL expert!"
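The perfect riffle shuffle Stephen describes — cut the deck in two and interleave the halves — might be sketched like this in Python. The T-shirt's actual APL expression isn't quoted in the episode, so this is only an illustration of the operation:

```python
def riffle(deck):
    # Cut the deck in two and perfectly interleave the halves,
    # taking one card from each half in turn (an "out-shuffle").
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for a, b in zip(top, bottom):
        out.extend([a, b])
    return out

print(riffle([1, 2, 3, 4, 5, 6]))
```

In APL the same interleave is a one-liner over the reshaped deck, which is what made it fit on the back of a T-shirt.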

 

01:30:12 [CH]

Awesome, I think that's there's no better way to end it than that. So yeah, Joel, thank you so much for coming on. I've been looking forward to this conversation for you know, two weeks, basically, since Stevan recommended you, and I know our listeners are going to absolutely love this conversation, so hopefully at some point we'll be able to have you back. And tell any of the stories that we weren't able to get to today. But yeah, thank you so much for taking your time to come on and chat with us.

 

01:30:36 [JK]

It's been a pleasure.

 

01:30:37 [CH]

Awesome, and with that we'll say happy array programming.

 

01:30:39 [all]

Happy array programming!