In this episode I speak with Peter Suma of Applied Brain Research about Nengo, a complete brain maker that allows you to develop and run models with deep learning, online learning, static weights, simple linear neurons, complex spiking neurons, and everything in-between.

Also features content covering tools for working from home, two-player games, a “Hello World” for the modern era, and much more!

Transcript

Chris: [00:00:00] Welcome to the Weekly Squeak, your weekly geeky squeak with me, Chris Chinchilla, back for another week. I have an interview this week with Peter Suma of Applied Brain Research, talking about Nengo, their quite fascinating AI model and brain simulation project. I haven’t had a chance to get hands-on with it myself, but as you’ll hear from the interview, it is quite fascinating and I hope you find it interesting too. [00:00:37] I hope you are all safe and sane out there. I am not going to explicitly mention again what I’m talking about, but I’m sure you know, and I have a few links that allude to the situation; I want to keep this a free zone where you can enjoy geeky wonder without having to worry too much. So, all that said, let’s get straight to my links for the week. [00:01:04] Firstly, an article on a website called Sifted, actually by someone we do know, Torben, controversially titled “Berlin is crap and no one is talking about it”. This is directly in reference to startups, in case you’re wondering if it’s more general, and Torben is fairly scathing about the things that make Berlin not that great for startups. [00:01:27] As much as people may think Berlin is great to live in, that’s mostly only true for employees. Obviously employers are people too, but looking at it from that perspective, for employees it’s often better than for employers, should we say, because of workers’ rights and things like that. [00:01:47] One of the main negatives he cites is the limited, and fairly conservative, investment pool. He also mentions there is a fairly limited pool of talent for senior hires, and I know many companies who have witnessed this and often have to go fairly far afield to find senior people who are actually worth hiring for roles that need any form of experience. [00:02:12] For juniors it’s mostly fine, and of course juniors can become seniors in time, which is another point to make here. But it’s an interesting point that I often hear, and often seniors are attracted elsewhere, where they can get better or higher wages (better and higher not necessarily being the same thing, but that is maybe best saved for another discussion). [00:02:33] And then finally, the point that everyone always expects: German bureaucracy is a pain, especially painful for people running businesses; for employees it’s better, but still a bit of a pain. It’s an interesting article, and you can find the links in the newsletter version of this podcast. There are a lot of fairly negative comments on the article, mostly relating to the fact that Torben didn’t work for a particularly large company, [00:03:04] that he doesn’t really know what he’s talking about, that he couldn’t attract people because he wasn’t attractive to them, et cetera, et cetera. I’ll let you make up your own mind. They always say don’t read the comments, but some of the comments may be valid and some I don’t think are fair. Still, if you have an opinion on Berlin being good for tech or not, have a read and let me know your thoughts at chrischinchilla.com/contact. [00:03:24] Next, this is an article from Mary Jo Foley, but actually widely reported. Why?
Well, widely reported, but it kind of quietly snuck under the radar, I guess, in a lot of other news, and this is a point people make sometimes: when the news is full of other things, other stories sort of vanish into the unknown. [00:03:49] And this is the story of GitHub, slash Microsoft, buying npm, the package manager for JavaScript. Very widely used, but I guess it was probably struggling financially, so for them this was a very obvious pairing. It does start to make some people worry that there are too many eggs in one basket. [00:04:09] Too many eggs in one corporate basket, too many developer tools owned by one company, albeit a company that is largely now in developers’ favour, but still, this is somewhat concerning. Also somewhat concerning is that very few people seem to have much of an opinion on it, which is strange in itself. In terms of integration, the options will probably get better and better. [00:04:29] I’m wondering if GitHub and npm will just sort of merge into some kind of odd cohesive whole, shall we say, but it is one to watch, and one to watch out for, maybe, looking further forward. Next, an article from Charles R. Martin on The Overflow: an interesting historical look into the origins of the hello world demo application that is there in many, many code tutorials, where it came from, [00:04:58] and maybe a more modern version of it. Has hello world kind of run its course for developers trying to learn something useful? He proposes a couple of pointers for a modern hello world. These being things like: actually organizing code into folders and maybe separate files, to introduce concepts like dependencies. [00:05:20] He also mentions adding version control by default, which is an interesting one actually, teaching people how to use version control straight from their hello world experience. He also mentions introducing the tooling around coding — IDEs and editors and setting them up, et cetera. [00:05:40] He also mentions build processes. I do start to wonder if this gets a little complex for people getting started. Some of the others are fine, but a build process, or, as he clarifies here, a repeatable build process, may be getting a little complex. I mean, running is something that I’ve definitely done in most of my hello world applications, and as a virtue of that it is built, [00:06:03] but I wonder if that’s getting a little complex. And then he says: then start coding. So I wonder if this is maybe too much for many people getting started; I’m not sure. Maybe it should be a hello world, and then a kind of “hello world recommended next steps”, something like that. I’d actually like to hear your opinion on this, [00:06:20] and maybe I should get him on as an interview subject to go a bit deeper. Do you think hello world is enough? What else would you add to a hello world application to make it somewhat realistic in the kind of modern world of programming?
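As a rough illustration of what those suggestions could look like in practice, here is a small Python sketch that scaffolds a “modern hello world”: the program in its own folder, a tiny test, and version control from the first commit. The file names and layout are my own illustration, not taken from the article.

```python
# A sketch of a "modern hello world": a project folder, a small test,
# and a git repository from the very first commit. Names are illustrative.
from pathlib import Path
import subprocess

project = Path("hello_world")
project.mkdir(exist_ok=True)

# The program itself lives in its own file inside a project folder.
(project / "hello.py").write_text('print("Hello, world!")\n')

# A minimal check, hinting at the "repeatable build/test" idea.
(project / "test_hello.py").write_text(
    "import subprocess, sys\n"
    "\n"
    "def test_hello():\n"
    "    out = subprocess.run([sys.executable, 'hello.py'],\n"
    "                         capture_output=True, text=True)\n"
    "    assert out.stdout.strip() == 'Hello, world!'\n"
)

# Version control by default, as the article suggests (assumes git is set up).
subprocess.run(["git", "init"], cwd=project, check=True)
subprocess.run(["git", "add", "."], cwd=project, check=True)
subprocess.run(["git", "commit", "-m", "A modern hello world"], cwd=project, check=True)
```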
Next, an article from Seth Kenlon on opensource.com: 10 open source tools for working from home, for whenever you might be working from home. [00:06:41] Some of these I definitely knew of already, things like Jitsi, although I hadn’t realized Jitsi was open source; I thought it was owned by Atlassian, maybe I’m wrong. Some you will definitely know: Jitsi, Nextcloud, Etherpad, EtherCalc. But then there are some others here that you may or may not know, some that I did not know and will definitely find very useful. [00:07:02] Things like Joplin, a sort of Evernote replacement, which I have been looking at myself recently, but also a collaborative drawing application called Drawpile, which I would really have been looking for and did not come across, so I look forward to trying that very soon and seeing how well it works. [00:07:19] He also mentions some Kanban tools, i.e. sort of Trello replacements, as well; Riot for chat, which I have used and which mostly works fairly well; and of course LibreOffice, Linux, some classics like that. Some of these actually took me by surprise; I had not come across them before and they look surprisingly good. [00:07:37] Some I’m not 100% sure of how fully open source they are, but yeah, if you are now working remotely and would like to stick down the open source path, take a look, try some of these and see how you go with them. Next, an article on MEL Magazine from Brian VanHooker: an oral history of how the movie WarGames inspired Ronald Reagan’s cybersecurity policies. I’m not going to go into masses of detail, so as not to give away the story here, but if you know WarGames, the movie, a hacker manages to accidentally hack into what he thinks is a game but is actually the weapons control system of 1980s America, [00:08:16] and inadvertently starts a war. Ronald Reagan, being a film buff and an ex-actor, watched this and came in the next day and asked his advisors: is this possible? A lot of his advisors were actually quite interested, and came back with: yeah, it actually could be — and should we do something about this? And they did. [00:08:35] So, bizarrely, fiction did influence fact in this strange case. Have a read through. I’d be interested to know if any of you older hackers out there remember some of these problems and could corroborate whether these facts are true or not, and whether it was actually possible, ignoring all the drama and storytelling, to accomplish this, or whether the article is wrong. [00:09:01] Please do let me know: comments on this article and any others at chrischinchilla.com/contact. Next, an article from National Geographic, not a publication I often cite, called “Dead Sea Scrolls at the Museum of the Bible are all forgeries”. This sparked my interest not just because of the history, but because, before we were all stuck at home, [00:09:21] I had actually just got back from Israel, where we went to the Dead Sea, were told the stories of the Dead Sea Scrolls, and were shown where many of them came from. So I thought, oh no, are all of those forgeries? No, that is not true. This is just a specific collection at this specific museum that has been shown to be forgeries, which is a great shame, especially as, judging by the name of the museum, it’s probably something that’s quite important to them. [00:09:43] The article details how something like this might be found to be a forgery, why they even checked in the first place, and what this could mean for the rest of the scrolls. Actually, in this case, most of the others have been shown not to be forgeries, so it’s just this batch. But the story we heard whilst we were in Israel was that [00:10:05] the Dead Sea Scrolls were quite a lucrative affair.
So creating forgeries, passing them around, and selling them to collectors and museums was quite a common thing to do, mostly because it was nomadic tribes who found them, so it was a great source of revenue for them. There are highly likely to be several sets of forgeries out there, [00:10:20] and I’m pretty sure I remember hearing that there have been previously found forgeries, so this is not necessarily the first, or indeed the last. If you like ancient-with-a-modern-twist detective stories, have a read. And finally: we’ve been playing a lot of games online the past couple of weeks. I might even do a round-up of options for that in the future, although I kind of feel like that has been done by a lot of people already. [00:10:49] But maybe I have some particular points I can bring, maybe even looking at it from a designer’s perspective — not sure. Anyway, these are games and role-play games for two people, to not necessarily play remotely, but play together as a family, a couple, et cetera. There are some great ones here: some that I definitely knew, some that I did not know, [00:11:09] and some I look forward to trying, especially some of the options in the role-play section, which are more the kind of build-together storytelling role-plays, which I really, really like the idea of, and I think I could encourage my significant other to join in on if we were to try some of those. If you are looking for some inspiration yourselves, get hold of copies whilst you still can, take a look at this post on [00:11:31] Gizmodo from Beth Elligan, and enjoy playing — and let me know your experiences of playing some of these games. [00:11:41] Now, my interview with Peter Suma from Nengo. This was a very interesting interview because — and I mean this in the best possible way, Peter, if you are listening — I have never interviewed someone who talked so much. I did not have to ask very many questions, and he gave some very, very thorough answers, [00:11:57] so you don’t actually hear very much from me as the interviewer. He answered most of the questions I would have asked very, very well, and it was quite interesting just sitting there and listening. So yes, I do reiterate: I do not mean that in a bad way, I mean it in a good way. I enjoyed the interview, [00:12:12] I found it quite fascinating, and I really look forward to getting my hands on the platform myself and trying it for the article that will accompany this interview in the future. But in the meantime, enjoy.
Peter: [00:12:24] Oh, okay. All right, fair enough. Well, look, thanks for the interest in the company. So, what do we do? Effectively, we do two things. Right now, what we do is optimize AI, and we use techniques that were discovered by my partner. I’m the co-CEO on the business side; the technical brains of the organization is Dr Chris Eliasmith. [00:12:55] Chris is the head of the Centre for Theoretical Neuroscience at the University of Waterloo. He’s an electrical engineer who got a fascination with: hey, how do brains work? I think it’s based around electrical circuits. So years ago he did a PhD in neuroscience and psychology, then met up with a physicist, and they got to thinking about how you could mathematically describe how a brain takes information from the outside world, [00:13:20] turns it into representations, somehow, in the electrical activity of brain circuits, and then transforms that through learning and memories. And then, how are the dynamics of how neurons work controlled in the brain? If they’re all spiking away, how do we not just get garbage, as opposed to useful computation and high-level behaviours? And so: the structure of the brain, the structure of circuits. [00:13:37] So we put together a set of mathematics with which you could create a mathematical model of how groups of neurons work to do that process — take information, build representations, learn — [00:13:56] and as well control their activities, so they can recall that information and produce behaviour.
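For background, the core of that set of mathematics, the Neural Engineering Framework, is usually written roughly as below in the published literature; Peter doesn’t spell the equations out in the interview, so treat this as a sketch of the standard formulation rather than a quote.

```latex
% Representation: neuron i encodes a stimulus vector x through its tuning
% curve, with gain \alpha_i, preferred direction (encoder) e_i, bias current
% J_i^{bias}, and spiking nonlinearity G_i:
a_i(\mathbf{x}) = G_i\!\left[\,\alpha_i \,\langle \mathbf{e}_i, \mathbf{x}\rangle + J_i^{\mathrm{bias}}\,\right]

% Decoding: the represented value is recovered as a weighted sum of neural
% activities, with decoders d_i found by least squares over the ensemble:
\hat{\mathbf{x}} = \sum_i a_i(\mathbf{x})\,\mathbf{d}_i

% Transformation: to make a connection compute y = f(x), solve for decoders
% of f instead of the identity; the resulting connection weights are
w_{ij} = \alpha_j \,\langle \mathbf{e}_j, \mathbf{d}_i^{\,f}\rangle
```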
So all of that is basically the foundation of the company. That math got turned into a piece of software called Nengo. Nengo is like a compiler: you can visually lay out circuits of neurons. [00:14:17] It was originally used only for brain research, for 15 years or so, to build simulations of circuits that were found in the brain. Chris eventually finished his PhD, went to Waterloo, and started the lab, and the lab was basically in the business of saying: okay, if this is sort of [00:14:36] correct, let’s go build models of brain circuits. Let’s go to papers where neuroscientists had speculated on the circuitry of the neurons they’d found — how they connect, how they behave, their activities — that would produce things like how you smell, how you see, how you process sensory information, how working memory works, [00:14:58] how vision might work, et cetera. There have been lots of papers over the years, from both the neuroscience community and the AI community, speculating on these circuits. So Nengo was used to very quickly develop models of those circuits, run them, and then compare them to the real data. When people put probes into worms and other animals in the lab, they’ll publish their data: [00:15:23] here’s the situation the animal was in, here’s the stimulus we applied, here’s what we saw as the spiking behaviour of the neurons. When you compare that to an artificial model of the same circuit — if your circuit design is what’s going on in the real brain, then, in theory, if the two match, you get some pretty good evidence that your circuit diagram is probably what’s going on in the biology. And as far-fetched as that might seem, [00:15:49] it worked way beyond anybody’s expectations for many years. The brain is extremely complicated and has many, many different circuits and many different computational systems, or algorithms, that it’s computing in different places, but they were able to explain a lot of the circuits found in the brain. [00:16:09] Now, let me say, that is reverse-engineering small parts of the brain one by one: the brain is 100 billion neurons, trillions of connections, and, as I said, uses many algorithms, not just one. So the work progressed, and then — this is going a bit fast — through the two-thousands, coming up to 2011, I ran into Chris. [00:16:32]
I was looking for somebody who was working on full-loop AI — something that sort of said: let’s take what we knew about brain circuits and build higher-order copies of it — because simple AI, backpropagation, is powerful, but it certainly doesn’t seem to explain how intelligence arises. [00:16:52] So anyway, the events became: in about 2015, Intel approached us, as we were trying to commercialize Nengo — not for brain circuits anymore, but to build better, more intelligent AI. How do you integrate a vision system with a working memory system, with a decision system, with a motor system? [00:17:14] How could you make a smarter robot by trying to copy how the brain does it, using Nengo to build each of those circuits and then integrate them? And effectively — all of this relies upon neurons being computed with — IBM had built a chip called TrueNorth, and Intel saw that, and the bigger backdrop of brain chips. Why would anybody want to build a chip that has a sheet of neurons right on the hardware? [00:17:43] The idea was, and still is, that neuromorphic chips, in theory — well, Moore’s law is breaking down; you can’t just make the transistors smaller to get more throughput. So maybe you do what brains do. Looking at a brain sort of says: each neuron is actually kind of slow, way slower than most of the computer chips we work with, [00:18:06] but there are billions of them, and they all compute at exactly the same time, in parallel, but not necessarily synchronized. They’re quite independent; they do use certain mechanisms for synchrony, but effectively all the pieces of the brain are always on and working together at the same time. [00:18:24] So the thought was: if you parallelized the problem massively and got rid of the clock — a von Neumann chip works on a clock tick and every part of the system works to that tick; GPUs have multiple clocks, but they still have clocks — was there something special about going massively parallel, [00:18:44] massively asynchronous, removing the clock, and then using this idea of a neuron structure to compute with? If we did that right on the hardware, would that make really, really fast AI? And, by the way, is there something special and secret here, where we’re going to get a big advance algorithmically if we do these things — is the future of AI an architecture that looks like a brain in this way? [00:19:06] So 2015 comes around — we had started the company around 2014 and had been doing some research contract work for places like DARPA in the States — and Intel showed up and called us down to a meeting, [00:19:28] and said: look, we’re going to build this chip. We’re doing it in Intel Labs as an exploratory research project. IBM’s got their TrueNorth, and we’re going to build a chip like this, except we’re going to do way better. And they did: their chip is way better. But — so, we get a chip. Effectively, once they build a chip, you look at it: it’s got 128,000 neurons, and they’re each independent. [00:19:51] If you stimulate any of them, they start spiking. You’ve got to figure out how to connect them. You’ve got to figure out how to make them learn things. You’ve got to figure out how to control their dynamics.
So, once you connect them and set one of them spiking with an input, in response to some stimulus — visual, sound, whatever — and you’re trying to do vision recognition or image recognition: well, if they’re all connected, they’ll just all spike away, [00:20:14] and then the whole thing goes to what we call saturation, and the chip’s output is basically garbage; it’s just a whole bunch of random spikes. So how do you connect them? How do you control them? It’s a symphony: you need a perfectly orchestrated symphony to flow through, there’s no clock, and you need them all to work together. [00:20:32] Well, it comes down to setting how they connect, the timings of all the signals, and the ways in which each of the neurons processes its input. And that’s basically a set of millions of parameters that the chip needs to be given. [00:20:49] It’s one thing to produce a chip, but it’s basically a blank slate, and you need to know how to do all of that. Well, that’s exactly the same problem Chris had to solve when he created the mathematics, called the Neural Engineering Framework, and the software, called Nengo. And Intel realized that early on: they’d talked to all the labs in the world, [00:21:08] and after about eight months they came back and said, you guys are the best there is, if not the only, solution to this problem, so let’s work together. And we did, from 2015/16 till today, making Nengo the visual compiler for their research chip. In addition to that, we’ve done the same thing with the Braindrop chip at Stanford [00:21:30] and the SpiNNaker chip in Germany, and an increasing number of others; we also support CPUs and GPUs. So today you can build an AI model visually, and you can make it everything from deep learning to reinforcement learning to liquid computing — all manner of neural modelling — and you can have that model, with the press of a button, run on your CPU, [00:21:55] on your GPU, or, if you have access to one of these new research chips, on a neuromorphic chip. Commercially speaking, what that means today is: if and when neuromorphic chips are released — which looks to be around 2021 for the SpiNNaker chip, and there are a couple of others in the works as well; [00:22:17] for Intel’s we don’t know, it’s a research project out of Intel Labs — when they release, you’ll have a choice of programming the chip directly, very low level, or you can download Nengo and program it visually. And, something no one else has: you can actually debug visually, which is extremely useful if you actually do this. [00:22:35] So, Nengo is available at nengo.ai. It’s a compiler for dynamic, temporal AI models.
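To give a feel for what “laying out circuits of neurons” looks like in script form, here is a minimal sketch using the open-source Nengo Python API (the same models can be built in the visual editor). The toy model — squaring a sine wave across a connection between two spiking ensembles — is my own illustration, not one Peter describes.

```python
# Minimal Nengo sketch: two ensembles of spiking neurons, with the connection
# between them solved (per the NEF) so the second ensemble represents the
# square of the first. The toy model is illustrative, not from the interview.
import numpy as np
import nengo

model = nengo.Network(label="square a sine wave")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # a flowing input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents the input
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents f(input)

    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # decode x^2 across the connection

    probe = nengo.Probe(b, synapse=0.01)                 # filtered, decoded spike output

# The reference simulator runs the same model on an ordinary CPU.
with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe].shape)  # (timesteps, 1): decoded estimate of sin^2 over time
```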
And what I mean by that is: these kinds of systems do the best job on the same kinds of problems brains solve, which is flowing, temporal data. If you’re processing visual input from a camera, or speech, or sound, or sensor data, or control data in an autonomous car — [00:23:03] all of these things are characterized by data flowing in a constant stream, which is a temporal stream. What that means is that, if you look at a video, the data you get at any instant is related: it depends on the data that came before it and the data that flows after it. It’s not like processing a stack of unrelated images, where each image is a brand new thing [00:23:27] that isn’t related to anything else — and if it is related to the thing in front of or behind it, that’s by chance, not because it’s a consistent stream of sensor data. So in these kinds of AIs, each neuron is stateful: it holds a certain amount of state over time. The way you connect them respects the idea of state, and the algorithms they’re most efficient at processing are ones that are continuous in time. [00:23:51] Like your retina: you may know that when you look out at the world, you only have about a million fibres coming out of the back of your eye, so you’ve got pretty low resolution, if you want to think of it that way. But at the end of the day, what’s going on there is that you’re processing the change in the visual environment, not constantly reprocessing all the background. [00:24:13] That’s efficient, and evolution discovered it. So we use that; we call it temporal sparsity. Our systems are better and more efficient at processing constantly flowing temporal data streams, because these neural systems learn, respect, and take advantage of the fact that the signal is correlated in time. [00:24:31] It has a relationship, so the system ignores duplication in the signal and focuses its computing effort on the things that change — making it more sensitive to change, more accurate in time, and using less energy to process a constantly flowing video, by not reprocessing the background information, for example, over and over and over again. [00:24:49] It’s how some TV sets work: they transmit the delta between frames, not the whole frame every time, right? So, effectively, Nengo is this brain circuit modeller, which solves the same problem that is present in programming neuromorphic chips. As a consequence of that, and because Nengo supports CPUs and GPUs, et cetera, you can put one model in and run it on all these kinds of chips — existing chips and ones that are not yet released. And because it does a better job of processing temporal signals — [00:25:19] all these sensor, video, et cetera, signals — you end up with a visual platform to build AI models: build them once, run them on all kinds of hardware, and have them, if they’re processing temporal flows, be way more efficient than traditional AI. So, in the parlance of AI programmers, we do use weight sparsity; [00:25:42] we use all kinds of mechanisms to compress our networks, like everybody else does. But then, in addition, we compress — or sparsify — the temporal processing as well. Secondly, there are the visual and performance advantages. And then, lastly, these platforms support deep learning, reinforcement learning, liquid computing, all at the same time, [00:26:06] so you can build these multiple, or full-loop, AI systems more easily. For example, we build advanced drone brains that have a visual system looking for things like defects in infrastructure, connected to an adaptive motor control system, connected to an adaptive path-planning system, and those three subsystems run together on a test, or research, pre-release neuromorphic chip. [00:26:35] They’re going to fly the drone way longer than if you tried to do that the existing way, with something like an edge GPU; that just sucks way too much power to even do just the visual task. So you’re not only compressing — the benefits of compression aren’t just the energy.
It shows up in the actual flow through the system, because it does less work to achieve higher-order behaviour. [00:26:56] So you get more adaptive and responsive autonomous intelligence, and it allows the autonomous systems to function much longer. So the net of all this is: where do you use this stuff? Basically, if you have any form of constantly flowing data that you want to push through an AI network, or multiple networks, to produce some categorization, classification, control loop, et cetera — [00:27:22] and typically those are situations where you care about power and you care about response time — those situations are typically found in devices that operate in the real world, in a behavioural loop locked with the world: autonomous cars, drones, robots, sensors, et cetera. And that’s where we’re finding our niche. [00:27:42] Beyond that, the last thing is that all of these systems support state from the ground up. Each neuron is stateful, unlike what you’ll find in traditional deep learning, so these systems also lend themselves to temporally based learning: they do a very good job of learning at the edge. [00:28:02] So the other big deployment area is taking Nengo, building models with it — converting them, frankly, from TensorFlow or from some other framework — and then pushing that model out across multiple hardware. It’s a very good platform for IoT, in the sense that you have lots of edge platforms and hardware, but you want to have one model, [00:28:26] and when you deploy it you want it to run and, potentially, depending on what you’re doing, you want online updating and learning at the edge, which you propagate back to the central model. So we’re working with some companies to integrate Nengo into their IoT backbone, to give them the visual compiler, the efficiency at the edge, low power, the multiple hardware support, plus the online learning at the edge. [00:28:50] Other than that, those are the uses of the products. We also do an awful lot of research for corporations and the military on AI algorithmics, because in studying the brain, it’s really the case that you very quickly get pushed out of the idea of one network doing classification and categorization. [00:29:07] You’re immediately into an environment where you’re trying to understand the integration of multiple networks, and its dynamic and temporal integration. It’s all flowing, it’s all signals thinking, not the static thinking that you have in traditional AI, and it’s all stateful, both at the neuron level and the network level. [00:29:25] Taking inspiration from that, you immediately find it quite natural to look at integrating vision plus adaptive control plus path planning in a flowing, dynamic way. So we also do work, for example, in robotic scene understanding; we do work in lifelong learning, which is making AI less brittle, taking ideas from the parts of the brain that are responsible for why the cortex isn’t as brittle as traditional AI; [00:29:55] and we do an awful lot of work with some security agencies around something called dynamic computing. Those research projects are advancing our algorithmic understanding.
And then we use that advanced understanding to put back into Nengo, and then into the applications we develop — whether that’s for a phone company, making a super low power edge speech system two to three times more efficient than the state of the art using our techniques, [00:30:23] or the adaptive drone projects. And as well, one project that’s public — we’re working with BMW, they’ve been talking about it — is to dramatically reduce power usage: we reduced a vision test system’s power usage by 53 times. [00:30:48] A test car, or an autonomous car, will have 30 to 50% of its total power budget going to compute, and for the AI workloads in there, the initial tests suggest we can cut that AI workload down to 2% of its original workload. That means dramatically different sticker energy efficiency in the showroom in future, [00:31:10] with neuromorphic chips and Nengo powering the visual system, et cetera. So I think I’ll stop there: tools, applications — and you’re recording, you told me you’re recording it, so that’s all saved anyway — applications, and research for the military.
Chris: [00:31:26] Quite the intro.
Peter: [00:31:27] Yeah, well, I’m sorry. The problem with this is that it’s all connected; it’s a whole new platform. [00:31:35] And you’ll find, when you play that back — and I’ll probably type it up — that the true understanding comes from seeing everything: both what it is and the layers of it, chip to software to application to features to benefits, and being able to run up and down that and see it differently. Similar to how you’ve got the paradigm today of deep learning and GPUs, but it’s got flaws. [00:31:58] This is better in some ways — not in every way, but certainly in a lot of ways. So I wanted to get that across.
Chris: [00:32:06] I think the one thing that I had in the back of my mind throughout that explanation was — so, I’m looking at the Nengo site and I see you’re pushing it as something that’s easy for people to get started with. But I’m just trying to understand: how would someone use this? Do I have to have access to one of these particular chips, or can I run an emulated version, albeit with lower performance? [00:32:34] Like, can anyone try it and go for a particular use case, or does somebody have to have access to something special?
Peter: [00:32:40] No, no, you absolutely don’t. So, look — if you’re trying to come into this world, Nengo makes it easy in the sense that it’s visual. What the website perhaps doesn’t say enough is that there’s a getting-started page and we take you step by step, and, [00:33:01] like all development platforms, we encourage you to immediately go and download examples instead of coding from a blank screen. That’s why the getting-started flow is to jump into an example quickly, because it just makes a lot more sense to teach that way, and that’s the same for TensorFlow or any other platform. [00:33:18] The one thing that is challenging is that everything in neuromorphic is a signal.
So the kinds of backgrounds that tend to understand this more easily are people who come from signal processing, DSP, something like that, or any form of neuroscience; whereas for traditional old programmers like me, it took a while, [00:33:40] and I won’t pretend otherwise: it can, for certain backgrounds, be difficult. You have to let go of the linear thinking that you find in traditional von Neumann computing — step one, the clock ticks, events happen, operations are done, the clock moves forward. [00:33:56] Here it’s more like: I’ve created all of these neural ensembles and signals are flowing everywhere — oh my God, what do I do, how do I control this? The Neural Engineering Framework will definitely help you there, but you do have to stay close to the examples until you kind of rock it. [00:34:12] So, to step one of what you were asking: yes, there’s a bit of a learning curve; it’s a paradigm adjustment. But once you’re through the looking glass, you understand the benefit of it, and you start to build things like dynamic control systems. One of the examples is the brain of a Pac-Man-like creature, [00:34:29] and if you stand back from that, it’s about 25 lines of code and you go: wow, this thing’s got pretty cool dynamic control — avoiding predators, seeking food, all of that sort of thing. You step back and think: that’s a lot of functionality for a small amount of code. That’s one of the benefits. [00:34:47] Moving on to your second question — where do you start with this? The whole idea of Nengo is that you can build a model and just run it on your CPU. You just download Nengo, and it’ll pick up and use your CPU. If you have a GPU, you can build the exact same model and it’ll run a little bit faster on your GPU. [00:35:05] One cautionary tale there, though, and an important, instructive point about the general future of AI networks: if your model is a highly recurrent network, as we find in the brain — brains are based around processing state, so lots and lots of signals get fed back. [00:35:31] One of the things you want to do if you’re processing state, of course, is send signals back: you classify an image, and you want to send the classification from a few milliseconds ago back to the beginning of the loop, because that may inform what you’re looking at now. Because brains are used to living in a world of continuous flow, like I said earlier — signals that are related — [00:35:47] it’s not feed-forward flow, it’s massively recurrent flow. Well, if you have massively recurrent flow, GPUs don’t do that well with it, comparatively; sometimes you’ll find those networks run faster on the CPU. But regardless of what you want to use it for, you can easily test this: [00:36:04] you build the model once and run it on the CPU, then literally click an option in the UI and run it on your GPU. If you have a neuromorphic chip, you can run it on that. But you can also tell Nengo to run an emulation of the Intel Loihi research chip, for example, and it will use your CPU or GPU but emulate the Loihi chip. That’s used, basically, by people heading to deploy or research [00:36:28] with the Intel chip, or the SpiNNaker chip, same thing: it gives you an idea of what you’re going to see performance-wise, and, in emulated time, it’ll show you how that’s going to go.
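That “build once, choose where it runs” flow looks roughly like the sketch below. The backend package names (nengo_dl for GPU execution via TensorFlow, nengo_loihi for the Loihi chip and its software emulation) come from the wider Nengo ecosystem rather than from anything Peter names here, so treat them as assumptions.

```python
# Sketch of swapping simulator backends for one and the same Nengo model.
# Backend package names are assumptions from the Nengo ecosystem docs,
# not something stated in the interview.
import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons=200, dimensions=2)
    probe = nengo.Probe(ens)

# 1. Reference backend: runs the model on the CPU.
with nengo.Simulator(model) as sim:
    sim.run(0.5)

# 2. GPU execution (assumed): the nengo_dl backend runs the same model
#    through TensorFlow, so it can use a GPU if one is available.
# import nengo_dl
# with nengo_dl.Simulator(model) as sim:
#     sim.run(0.5)

# 3. Neuromorphic target (assumed): nengo_loihi targets Intel's Loihi
#    research chip, falling back to a software emulation of the chip
#    when no hardware is attached.
# import nengo_loihi
# with nengo_loihi.Simulator(model) as sim:
#     sim.run(0.5)
```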
But in general, no, you don’t need to have those chips; that’s why we built the software emulator for people. [00:36:47] If you do have one of those chips, and you’re running a certain kind of network, especially a heavily recurrent or temporal network, you’re going to get big efficiency gains. We’ve published papers on arXiv for keyword spotting; we haven’t published the vision system work; there’s adaptive control work. [00:37:08] All of these things are showing 50 to a hundred times — and Intel has test cases showing up to a thousand or more times — efficiency gains over GPUs. It all comes from this idea of temporal sparsification, the massive parallelism, and the lack of a clock, the asynchronicity. And there are a couple of other features of these neural networks that mean, for temporal tasks, they do a way better job than CPUs [00:37:33] and GPUs. So that’s why we’re trying to get ahead of the curve and get people into the space. But today, most people are using it with us mostly for the algorithmic benefits: it’s a more visual, easier way to build a dynamic system, for those drone brains, or to study robotic scene understanding. More of a development platform advantage is what it brings today, [00:37:58] plus the multi-hardware platform for IoT devices. So you get the algorithmic benefits of power, in temporal sparsity, without using a neuromorphic chip. And you get more of them if you take that same model, flip the switch in Nengo, and say: go run it on my neuromorphic. If you have one, you’ll get even bigger power benefits.
Chris: [00:38:15] And also, to go alongside Nengo, I see you have the Brain Board. I think mostly for…
Peter: [00:38:20] It’s a test thing, yeah. So the Brain Board — it has to be clearly understood right off the bat — it’s small, and all we actually sell is the bitstream. We probably shouldn’t have named it Brain Board but “brain bitstream”, except a lot of people don’t know what a bitstream is. [00:38:35] If you buy a 150-odd dollar, depending on where you are, or $200 FPGA development board from Altera or Xilinx, then for $99 — or $79 academic — you can buy from us an electronic file called a bitstream, and you download it and load it onto your FPGA board. What it does — FPGAs, or field-programmable gate arrays, are basically, [00:39:09] think of it like a tic-tac-toe board. It’s like a mesh, or grid paper, and at the intersection of each vertical and horizontal wire, if you want to think of it this way, is a thing called a logic unit. A logic unit is a programmable piece of hardware that you can turn into whatever you want. So the idea with an FPGA is that you can program it to be whatever circuit you like. [00:39:30] Instead of a standard computer chip, which has its gates and connectivity burned in — so you know what it does: a CPU, a vision processing chip, or whatever it is — an FPGA is a generally programmable thing where you can emulate [00:39:46] any hardware design that you want.
So all we did was take the resources of the FPGA and turn them into — so that when you look down upon it, after you load our bitstream, it has reconfigured all the logic units into emulated neurons. [00:40:08] Once you program an FPGA with our bitstream, it looks, to the computer that’s programming it, as though it were a neuromorphic chip — a simple version, conceptually speaking only, of a Loihi or a SpiNNaker, one of the neuromorphic chips. The reason for doing that is that if you’re running an AI model, you’re now running it directly on the hardware, on the FPGA, and you do get a power reduction and throughput benefits. [00:40:35] The one downside is that we did it not as a production platform but as a Raspberry Pi type of thing, so it only supports models with up to 30,000 neurons in a single ensemble. It’s useful for things like keyword spotting or adaptive control, or very small control problems. [00:41:00] It’s a test thing; it’s mostly used in academic labs. In Israel it’s taught in courses at some universities, and in Korea as well. There are some professors who have 15 or 30 of these things in their labs: they bought the boards and burned the bitstream onto them, and they have Nengo, and they’re basically showing the students: okay, put the board into the mobile robot [00:41:26] and then construct your small vision sensing algorithm, or sound recognition, or a little controller. They’re trying to give them a feel for what it means to have embedded neuromorphic networks, and teach them the principles of neuromorphic, time-based, temporal computing. That’s what it’s for. [00:41:45] It’s not big enough to do full vision, full speech, or any of the more interesting, larger tasks. For those, you need your CPU, GPU, or neuromorphic chip.
Chris: [00:41:54] Yeah, for sure. Okay, a final question then: what’s planned for the next six months, say?
Peter: [00:42:02] For the next six months — and I’m going to extend that to a year, if I can — in the near term, we are integrating Nengo with IoT systems, cloud-based, large-scale IoT systems, to give people the benefit of: I want to deploy this temporal AI model, [00:42:28] I want to put it on multiple hardware, I don’t want to have to think about that, and I want it to run with less power. That works across CPUs, GPUs, and microcontrollers, and we’re extending Nengo’s support for things like various forms of microcontrollers and various Arm chips and whatnot, so that we become truly, massively multi-platform — we have about nine platforms supported today, and we want more. [00:42:51] So it becomes a low-power, temporal, edge AI visual creation, compilation, and deployment suite. If you’ve got sensors out in the field and you want to put speech recognition on all of them, and they can be from different manufacturers, et cetera, Nengo can take your [00:43:11] TensorFlow or other design model, import it, spit it out the other end, and make it run on each of those platforms seamlessly for you, and it’ll run at lower power. So it becomes a piece of infrastructure and a tool for developers, so they don’t have to write ten versions of the software. That’s the next key feature.
[00:43:28] And then, a year from now, when the neuromorphic chips arrive — the first ones, we’re hearing, should roll out in 2021 — if you want to program one of those, and you’ve got existing TensorFlow models you want to convert over to neuromorphic and run to get the power benefit, and embed them in devices and push the GPU out, because this is going to be 50 or a hundred times less power, [00:43:53] then you need Nengo for that. At its core it’s temporally based; the TensorFlows of the world are not, and it’s going to be a huge, huge thing for them to support neuromorphic chips. Ours is visual from day one. The other thing I would caution people about is that it took [00:44:11] 20 years to get there: that dynamics problem is a big problem — how you control those spiking neurons — and Nengo does it better than anybody else. We’ve seen lots of people in industry, doing some of the first pre-release work with neuromorphic chips at major companies, try multiple different ways to do it and come back to us and say: Nengo’s the only one that’s actually accurate and rock solid and actually works. [00:44:31] And I think people will figure out over time — it’s the usual problem — that there are a thousand, or a million, little things in there that were learned painfully over many, many years, and that Nengo does incredibly well. So supporting those developers as they want to take advantage, once neuromorphic chips have hit the world, that’s a major theme. [00:44:47] And then, in the background, some of the really exciting stuff for us is the very advanced dynamical and autonomous computing systems that we’re very lucky to have been selected to build for a number of our clients, whether on a drone or in the abstract; we’re working on some fundamental advances for AI. One last thing I’ll point out — [00:45:04] and there’s a lot going on; if you put a dozen PhDs with 28 years of experience each into a room, you’ve got a lot of good stuff — there’s a very exciting thing, and this is actually very important. We released a paper at NeurIPS this year which [00:45:27] fundamentally changes the state of the art around temporal signal learning. We invented something called the Legendre Memory Unit — Aaron Voelker and Chris Eliasmith; it was Aaron Voelker’s thesis, and he’s one of the PhDs at Chris’s lab. The Legendre Memory Unit was based off the observation of how time cells in human brains work. [00:45:51] Basically, if you process speech, or vision, or pretty much anything time-series related — not language processing, that’s something they now use transformers for, but for time-series processing: vision data, speech, basically all the stuff I mentioned — today you use something called a long short-term memory, an LSTM. It’s a neural network invented back in about ’99 — the earliest work goes back to, I think, ’93 — and it processes flowing information and learns [00:46:23] the patterns found in it, and then you can classify: oh, that’s this word, that’s that word, et cetera. The LMU replaces that. It uses a set of mathematics based on Legendre polynomials and that characterization of the signal: you break down what’s in the signal and represent it using a series of curves that are more efficient, and they’re actually modelled after how we think the brain does it.
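For readers who want the mathematics behind that description: in the published LMU paper (Voelker, Kajić and Eliasmith, NeurIPS 2019), the memory cell keeps an optimal compression of a sliding window of its input by integrating a small, fixed linear system whose solution is expressed in Legendre polynomials. The equations below are the standard form from that paper; they are background rather than something spelled out in the conversation.

```latex
% The LMU memory m(t), of dimension d, integrates the input u(t) over a
% sliding window of length theta via a fixed linear time-invariant system:
\theta\,\dot{\mathbf{m}}(t) = A\,\mathbf{m}(t) + B\,u(t)

% with closed-form (non-learned) coefficients, for i, j = 0, \dots, d-1:
A_{ij} = (2i+1)\begin{cases} -1 & i < j \\ (-1)^{\,i-j+1} & i \ge j \end{cases},
\qquad
B_i = (2i+1)(-1)^i

% The window is then reconstructed from the memory using shifted Legendre
% polynomials P_i -- which is where the unit gets its name:
u(t-\theta') \approx \sum_{i=0}^{d-1} P_i\!\left(\tfrac{\theta'}{\theta}\right) m_i(t),
\qquad 0 \le \theta' \le \theta
```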
[00:46:50] And you end up being able to learn a temporal signal not just over something like a thousand time steps, which an LSTM can do, but over a hundred thousand time steps. It’s about a million times more powerful than the state of the art. Some of the first calls we got were from hedge funds, because now you can look at temporal data and have it learn patterns that humans can’t see, but that extend over much, much longer periods of time. [00:47:19] We have been able to produce speech recognition systems that are multiples of the efficiency of existing speech recognition — these aren’t released yet, they’re coming though — at dramatically less power. And it does all of that while being 60% smaller in the number of weights. So it’s more accurate, [00:47:41] it models a larger timeframe, it’s 60% more space efficient, and it’s up to 50 times more power efficient. It’s a big, big deal. The paper is up on arXiv, it’s a patented algorithm, and we’re basically now going on to rebuild speech, vision, and other systems with it to capture those efficiencies.
Chris: [00:48:03] And that was my interview with Peter Suma. I hope you enjoyed that. Now, of course, there’s not much travel happening. I am definitely trying some remote events, and I am trying some of my first streaming and new podcast ideas; I just started doing some test recordings for a new storytelling podcast today, which was fun. [00:48:22] I am doing a live stream on Thursday of this week and, I mean, the podcast is out just in time. So to clarify, that is on the 26th at 1400 Central European Time; I’ll be live streaming me working on the new Ethereum wiki. So that could be interesting for some of you, and many more ideas to come soon on that front. [00:48:46] I will hopefully be setting up some documentation office hours soon as well. And, yeah, if you feel like joining me for an online game or two, then let me know and I’ll try and add you to some channels when we do organize them. In the meantime, I will definitely be working on some more consulting projects and things. [00:49:05] I don’t necessarily have anything to add right now, but there are definitely some things in progress that I will announce very soon, or in the next few weeks. So plenty happening, just all happening from the safety and, um, sanctuary — yes, whatever, I can’t quite think of the word — of my home. I hope you’ve enjoyed this Weekly Squeak. [00:49:23] Please do reach out to me if you have anything to say. Please rate, review, and share wherever you found this podcast — very, very appreciated. And until next time, take care, stay safe, and thank you very much for listening.