Roundtable: Neurotech vs Neuroscience

Full podcast transcript

Doug Clinton: Welcome to the Loup Ventures Neurotech Podcast. I’m Doug Clinton. On this episode we have a special round table discussion with several guests including Rob Edgington, Head of AI at Paradromics, one of our portfolio companies, Konrad Kording, Professor at the University of Pennsylvania, Vikash Gilja, Assistant Professor at UCSD, and Avery Bedows, our own neurotech specialist. On the round table we primarily explore the different goals of neuroscience and neurotechnology, or neural engineering, and how those differences trickle down into data, data science and the tools that we use to achieve our end goals. And with that, I bring you our esteemed round table.

Doug Clinton: So, thanks everybody for joining us today. What I want to do for the audience’s sake is have everyone introduce themselves so that they can hear your voice and know who’s speaking. So, I’m going to go right in the order of how I have it lined up here on Zoom and Rob, you’re first in line. So, Rob, introduce yourself and what you do at Paradromics please.

Rob Edgington: Sure. Thanks for having me. My name is Robert Edgington, I’m the Head of Artificial Intelligence at Paradromics.

Doug Clinton: Excellent. And Konrad.

Konrad Kording: My name is Konrad Kording. I’m a Professor at the University of Pennsylvania working between neuroscience and bioengineering.

Doug Clinton: And Vikash.

Vikash Gilja: I’m Vikash Gilja. I’m an Assistant Professor at the University of California, San Diego, also working in the intersection of engineering and neuroscience.

Doug Clinton: And last but not least, Avery.

Avery Bedows: Avery Bedows. I’m the neurotechnology specialist at Loup Ventures.

Doug Clinton: All right. So, what we wanted to talk about today with this round table is a few different topics. One of them is something I think we’ve started to hear from the community: the distinction, or talk of the distinction, between a defined neural engineering application and a basic neuroscience project. I think some of the basis of the debate, from our perspective, is how much you really need to understand the function and the processes of the brain to deliver an effective neurotech application, as we would broadly call it. So, I’ll start with Konrad first and ask: how do you think about this distinction between a neural engineering application versus a basic neuroscience project?

Konrad Kording: So, I’ve written extensively about what it really means to do a basic neuroscience project. What we usually mean in neuroscience is that we want to understand how it works, and when we say how it works, we mean in terms of causality. We want to know how a nerve cell affects another cell, how a brain region affects another brain region, and ultimately how the interplay between all these pieces of the brain allows you to perceive and think and move. In that sense it is very complicated, because we need to understand that whole network with all of its interactions.

Konrad Kording: In my engineering work, the problem is much easier: I just want to figure out, based on the signals that we have, what the person is trying to do or what the animal is trying to do at a given point in time.

Avery Bedows: And Konrad, because this is such an important point, what is your operational definition of causality here?

Konrad Kording: I usually use what’s called counterfactual causality. What we mean by that is: if we have the network of all the interacting neurons and I take one of them and perturb it, maybe make it spike more, how would that perturbation affect what all the others do? It’s called counterfactual because we can never see both at the same time. You could say that what I’d ideally like to do is have two copies of the brain, one in which I do something and another in which I don’t do anything, and then causality is defined as the difference between those two. That is why us neuroscientists are so obsessed with randomized perturbations. If the perturbation is randomized, it happens at a random point in time, so I can take the average of what happens in the brain after those perturbations and compare it to what happens at all the other, random times.
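Konrad’s randomized-perturbation logic can be sketched numerically. Below is a toy simulation (all numbers are invented for illustration): we inject a known causal effect after each randomly timed perturbation, then recover it by comparing the average activity after perturbations against the average at matched random control times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recording: a downstream neuron's activity over time (arbitrary units).
T = 20_000
activity = rng.normal(1.0, 0.2, size=T)

# Randomly timed perturbations, each adding a known causal effect of +0.5
# to the next 20 samples -- the ground truth we hope to recover.
perturb_times = rng.choice(np.arange(100, T - 120), size=50, replace=False)
for t in perturb_times:
    activity[t : t + 20] += 0.5

window = 20

def mean_response(times):
    """Average activity in the window following each event time."""
    return float(np.mean([activity[t : t + window].mean() for t in times]))

# Counterfactual estimate: response after perturbations vs. after matched
# random control times ("what would have happened anyway").
control_times = rng.choice(np.arange(100, T - 120), size=50, replace=False)
causal_estimate = mean_response(perturb_times) - mean_response(control_times)
# causal_estimate should land near the injected effect of 0.5
```

Because the perturbation times are random, the control average is an unbiased stand-in for the unperturbed copy of the brain that we can never actually observe.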

Avery Bedows: Okay. I see.

Doug Clinton: And in terms of the actual application of a neural engineering project, something you touched on: just working with the signals versus trying to understand how everything works and fits together. How important is it, or at what level do you need to understand how things work, versus just understanding the signals from a neuroscience perspective?

Konrad Kording: So, the thing that makes usual neuroscience hard is the causality question: does neuron A do something to neuron B, or does neuron B do something to neuron A? When I do a brain machine interface project, the only thing that I really need to know is that both A and B can tell us about what that person, or animal, is trying to achieve. So, in that sense, I don’t need to understand the meaning; I might want to understand something about coordinate systems. So, if I want to get at how you move your hand, maybe to build a prosthetic device, I need to know where in your brain there is signal relative to the hand movement. But I don’t actually need to know: how is that signal calculated? How do we get there? Is this specific neuron making the decision that you want to move your hand, or is it just part of the chain of execution?

Avery Bedows: Right, but that’s for the decoding problem. If you were trying to do a closed-loop brain machine interface and stimulate, then you would probably need more of a model of how a group of neurons, or an individual single unit, is going to respond to some stimulation.

Konrad Kording: Yes, that is the big catch. If you want to close the loop, you need to understand what the neurons really do, what their meaning is. So, if you have a neuron that tells you how your hand feels versus a neuron that tells you what your hand should do, they will be very different if you want to close the loop. But that’s because then you’re trying to have a defined causal influence on the brain.

Doug Clinton: Let me give a chance for Rob and Vikash if you want to jump in with any thoughts to the conversation.

Rob Edgington: Sure. To me, it’s really a system identification problem when it comes to machine learning for neural decoding. We’re really just trying to map inputs to outputs, where the input is brain activity and the output is whatever we’re trying to control or convey. So, when we’ve been building our machine learning team here at Paradromics, we’re not even necessarily looking for neuroscience experience. It’s more of a data science problem or a control theory problem. So, I think we really don’t have to know much about the causality of where the signal is coming from. The current applications we’re going for in brain machine interface control don’t have a deep level of abstraction. So, the correlations that you can get from doing simple decoding with machine learning are quite faithful to the actual underlying representations that we’re trying to decode.
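Rob’s “system identification” framing, mapping brain activity to outputs without modeling causality, is essentially regression. Here is a minimal sketch on simulated data (the channel counts, noise levels, and the linear ground truth are all made up): fit a ridge decoder from binned firing rates to a 2D cursor velocity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 100 channels of binned firing rates decoded to a
# 2D cursor velocity. No claim about what any neuron "means" is needed.
n_samples, n_channels = 2000, 100
X = rng.poisson(5, size=(n_samples, n_channels)).astype(float)
W_true = rng.normal(0, 1, size=(n_channels, 2))
Y = X @ W_true + rng.normal(0, 1, size=(n_samples, 2))

# Ridge regression in closed form: W = (X'X + lam*I)^(-1) X'Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Decoding quality as variance explained (R^2)
pred = X @ W_hat
r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean(axis=0)) ** 2)
```

The decoder only cares that the channels carry information about the output, exactly the point Rob and Konrad make: which neuron drives which is irrelevant for open-loop decoding.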

Konrad Kording: But Rob, if you wanted to close the loop, if you, say, want to take what comes from a prosthetic device and put it right back into the brain, then the problem is more complicated than the correlation problem.

Rob Edgington: I agree. Yeah. I think in these early days of BMI we are mostly keeping that loop open, and applications will mostly be for sensory and simple motor control methods.

Vikash Gilja: And when we talk about closing the loop, we can do that in many different ways, right? We can close the loop by using the existing sensory system. So, a very common approach for developing motor BMIs, motor brain-computer interface-based systems, is to use the visual system as our method for closing the loop. So, we can read out intentions, and we’re trying to estimate that intention. That’s where machine learning comes in: to translate the neural activity into our estimate of what the user’s intention is. And then we have a system that takes action, and that system could be the movement of a computer mouse, it could be movement of an actuator, it could also be synthesis of artificial speech.

Vikash Gilja: But in each of those cases, if we’re working with a user that has their sensory system intact, they can see and they can hear, we can close the loop in that way. It may not be completely natural, but particularly in the realm of computer mouse control and robotic arm control it’s been shown in laboratory testing that we can do this in ways that are functionally relevant.

Doug Clinton: Konrad, I’ll throw it back to you just to make sure that that point is clear for the audience. The way that I understand what you’re saying, Vikash, is that we may not necessarily know the downstream effect of stimulating, let’s say, neuron A as part of a mouse replication or a cursor replication, but we can observe that that stimulation causes the output that we want, and it doesn’t seem to have any negative side effects. Is that basically what you’re saying: that we know there are certain inputs and outputs, and that the actual application itself is functional without a negative outcome associated with it?

Vikash Gilja: Yeah. I want to provide an analogy, right? So neural engineering, which we’re saying is an engineering field, has actually been around for a while, for many decades, but it is gaining quite a bit of traction more recently. And like any engineering field, it is dependent on existing science, existing theory, to drive it. But that theory needn’t be 100% complete to make progress, right? So, I sit in an electrical and computer engineering department, and electrical engineering grew out of physics. It grew out of electromagnetism. Many departments started as applied physics departments. Now, we didn’t have all the fundamentals fully solved before we started building systems, and theory in many of these areas is still evolving, but we’re able to build useful systems, right?

Vikash Gilja: And as the theory evolves, we can continue to advance the systems on the engineering side. I believe the same is true and will continue to be true for neural engineering, which will always be dependent on advances in neuroscience and as those advances happen, we can keep pushing the envelope in neural engineering. And so, back to this concrete example. We have working theories of how motor cortex controls movement. We have working theories for how somatosensation, touch sense, proprioception, are encoded in primary sensory areas. These are working theories and we may be far along enough to leverage those theories to build functional systems. And, at least on the motor read-out side, there are demonstrations in various clinical trials that show that.

Avery Bedows: And I think one of the key points here is that it’s not that we may be far along enough; it is that we definitely are. As you said, laboratory demonstrations show that the basic science, which in neural engineering winds up coming from a bunch of different fields, is enough to build at least proofs of concept of various neural interfaces and brain machine interfaces. I think one important question is how close we are to the saturation point: how much improvement can be made in neural engineering, holding neuroscience constant, such that we’ll see improvement in outcomes for patients? I think that’s a really important question.

Konrad Kording: Yeah, and if I may chime in here: what can help? What’s the current status? If we look at, say, prosthetic devices, BMI devices, how would we characterize them today if we are honest? They’re slow and they’re noisy. Now, what can we do on the engineering side about slow and noisy? Basically, slow and noisy happens because we don’t have all that much information from the brain. What can we do to get more information? Well, we could have better models, and if you give me enough data, I can always construct a better model even if I don’t really understand why my models work. But the other thing is, we can just have more and better data, more channels, and more channels ultimately translates to less noise and faster movements.
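The “more channels means less noise” intuition has a simple statistical core: averaging N independent noisy views of the same signal shrinks the noise roughly as 1/sqrt(N). A toy sketch (the signal and noise levels here are arbitrary choices, not measured values):

```python
import numpy as np

rng = np.random.default_rng(2)

# One underlying signal, observed through many independently noisy channels.
true_signal = np.sin(np.linspace(0, 4 * np.pi, 500))

def decode_error(n_channels):
    """RMS error of the channel-averaged estimate of the signal."""
    channels = true_signal + rng.normal(0, 1.0, size=(n_channels, 500))
    estimate = channels.mean(axis=0)
    return float(np.sqrt(np.mean((estimate - true_signal) ** 2)))

err_100 = decode_error(100)    # ~0.10 RMS error
err_6400 = decode_error(6400)  # 64x the channels, ~8x less noise
```

Real neural channels are of course correlated, so the gain in practice is smaller than 1/sqrt(N), but the direction of the effect is the one Konrad describes.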

Doug Clinton: And I’d say at the heart… I mean, that is fundamentally the problem I think Paradromics is trying to solve with a much higher bandwidth neural interface. You can get more data, right? If you have 65,000 channels instead of 100 channels from a current Utah array. Right? Rob, when you hear Konrad talk about that, does that align with the vision for Paradromics, and how do you think about leveraging your hardware and software to bring some of that to life?

Rob Edgington: Yeah, we’re seeing that effect already going beyond the Utah array’s 100 channels. We’re seeing the richness of data that you’d expect with more channels. So really for us it’s all about dimensionality. You can think of it as the degrees of freedom that your data has and how many rich dimensions it has within it. And we’re seeing that as you increase the number of channels, that dimensionality does also increase. It drops off logarithmically, let’s say. So, there’d be a point where enough is enough on these channels, but we haven’t seen that ceiling yet.

Rob Edgington: So, we’re expecting to go from, say, 15 dimensions of control up into the high teens, and that’s really going to open up a lot more high-fidelity control of prosthetics. If you think about the degrees of freedom in a hand, there are about 27 degrees of freedom. So, very loosely, you want the dimensionality of your system to be on the same order as the degrees of freedom you want to control. So, we’re quite excited that the number of channels our system provides can match that expectation for controlling future prosthetics.
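Rob’s dimensionality point can be illustrated with a common estimator, the participation ratio of the covariance eigenvalues. In this sketch (all sizes are invented for illustration), a fixed number of latent degrees of freedom, roughly the ~27 of a hand, is observed through varying numbers of noisy channels; the measured dimensionality climbs with channel count and then saturates, the diminishing-returns curve Rob describes:

```python
import numpy as np

rng = np.random.default_rng(3)

# 27 latent degrees of freedom (roughly a hand), observed through
# different numbers of noisy channels.
latent_dim, T = 27, 8000
latents = rng.normal(size=(T, latent_dim))

def effective_dim(n_channels, noise=1.0):
    """Effective dimensionality of simulated multichannel recordings."""
    mixing = rng.normal(size=(latent_dim, n_channels))
    X = latents @ mixing + noise * rng.normal(size=(T, n_channels))
    # Participation ratio: (sum of eigenvalues)^2 / sum of squared
    # eigenvalues -- a continuous measure of dimensionality.
    eig = np.linalg.eigvalsh(np.cov(X.T))
    return float(eig.sum() ** 2 / np.sum(eig ** 2))

dims = {n: effective_dim(n) for n in (25, 100, 400)}
# dims grows with channel count but saturates near latent_dim
```

With too few channels, noise hides latent dimensions; as channels increase, the recovered dimensionality approaches the true number of degrees of freedom rather than growing without bound.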

Avery Bedows: And there’s also the notion of… You can think about channels, holding constant the set of regions you’re recording from. You can also think about recording from a larger set of regions in a more distributed way, which reflects the growing belief in neuroscience that most things don’t sit in one little spot of cortex. And so, I think that’s another angle of improvement for neural engineering as well: distributed recording and decoding.

Rob Edgington: Yeah. Especially if you think about the closed-loop system.

Vikash Gilja: Yeah. And I think that is an excellent point, and one we can look to empirical data to gain confidence in. If we look at most of the labs doing clinical studies and the techniques that they’re using, a lot of what they’re doing year after year is increasing the number of implants that they put in, and you know that’s a slow process. You have labs that are trying to leverage the technologies available to them now and develop a solution that works for their research, but what we see is labs putting in more and more implants, managing more and more wires and connections, to increase the number of neurons they’re recording from and demonstrate higher degree of freedom control.

Vikash Gilja: I think those successes provide empirical evidence that increasing the number of channels, increasing the spatial coverage, can be beneficial for these applications.

Doug Clinton: I think we’re stepping a little bit into another topic I know we wanted to discuss, which is the differences in tools between neural engineering projects and neuroscience projects. Just to refresh it for the audience: at the beginning of our discussion we defined neuroscience as really about understanding how the brain works, and neural engineering as really about understanding and processing the signals. But how do you as a group think about the constraints, and how they differ, between the tools targeted at these neural decoding applications and the tools targeted at answering basic neuroscience questions?

Konrad Kording: So, when it comes to answering basic neuroscience questions, the things that we’re currently really into are ways of perturbing and understanding the different kinds of things that are there. That means that we, for example, have techniques like optogenetics with which we can optically activate neurons, and not only that, but we can choose exactly which, out of a very large number of different neuron types, we want to perturb. Say, there are cells called SOM, somatostatin-positive, cells, a type of local inhibitory neuron. In neuroscience, when it’s about understanding the brain, we might very much be interested in which role this inhibitory neuron has in the local circuit.

Konrad Kording: And in that sense, on the neuroscience side, we have these progressively more specific tools with which we can perturb different things, with which we can measure signals from very specific, very well-defined neurons. And on the neural engineering side, people are driving in a very different direction, we may say. When it comes to recording, we primarily care about lots of channels. We don’t need to know exactly what those cells are; we just care about what information we have there. And so, in that sense, neural engineering and neuroscience are moving in somewhat different directions at the moment.

Vikash Gilja: Yeah. So, if we think at a high level, neuroscience, science being the key word, is ultimately hypothesis driven. It is after fundamental truth. So, all of the models that are being developed on that side need to be testable, they need to be defendable, they need to be interpretable. Whereas neural engineering, we are engineers, we’re building a system. Now the design of that system can be inspired or driven by our current understanding of neuroscience but fundamentally in neural engineering, the proof is in the building. You build something, you demonstrate that it works, and there we can look at existing examples in the clinic of technologies that are widely used.

Vikash Gilja: So, for example, deep brain stimulation, or DBS, which is fairly commonly used for a variety of motor disorders. The fundamental neuroscience there is not fully understood, yet it’s a very effective therapeutic technique. Now, if that theory advances, maybe the specificity of that therapy could be better understood, it could be enhanced, but it is a useful technology today.

Rob Edgington: Yeah, I think of it from a different angle altogether. I mean, right now I’m sitting in my office looking at some neuroscience tools which are about the size of a small child. For BMI to work, we need to have implants that are sufficiently small that they can sit neatly under the skull and have low enough power that they don’t actually cook the brain. So, the huge constraint on BMI is how much processing and data you can actually get in and out of the chip, compared to benchtop neuroscience tools.

Konrad Kording: Although I’d like to add one thing there: for me as a neuroscientist, the kinds of developments happening in neural engineering aimed at BMIs promise to become super useful tools for neuroscience as well. So, in that sense, they suddenly have a second way of being used.

Avery Bedows: Konrad, one thing you said earlier that got me thinking: you were talking about interneurons and optogenetics, and it struck me that neuroscience tools can only be as specific as our ontology of the brain, right? Like, if we don’t know that there is some degree of granularity, we can’t really design a tool to ask about that degree of granularity. And you also just made the comment that neural engineering provides useful tools for neuroscience. My quick take on it, which I want to float to everyone here, is: there seems to be a distinction between forging forward and refining the ontology, refining our knowledge of components of the system, and then working with existing components of the system.

Avery Bedows: So, for example, I think, developing a higher-bandwidth electrical interface à la Paradromics is more about expanding what we can do with the existing ontology or, in some ways, not caring all that much about the ontology. Whereas something like optogenetics, which is a neurotechnology in its own right, is much more about articulating further and further detail.

Konrad Kording: That’s an interesting way of putting it.

Vikash Gilja: Yeah. And if we think a little more concretely about how advances in neuroscience, let’s say potentially driven by optogenetics, could inform neural engineering: let’s say we’re able to perturb the system and better understand how the underlying circuits work, how the neurons interact. That can provide us with a better model for how we should look at neural activity from a neural decoding standpoint. So, if we’re developing, let’s say, a machine learning model to decode neural activity and infer somebody’s movement intent, and we better understand how that underlying neural network operates, we can put the proper constraints on the machine learning model and perhaps get that model to perform better or to generalize better.

Vikash Gilja: Similarly, on the stimulation side, if we better understand how stimulation affects that underlying network, we can more finely tune and design the patterns of stimulation that we use to write in information or change the behavior of the network.

Doug Clinton: That makes sense. I do want to ask, as we were talking about tools, one quick follow-up which is: I think a lot of times disagreements can be about the timescale being debated rather than the ultimate outcome. And we’ve talked about how for neuroscience we have very specific tools, whereas for neural engineering less specific. To the extent that our knowledge in neuroscience improves and that trickles down to the neural engineering side, do you see a future where neural engineering tools ultimately become more specific? Is there an advantage long term to that?

Konrad Kording: There should be. Let’s say I want to produce in your brain the feeling of having heard a certain word. In general, there are excitatory neurons and inhibitory neurons. If we electrically stimulate the brain, we will stimulate some excitatory neurons and some inhibitory neurons, and they may cancel each other out. So, in that sense, if we could target better, both in terms of regions and in terms of cell types, and maybe in terms of what neurons mean in their overall computation, it would most certainly help.

Rob Edgington: And here at Paradromics we are definitely trying to make the implant we are building as future-proof as possible, so we’re not throwing away any information from the brain that might be necessary, and really just trying to get as much information as possible, so that when neuroscience delivers new bits of information that we can use, the implant will still be ready to go and utilize them. People often think, or the common mantra is, that if you’re going for a high bandwidth brain machine interface, you have to throw away a lot of information. Our design is actually able to avoid that situation, so we’ve achieved high bandwidth while still retaining maximum information to allow these futuristic methods that will come out of the science.

Doug Clinton: One thing I did want to touch on there, Rob, is: I know that a lot of times when people think about the neurotech space, they think about brain machine interfaces and they think about the hardware, the implants necessary to deliver on that future. But the software I think is equally, if not more important. So, when you talk about preserving all of the information that you’re able to pull out of the brain through your hardware device, I mean that’s really a software challenge, isn’t it?

Rob Edgington: At the scale we’re trying to pull data out of the brain, you simply can’t do the processing you need to do in software, because it requires too much power, and therefore heat, to do it. So, what we invented here is a way to process and compress and distill that data within the chip. And that actually brings a lot of the processing you’d do in software into an analog pixel. So, in each pixel of the chip array we effectively have an analog computer, which performs the first few processing steps that you’d otherwise do in software. That’s really what’s leveraged in the technology to allow it to scale to thousands of channels while still extracting all the salient information about each spiking event and each local field potential, which really are the currency of the brain.
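As a rough illustration of the kind of early processing Rob describes, here is a generic textbook pipeline (not Paradromics’ actual on-chip design, and with all waveform parameters invented): detect threshold crossings on a raw voltage trace, then reduce them to binned spike counts, the compact signal a decoder consumes.

```python
import numpy as np

rng = np.random.default_rng(4)

# One second of simulated raw voltage at 30 kHz: Gaussian noise plus 50
# injected downward spike waveforms (all numbers invented).
fs = 30_000
trace = rng.normal(0.0, 1.0, size=fs)
spike_times = rng.choice(fs - 30, size=50, replace=False)
for s in spike_times:
    trace[s : s + 30] -= 15.0 * np.exp(-np.arange(30) / 10.0)

# Classic RMS-based threshold (e.g. -4.5x the signal RMS).
threshold = -4.5 * np.sqrt(np.mean(trace ** 2))

# Downward threshold crossings with a 1 ms lockout to avoid double counts.
crossings = []
i, lockout = 1, fs // 1000
while i < fs:
    if trace[i] < threshold <= trace[i - 1]:
        crossings.append(i)
        i += lockout
    else:
        i += 1

# Bin spike counts into 10 ms windows -- the distilled signal for decoding.
bins = np.bincount(np.asarray(crossings) // (fs // 100), minlength=100)
```

Doing only these steps near the electrode, rather than streaming the full 30 kHz trace off-chip, is what makes the bandwidth and power budget tractable; the binned counts are orders of magnitude smaller than the raw data.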

Doug Clinton: Let me then jump into an even broader question in many ways. One thing we wanted to do at this round table is talk a little bit about what modern neurotech is really capable of today and make it more concrete for the listeners. I know that’s a very large topic, and the way I’d actually like to approach it is: we’ve all heard about using a Utah array, for example, to restore some motion in someone who has some paralysis. What I’d like to do is take one step beyond that and ask: within the next couple of years, what is the evolution we’re heading toward, based on what we’re seeing evolve in the neurotech space today?

Avery Bedows: And also, just as an addendum, as someone who talks about neurotechnology to a lot of different kinds of people: I think we should be as specific as possible. Generalizing is one of the things that makes it easy to communicate about neurotechnology, but it’s also what makes it hard to communicate well. So, as we chat about this, the more concrete we can be, I think the better off we are.

Konrad Kording: Well, maybe I can first say something about the general space. As we move into a future where neurotech is a reality, a lot of it hinges on making it usable, which means we need ways of getting it into the brain, we need ways of getting the information out of the brain, we need to avoid the massive number of wires that we mentioned previously, and we need to integrate that system. And in that sense, seeing it from my perspective as a professor in neuroscience, it’s really exciting the level of energy that the industry can have about integrating all these different aspects and making it really strong on the many sides that are necessary to make it useful for consumers.

Vikash Gilja: Where I see a lot of that energy going is hardening the understanding that we’ve collectively developed as a field on the neural decoding side in the last 20 to 30 years, and really turning that into something a bit more turnkey that could be applied in many different places. And so, if we think, again, generally: when we’re doing this readout, this high definition readout, our goal is to estimate brain states. And I’m using the words brain states on purpose. It’s a very general term, because it could be applied to many types of state, right? We’ve been talking about volitional motor commands as a potential state, but these types of states could be disease states, or contextual states, that could be leveraged by existing technologies and solutions on the neurotech side.

Vikash Gilja: So, one place I could see this going is there are examples of closed loop neurostimulation devices that are available on the market and that are implanted somewhat routinely now. These rely on very low fidelity estimates of brain states. One example is the NeuroPace device; Medtronic has some similar devices. Imagine we could augment the state detection with the types of systems that we’re talking about, which are much higher fidelity and use orders of magnitude more neural channels to estimate these underlying states. I think there’s a potential to augment those existing applications to make those therapies more effective.

Vikash Gilja: And then when we think on the volitional side, there are all sorts of examples now coming out of laboratories as proof of concept demonstrations. As this technology gets hardened and productized, these could become clinical solutions. So, much of the work being done on motor commands that we’ve all been talking about has been mapped to estimating intended speech. Ultimately our speech production is the result of many speech articulators that are controlled by muscles. And so much of the same underlying technology, I mean sensing technology and decoding methodologies, is translating to these other applications. And I think there are a number of exciting applications that we’re going to see mature in the near term.

Doug Clinton: Rob, how do you think about that question from the perspective of Paradromics? What applications will neurotech enable in the near future?

Rob Edgington: We try to keep it really grounded and clearly state the expectations of where we can take it in the near future. To begin with, we want to use our technology where it’s needed most, and that would be for locked-in patients who need an assistive communication device. And in current BMI, as Vikash has worked on through his career, this can currently mean using a Utah array with 100 channels to control cursor movement on a screen to spell out words, and correct me if I’m wrong, but I think that’s around 10 words per minute at the moment.

Vikash Gilja: There are demonstrations around that. That’s correct.

Rob Edgington: So, the decoding behind that also uses a neural state style representation of how to move it. With our technology, we aim to increase the fidelity and the dimensionality of that state. And with that we eventually hope to hit more keys: so instead of controlling a cursor, we can control a whole keyboard of keys and letters and spell out words a lot quicker. The intention is to allow really fluent communication for these patients who are locked in and otherwise can’t communicate.

Doug Clinton: Maybe in the spirit of continuing to make the future apparent for people listening, I’d love to hear from everybody: what is the BCI application you’ve heard about that most excites you to date?

Konrad Kording: So, I’m just a huge fan of the idea that we could restore speech to people who have lost speech. There are so many people who have a stroke and can’t talk afterwards. If one could build a device that allows people to talk, who can’t talk, that would just be magical.

Avery Bedows: I’m going to jump on this speech train here, as someone who writes and someone who’s also involved with neurotechnology. I pretty often think about how my life would change without communication, and it’s so central to everything. So, kind of for that reason, I’m in the same boat. The work that’s been happening in the last couple of years on speech decoding is pretty incredible.

Vikash Gilja: Yeah. I want to expand that, as we’re thinking far future, to visual communication. I think it’d be incredibly cool if in the far future we’re able to develop systems that allow us to share internal visualizations by choice, as a way to augment some of the more standard forms of communication. I wish I was a better artist; I’m a horrible artist, but I feel like I have good visualizations in my head. So, if I could transmit those with at least the fidelity I believe they have, that would be amazing.

Konrad Kording: So, what about shooting space bugs?

Avery Bedows: Oh, I mean, that’s why I’m actually in this field with Konrad. You didn’t know that.

Konrad Kording: That’s what I thought.

Rob Edgington: Yeah. The Up Device, that’s the one for me.

Avery Bedows: I want 17 arms.

Doug Clinton: I think that was one of the… Shooting space bugs was one of the first applications on the Microsoft HoloLens. So emerging technologies do seem to go that direction.

Konrad Kording: Everything’s ultimately about shooting space bugs.

Doug Clinton: Sure.

Avery Bedows: One question I would have with respect to neural engineering. So forget neuroscience for a moment. Everyone likes to use deep learning and machine learning interchangeably, but there is actually something kind of important in the distinction, at least as I understand it, which is: machine learning tends to be more model heavy. It uses a lot more probability theory and statistics to form statistical models of the structure of the system you’re trying to understand, whereas in deep learning there’s less of that. And so, the question I wanted to throw out, I think at Konrad and Rob, particularly for neural engineering applications: talk about this machine learning versus deep learning distinction.

Konrad Kording: Okay. Practically, one of the big changes is that when you do regular machine learning, the step that we call featurization is very important. What is featurization? We take a data set and we ask ourselves: what could be the dimensions that are important for decoding? For example, we might say, well, the average firing rate of each neuron could matter, and maybe how strongly neurons are synchronized to one another could matter. We build those in, and then we have a machine learning algorithm that basically takes these, like, Konrad-made features and puts them together into a good estimate.
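The featurization step Konrad describes can be sketched in a few lines. This is a minimal illustration with synthetic spike counts, not anyone's actual pipeline; the trial counts, rates, and the choice of mean-rate and pairwise-correlation features are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical spike-count data: 200 trials x 50 time bins x 30 neurons.
spikes = rng.poisson(lam=2.0, size=(200, 50, 30))

# Hand-crafted feature 1: average firing rate of each neuron per trial.
mean_rates = spikes.mean(axis=1)                      # shape (200, 30)

# Hand-crafted feature 2: a synchrony summary -- the mean pairwise
# correlation between neurons within a trial.
def mean_pairwise_corr(trial):
    c = np.corrcoef(trial.T)                          # (30, 30) correlation matrix
    upper = c[np.triu_indices_from(c, k=1)]           # unique neuron pairs
    return np.nanmean(upper)

synchrony = np.array([mean_pairwise_corr(t) for t in spikes])  # shape (200,)

# Stack the hand-made features into one matrix that a classical
# decoder (linear regression, random forest, ...) would consume.
features = np.column_stack([mean_rates, synchrony])   # shape (200, 31)
print(features.shape)
```

The point of the sketch is that a human chose which dimensions matter; a deep network would instead learn its own features from the raw spike tensor.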

Konrad Kording: When we do deep learning, the philosophy is a little different: we basically say, “Well, humans can’t do the feature engineering because we don’t understand enough about the brain.” What we might understand instead is, say, “Well, the way that firing rates of neurons map onto the intended state is smooth,” and there are ways in deep learning to formalize that. Or we might know that the way people talk should follow the English language, and there are ways within deep learning with which we could formalize these things.

Konrad Kording: But in many ways it’s a continuum, and one can build Random Forests that, across a lot of domains, have very similar performance to deep learning systems. And the areas where deep learning is particularly good, which are, say, text, speech and vision, are all cases where we can use certain tricks, which are often convolutions, that allow us to describe why a cat looks the same way if it’s, say, moved a little to the right or to the left in a visual scene. We can effectively use those in deep learning, while we can’t with classical machine learning.
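The convolution trick Konrad mentions can be shown in one dimension: a convolutional filter responds identically to a pattern no matter where it sits in the scene, only the location of the response shifts. This is a toy sketch; the "cat" pattern and scene sizes are invented for illustration.

```python
import numpy as np

# A toy "cat" pattern embedded in a 1-D scene, and the same pattern
# shifted two positions to the right.
pattern = np.array([1.0, 3.0, 1.0])
scene_a = np.zeros(10); scene_a[2:5] = pattern
scene_b = np.zeros(10); scene_b[4:7] = pattern   # shifted copy

kernel = pattern  # a matched filter standing in for a tiny conv layer

resp_a = np.convolve(scene_a, kernel, mode="valid")
resp_b = np.convolve(scene_b, kernel, mode="valid")

# The peak response is the same; only its position moves with the cat.
print(resp_a.max() == resp_b.max())              # True
print(resp_b.argmax() - resp_a.argmax())         # 2 (the shift)
```

This translation structure is exactly what a classical featurizer would have to hand-code, and what convolutional architectures get for free.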

Konrad Kording: I think that distinction between deep learning and machine learning is at the moment a little bit of a distraction. It’s just that deep learning is so incredibly popular right now.

Rob Edgington: We try to use the simplest models we can for the task at hand. So, while deep learning gives incredible capacity to the models, it isn’t necessarily required for, for example, auditory decoding or movement decoding. And that increased complexity and capacity comes with costs, like a lack of traceability and other things you won’t necessarily want to put into a BMI. We also want to keep our models as lightweight as possible. These will be deployed very much at the edge, on a person, so large models don’t favor that kind of deployment.

Vikash Gilja: I want to add that as new techniques in machine learning are coming online and becoming more accessible, what we’re seeing in the neural engineering research space, particularly in neural prosthetics, is that we’re able to better approximate what we believe the underlying system looks like. With many of the earlier decoding algorithms that we were using in the field to get at intent, we knew that they were overly linearizing the problem. They were not accounting for all the recurrences in the network, right? We’re recording from neurons that are highly interconnected, and there are complex relationships between those neurons in both space and time.

Vikash Gilja: And, as these newer techniques come online and become more accessible, and training methodologies for these algorithms get better, we’re able to build machine learning models, using techniques from deep learning and from other branches of machine learning as well, to better model what we believe the underlying system looks like. And there we take inspiration from our basic neuroscience colleagues to develop those models.
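The "overly linearized" decoders Vikash describes can be illustrated with a classic ridge-regression readout from instantaneous firing rates to cursor velocity. Everything here is synthetic and illustrative: the neuron counts, rates, and noise levels are assumptions, and the data is generated linearly on purpose so the linear decoder looks good, which is precisely the assumption that breaks down for real, recurrently connected populations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 500 time bins of firing rates from 40 neurons,
# plus the 2-D cursor velocity we want to decode.
n_bins, n_neurons = 500, 40
true_W = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.5, size=(n_bins, 2))

# A deliberately linear decoder: ridge regression from instantaneous
# rates to velocity. It treats each time bin independently, ignoring
# the recurrent dynamics between neurons across time.
lam = 1.0
W_hat = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                        rates.T @ velocity)
pred = rates @ W_hat

# Fraction of velocity variance explained by the linear readout.
ss_res = ((velocity - pred) ** 2).sum()
ss_tot = ((velocity - velocity.mean(axis=0)) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print(r2)
```

On real neural data, the temporal structure the linear map discards is where recurrent and other deep-learning-style models earn their keep.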

Doug Clinton: As we talk about some of the differences here in deep learning, I can’t help but think of the explainability debate broadly in “AI”, and how parallel it feels to what seems like another explainability debate, perhaps between some people in the neuroscience field and some people in the neural engineering field. It seems very similar in some senses. Any other topics we should talk about before we wrap up?

Konrad Kording: Some people who are interested in entering the field will listen to this as well. Maybe you guys can mention a little bit the kind of skillset that you’re looking for when you’re hiring.

Rob Edgington: Yeah, sure. Well, it’s a truly interdisciplinary field, going all the way from pure neuroscience down to hardcore engineering, as well as software, UI design, data science, and obviously machine learning. What I see is a lot of new graduates, especially, see the field, see how multidisciplinary it is, and then go out and try to do all of those disciplines at once. But the BMI field is becoming mature enough that we need specialists in each of those fields. So, I’d recommend picking one and becoming a specialist. I definitely came from a generalist background, but a generalist with a PhD in Neural Engineering. So, I suppose that was the right time.

Avery Bedows: That’s the funny thing about it, that one can have a PhD in Neural Engineering and still be a generalist within the field.

Vikash Gilja: Yeah, I think of the ideal as being T-shaped. Right? You have breadth, right? So, you understand the scope of the field and then you also have depth in one or more areas that’s important to advance the field. If you just have the depth, you won’t know where to apply it, and if you just have the breadth, well you’ll be able to talk about a lot of things, but you won’t be able to push the needle forward.

Rob Edgington: Yeah. You also have to be aware of what the teams you link to are doing. There’s so much to think about that if you silo yourself and put blinders on, it’s in the processes that interconnect to the next part in the chain where breakages can happen in the system that is a BMI. So yeah, breadth is good too.

Konrad Kording:  So, would it then be fair to say that a company like Paradromics is well prepared for however the space is moving?

Rob Edgington: Yeah, and I think that’s fair to say. We hope so.

Avery Bedows: I also think, just from a hiring standpoint, I would wager that it makes sense to look for people who have specializations, even if that’s a specialization within neural engineering, because you can rapidly expose them to the breadth by nature of the multidisciplinary composition of the team.

Rob Edgington: One of our senior electrical engineers comes from a particle physics background, making detectors for example. So really mixing in lots of different fields helps to really get the best product for us. However, if you’re doing neurosurgery then I would suggest being a neurosurgeon.

Avery Bedows: Yes, I agree. Good insight.

Doug Clinton: Well, I think that does it for us today. So, Rob, Konrad, Avery, Vikash thank you everybody for joining, and we will speak to everyone on the next episode.