OCTOBER 15, 2013
BRAIN CONTROL OF PROSTHETIC DEVICES: THE ROAD AHEAD

Dr. Jose Carmena

UC Berkeley Associate Professor of Electrical Engineering and Neuroscience, Co-Director of the Center for Neural Engineering and Prostheses, and Principal Investigator at the Brain-Machine Interface Systems Laboratory

 

Dr. Michel Maharbiz

UC Berkeley Associate Professor of Electrical Engineering and Computer Science, Co-Director of the Berkeley Sensor and Actuator Center, and member of the Center for Neural Engineering and Prostheses

 

Transcript - Carmena

 

Hello everyone. So I will be talking today about the field of brain-machine interfaces. I will try to summarize where we are today in the field, and what hurdles remain in order to bring this technology all the way to the clinical realm, which is where we mostly focus. So, I will start with two examples of very successful neurotechnology devices, implantable devices. I'm starting with this slide because, as you will see, we work on the invasive side of brain-machine interfaces, or BMI. There are other, noninvasive ways, like EEG, etc., that I will not be talking about today.

 


So there are two very successful examples in particular. The flagship of the field is the cochlear implant, which allows people who have lost hearing to regain it, even at different stages of life like childhood or adulthood. The other is more recent: the DBS, or deep brain stimulator, which allows people with Parkinson's to reduce their tremors in cases where drugs no longer work. Again, these are very invasive, especially in the case of the DBS: the electrode goes all the way to the subthalamic nucleus, implanted deep into the brain. This is a safe technology, and it's only going to get better, as we'll see later today in the talk.

 

So, our work is centered around sensorimotor control, or helping people with sensorimotor disabilities. In particular, in this case you see Christopher Reeve with a spinal cord injury. These devices are also meant to help with stroke, ALS, and so on. There are huge numbers of patients in the US alone suffering from these conditions. The field of BMI emerged primarily with this application in mind: basically, to convert thought into action. In this diagram we summarize the main elements of what we refer to as the "BMI loop". You can see that a variety of signals can be extracted from the brain, from noninvasive ones like the EEG, as I mentioned, all the way to the activity of individual cells in different areas of the brain, which is what we use in our lab. These activities then stream into what we call the decoder, the translation algorithm that translates the activity of these cells, or groups of cells, into motor commands. That allows the subject to control, for example, a computer cursor on the screen, like a mouse pointer to reach and click, or to steer a wheelchair. This is still in the early stages of development, the ultimate goal being to control whole-body or upper-limb exoskeletons and orthotic devices as well.

 

We also refer to this decoder as the "spinal cord" for prosthetic function, mostly because, as you can see, it serves the role of the spinal cord in this now-modified central nervous system, projecting a large number of signals into a subspace of motor commands, in this case the X and Y position of the endpoint effector, the computer cursor. In an analogous way, the real spinal cord takes signals from thousands of neurons and projects them onto a thousand or a couple of thousand muscle groups just to move the upper limb.
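The projection just described can be sketched in a few lines: a hypothetical linear decoder collapsing many recorded firing rates into a two-dimensional cursor command. All weights and rates here are made up for illustration; real decoders are fit to data.

```python
# Sketch of the decoder as a "spinal cord": a high-dimensional vector
# of neural firing rates is projected onto a low-dimensional motor
# command (here, cursor x/y velocity). Weights are illustrative only.

def decode_velocity(rates, weights):
    """Linearly project a vector of firing rates (spikes/s) onto a
    2-D velocity command via a 2 x N weight matrix."""
    return [sum(w * r for w, r in zip(row, rates)) for row in weights]

# Toy example: 4 recorded units, hand-picked weights.
weights = [
    [0.5, -0.2, 0.1, 0.0],   # contribution of each unit to vx
    [0.0, 0.3, -0.1, 0.4],   # contribution of each unit to vy
]
rates = [10.0, 20.0, 5.0, 8.0]  # instantaneous firing rates

vx, vy = decode_velocity(rates, weights)
print(vx, vy)  # a 2-D command driving the cursor
```

In a real system hundreds of channels would feed this projection, and the weight matrix would come from a calibration procedure rather than being hand-written.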

 

Let me show you – hopefully the videos will start immediately... This is from the very early days of BMIs, around 2003. I was a postdoc in the lab of Miguel Nicolelis, who is one of the pioneers in the field of modern neuroprosthetics. In those days we were trying to close the loop for the first time, showing that macaque monkeys, in the absence of physical movement, could drive prosthetic devices with their neural activity.

 

(Videos.) So, at the top you see a macaque monkey controlling a computer cursor to reach for this target, in the absence of physical movement. That's the arm that used to control the joystick, now resting there. Here you see another macaque on a much harder task, controlling a robotic arm to hit the target. This is the same macaque doing a reach-and-grab task. In each case, although you see some residual arm movement, this is all under neural control. Signals from the brain enter the decoder, the output of the decoder is rendered on the screen, and the animal sees that.

 

Those were the early days of closing the loop, going back to 2003. Where are we today, especially on the human front? Essentially, one of the goals of this field is the translation of this technology to the clinical realm. I think it's fair to say there are two main challenges to making this the 'pacemaker' of the brain. I have divided them as follows. Challenge number one is anything related to what is inside the brain, meaning the implantable device, which my colleague Michel Maharbiz will be talking about in a moment. Challenge number two is everything else you can do with those signals, assuming you can keep them for decades or a lifetime, which is one of the main issues with this technology: as you can see, it's bulky, it's tethered, it's not wireless, and it lasts for only a few years. As a proof of concept it's very good, and the same goes for the demonstrations we have seen so far in a few clinical trials from the Brown and Pittsburgh groups. This is very exciting for our field, but we like to look at the glass as half empty, not half full. It's very exciting, but at the same time it doesn't yet reach the level of skill we'd like to achieve in this BMI field: performing tasks of daily living, like brushing your teeth or tying your shoes.

 

One more thing: the field of robotic actuators has advanced tremendously in the last few years. As you can see, these are fancy prosthetic devices built through DARPA programs and also in the corporate world. Mainly you can see that they have huge numbers of controllable degrees of freedom. So it's fair to say these robotic technologies exceed the capacity of BMIs to control them. In other words, we don't know what to do with our BMIs to exploit the full capability of these robotic actuators. There's a bit of a mismatch; hence we're focusing our attention on the brain, on how the brain learns and adapts to control these devices, the process of cortical plasticity that was mentioned earlier this morning.

 

In the last 4-5 years in my lab at Berkeley we have been focusing on the problem of plasticity: how the brain incorporates the prosthetic device into its neural representation. For us, it's very important that the brain "owns" the device, in order to eventually achieve natural and skillful control, as opposed to having the decoder, the machine, learn everything you are trying to do. I will mention that in a moment, but we start from the premise that the brain has to learn to incorporate the device into its own representation, like an extension of the body schema, if you like. Think of the computer cursor as a very primitive or early version of an avatar.

 

We hypothesize that by keeping the BMI loop stable, connecting the same channels, the same neurons, to the same decoder from day to day, and hence keeping the same BMI circuit from day to day, the subject, in this case the macaque monkey, will be able to retain what it has learned on a given day and recall it readily the next day and so on, in the same way we recall motor memories. When we learn to drive a car, we can then just jump in the car and drive; we don't need to recalibrate. That's the concept of a motor memory, but in this case, in the neuroprosthetic sense, it's a motor memory for something that does not belong to your own body. It's a disembodied actuator.

 

In the animation you just saw, a macaque monkey is performing what is known as a center-out reaching task. It requires the animal to drive the cursor purely under neural control, in the absence of any physical movement. So, it's just mental control: go to the center target, hold for 400 milliseconds, and then reach to one of the targets that you see there, in which case the animal gets a juice reward. So this is a demonstration of the reach-and-click control you can achieve with a bunch of cells in a macaque monkey, just to give you a sense of where we are today. Needless to say, this performance doesn't need recalibration from day to day. After the learning phase, the animal can recall the skill from the very first trial of a given day, the "plug-and-play" effect, as we like to call it.
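The trial structure just described, hold at the center for 400 ms and then reach to a peripheral target for a juice reward, can be sketched as a tiny state check. The acceptance radius, sampling interval, and cursor trace below are hypothetical.

```python
import math

# Hypothetical sketch of the center-out trial logic: the decoded
# cursor must hold at the center for HOLD_MS, then enter the target.

CENTER = (0.0, 0.0)
HOLD_MS = 400
TARGET_RADIUS = 1.0  # acceptance radius around a target (arbitrary units)

def inside(pos, target, radius=TARGET_RADIUS):
    return math.hypot(pos[0] - target[0], pos[1] - target[1]) <= radius

def run_trial(cursor_trace, target, dt_ms=100):
    """cursor_trace: decoded cursor positions sampled every dt_ms.
    Returns True (juice reward) if the cursor holds at the center
    for HOLD_MS and afterwards enters the peripheral target."""
    held_ms = 0
    for pos in cursor_trace:
        if held_ms < HOLD_MS:
            held_ms = held_ms + dt_ms if inside(pos, CENTER) else 0
        elif inside(pos, target):
            return True  # reward
    return False

trace = [(0, 0)] * 4 + [(2, 2), (4, 4), (6, 6)]
print(run_trial(trace, target=(6.0, 6.0)))  # True
```

In the experiment, of course, the trace comes from the decoder output in real time rather than a precomputed list.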

 

We've been talking about adaptation in the brain, brain plasticity. But there's also the possibility of using machine-learning techniques to change the parameters of the decoder, this spinal cord for prosthetic function, in order to, for example, accelerate learning, boost performance, and so on. This is an area we're exploring these days, which people call "brain-machine co-adaptation." Now it's a true two-learner system: you have the brain and the machine learning in the same closed loop. It's tricky, because for us it's very important that we do not give up the plastic properties we mentioned a moment ago. We want the brain to own the device, but at the same time we want to help or improve performance by tweaking the parameters of the decoder. I will not get into the details of how we do that, but this is becoming a very promising area of research in BMI.
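One simple way to tweak decoder parameters without pulling the rug out from under the learning brain is to blend the current weights toward a freshly re-fit estimate, in the spirit of SmoothBatch-style closed-loop decoder adaptation. The weights and blending constant below are illustrative only, not the talk's actual algorithm.

```python
# Hedged sketch of decoder co-adaptation: nudge each decoder weight
# a small step toward a re-fit value, so the machine adapts without
# erasing what the brain has learned. 'alpha' is illustrative.

def smooth_update(old_weights, new_fit, alpha=0.1):
    """Blend a 2 x N decoder weight matrix toward a re-fit estimate.
    Small alpha keeps the mapping stable for the learning brain."""
    return [
        [(1 - alpha) * w_old + alpha * w_new
         for w_old, w_new in zip(row_old, row_new)]
        for row_old, row_new in zip(old_weights, new_fit)
    ]

old = [[0.5, -0.2], [0.0, 0.3]]     # current decoder
refit = [[0.7, -0.1], [0.1, 0.2]]   # fresh estimate from recent trials
print(smooth_update(old, refit))
```

The design trade-off is exactly the one mentioned above: a large alpha lets the machine learn fast but keeps moving the target the brain is trying to learn; a small alpha keeps the loop stable.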

 

So, to summarize my part of the talk, which has been challenge #2, what to do with the signals if we can keep them forever: what we are pointing towards is skillful, natural control of the BMI. Note that I said "natural" control, so you also want to feel the BMI. So far we haven't been talking about sensation, but it is one of the big missing elements in this BMI field, one that we and many other groups are pursuing, though I would say it's a bit more underdeveloped than the motor control part. The idea is to sensorize the prosthetic device: to return feedback to the patients so that they can feel the tactile information from the robotic hand, from the gripper, and also get a sense of where the robotic arm or prosthetic device is in space. That means writing information into the brain, by electrical or optical techniques like microstimulation or optogenetics, for example. So that's one of the main building blocks that people are working on today, and one that we think will enormously improve the demonstrations of performance we see today. And with this I will pass the torch to Michel.

 

Transcript – Maharbiz

 

So, my job is to give you a tutorial, or an appreciation, for the challenges in building the gadgets required to do all the fantastic stuff that Jose was talking about, and I want to end with an idea that we're pretty excited about.

 

 


Let me start by giving you about a one-minute tutorial on how these technologies work and how they take data from the brain. Let me say first that there are a lot of ways to take data from the brain, and you should look forward this summer to a number of really amazing white papers looking at the fundamental limits of how you would extract data from an entire brain, for example (Marblestone et al., 2013). There are many modalities, many different energies you can use to take data out.

 

Let's focus on what we call extracellular electrophysiology. This is the classic way you take electrical information out of the brain, very invasively, and you get high-resolution data, if you will. Let's pretend this is an accurate representation of a neuron (which it's not; it just looks pretty, and it's a big audience, so you've got to have things that look like this...). So basically you have a neuron, and this neuron fires depending on the inputs it's getting. It turns out that when it fires it changes the concentration of ions very rapidly around it, because it's using those ions to fire. And so there's a very classic method that revolutionized things many decades ago: you put in an insulated wire such that the tip is close to this neuron, where close means about 100 microns, although people debate this, and you measure the potential between that and some distant electrode, which is usually some other piece of metal in your head not near the neurons you care about. By recording these signals you can essentially infer something about the activity of those neurons. That's been the basis of the type of work that you see. You take these recorded electrical signals and you do things with them.
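As a minimal sketch of that inference step: once the voltage between the recording tip and the distant reference is sampled, putative spikes can be picked out as threshold crossings. The trace and threshold here are synthetic, for illustration only.

```python
# Toy spike detection on an extracellular voltage trace: report the
# sample indices where the signal crosses a threshold downward
# (extracellular spikes are typically brief negative deflections).

def detect_spikes(trace, threshold):
    """Return indices where the voltage first drops below threshold."""
    return [i for i in range(1, len(trace))
            if trace[i] <= threshold < trace[i - 1]]

# Synthetic trace in microvolts: baseline noise with two brief
# negative deflections standing in for spikes.
trace = [2, -1, 0, -80, -20, 1, 3, -75, -10, 2]
print(detect_spikes(trace, threshold=-50))  # prints [3, 7]
```

Real systems band-pass filter first and set the threshold from the noise statistics, but the principle is the same.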

 

There are challenges to doing this for the type of things we're all talking about here, in other words, lifetime chronic integration. What do you want when you want one of these technologies? Well, you want to be sure you get whatever part of that electrical information is relevant to you. That might be spikes: you might want to see actual individual spikes, which represent a neuron's firing. Sometimes you'll just be interested in what's called multi-unit activity, a lot of different spikes mixed together; or in local field potentials, which are not really any individual neuron; you're just happy to hear their cocktail party going on, because you can do something with that cocktail party. You want to see as many as you can. You want it to last a long time. You want it to be biocompatible, which is a complex term we can't unpack in a minute, but the quickest way to say it is that you want to minimize the harm the brain does to the electrode and the harm the electrode does to the brain. Because you're putting this into a brain, you're very worried about infection, because all existing systems go through the skull and stay that way. There are wires coming out of your head, which are very carefully worked on and closed up; there are surgical techniques for this, but there is a wire going through your skull. A hole in your skull, to be precise. You want to minimize the amount of damage you do when you sort of staple this into the brain, and you'll see what I mean by "staple" in 30 seconds. And something everybody's going after: you'd love to do this without some immense thing coming out of the back of your head; you'd like to be able to walk around and do this. That's not just for cuteness; a lot of science would be enabled if you could do all these wonderful things with a completely awake, normally behaving animal.
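The spike/LFP distinction above can be made concrete with a toy filter: a moving average stands in for the low-pass filter that yields the slow "cocktail party" (the LFP), and the residual stands in for the fast spike band. Real systems use proper analog and digital filters; this is only a sketch.

```python
# Crude band separation: low-pass the trace with a moving average to
# approximate the LFP, and treat the residual as the fast spike band.

def moving_average(trace, window=5):
    """Centered moving average; a stand-in for a real low-pass filter."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(sum(trace[lo:hi]) / (hi - lo))
    return out

def split_bands(trace, window=5):
    lfp = moving_average(trace, window)            # slow component
    fast = [v - l for v, l in zip(trace, lfp)]     # fast residual
    return lfp, fast

# Usage: a slow drift with one sharp deflection riding on it.
lfp, fast = split_bands([0.0, 0.5, 1.0, 40.0, 2.0, 2.5, 3.0, 3.5])
```

The deflection survives mostly in `fast`, while `lfp` tracks the drift, which is the intuition behind recording both bands from the same electrode.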

 

So, the state of the art looks like this. This is the famous Utah array picture that changed everything more than a decade ago. Out of this type of work have arisen arrays that look like this, and like this, and you can see that essentially they're a bed of needles, with the exposed recording site at the end of each. You stick all this in, and each of those needles gives you a recording. There are various incarnations of this which I'll skip: this one's made by NeuroNexus, some of these are made by Cyberkinetics, there are different variants, and this is the Duke array that Jose worked on with Miguel Nicolelis. So you can see a common motif.

 

A newer approach to this, and these are not the only people, there are a number of groups: here you have the same kind of needle, pioneered in Michigan, and this is a recent paper from another group. Lots and lots of these little bright dots, each of which is like the independent head of a wire. This shank can take lots and lots of recordings along its length; it's about 1 mm long and 35 microns wide. Each of those little gold spots is recording an electrical trace. Each of those lines is an electrical trace: time is the X axis, and 5 ms is this little bar. And you can see here, there must be a neuron near this bundle, because you can see all these little spikes, and they're correlated; they're all picking up the same nearby neuron firing. That's a spike.

 

So this gives you an appreciation for what you want to do, but there are problems. I want to sketch out these problems, because there's a debate going on in the field, and give you two different passes at what we're doing. The first is what I would call pseudo-conventional, and then I want to end with something we're very excited about that you're going to be hearing about, hopefully.

 

What are the problems? The biggest problem is that these things just don't last that long. Infection is a problem, but you can all imagine that without much explanation, because wires through a hole in your skull are a route for infection; but the actual recording sites themselves also degrade. In rats they last proportionally longer relative to the animal's lifetime, but in primates, essentially, there was a very nice report at the end of the year that pushes this up a bit, but it's still a small fraction of a lifetime before these electrodes stop giving you useful information. Those lines just go flat. Each of those little sites stops showing activity. And there's a big debate in the community as to why. Is it that the wires themselves are allowing infection to go in very slowly, so it creeps in there and starts messing with things? Is it that these needles are really stiff relative to the brain, and over time this really upsets what's going on there? What I didn't mention is that when you go in there you pop a bunch of capillaries, because your brain is as vascularized as it is full of neurons. Does this cause the problem? Is it that they move? This thing is very stiff, and it sits there moving relative to the brain as I go like this; maybe over long periods of time that upsets the cells. Is it that the surfaces chemically just don't look like brain, and so the cells are looking at this going, "why did this skyscraper just land by me? Everybody attack!"? No one knows. This is a big deal. What I want to do is sketch out in two slides, and then wrap up with something different, what we're doing, which is, I think, representative of what a lot of people are doing; you're probably going to hear more about this stuff today and tomorrow.

 

We're attacking this issue aggressively in a number of ways. The first one is: get rid of those wires. Put a 60 GHz radio built with the latest electronics technology inside your skull. That 60 GHz radio takes all the data that's coming through here and beams it through the skull at very high bandwidth, pulling out all that stuff; some fraction of the channels, some compression, these are technical details. Then this thing out here talks to some other nearby device and sends all the information on. It could do some processing; this small thing sitting on your head might itself do some computation. The other thing a lot of people are doing, including ourselves, is attacking the stiffness. So instead of having those [squishing sound], what you have is, think of contact lenses. Polymers as thick as or much thinner than contact lenses lying conformally over your brain, from which sprout, almost like an octopus, very very small, anywhere from a few microns wide to a little bigger, compliant polymer shanks that are inserted into the brain and left there. So you have this very thin spaghetti that sort of permeates the cortex, taking data. These are some of the first ones to come out, so you can see the flexibility there, this incredibly high compliance, and it's connected to a prototype that's going to get much smaller as we work on this. It should be reliable, with tons of channels, and so on.

 

Now, this is just an eye chart, and I'm going to spend two seconds on it, but a lot of technological innovations are required for this. This is not going to be something one lab does. This is an effort over a decade, and it's going to have to involve a lot of people looking at different angles. Just in my lab, which is a small drop in the ocean, you can see all the different technologies that have to be developed: from high-density assembly technologies that involve polymers and silicon and metal, to working out these little contact-lens-like substrates I was talking about to record from, to the actual details of the engineering of the interfaces, to the insertion robotics. Peter Dudovitz has been doing a lot of this; he's here somewhere. Tim Hanson, in Philip Sabes' lab and mine, is working on robots that are literally micron scale, almost stitching machines, that will basically sit there and get these things inside. How do you get a 5-micron-wide, 1-mm-long piece of contact lens material in? It's not a trivial technological problem.

 

I'm about out of time, we've got about 3 minutes, so I want to end real quick with something that I think will change everything, and that we're very excited about. This is the next level, and we're pushing hard on it. You should see a white paper, open access, giving the entire engineering specification for this, very soon (Seo et al., 2013).

 

We call this "Neural Dust". The goal here is to keep the transcranial transceiver, but now you don't have any needles in the brain. You have incredibly small specks, scalable down to tens of microns, which do not use electromagnetic energy to couple to them, because it turns out that coupling the electromagnetic waves of your usual cell-phone radio through a brain is a losing proposition for very small things. You couple out from these independent little specks using ultrasound. Each one is a tiny ultrasound transceiver talking to a base station, and this base station can talk to a number of them. This would be completely untethered, completely embedded, recording what's going on in the brain and sending it out, sort of fairy dust at the top of your cortex, feeding data to a collector that then sends it to the outside of the skull. I'm going to wrap up by saying we're very excited about this. This is the mental output of Jose Carmena, myself, Jan Rabaey, and Elad Alon, sort of this gang of four at Berkeley that's become obsessed with the problem. Look for a white paper very soon with all the technical details laid out in exquisite detail. We really want to invite everybody to start working on these platforms. With that I'm hitting two minutes, and we'll stop and take questions. Thank you.
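A rough back-of-the-envelope calculation illustrates why ultrasound suits motes this small: at comparable frequencies, the acoustic wavelength in tissue is far shorter than the electromagnetic wavelength, so a tens-of-microns speck can couple to it far more efficiently. The material constants below are approximate textbook values, used only for illustration.

```python
import math

# Compare wavelengths in tissue for ultrasound vs an EM radio link.
# Constants are rough, illustrative values, not design numbers.

SOUND_SPEED_TISSUE = 1540.0   # m/s, approx. speed of sound in soft tissue
LIGHT_SPEED = 3.0e8           # m/s, speed of light in vacuum
EPS_R_TISSUE = 50.0           # approx. relative permittivity of brain at GHz

def ultrasound_wavelength(freq_hz):
    return SOUND_SPEED_TISSUE / freq_hz

def em_wavelength_in_tissue(freq_hz, eps_r=EPS_R_TISSUE):
    return LIGHT_SPEED / (math.sqrt(eps_r) * freq_hz)

us = ultrasound_wavelength(10e6)    # 10 MHz ultrasound
em = em_wavelength_in_tissue(10e9)  # 10 GHz radio
print(f"ultrasound: {us * 1e6:.0f} um, EM: {em * 1e6:.0f} um")
# The acoustic wavelength is on the order of a ~100-micron mote,
# while the EM wavelength is tens of times larger.
```

This scale mismatch is the core of the argument sketched above for moving from radio to ultrasonic coupling at the mote level.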

 

Acknowledgements

 

Our thanks go to our volunteers Giulio Prisco, Kim Solez, Chris Smedley, Philip Wilson, and Xing Chen, as well as anonymous volunteers, for their help with the transcription of Congress presentations.
