Control phones and computers with your thoughts: Elon Musk’s latest challenge

Source: Neuralink

Elon Musk’s Neuralink, the secretive company developing brain-machine interfaces, has shown the technology it is developing to the public for the first time. The goal is eventually to implant devices in paralyzed humans, allowing them to control phones or computers.

Neuralink is developing a chip that can read, clean, and amplify brain signals. Together with other components, the chip will form a product Neuralink calls the “N1 sensor”, designed to be embedded in the human body and to transmit its data wirelessly.


Neuralink intends to implant four of these sensors: three in motor areas and one in a somatosensory area. They will connect wirelessly to an external device mounted behind the ear, which will hold the battery. Everything will be controlled through an iPhone application.

To implant the sensors, the scientists at Neuralink hope to use a laser beam to pass through the skull rather than drilling holes. The first experiments will be carried out with neuroscientists from Stanford University. Elon Musk has said the company hopes to implant the first sensors in a human patient by the end of next year.

Neuralink’s technology uses flexible “threads”, which are less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads also create the possibility of transferring a higher volume of data, according to a white paper credited to “Elon Musk & Neuralink”. The abstract notes that the system could include “as many as 3,072 electrodes per array distributed across 96 threads”. The threads are 4 to 6 μm wide, considerably thinner than a human hair.

Precisely because it uses such flexible materials, unlike needle-based alternatives, Neuralink’s technology is difficult to implant. To solve that problem, the company has developed a neurosurgical robot capable of automatically inserting six threads (192 electrodes) per minute while avoiding blood vessels. It looks something like a cross between a microscope and a sewing machine.
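The quoted figures are internally consistent: 3,072 electrodes across 96 threads implies 32 electrodes per thread, so six threads per minute corresponds to the 192 electrodes per minute cited above. A quick back-of-envelope check (the per-thread count and total insertion time are derived here, not stated in the source):

```python
# Sanity-check of the figures quoted in the white paper and above.
electrodes_per_array = 3072
threads_per_array = 96
electrodes_per_thread = electrodes_per_array // threads_per_array  # 32

threads_per_minute = 6
electrodes_per_minute = threads_per_minute * electrodes_per_thread  # 192, matching the text

# Implied time for the robot to place one full array (derived, not stated):
minutes_for_full_array = threads_per_array / threads_per_minute  # 16.0 minutes

print(electrodes_per_thread, electrodes_per_minute, minutes_for_full_array)
```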


The development of solutions to detect brain activity and control computers is not new. The first person with spinal cord paralysis to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle played Pong using only his mind. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

The Neuralink system, if it works, would represent a substantial step forward over older technologies. BrainGate, for example, uses an array of rigid needles supporting up to 128 electrode channels. The Neuralink solution has two advantages: a far greater number of electrodes, and flexible wires that, unlike rigid needles, can follow the movement of the brain within the skull.
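To put the gap in numbers, a rough comparison using the figures above (real usable channel counts depend on signal quality, and the four-array total is an illustrative combination of the numbers in the text):

```python
# Rough channel-count comparison: BrainGate vs the proposed Neuralink system.
braingate_channels = 128          # rigid-needle array, per the text
neuralink_per_array = 3072        # electrodes per array, per the white paper
neuralink_arrays = 4              # sensors Neuralink intends to implant

total_neuralink_channels = neuralink_per_array * neuralink_arrays  # 12,288
ratio_per_array = neuralink_per_array / braingate_channels         # 24x per array

print(total_neuralink_channels, ratio_per_array)
```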

Neuralink has yet to begin the certification process with the FDA. For now the company is still working with mice to verify that the platform is stable. But if the technology works, it promises a “high bandwidth” brain connection implanted through robotic surgery.

Digital health: panacea or chimera?

An interesting article in The Lancet Digital Health, by Kazem Rahimi, reflects on the elusive search for the savings that digital health is supposed to deliver.

Digital health advocates argue that the digital future will be one of more precise interventions, improved health outcomes, increased efficiency, and ultimately reduced health-care expenditure. But how realistic is the promise of reduced costs alongside improved health, or at least no worsening of it?

The author admits that the field is at too early a stage to reach a definitive conclusion on this issue, not least because of the need for more empirical evidence. However, in the interim, given the importance of this issue to current national and international policies, the argument of digital cost reduction seems worthy of scrutiny.

The author states that “despite the substantial contributions of technological progress to improvement in health outcomes, examples of cost-cutting effects are a rarity. On the contrary, technological progress is widely seen as the most important driver of the rise in health-care spending. For instance, magnetic resonance imaging will inevitably be more expensive than its alternative, which is usually either no test at all or a cheaper, but less accurate, diagnostic technique.”

However, the author highlights the differences in digital health. Digital technologies often include innovative software solutions and algorithms that could be substantially cheaper than devices or drugs. In addition, these technologies tend to focus on solutions to the notoriously inefficient delivery systems of health care globally, as opposed to the development of new treatments. Given that the alternative to digital technologies would potentially be a more labour-intensive model of care, one might expect their adoption to replace costly health-care professional time or hospital services.

Rahimi does not question that well designed and tested technological solutions will eventually lead to a better match between resources and the complexity of tasks, and thus to greater productivity or (technical) efficiency. The author gives the example of a machine learning algorithm able to make diagnoses faster or better than most doctors, which could be expected to lead to substantial reductions in the price of that particular service. Given sufficient empirical evidence, one could then directly compare the prevailing approach (doctor diagnosis) with the new digital approach (algorithm plus or minus doctor diagnosis) and conclude that the new intervention gets the same job done at a much lower cost.

But why is such a cost-saving intervention still likely to increase health-care expenditure? This apparent paradox can be explained by the common confusion between microeconomic effects of individual health-care interventions or programmes, and effects on the whole health-care market. Although a microeconomic study might conclude that substituting old with new might lead to net savings, the typical models in such studies assume that health-care utilisation of the service or treatment under investigation remains unchanged and that the two approaches differ only in their costs and health consequences. Thus, an intervention that is of lower price, even without causing a change in health outcomes compared with the alternative, would be expected to result in cost savings.

However, as the author points out, health-care markets tend to be in disequilibrium when demand continues to exceed supply and use. In such a setting, reducing the price of a particular service will invariably lead to an increase in the quantity demanded. Given that total expenditure equals the quantity demanded multiplied by its price, introduction of low-price technologies might lead to an overall rise in expenditure. In other words, medical uses of the new treatment are increased through addressing an unmet demand, and this expansion in use leads to a net rise in expenditure.
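The paradox described above can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy illustration of the expenditure paradox: a cheaper service can
# raise total spending when there is unmet demand. All figures hypothetical.
old_price = 100.0   # cost of the prevailing service (e.g. doctor diagnosis)
new_price = 20.0    # cost of the cheaper digital alternative

# Microeconomic view: utilisation assumed unchanged, so substitution saves money.
quantity = 1_000
savings_if_use_fixed = (old_price - new_price) * quantity  # 80,000 saved

# Market view: the lower price expands use by addressing unmet demand.
# Suppose utilisation grows sixfold once the service costs a fifth as much:
new_quantity = 6_000
old_expenditure = old_price * quantity        # 100,000
new_expenditure = new_price * new_quantity    # 120,000: a net rise

print(savings_if_use_fixed, old_expenditure, new_expenditure)
```

Because total expenditure is quantity demanded times price, the sixfold expansion in use more than offsets the fivefold price cut, so spending rises even though each individual case is cheaper.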

The article offers further reflections, which I invite you to read in full. It is an interesting point of view on a subject too often addressed in a dogmatic and simplistic way.