Daily Roundup: Neural Implants Help With Cognitive Function

Hot on the heels of the mixed press coverage of the ENCODE project comes a EurekAlert article I’m very wary of taking seriously. Wake Forest University researchers claim to have created a prosthetic device that restores cognitive function to primates whose capabilities had been impaired by injections of cocaine, in some cases to better-than-normal levels.

Specifically, the monkeys involved were able to complete tasks correctly more often when they were stimulated by this prosthetic.

Before we begin talking about taking magic pills to help us through stressful decisions, let’s review the facts: the researchers set the monkeys a task that involves matching a shape they’d seen a few minutes earlier to one in an array of shapes, and the animals had been trained for two years to reach 70-75% proficiency at it. Miracle cognitive therapy, this certainly isn’t.
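For the curious, here’s roughly what a session of that kind of task looks like in code: a minimal sketch, in which the 0.72 hit rate simply stands in for the 70-75% proficiency mentioned in the article, and the shapes, trial counts and crude “behavioural model” are all invented rather than taken from the study.

```python
import random

def run_session(n_trials=100, n_choices=4, p_correct=0.72, seed=0):
    """Toy delayed-matching session.

    Each trial: show a sample shape, wait, then present n_choices shapes and
    score whether the 'monkey' picks the sample again. Everything here is an
    invented illustration, not the actual experimental protocol.
    """
    rng = random.Random(seed)
    shapes = ["circle", "square", "star", "triangle", "cross", "ring"]
    hits = 0
    for _ in range(n_trials):
        sample = rng.choice(shapes)
        distractors = rng.sample([s for s in shapes if s != sample], n_choices - 1)
        array = distractors + [sample]
        rng.shuffle(array)                 # the on-screen array of choices
        # Crude behavioural model: pick the sample with probability p_correct,
        # otherwise pick one of the other shapes in the array.
        if rng.random() < p_correct:
            choice = sample
        else:
            choice = rng.choice([s for s in array if s != sample])
        hits += choice == sample
    return hits / n_trials

print(f"session accuracy: {run_session():.0%}")   # roughly 70-75%
```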

But what I am interested in is how their prosthetic works. What are cortical layers L2/3 and L5, and why are they important? How do they communicate, why is that communication suppressed by cocaine, and why is dopamine involved in the first place?

Most fascinatingly, how do you work out the mathematical relationship between the activity in those two layers? How is that even possible? And how do you tune a device like this to record during the correct input?

This, for me, is a classic example of a press release that promises so much and delivers tantalizingly little. Now I’m itching to speak to the scientists involved to ask them these questions.

Daily Roundup: Not Exactly Junk

It’s never been “junk” DNA. I seem to remember reading about this years ago, but it’s only made the news in a major way now, for some reason. Researchers working on the ENCODE project, which was begun to catalog the functional pieces of the human genome beyond the genes themselves, have reported that about 80% of human DNA has some biochemical function, much of it regulatory in nature. That is, those bits of DNA don’t directly code for the proteins that the body needs or uses, but they help control the genes that do.

As this Wired article puts it:

There are transcription factors, proteins that link these pieces together, orchestrating gene activity from moment to moment, and basic rules for that orchestration. There are also multiple layers of so-called epigenetic information, describing how the activity of genes is modulated, and how that varies in different types of cells.

And the proportion of these rules and regulatory elements to the actual genes themselves is quite stunning:

“Every gene is surrounded by an ocean of regulatory elements. They’re everywhere. There are only 25,000 genes, and probably more than 1 million regulatory elements,” said Job Dekker…

That’s a 40 to 1 ratio right there.

This information is crucial, because it’s often not enough to know simply which genes are expressed and which aren’t. Gene expression, of course, is often only the first step in a long process that results in a symptom or disease or hormone expression. But knowing only where a gene is and whether it’s turned on or off is akin to seeing only whether a light switch has been thrown. You aren’t able to see the internal wiring, or to control when and how that switch is thrown.

Hopefully, ENCODE will soon be able to give us glimpses of that wiring.

Daily Roundup: Introducing Pyruvate Kinase

One of the things that fascinates me the most is the sheer number of delicate, complex biochemical pathways that need to function in order to keep us in working order. Knocking out even one step — getting a protein fold wrong, transcribing a nucleotide base incorrectly — could mean disaster. That’s what leads to sickle cell anaemia, in which the haemoglobin in red blood cells is mis-formed and cannot carry oxygen as efficiently as usual.

Sickle cell anaemia is caused by the mutation of just a single nucleotide. But there are other, equally dramatic changes in the gene-to-protein pathway that can lead to complications.

One illustration of just how sensitive this protein machinery is comes from Stanford’s recent research on muscle recovery using a “cooling glove”. It is, according to the researchers themselves, “better than steroids”; one of the scientists, a self-professed gym rat, improved his pull-up rate by 244% in six weeks.

The device itself is unremarkable: it’s a contraption in the rough shape of a glove that creates a vacuum and draws blood to the palm. A plastic lining in the glove contains water, which cools the palm down.

Described by its own creators as “silly”, the glove works ridiculously well by taking advantage of two fundamental facts about body temperature. The first is that most of the heat in our body is expelled through our face, feet and palms (mostly our palms), in much the same way that dogs expel heat through their tongues.

The second factor is linked to the reason why overheating matters so much in the first place. Our bodies — and those of any other animal, really — run on proteins. Haemoglobin is one of these, but there are other, more subtle proteins that control the production of raw energy. The “unit” of energy that serves as a kind of energy currency is ATP, or adenosine triphosphate, which is required in any number of processes in the human body. ATP itself isn’t a protein, but the enzymes that produce it are, and proteins are held together by a delicate, temperature-sensitive balance of chemical bonds. Raise or lower the temperature too much, and a protein’s specific 3D structure can be critically damaged, depending on how sensitive it is to temperature.

And that’s exactly what happens when our bodies overheat. Muscle pyruvate kinase, or MPK, is responsible for the production of ATP within muscles. Much of the general population can rely on MPK working perfectly fine at any given time, but athletes, who train rigorously and ferociously, need all the help and recovery they can get between bouts of exercise. Overheating an athlete’s body means deforming and deactivating the MPK proteins within it, thus slowing down muscle recovery. But when the muscle cells are cooled down, MPK is basically “reset” and can begin working again.

It’s a beautiful, elegant system, and the researchers took advantage of it by applying the simplest, most efficient intervention: cooling the palms.

But pyruvate kinase’s role in human physiology doesn’t stop there. MIT researchers have discovered a far more crucial role that it could be playing in the growth of tumors.

Pyruvate kinase comes into the picture during glycolysis, which produces two molecules of ATP from each molecule of glucose. When one form of pyruvate kinase, called PKM1, is active all the time, the downstream processes go on to produce much more ATP. Tumorous cells, however, express another form of the protein, PKM2, under which those downstream processes produce less ATP but far more carbohydrates and lipids — essentially, the building blocks of cells. The idea seems to be that normal cells simply need more energy to carry out their normal processes, whereas cancerous cells need more raw material in order to keep multiplying. A previous study by the same team showed that switching on PKM1 activity in cancerous cells slowed tumor growth.

What the team is trying to do now is more subtle: to force PKM2, the “abnormal” form of pyruvate kinase, to operate all the time, “essentially turning it into PKM1”. I must admit I’m not sure how activating PKM2 is equivalent to turning on PKM1, but in mice implanted with cancer cells and treated with pharmaceutical compounds that kept PKM2 constantly active, the researchers found no evidence of tumorous growth.

It’s pretty fascinating that a single protein is beginning to prove its worth in many ways. I’ll be interested to see what else pyruvate kinase can help with.

Daily Roundup: The Search for a Self

Blogger’s Note: written on Sunday; finished and posted on Wednesday, alas.

Lazy Sunday mornings are the best times to feel contemplative and intellectual. Also, they are good for the eating of pancakes, but since my efforts to that end were thwarted today, I’m going to have to take the intellectual high road.

And rather easy that is, too. Last night I read an interesting piece on how complex a brain would have to be to exhibit self-awareness, and the answer seems to be “not very”. Neuroimaging studies have apparently shown that the sense of self and identity arises, or at least registers, in the cerebral cortex, the brain’s most recognizable, wrinkly exterior. If that were indeed the case, it would stand to reason that humans with severely damaged cortices would be unable to perceive themselves as independent from others. However, experiments with subjects — both adults and children — showed that those with damaged cerebral cortices were still able to distinguish themselves from the rest of the world and recognize themselves in photographs and mirrors.

Ferris Jabr, the author of this piece, is quick to point out the difference between self-awareness and consciousness. What I’ve described above is consciousness: the ability to perceive oneself. A more subtle step towards what I’ll call “personhood” is self-awareness, the ability to “realize you are a thinking being and think about your thoughts”, as Jabr puts it. Another way I’ve heard this described is metacognition, which is arguably a skill that dolphins and some apes share with humans (citation needed). For instance, if you were about to make a decision and then paused because you weren’t sure you were making the right one, you were thinking about whether your choice was correct: that’s metacognition.

Self-awareness and consciousness are the tools we’ll need to imbue robots with, if we’re talking about “real” artificial intelligence. Many AI machines and programs today can do one task very well, or can learn from their environment and infer the details of their situation. But it has so far been impossible for them to recognize that they are separate entities in the world, and that their limbs and peripherals belong to them. I’m of that rather romantic camp (if there even is such a camp) which argues that AI isn’t “real” unless robots can interact with humans in a social context. That’s as opposed to the idea that AI simply needs to perform some probabilistic task well: making recommendations, booking reservations, answering questions posed in a more human format, much like the iPhone’s Siri. So when I read the news that Yale’s Nico robot could identify its limb in a mirror, I was a little excited.

First, though, I must dispel any notion that this is Skynet incarnate. As io9 points out (somewhat disgustedly), all the authors have done so far is get Nico to locate its arm in 3D space using its own reflection in a mirror. This is about as artificial as intelligence gets, because it doesn’t mean the robot has any sense of self-awareness — it has just learned to assign 3D spatial coordinates to its arm by “looking” at its reflection. I’ll admit my excitement ultimately abated at this point.

If you think about it, the really interesting problems are only beginning. A robot being able to recognize itself in a mirror still isn’t possible: for that to happen, I believe the robot would have to be given a command to activate something on its own body, based on what it’s seeing in a reflection. On second thought, that still might not prove anything, because the robot would be blindly following instructions to identify a point in space and then depress it. How would a robot be able to tell that something on its body belongs to itself?

I’d love to be able to speak to one of the authors about the assumptions they made and the next steps we should be taking if we want a truly self-aware robot.

Flipping through the comments section on Slashdot reveals a few more nuances that I wish the writers of the science pieces had thought to explore.

“Knowing what part of the camera scene is moving because something is happening, and knowing what part of the scene is moving because you’re waving your end-effector is useful. If you can extract your own state from indicators in the environment, then you have more information to work with…” says commenter Kell Bengal. It would have been nice to verify with the authors whether this is really what happened.

Perhaps the “proper” test would be to get a robot to perform a task involving objects hidden from its direct sight but visible in a mirror. This way, the robot would have to move something around according to what the mirror is telling it, recognize that the movement it sees is its own doing, and modify its actions to suit the task at hand.

Daily Roundup: DNA as Storage Device

(Late to the party, or what? Unfortunately, my day job every now and then interrupts my fantasy of becoming a science writer.)

Amid the building excitement around quantum computing, storage and transmission, researchers in Baltimore and Boston have managed to encode an entire book (on — what else? — biology) into DNA. As Nature News puts it, it’s the largest piece of non-biological information ever encoded in a biological molecule like DNA.

The concept isn’t difficult to comprehend, although the execution is delicate, for obvious reasons. The team decided to treat the information as digital bits, ones and zeros, and encoded every character of the book into its ASCII representation (and, note, the book’s images and a JavaScript program were converted into bits as well). Then they settled on a binary-to-biological mapping scheme: A(denine) and C(ytosine) would stand for 0s, and G(uanine) and T(hymine) for 1s. The DNA strands were then synthesized from scratch. To read the information back, the researchers first amplified the DNA using PCR, or polymerase chain reaction, which ensured that the data would be preserved with very few flaws. Then they sequenced the DNA — “reading” it back to determine the order of bases — decoded it back into 0s and 1s, and finally turned those back into characters.
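To make the scheme concrete, here’s a minimal round-trip sketch in Python. The bit-to-base rule (A or C for a 0, G or T for a 1) comes from the paper; the function names, the random choice between the two candidate bases and the toy message are my own illustration, not the researchers’ pipeline, which as I understand it works on many short synthesized fragments, each carrying an address, rather than on one long strand.

```python
import random

# A and C encode 0, G and T encode 1, as described in the article. Which of
# the two candidate bases gets used for a given bit is an illustrative choice
# (random here); the real scheme picks bases so as to avoid troublesome
# sequences such as long repeats.
ZERO_BASES = "AC"
ONE_BASES = "GT"

def text_to_bits(text):
    """Convert ASCII text into a string of bits, 8 per character."""
    return "".join(f"{ord(ch):08b}" for ch in text)

def bits_to_dna(bits):
    """Encode each bit as a single base: 0 -> A or C, 1 -> G or T."""
    return "".join(random.choice(ZERO_BASES if b == "0" else ONE_BASES)
                   for b in bits)

def dna_to_bits(dna):
    """Decode each base back into a bit."""
    return "".join("0" if base in ZERO_BASES else "1" for base in dna)

def bits_to_text(bits):
    """Group bits into bytes and turn them back into characters."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

message = "Regenesis"                     # toy stand-in for the book's contents
dna = bits_to_dna(text_to_bits(message))
print(dna)                                # e.g. CGGCACGAAGGC...
assert bits_to_text(dna_to_bits(dna)) == message
```

Reading the strand back is just the reverse walk: sequence the bases, map them to bits, and regroup the bits into characters, which is what the assert at the end checks.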

There’s a remarkable graph in their abstract here that places their work in context, when it comes to information density.

Screenshot from the online paper. Copyright George M. Church, Yuan Gao, Sriram Kosuri and http://www.sciencemag.org.

We see that the information density of DNA is a few orders of magnitude higher than that of the next most efficient method of encoding information. This is also a good place to point out that DNA itself has evolved to be an incredibly high-fidelity medium: it has to preserve the information describing an entire organism across its lifespan, so data stored in it is very stable indeed. In fact, in this experiment, the researchers recovered their data with only 10 bit errors in 5.27 million — an error rate of about 0.0002%.

Of course, this isn’t going to be immediately useful for anything we need in our daily lives. The synthesis and sequencing require equipment that is far too specialized at the moment, and take far too long, for this to be viable as, say, an alternative to an external hard drive. But think of long-term storage. Any highly sensitive, crucial information of international import could be stored long term, in very little space. Or, if you’d like to think futuristically, it could be part of a planetary colonizing ship’s cargo centuries into the future. All of humanity’s data, in one place!

One interesting thing to note is that DIY genetic tools have been around on the fringe of home grown labs for a couple of years now, and have definitely been coming into their own in the biohacking community. I wrote about genetic hackers previously, and a little web surfing has come up with a couple of articles — published in both Wired and Nature, interestingly enough — that highlight this growing community. If anyone’s going to come up with a way to create nano biobots that you can program through a simple desktop tool, inject into your body, and then wait for repairs or enhancements to be made… it would be the biohackers.

Daily Roundup: Blind Mice, Curiosity Check-in

One of the upsides of a complex neurochemical and biological pathway such as sight is that there are many ways of tackling the same problem. Although that complexity makes the solutions complicated as well, at the very least it guarantees that there are multiple avenues to explore. Researchers from Weill Cornell Medical College have combined several of these avenues to create a retinal prosthetic for mice that restores their ability to perceive the world around them to an unparalleled degree.

Previous research had shown that introducing light-sensitive proteins into the retina, which is sometimes damaged in the blind, can enhance vision. The retina, treated this way with gene therapy, is better able to stimulate the output cells — the ganglion cells — to send electrical impulses along to the brain.

Below is a crude little mockup of the process of vision, as I understand it. Click to enlarge; WordPress isn’t kind to these images. 

How vision works in the eye, as far as I can understand.

But Nirenberg and Pandarinath believed that another crucial piece of the puzzle had to be in place for the entire system to operate well: they had to be able to replicate the pattern of electrical impulses produced by the eye’s circuitry, and make sure those impulses correctly represented the scene being observed. Their research into the “code” that is generated between light hitting the rods and cones of the retina and the ganglion cells sending signals to the brain revealed that this pattern could indeed be reproduced artificially.

In short, they’ve built a two-part system: an encoder, which converts images entering the eye into the corresponding patterns of electrical impulses, and a projector, which converts those impulse patterns into pulses of light. The light then stimulates the light-sensitive proteins that gene therapy has introduced into the ganglion cells.

How the researchers have augmented this system.
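To make the encoder stage a little more concrete, here’s a toy sketch in Python of the kind of pipeline being described: a simple filter plus a nonlinearity turns each part of the image into a firing rate, spikes are drawn from those rates, and each spike becomes a brief light pulse aimed at the corresponding treated ganglion cell. Every number and function in it is my own invention for illustration; the real encoder is fitted to recordings from actual retinas and is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_encoder(image, dt=0.001, duration=0.2):
    """Toy retinal encoder: image -> spike trains for a grid of 'ganglion cells'.

    Illustrative only: the filter, nonlinearity and firing rates are made up,
    not the published model, which is fitted to real retinal recordings.
    """
    # Crude filter: how much brighter each pixel is than the overall mean.
    drive = image - image.mean()
    # Static nonlinearity turning the filtered value into a firing rate (Hz).
    rates = 50.0 / (1.0 + np.exp(-5.0 * drive))       # 0-50 spikes/s, made up
    # Poisson-style spike generation over `duration` seconds in steps of `dt`.
    n_steps = int(duration / dt)
    spikes = rng.random((n_steps, *image.shape)) < rates * dt
    return spikes                                     # shape: (time, rows, cols)

def spikes_to_light_pulses(spikes, dt=0.001):
    """'Projector' stage: each spike becomes a brief light pulse aimed at the
    gene-therapy-treated ganglion cell at that (row, col) position."""
    times, rows, cols = np.nonzero(spikes)
    return [(t * dt, r, c) for t, r, c in zip(times, rows, cols)]

frame = rng.random((8, 8))                 # stand-in for one camera frame
pulses = spikes_to_light_pulses(toy_encoder(frame))
print(f"{len(pulses)} light pulses scheduled for an 8x8 patch of cells")
```

The design point is simply the division of labour the article describes: the encoder decides when each cell should fire, and the projector turns that decision into light that the treated cells can actually respond to.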

They’ve now gone on to crack the “code” of vision in monkeys, and hope to be starting human trials soon.

Another proposal for restoring vision that I saw recently involved a much simpler, but potentially far less useful technique. Berkeley scientists discovered that regular injections of a chemical called AAQ restored a degree of photosensitivity to mice that were formerly blind, aiding them in converting light to electrical impulses and eventually to images.

Perhaps combined with, or replacing, the gene therapy step in the Cornell research above, vision prosthetics and enhancement might become a reality in the next few years. It’s particularly interesting, to me, to see how the various methods of attacking the problem of sight have resulted in a multi-disciplinary, multi-faceted system like this. It’s cumbersome, but it works.

ExtremeTech has an extended article on this, with image comparisons of the enhanced and non-enhanced systems. The differences are quite stunning.

And now for the obligatory Curiosity news!

Everyone’s favorite Mars rover is doing well, as is NASA’s reputation at the moment. I spent most of today surreptitiously reading this Reddit thread (Ask Me Anything) which featured the scientists and engineers from the Curiosity team. And yes — it did include Mohawk Guy.

Some truly fascinating stuff that came out of this included the fact that NASA has a fledgling Planetary Protection Guidelines page. I say fledgling because a) it’s impossible to predict how we should react to alien civilizations when they haven’t been found, and b) we’re still not sure we’re anywhere close to finding them. Most of it is prosaic stuff, like “let’s be very clean so we don’t contaminate anything!” but I suspect directives like “always wait for the alien to introduce itself!” are just waiting in the wings.

A geek can dream.

Brave New Worlds

Curiosity fever is still under way (and with it, a greater appreciation of Mohawks, apparently), and while our plucky little rover is climbing craters and zapping rocks, we’re already beginning to ask: what’s next?

And the answer, inevitably, is: When do we get to go to Mars?

The plan to colonize Mars has a long and sometimes fanciful history. I remember reading Reader’s Digest and being thoroughly fascinated by the idea that we would create a human colony on Mars by 2020. “I’ll still be alive then!” thought my ten-year-old self. And this morning, my past collided rather oddly with my future when ZDNet published a video by Reaction Engines, chronicling the future of manned Mars exploration.

The idea is thrilling. It gives me goosebumps, literally, to think of ourselves exploring a new planet, dealing with the challenges it’ll present and carving out an ecological niche for ourselves. The sheer knowledge and technological experience we’d gain from the exercise isn’t to be discounted, either. But I think it’s time we remind ourselves what we lack, and all the reasons we shouldn’t be colonizing anything until we’ve taken care of a few main points first.

Planets are not disposable. We cannot leap across the solar system, colonizing worlds, simply because we’ve exhausted our own planet’s resources and it’s time to mine another. If we’re incapable of living sustainably on the planet we evolved on, what makes us think we’ll be capable of doing so on a world we’ve never seen with our own eyes?

We’ll first have to cut ourselves a place on this new planet, finding out what — if any — ecological system exists and how we can fit ourselves in without displacing too much of it. And then we’ll have to figure out how to remain there for a significant period of time without destroying the climate — or ourselves, for that matter.

For instance, what do we really know about how global warming occurs, whether it is cyclical, and what the best way to deal with it is? We’ll move to other worlds for many noble reasons, but surely for selfish ones as well, like the extraction of resources. And when we begin to argue over the best way to balance extraction and preservation, whose logic will prevail?

This isn’t Manifest Destiny, by a long shot. The idea of colonization is glorious. Whichever generation does it will be hailed as pioneers and superheroes. We’ll be facing the biggest challenge of our lives, and facing it collectively, as a species, as a civilization. Which is why it will be so easy to be swept off our feet by the possibility and the power. The stars beckon!

But no.

We cannot do this simply because we can. Even if there isn’t life on other planets, even if the solar system seems as though it’s ours for now, we cannot irresponsibly wreak havoc on other worlds. We destroy all chance of learning from our mistakes that way.

Of course we should colonize Mars, and any other planets we can reach and set foot on. But only after we’ve understood the damage it might do, after we’ve considered how best to be responsible, after we’ve learned from our own mistakes.

In other words, we’re not ready for Mars. Not yet, not by a long shot.

Curiosity Lands On Mars!

While athletes celebrated the peak of human physical performance in London, NASA scientists were busy breaking other kinds of records in outer space. The latest Mars Rover, Curiosity, just landed on the surface of the red planet and transmitted the first images of itself. It’s been a voyage and a descent strategy that can only be called audacious.

When we first saw the “seven minutes of terror” video, it served to remind us just how many things could go spectacularly wrong; the complexity of the landing stages was the main worry. But the rover seems to have landed absolutely on target, without a hitch, transmitting heartbeat information via Odyssey and sending back images all the way.

And the first image?

That is a wheel on Mars!

Watching the NASA live stream was wonderful: people clapping at every successful stage of the mission (“Parachute deployed! Powered flight!”), the growing certainty that we were going to make it, the final confirmation that Curiosity had touched down, and then the first images. The room erupted in cheers and clapping. I distinctly saw a few people in tears, hugging everyone within reach.

This is history in the making.

Here’s a nice Wired article summarizing the rover’s capabilities. I’m most impressed by the nuclear-powered lab and the resolution of the Mars Hand Lens Imager. The Wall Street Journal, surprisingly, has a good close-up shot of the rover’s sensors as well. I can’t wait for the information from the rover over the next two years.

Onward and upward!

Daily Roundup: Living Longer, Better

A little break from the mechanics of the last few posts: yesterday’s news consisted of some startling, and possibly controversial, biological revelations.

It’s been known for a while now that women outlive men, by about five to six years. A look through some simple statistics makes it clear that this has been the case since at least 1930, so better living conditions for women and improved access to female healthcare might not tell the whole story. Some scientists at Lancaster University think they have the answer: mitochondrial genetic inheritance.

Mitochondria are tiny organelles believed to have been co-opted by eukaryotic organisms (which include humans) a couple of billion years ago. Eukaryotic organisms — my high-school biology is slowly returning to me! — are creatures whose internal structures are enclosed and separated by membranes. The most important of these internal structures is the nucleus; prokaryotes lack one, and eukaryotes are defined by it.

Mitochondria are one of the most important of these internal structures. Producers of ATP, the cell’s energy source, they are crucial to cellular health. One of the reasons that mitochondria are theorized to be symbiotic with our bodies is that they actually contain their own version of DNA, with a handful of genes that code for proteins important to respiratory processes, or the production of energy via ATP. The idea of a mitochondrial Eve arose when biologists discovered that every child carries only the mother’s copy of the mitochondrial DNA. There’s no recombination analogous to the meeting of egg and sperm; the entire DNA of the mitochondria is simply handed down from mother to child¹.

Scientists from Lancaster conducted a rather interesting study to figure out whether this mitochondrial handing-down affects males differently from females. Using fruit flies, they determined that variations in mitochondrial DNA seemed to correlate with male life expectancy, while having no effect on female life expectancy. The idea, if I understand it correctly, is that because mitochondria are inherited only from mothers, natural selection can weed out mutations that harm females, but mutations that harm only males are invisible to it and are free to accumulate. That could mean that mutations which contribute, in whatever small way, to a shorter male lifespan get passed down through the generations. The Lancaster researchers argue that this “Mother’s Curse”, which is probably the most frustratingly hyperbolic scientific coinage I’ve heard, would account for reduced male life expectancies.
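To convince myself the logic holds, here’s a toy simulation in Python. Every number in it (population size, starting frequency, fitness cost) is invented for illustration, and the model is deliberately crude; the only point it makes is that harm to sons never enters the loop that decides which mitochondrial variants get passed on, while harm to daughters does.

```python
import random

def simulate(generations=100, n_mothers=1000, start_freq=0.2,
             female_cost=0.0, seed=1):
    """Toy model of a mitochondrial variant that only mothers transmit.

    Because sons never pass on mtDNA, any harm the variant does to males has
    no way to influence its frequency; only `female_cost` matters here. All
    numbers are invented for illustration.
    """
    rng = random.Random(seed)
    mothers = [rng.random() < start_freq for _ in range(n_mothers)]
    for _ in range(generations):
        next_gen = []
        while len(next_gen) < n_mothers:
            mom = rng.choice(mothers)
            # Carrier mothers reproduce less often only if the variant costs
            # *females* something; a cost paid by sons never shows up here.
            if mom and rng.random() < female_cost:
                continue
            next_gen.append(mom)   # daughters inherit their mother's mtDNA
        mothers = next_gen
    return sum(mothers) / n_mothers

# A variant that hurts females is purged; one that hurts only males is not.
print("harms females:   ", simulate(female_cost=0.3))
print("harms only males:", simulate(female_cost=0.0))
```

Run it and the female-harming variant gets driven towards zero, while the male-harming one just drifts around its starting frequency: the Mother’s Curse in miniature.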

It’s an interesting hypothesis, but I think I’ll wait for the experiments to be repeated, or for something analogous to be found in human studies. It’s rather too sweeping a conclusion, especially when combined with the assertion that it could have implications across all species with similar life-expectancy gaps. Does mitochondrial inheritance work the same way across all of them? If not, what other factors could contribute? This rather well-annotated Wikipedia article indicates that mitochondrial DNA is remarkably slow to accumulate mutations — perhaps once every 3500 years, or 35 human generations. That’s plenty of time to develop mutations harmful to men, but it would be interesting to see where the life-expectancy differences began to show up, corresponding to the mutations in the DNA.

Another article, this one far more controversial, was the link between persistent cancer and something called “cancer stem cells”. Researchers in three different studies tracked pre-cancerous tumors and found that most of the cell populations in later stages of division had descended from a small subset of the original cell population. In the study conducted by researchers from Belgium, it was reported that the cancer stem cells looked similar to skin stem cells.

At first, the idea of cancerous stem cells seemed rather paradoxical to me. After all, cancer is the result of a small population of cells gone wild, refusing to undergo apoptosis, the process by which damaged cells trigger a sort of self-destruct mode. And this must begin in some fairly mature, developed, specialized cells of internal organs. So I’d like to discover how cancerous cells grow and spread across the body, causing the cells of other internal organs to go rogue.

It might be time to do more research — and talk to a graduate student I know…

Footnote:

1. A whole other interesting tangent is the idea of the “mitochondrial Eve”, the most recent woman from whom all humans alive today descend in an unbroken maternal line. This Wikipedia article gives a little bit of an overview, although more citations are probably needed.

Daily Roundup: 3D Printing… Everything

Two articles in PopSci caught my eye this week, and both concerned 3D printing — in wildly different ways.

French design student Luc Fusaro created the first custom-built running shoe using 3D printing techniques. At 96 grams, it’s a full 42% lighter than the Nike Mayfly, until recently the world’s lightest running shoe.

On the other end of the spectrum of usefulness (I’m revealing my liberal ways here, aren’t I?) is the working rifle one gentleman assembled around a lower receiver he produced on a Stratasys printer.

And after I found those two examples of what 3D printing can do for us, I looked around PopSci.com for a little while to see what else I could find. The almost-completely printed UAV (the only thing created using conventional means was the engine) wasn’t even the most extreme example — the Urbee is a car whose entire body was printed, and that was back in 2010.

I think 3D printing has been creeping up on us for a while now. My first exposure to it was this rather terrible video, which was so astonishing that I honestly believed it to be a hoax for a good several months. But the novelty will begin to wear off soon enough, and we’ll start asking ourselves the really difficult questions — not “What can I make with this?” or “Can I print a space shuttle?”, but “What does it mean if I can manufacture weapons at home?”

To inject a little reality into a nightmarish scenario of terrorists printing weaponry in some remote location, we should note that you’d need both the schematics and a capable printer to be able to do something like this. Thingiverse, which is linked to in the article about the rifle, hosts a number of other gun and rifle parts; I’m not well-versed enough in the mechanics to judge how much of a threat this sort of thing really is. I must note, though (via a link given by a commenter on HaveBlue’s design), that Thingiverse’s Acceptable Use Policy does state that “(a) You agree not to use the Site or Services to collect, upload, transmit, display, or distribute any User Content… ii) that… promotes illegal activities or contributes to the creation of weapons, illegal materials or is otherwise objectionable”.

Announcing it is one thing; enforcing it is another. None of the admittedly few gun parts I found on Thingiverse had been taken down. Perhaps they’ve taken down the others and are slowly working their way through them, but this would require some serious moderating.

If those levels of moderation grow, along with the community, does that mean that we’re in greater danger? Should 3D printing be restricted to strictly regulated companies and industries, even though they could be boons to innocent engineers and hobbyists? Put another way, should the possibility of printing weaponry prevent us from printing cars and shoes and planes?

Perhaps the situation isn’t as dire as we think it is. Given the state — or the lack, rather — of gun control in this country, I’d hesitate to suggest it, but maybe a supervisory approach is really what we need here. If you’ve invested in a 3D printer, gotten hold of the schematics and printed out the various components of an entire rifle, your interest in weaponry is probably a little more sophisticated than that of someone whose sole purpose is to facilitate crime (I am, of course, leaving the 3D-printer-enabled terrorists out of this equation). In that case, we could assume that the person doing the printing either knows what they’re doing with their product or would want to be trained to use it. So maybe, in the case of guns, a process of registration, licensing, training and regular maintenance should be enforced.

What about the terrorists and the mentally disturbed, then, those who can’t be trusted to register anything they create? That’s where I find myself without an answer — except, Big Brother-like, to track the purchases of 3D printers. And that’s a far from satisfactory suggestion.

I think 3D printing is on the cusp of becoming a new brand of technology altogether. It isn’t simply information. It’s a way to create tools, and perhaps in the future weapons and vehicles. Possibly, if we stretch the boundaries of what’s allowed in 3D printing (I’d have to look this up), even food substitutes for impoverished regions of the world. The matter compilers of Neal Stephenson’s The Diamond Age could be around the corner.

I’ll be following developments with equal parts trepidation and excitement.

Edit: How’s this for developments?