Monthly Archives: July 2012

Daily Roundup

Lesson learned this week: I should really pay more attention to Google Science Fair.

While the rest of the world debated politics and committed crime, a bunch of 13-to-18-year-olds came up with some fantastic submissions for the Fair. The one that caught my eye — naturally — was the winner, Global Neural Network Cloud Service for Breast Cancer.

There are multiple reasons to love this work: it’s a more accurate form of breast cancer detection that works with the raw data provided by biological samples; it’s up on the cloud so that as much data as possible can be collected; and it’s created by a girl who taught herself programming and neural networks, and implemented hers in Java.

Whew. If I’d done a third of this when I was 17, I’d have been happy with myself.

What’s more, this isn’t the first such competition that Brittany Wenger has participated in. According to the Google slides of her project presentation, she’s been doing similar things for at least the past two years.

But one of the things I find the most arresting is that she put together a neural network simulation in Java, which is the language I program in 90% of the time. I know the basics of how a neural network functions, but to be honest, I hadn’t considered how it would be implemented in an object-oriented language. And this is something Brittany introduces with the deceptively simple “4. Implement a custom neural network in Java.” in her slides. (I’m pretty sure I didn’t know what asynchronous methods were when I was 17.)

I wish I could ask for a peek at her code or something, purely to see how the feedback and weights would work in a system like this, in Java.
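In the meantime, I can guess at the rough shape. Here’s a minimal sketch of a single trainable neuron in Java — the class name, the sigmoid activation and the training rule are all my own guesses at how one might do it, not anything from Brittany’s actual code:

```java
// A single artificial neuron: weighted sum of inputs, squashed
// through a sigmoid, with a simple gradient-descent weight update.
// Purely illustrative -- my own sketch, not the actual project code.
public class Neuron {
    private final double[] weights;
    private double bias; // starts at 0.0

    public Neuron(int inputs) {
        weights = new double[inputs];
        java.util.Random rng = new java.util.Random(42);
        for (int i = 0; i < inputs; i++) {
            weights[i] = rng.nextGaussian() * 0.1; // small random start
        }
    }

    // Weighted sum of inputs plus bias, passed through a sigmoid.
    public double activate(double[] x) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * x[i];
        }
        return 1.0 / (1.0 + Math.exp(-sum));
    }

    // One gradient step nudging the output toward the target.
    public void train(double[] x, double target, double lr) {
        double out = activate(x);
        double delta = (target - out) * out * (1.0 - out); // sigmoid derivative
        for (int i = 0; i < weights.length; i++) {
            weights[i] += lr * delta * x[i];
        }
        bias += lr * delta;
    }

    public static void main(String[] args) {
        Neuron n = new Neuron(2);
        double[] on = {1, 1}, off = {0, 0};
        for (int i = 0; i < 5000; i++) {
            n.train(on, 1.0, 1.0);
            n.train(off, 0.0, 1.0);
        }
        System.out.println(n.activate(on) + " vs " + n.activate(off));
    }
}
```

A real network would wire many of these into layers and backpropagate the deltas through them; the single-neuron case at least shows where the “feedback and weights” live.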

Here’s a nice little interview with her by Scientific American, one of the Fair’s sponsors.

Possibly my favorite part of the interview is this:

I went home that night, and I bought a computer programming book and, with no experience, decided that was what I was going to do with the rest of my life.

This is what I find the most inspiring about the participants of the fair: they find something fascinating, throw themselves at it, and end up scaling up their ideas to solve larger and larger problems.

Daily Roundup

The internet today is practically seething with news of the rat-heart-jellyfish story, which is less absurd and far more interesting than this description makes it sound.

Researchers and bioengineers at Harvard have created what might be called a morphologically accurate jellyfish: a slip of a creature that pulses its way through liquid when an electric current is applied across it. We say “morphologically” because it has nothing, biologically, to do with a jellyfish; the creature is made from rat heart cells.

The news surrounding this, at first, seemed confused. The words “artificial jellyfish”, “rat heart” and “heart cells” were bandied about without any real context. Take ABC’s reporting of the news, for instance. The meat of the matter makes its first appearance in the seventh paragraph (“Why do such an experiment?”). Build-up is all well and good, but establishing relevance is a more important goal. The odd juxtaposition of jellyfish and rat hearts isn’t going to be rewarding until you present the link, and soon.

Ed Yong, whose work I’ve been coming across more often recently, has a very nice piece in Nature which might as well be the polar opposite of ABC’s1. I very much appreciate the fact that both articles go beyond quoting the press release, but Yong’s is not only scientifically more informative, it’s also more… endearing, for lack of a better word.

A science writer’s comment thread I saw somewhere spoke about the importance of quotes from scientists, especially those who are apt to get excited about their work. I can’t think of a better representation of science than to quote these people, because science is so far from being a cold and mathematically driven discipline. It’s motivated by a need to improve lives, or to discover some even more fundamental truth about the universe, or to simply create something magnificent. Which is why these scientists are the best ones to get the story from. I liked the manifestation of this in the Ed Yong article:

In 2007, Parker was searching for new ways of studying muscular pumps when he visited the New England Aquarium in Boston, Massachusetts. “I saw the jellyfish display and it hit me like a thunderbolt,” he says… “I grabbed him and said, ‘John, I think I can build a jellyfish.’ He didn’t know who I was, but I was pretty excited and waving my arms, and I think he was afraid to say no.”

And soon enough, Yong gets to the point, in Parker’s own words:

“It’s exactly like what you see in the heart. My bet is that to get a muscular pump, the electrical activity has got to spread as a wavefront.”

Space, time and resources permitting, I’d love to be able to write an article like this. By the way, the extended version of the interview by Yong can be found here.

The other piece of news was something I should’ve written about over the weekend: researchers from Stanford U and the J. Craig Venter Institute have simulated an entire organism using a cluster of 128 computers. The magnitude of this task needs to be emphasized: they simulated all known genetic processes of Mycoplasma genitalium, a humble bacterium that lives in human genitalia. Thus, they were able to track the entire life cycle of the organism — and, incidentally, open the way to creating better models for complicated processes like the genetic expression of human diseases.

“A lot of the public wonders, ‘Why haven’t we cured all these things?’ The answer, of course, is that cancer is not a one-gene problem; it’s a many-thousands-of-factors problem.”… For medical researchers and biochemists, simulation software will vastly speed the early stages of screening for new compounds.

Of course, this little guy has 525 genes, while E. coli has 4,288. It’s like a mathematical problem: if it takes 10 hours to simulate one division of an organism with 500-odd genes, how long will it take to simulate E. coli? And how much extra hardware would that take?
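Back of the envelope, assuming simulation time scales linearly with gene count — a generous assumption, since gene interactions presumably grow faster than linearly, so treat this as a lower bound:

```java
// Naive linear extrapolation from the M. genitalium run to E. coli.
// Real cost almost certainly grows faster than linearly with gene
// count, so this is a lower bound at best.
public class SimEstimate {
    public static double hoursForGenes(double baseHours, int baseGenes,
                                       int targetGenes) {
        return baseHours * targetGenes / (double) baseGenes;
    }

    public static void main(String[] args) {
        // 10 hours for 525 genes -> roughly 82 hours for 4,288 genes
        System.out.println(hoursForGenes(10, 525, 4288));
    }
}
```

So even in the friendliest scenario, one E. coli division would take the better part of four days on the same hardware.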

What tickles me is this line:

In designing their model, the scientists chose an approach called object-oriented programming, which parallels the design of modern software systems.

I’m not sure what author John Markoff’s background is, but it certainly isn’t in programming (although I see he used to write for tech-related things back in ’76. Punch cards, y’all). OOP is a very common concept that’s used in a variety of programming languages and isn’t nearly as exotic as this article makes it out to be. Every time you create a HelloWorld class in Java, you’re using object-oriented programming. The press release, however, has a better grasp of what constitutes cutting-edge software technology. It’s quite fascinating, actually, and I need to ask my CS friends who have used CAD how exactly it might be adapted for more biological purposes.
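For the curious: “object-oriented” just means modeling your domain as classes of objects that carry their own state and behavior. A toy illustration in Java of how cellular processes might be modeled that way — the classes and numbers here are entirely my own invention, not the Stanford model’s design:

```java
// Toy illustration of object-oriented modeling: each cellular
// process is an object with its own state and a step() behavior,
// and the simulation just ticks them all forward. My own invented
// example, not the actual whole-cell model's design.
public class CellSim {
    interface Process {
        void step();
    }

    static class Metabolism implements Process {
        int atp = 0;
        public void step() { atp += 5; } // produce some ATP each tick
    }

    static class Replication implements Process {
        int copies = 1;
        public void step() { copies *= 2; } // double per tick, cartoonishly
    }

    public static void main(String[] args) {
        Process[] processes = { new Metabolism(), new Replication() };
        for (int tick = 0; tick < 3; tick++) {
            for (Process p : processes) p.step();
        }
    }
}
```

Nothing exotic about it — it’s the first thing you learn in any Java course. The genuinely hard part is getting the biology inside those `step()` methods right.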

1. I understand ABC isn’t a science magazine, but it’s still important to cover science news well and accurately. The New York Times covers science and politics as well as any other topic, for instance. Or maybe there are certain newspapers that specialize in science news? That would be an interesting thing to look up.

Daily Roundup

The internet’s been abuzz with quirky little (and not-so-little) items of note these past few days. For one, there’s a new lightest material in town — aerographite, produced by scientists at two European universities. It’s not simply 75 times lighter than styrofoam, which isn’t exactly concrete itself; it’s also electrically conductive, extremely strong, and absorbs light very efficiently, leaving it jet black. That means it’s probably going to be useful for a huge number of applications — from aviation to stealth weaponry to more efficient batteries. The material is tubular and porous, which is apparently what contributes to both its strength and its lightness. I don’t imagine it will be seeing any immediate application, though. The production process doesn’t seem scalable at this point and is still being tweaked for maximum efficiency.

A completely different piece of news caught my eye earlier today: Wired had an interesting report on the Nonhuman Rights Project, which seeks to award legal rights to cetaceans like dolphins and whales. This is particularly quirky and very contentious, because it’s beginning to wrench at the deeper meaning of rights vs. legality and what rights even mean. As the article points out, the group isn’t saying dolphins have a right to education or anything like that; they are saying that cetaceans have a right to life, liberty and freedom.

For example, nobody will argue that SeaWorld’s orcas have a right to free speech or guaranteed medical care — but they could have rights to freedom from imprisonment or captive breeding.

There are a lot of people who’d agree with that, and they wouldn’t necessarily use a scientific argument either. There are parts to this that are intriguing and possibly peculiar, though. A friend argued that simply assigning rights to cetaceans, no matter how complex they may seem to be, is still putting a very human perspective and value on things like brain size, individualism and social behavior. Paradoxically (at least it seems that way to me) he suggested that because of these skewed perspectives on what it means to be “a person”, we might be leaving out whole other species and even ecosystems. I will admit that this is indicative of some hubris. But — of course — the first thing that occurred to me was robots. Organizations like this place a premium on human intelligence, but what happens when we face a consciousness that we can’t relate to? Interesting thoughts.

Whiplash again as we move to another fascinating article on cryptography. Why, ask an interdisciplinary team of neuroscientists and cryptographers, would you want to consciously memorize your password when you can subconsciously — and safely! — access it? The idea works through a series of training exercises: first, a random sequence of letters is created for the test subject. Then the sequence, along with some randomized mixes of it, is played back to the user almost exactly the way Guitar Hero works, where the player must hit a button when a falling disc hits the fret. Apparently, by the end of this training, you’d be able to pick out the correct sequence interspersed with other random letters. As the article says, “To pass authentication, you must reliably perform better on your sequence.” I think the practical applications of this would be… humorous at best, although the theoretical form of it is interesting.
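The authentication decision itself reduces to something simple: you pass only if you perform reliably better on your trained sequence than on foil sequences. A sketch of that check — the scoring and the margin threshold are my inventions; the paper’s actual statistics are more involved:

```java
// Sketch of the "implicit learning" authentication decision: compare
// the user's hit rate on their trained sequence against their average
// hit rate on foil sequences. The margin threshold is my own invented
// stand-in for the paper's real statistical test.
public class ImplicitAuth {
    public static boolean authenticate(double trainedHitRate,
                                       double[] foilHitRates,
                                       double margin) {
        double foilAvg = 0;
        for (double r : foilHitRates) foilAvg += r;
        foilAvg /= foilHitRates.length;
        // Pass only if performance on the trained sequence is
        // reliably better than on the foils.
        return trainedHitRate - foilAvg > margin;
    }

    public static void main(String[] args) {
        // 90% hits on the trained sequence vs ~60% on foils: pass
        System.out.println(
            authenticate(0.90, new double[]{0.60, 0.55, 0.65}, 0.15));
    }
}
```

The charm of the scheme is that an attacker can’t extract the sequence from you by coercion — you don’t consciously know it either.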

The last item, I think, deserves a separate blog post of its own. I’ve noticed PopSci’s been running a series of climate change-inspired articles, and this one fits right into that sphere. Scientific American reports on an experiment to dump a load of iron into the ocean — thus fueling rapid algae growth and death — which would suck a significant amount of carbon out of the air and thus perhaps offset global warming. Even setting aside the science, which is optimistic at best because we can’t control for all the ecological variables, this raises a whole slew of questions about ecological ethics. When we’ve screwed up, do we try to be as unobtrusive as possible, or do we re-engineer our planet? Do we do it at the cost of whole ecosystems, when the organisms that bloom and die in countless numbers deplete the oxygen that’s normally at steady levels?

Buckminster Fuller With the NCSWA

The more I think about it, the more it seems as though this blog should be re-purposed. Perhaps “Tales of an aspiring science writer”, because it might be a good place to document the sorts of things I’ve been trying to do while dipping my toes into the science writing field. One excellent piece of advice I received lately was to try to profile or interview researchers who already have some interesting work out. I’m not sure how I would be able to contact these researchers, but perhaps the good people at NCSWA would be able to help.

Speaking of which — I met those good people this past Saturday at SF MoMA’s Buckminster Fuller special tour. The exhibit was short but fascinating, if not scientifically so (it was the MoMA, after all). The first thing that strikes the viewer is the sheer volume of his work: a 42-hour video called, aptly, Everything I Know, and 28 published books. This is not a man afraid to share his knowledge with the world.

The next thing that hits you, as you look at some of the patents that he filed, is that he was decades ahead of his time in some of his designs. Consider this teardrop-shaped, three-wheeled contrivance that is meant to be a futuristic car.

The patent for this was issued in 1933. Stop and think about that for a second.

This is the Dymaxion car, aerodynamically superior because of its shape and designed to be fuel-efficient. And it wasn’t just Fuller’s inventions; I think it’d be safe to say that his ethos of environmentally friendly behavior and goodwill towards all mankind was decades ahead of the hippies.

He envisioned the world as a single, interconnected thing, and even concocted a rearranged map to illustrate the idea:

This is a man rather devoutly attached to triangles.

We need people who dream in sci-fi, who are miles ahead in their imagination. We might not get to a transdimensional portal this century, but perhaps someone will say, “Hey, maybe we can build a spaceship that travels at 10% the speed of light!” And maybe that will lead to a trickle-down effect — efforts in spaceflight, or superluminal messaging, or even just faster spaceflight.

Evolution, Past and Present

Two related articles showed up in my news feed today, both having to do with evolutionary mechanisms. We’re still a long way from Jurassic Park, but at the least, these studies give us a window into what evolution’s capable of doing and how it works.

The first concerns an experiment conducted by Georgia Tech scientists, who spliced a 500-million-year-old bacterium’s Elongation Factor-Tu gene into the genome of modern-day E. coli. Neither the article nor the press release is clear on what exactly EF-Tu does, so I had to dig around the internet and resurrect some of my lapsed biology knowledge to understand its role.

Genes are expressed as proteins in two broad steps. The first is transcription into messenger RNA (mRNA), a blueprint of sorts for the protein to be formed. When the mRNA exits the nucleus of a cell, it binds to a ribosome, itself a large molecular machine. While the ribosome “reads” the mRNA, transfer-RNA (tRNA) molecules holding the amino acids corresponding to the bases in the mRNA are linked into one long peptide chain, which then folds in complex ways to form a protein like hemoglobin. From what I can understand, EF-Tu makes sure that the match between an amino acid and its group of three bases (the codon) is correct; if it is, then EF-Tu allows the amino acid to be added to the chain and facilitates peptide elongation (aha! And here you just thought it was a name).
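As I understand it, EF-Tu is essentially a gatekeeper in that loop. A toy model in Java of that gatekeeping role — the three-entry codon table is a tiny fragment of the real one, and the whole design is my own cartoon, not real bioinformatics code:

```java
import java.util.Map;
import java.util.StringJoiner;

// Toy model of EF-Tu's gatekeeping role as I understand it: a tRNA
// carrying an amino acid only gets added to the growing chain if the
// amino acid actually matches the codon being read. The codon table
// is a tiny fragment of the real 64-entry one.
public class Translation {
    static final Map<String, String> CODON_TABLE = Map.of(
        "AUG", "Met", "UUU", "Phe", "GGC", "Gly");

    // Returns the peptide chain, skipping any mischarged tRNAs.
    public static String translate(String[] codons, String[] aminoAcids) {
        StringJoiner chain = new StringJoiner("-");
        for (int i = 0; i < codons.length; i++) {
            // EF-Tu's check: does this amino acid belong to this codon?
            if (aminoAcids[i].equals(CODON_TABLE.get(codons[i]))) {
                chain.add(aminoAcids[i]);
            }
        }
        return chain.toString();
    }

    public static void main(String[] args) {
        System.out.println(translate(
            new String[]{"AUG", "UUU", "GGC"},
            new String[]{"Met", "Phe", "Gly"}));
    }
}
```

The real EF-Tu does this via binding kinetics rather than a lookup table, of course, but the logic — reject the mismatch, elongate on a match — is the same.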

The long and short of it is that EF-Tu is crucial to the protein creation process and, therefore, to life itself. When initially placed inside modern bacteria, EF-Tu’s performance wasn’t impressive. But over time, the bacteria with EF-Tu spliced in seemed to perform just as well as, if not better than, their modern day counterparts. It wasn’t the EF-Tu gene itself that mutated, however; instead, the proteins interacting with the EF-Tu mutated to accommodate the ancient gene.

This research is interesting, certainly, but I’m even more interested in why the gene itself didn’t evolve. It might be harder for this particular genetic sequence to accumulate mutations, and easier for mutations to accumulate in the genes for the associated proteins. Then again, neither the press release nor the article gives much information about which proteins evolved and why this might be the case.

The second article, focusing similarly on evolution, is a little more straightforward: a group of researchers from Wilfrid Laurier University in Canada have bred fruit flies that have evolved to be able to count. It’s rather an elegant experiment, in which flies were tested to see if they were prepared for a stimulus after it had been repeated a specific number of times.

What, I wonder, would’ve happened if the number of times the light flashed was replaced by a sequence of letters? I’m not saying flies would be able to learn language, but perhaps they’d be able to distinguish symbols. Would that be a legitimate point for the argument that pictorial and symbolic representations are also inherent traits?

The Beauty of Complexity

It’s on days like this, when the afternoon drags on interminably because nothing is going right at work, that I’m extremely glad I volunteer at The Tech.

This past Sunday was a perfect example of how volunteering ideally should work: a bunch of enthusiastic, science-obsessed people, willing and able to explain the concepts behind exhibits to children, adults and grandparents, do a great job, and people leave better informed about the world around them. The enlightening thing about the whole experience, for me, is that complexity doesn’t have to be a bad thing. Done right, it can be the best thing about the whole experience.

Take our newest exhibit, the Reactables. It is — in my very unbiased opinion — one of the best things to have been installed at The Tech. It consists of a podium you can stand around, with an opaque tabletop where you can place a number of sensor-enabled cubes, blocks and disks in order to make music. Put one of the blocks down and a drum line starts playing; place another one down and you get a melody. One of the best parts of the whole thing is that the music is from five different regions of the world, color-coded for ease of comprehension, and putting the pieces together in random fusionistic ways is incredibly fun.

That, in fact, is the simplest part of the whole process. After I’ve demonstrated this, I start showing visitors the small squares, which are the individual instruments. Put down a piano or guitar or drum piece, and you’ll play one note. I show them how boring that is; then I bring out the controllers.

There’s a pretty good variety of these — you can play around with the frequency of the note played, change the intervals at which notes are produced, or create a combination of effects. It sounds rather pedantic when put this way, but even the most beautiful music is an arrangement of notes, a fusion of frequencies. As the pieces pile up on the table, strange, eerie, irresistible pieces of music start catching the ears of passers-by, and pretty soon there’s an audience looking on while you demonstrate.

I show the more entranced ones the subtleties of the exhibit — the fact that no matter when you put down a cube, the rhythm of the music or the beat of the drums always sync up; the fact that when you move blocks around on the table, the sound realigns to arrive through the speaker nearest that piece. It’s not simply about the science — it’s about the experience of the exhibit itself.

To me, Reactables is a perfect fusion of science and art. Music is as ubiquitous as it ever was, but now the difference is, everything is digitized and available through a dizzying array of media. These are sounds first made through stretching animal skins over wood, or blowing through a reed with holes gouged out, and we’ve gotten to the point where we can accurately and electronically break them down and re-synthesize them. On the other hand, think about it this way: it takes a lot of non-linear thinking and creativity to come up with a musical piece from melodies and rhythms that are digitally stripped of all context.

Either way, it’s wonderful to see people embracing the complexity of the endeavor and trying to experiment with the pieces. Not all of them do, of course, but I believe we underestimate our own intelligence and our capacity to create. I honestly think that, while trying to communicate science to the general public, we should err on the side of too many details rather than too few. This, of course, is constrained by more prosaic things like word counts and deadlines and unsympathetic editors, but nevertheless — the point is, I think we’re all wired to want to learn. The great challenge of science writing, to me, is taking advantage of that in the general populace.

Finding the Higgs

“We have observed a new boson, at 125.3 GeV, with 4.9 sigma confidence.” — from the CMS team at CERN’s LHC.

12.45 am on the eve of Independence Day, and I’m watching — with great puzzlement but a mounting sense of excitement — a live webcast from CERN, near Geneva, about the Higgs boson. I say “great puzzlement” because it’s not particularly easy to follow the jargon. It is, however, really great to be awake and thinking about the possibility that I’m right here while history’s being made.
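One bit of jargon worth unpacking is that “4.9 sigma”: it’s the size of the excess measured in standard deviations of the background-only expectation. For large z, the one-sided p-value is well approximated by the standard Gaussian tail formula phi(z)/z. A quick back-of-the-envelope sketch, mine and not CERN’s actual statistics:

```java
// Rough one-sided p-value for a z-sigma excess, using the Gaussian
// tail approximation p ~ phi(z)/z, which is good to a few percent
// for z around 5. A back-of-the-envelope sketch, not the real
// statistical machinery used at the LHC.
public class SigmaToP {
    public static double pValue(double z) {
        // Standard normal density at z
        double phi = Math.exp(-z * z / 2) / Math.sqrt(2 * Math.PI);
        return phi / z;
    }

    public static void main(String[] args) {
        // 4.9 sigma -> roughly 5e-7: less than one-in-a-million
        // odds of a background fluctuation this large
        System.out.println(pValue(4.9));
    }
}
```

Which is why the physicists sound so confident: a fluke this size should happen less than once in a million tries.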

If you’re still wondering why anyone would care about a new subatomic particle when we’ve got more than enough to spare, this blog post does a good job of giving us an overview.

A couple of months ago, I posted a link to PHD Comics’ lovely comic on the search for the Higgs boson. The comic focuses on the various methods that the Higgs could be detected, and the ways physicists try to make sense of the staggering amount of data streaming out of the LHC.

There’s a nifty little explanation of how the collisions themselves work, by the Christian Science Monitor, which isn’t something I’ve come across very often before. It’s a straightforward article to read, full of good analogies, but I want to point out one astonishing paragraph:

Using powerful magnets to steer the protons around two lines running in opposite directions, the beams are at last brought into focus at each of two mammoth detectors the size of a cathedral’s nave. This is where the collisions take place. By the time the beams are focused for collision, each is about half the width of a human hair. The detectors that track the collision debris must be able to locate the telltale debris trails to within half the width of a human hair.

Gives brand new meaning to the phrase “needle in a haystack”, doesn’t it?

But this doesn’t mean that our work is done. Finding the Higgs validates the Standard Model, but we’ve still got to flesh out the details. I think of it this way: imagine you’re pulling pieces out of a puzzle box and you’re not quite sure what the final image is. But you have a good idea, and eventually you find all the corner pieces, which seem to make a very strong case for this being the sort of image you think it is. And then you look down and realize that a good chunk of the middle of the puzzle is yet to be finished.

Now imagine the Higgs is the final corner piece; the other particles, those predicted by extensions of the Standard Model such as supersymmetry, must still be found. This Wired article has a very nice summary of the challenges we face in a post-Higgs world.

One writer whose work I enjoy consistently is Dennis Overbye of The New York Times. I’m not sure I could even tell you why; it’s a sort of wry, whimsical take on physics, where I get the impression that every subatomic particle he writes about is anthropomorphized. Perhaps he takes the physicists out for a drink first and then gets them to talk. Either way, I was looking for his first take on the matter, and it’s here — one of the first to publish about the Higgs boson. Other news outlets haven’t been as restrained in their coverage, but that’s a matter for tomorrow morning.

This might be a little cliched and perhaps three hundred years from now it won’t seem like a piece of history at all, when we’ve discovered the true nature of our universe — but as I’m about to head to bed, I can’t help but feel that I’ll wake up to a new interpretation of the world.

Welcome, Higgs boson! I expect nerdy t-shirts to be available for sale by the end of the week.