Sunday, May 31, 2009

The future of computing – rotaxanes?

One of the great human endeavors at the moment is being able to work at the molecular scale (i.e., 1-100 nm), using organic and inorganic materials for a variety of purposes ranging from basic materials to computing, electronics, life sciences therapeutics and energy. This includes the designed direction of existing molecular processes (e.g., biology) and the synthesis of novel materials, structures and dynamic behavior.

Three of the most interesting advances in working at the molecular scale and examples of bio-infotech convergence are described below…

1) Hybrid organic-inorganic rotaxanes
First is the March 2009 work of David Leigh’s lab at the University of Edinburgh in creating hybrid organic-inorganic rotaxanes. A rotaxane (rota/wheel + axis) is a mechanically-interlocked molecular structure, essentially a dumbbell shape with a ring around its middle (Figure 1), often man-made but occasionally existing in nature. In this case, the dumbbell portion of the hybrid organic-inorganic rotaxane is an organic amine and the ring around its middle is an inorganic metal ring.

Figure 1: Rotaxane graphical schematic and crystal structure (source)

The benefit of metal-organic frameworks is that they combine the properties of organic and inorganic materials: structural and functional properties from the organic components, and electronic, magnetic and catalytic properties from the inorganic components. This rotaxane molecule exhibits directed shuttle-like behavior, in which the metal ring around the center can be pushed to bind at either end; its biggest potential application is in quantum computing.
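That shuttle-like behavior is what makes rotaxanes interesting for computing: a ring that can be driven between two binding stations is, in effect, a mechanical bit. The sketch below is a toy model only (the station names and the single generic "stimulus" are illustrative placeholders, not the chemistry of the Leigh lab's molecule):

```python
# Toy model (illustrative only): a bistable rotaxane shuttle treated as a
# one-bit switch. The ring sits at one of two binding stations on the
# dumbbell axle; an external stimulus (redox, light, pH) shuttles it to
# the other station. Station names are hypothetical placeholders.

class RotaxaneShuttle:
    STATIONS = ("station_A", "station_B")

    def __init__(self, position="station_A"):
        if position not in self.STATIONS:
            raise ValueError(f"unknown station: {position}")
        self.position = position

    def apply_stimulus(self):
        """Shuttle the ring to the other station (e.g., a redox pulse)."""
        a, b = self.STATIONS
        self.position = b if self.position == a else a

    @property
    def bit(self):
        """Read the mechanical state out as a logical 0 or 1."""
        return self.STATIONS.index(self.position)

shuttle = RotaxaneShuttle()
shuttle.apply_stimulus()   # ring moves station_A -> station_B, bit 0 -> 1
```

The point of the abstraction is that state is stored mechanically (in the ring's position) rather than electronically, which is one reason such switches are discussed for molecular and quantum computing.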

2) Bio-inorganic interfaces
A second interesting example of molecular-scale work and bio-infotech convergence is biocomposites/GEPIs (genetically-engineered peptides for inorganics), which regulate cell behavior and improve binding at bio-inorganic interfaces by modifying surface chemistry and immobilizing bioactive molecules. Candan Tamerler’s lab at the University of Washington is doing some interesting work in this area. Improved bio-inorganic interfaces are needed to reduce infection and allow seamless interaction between human wetware and implants: heart, hip, prosthetics, eye, brain, etc.

3) DNA nanotechnology
The third interesting bio-infotech convergence example is DNA nanotechnology, the notion of using DNA as a structural building material (for example, for self-directed rapid templating) rather than as an information carrier. One key use is employing DNA as a programmable scaffolding for the self-assembly of nanoscale electronic components, meaning that scaffolds composed of self-assembled DNA serve as templates for the targeted deposition of ordered nanoparticles and molecular arrays. DNA is formed into tubes and then metallized in solution to produce ultra-thin metal wires. John Reif’s lab at Duke University, Erik Winfree’s lab at Caltech and many other groups are working on DNA nanotechnology. Moving to the molecular scale for electronics manufacture is imperative for maintaining Moore’s Law computing performance improvements.
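What makes DNA "programmable" as a building material is simply Watson-Crick base pairing: a single-stranded sticky end will only hybridize with its reverse complement, so a designer can dictate which tile edges stick together. A minimal sketch of that pairing rule (the sequences are made up for illustration):

```python
# Toy sketch of the Watson-Crick pairing rule behind DNA self-assembly:
# two single-stranded sticky ends hybridize when one is the reverse
# complement of the other. Sequences here are hypothetical.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Return the strand that pairs antiparallel with seq."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_hybridize(sticky_end_1, sticky_end_2):
    """Two overhangs bind if they are reverse complements of each other."""
    return sticky_end_2 == reverse_complement(sticky_end_1)

# Designing tile edges: edge "a" on one tile binds only the matching
# edge on a neighboring tile, directing where each tile lands.
edge_a = "ATCGGC"
edge_a_match = reverse_complement(edge_a)    # "GCCGAT"
print(can_hybridize(edge_a, edge_a_match))   # True
print(can_hybridize(edge_a, "ATCGGC"))       # False
```

Real scaffold designs (DNA tiles, DNA origami) extend exactly this rule to hundreds of distinct sticky ends, each addressing a specific location on the structure.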

Future implications
It seems likely that working at the molecular scale and bio-infotech convergence will continue to grow. Organic-inorganic hybridization approaches could proliferate to exploit the full suite of properties afforded by organic and inorganic inputs, and as researchers suggest, lead to novel properties and the ability to harness molecular dynamics for human use.

Sunday, May 24, 2009

Expanding notion of Computing

As we push to extend inorganic Moore’s Law computing to ever-smaller nodes, and simultaneously attempt to understand and manipulate existing high-performance nanoscale computers known as biology, it is becoming obvious that the notion of computing is expanding. The definition, models and realms of computation are all being extended.

Computing models are growing
At the most basic level, how to do computing (the computing model) is certainly changing. As illustrated in Figure 1, the traditional linear Von Neumann model is being extended with new materials, 3D architectures, molecular electronics and solar transistors. Novel computing models are being investigated such as quantum computing, parallel architectures, cloud computing, liquid computing and the Cell broadband architecture used in the IBM Roadrunner supercomputer. Biological computing models and biology as a substrate are also under exploration with 3D DNA nanotechnology, DNA computing, biosensors, synthetic biology, cellular colonies and bacterial intelligence, and the discovery of novel computing paradigms existing in biology such as the topological equations by which ciliate DNA is encrypted.

Figure 1. Evolving computational models (source)

Computing definition and realms are growing
At another level, subtly but importantly, where to do computing is changing, from specialized locations the size of a large room in the 1970s to the desktop, the laptop, and now the netbook, mobile device and smartphone. At present computers are still made of inorganic materials, but introducing a variety of organic materials and computing mechanisms helps to expand the definition of what computing is. Ubiquitous sensors, personalized home electricity monitors, self-adjusting biofuels, molecular motors and biological computers do not sound like the traditional concept of computing. True next-generation drugs could be in the form of molecular machines. Organic components, or organic/inorganic hybrid components as the distinction dissolves, could be added to many objects such as the smartphone. A mini-NMR or mini-imager for mobile medical diagnostics from a disposable finger-prick blood sample would be an obvious addition.

Sunday, May 17, 2009

Synthetic biology – what is next?

Synthetic biology is the engineering of biology, re-designing existing biological systems and designing new ones, for a myriad of purposes. The most obvious killer apps are the improved synthesis of drugs and other medicines and the synthetic generation of biofuels.

Right now the most exciting aspect of synthetic biology, suggesting that the field is getting some traction, is that three key community constituents are getting more heavily involved: traditional academic researchers (SB 4.0 conference videos and agenda); undergraduates and high school students, through the annual iGEM (international genetically engineered machines) competition (1,200 students from 112 teams are expected at this fall’s iGEM Jamboree at MIT); and a growing group of non-institutionally affiliated enthusiasts, diybio’ers, the 2000s version of the Homebrew Computer Club, working on both wetlab (an interesting recent example) and computer modeling, simulation and data management projects.

Venture capitalists are slowly starting to realize that synthetic biology could be a huge growth industry and could be the next generation of biotechnology. Amyris is probably the best-known synthetic biology company, planning to launch its biofuel (ethanol) business publicly in Brazil and the US in 2011.

The long road to automation
Other waves in the history of biotechnology have shown that life sciences problems tend to be much more complex, take much longer than expected to solve and ultimately underdeliver results. There is no reason to think that synthetic biology would be any different, but it is obviously not futile to work on the challenges. When the synbio community analogizes their status to the heterogeneous screws and bolts of the construction industry circa 1864, they are not kidding.
The DNA synthesis process is astonishingly unautomated, unstandardized and expensive at present ($0.50-$1.00 per base pair); at that price, synthesizing the full ~3 billion base pair human genome would cost $1.5-3 billion (ignoring ethical, legal and other issues).
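The back-of-the-envelope arithmetic, assuming the roughly 3 billion base pairs of a haploid human genome at 2009 per-base synthesis prices:

```python
# Back-of-the-envelope cost of synthesizing a human genome at 2009
# prices. Assumes the ~3 billion bp haploid genome; numbers are
# order-of-magnitude estimates, not quotes from any vendor.

GENOME_BP = 3_000_000_000            # approximate haploid human genome size
COST_PER_BP = (0.50, 1.00)           # 2009 synthesis cost range, USD per bp

low = GENOME_BP * COST_PER_BP[0]     # 1.5e9, i.e. $1.5 billion
high = GENOME_BP * COST_PER_BP[1]    # 3.0e9, i.e. $3.0 billion
print(f"${low / 1e9:.1f}-{high / 1e9:.1f} billion")
```

Even at the low end, whole-genome synthesis is orders of magnitude away from practicality, which is why automation and cost reduction dominate the field's near-term agenda.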
Synthetic biology is a new field and the demand for synthesized DNA is still small; the 2,000 or so iGEM community members are the biggest market. Ginkgo Bioworks is working to deliver robotic synthesized DNA assembly, and other startups are likely to spring up in this area. Ginkgo has also helped to expand and improve one of the main synbio tools, the Registry of Standard Biological Parts.

Sunday, May 10, 2009

Status of cancer detection

The Canary Foundation’s annual symposium held May 4-6, 2009 indicated progress in two dimensions of a systemic approach to cancer detection: blood biomarker identification and molecular imaging analysis.

Systems approach to cancer detection
A systems approach is required for effective cancer detection as assays show that many proteins, miRNAs, gene variants and other biomarkers found in cancer are also present in healthy organisms. The two current methods are, first, looking comprehensively at the full suite of genes and proteins, checking for over-expression, under-expression, mutation, quantity, proximity and other factors in a tapestry of biological interactions; and second, seeking to identify biomarkers that are truly unique to cancer, for example those resulting from post-translational modifications like glycosylation and phosphorylation. Establishing mathematical simulation models has also been an important step in identifying baseline normal variation, treatment windows and cost trade-offs.
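The reason single markers fail, and panels work, can be sketched in a few lines. The numbers below are entirely hypothetical; the point is only that each marker individually overlaps the healthy range, while the joint pattern separates the samples:

```python
# Toy illustration (hypothetical numbers) of why cancer detection needs
# a panel of biomarkers rather than any single one: each marker is also
# present in healthy samples, so only the joint pattern is informative.

HEALTHY_BASELINE = {"marker_1": 1.0, "marker_2": 1.0, "marker_3": 1.0}

def panel_score(sample):
    """Sum of fold-change deviations from the healthy baseline."""
    return sum(sample[m] / HEALTHY_BASELINE[m] - 1.0 for m in HEALTHY_BASELINE)

healthy = {"marker_1": 1.1, "marker_2": 0.9, "marker_3": 1.0}   # noise only
tumor   = {"marker_1": 1.4, "marker_2": 1.5, "marker_3": 1.3}   # each mildly up

# No single marker is decisive, but the combined score separates the samples.
print(panel_score(healthy))   # approximately 0.0
print(panel_score(tumor))     # approximately 1.2
```

Real biomarker panels weight markers by measured sensitivity and specificity rather than summing them naively, but the combinatorial logic is the same.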

Blood biomarker analysis
There are several innovative approaches to blood biomarker analysis including blood-based protein assays (identifying and quantifying novel proteins related to cancer), methylation analysis (looking at abnormal methylation as a cancer biomarker) and miRNA biomarker studies (distinguishing miRNAs which originated from tumors). Creating antibodies and assays for better discovery is also advancing, particularly protein detection approaches using zero, one or two antibodies.

Molecular Imaging
The techniques for imaging have been improving to molecular-level resolution. It is becoming possible to dial in to any set of 3D coordinates in the body with high-frequency ultrasound, increase the temperature and destroy only that area of tissue. Three molecular imaging technologies appear especially promising: targeted microbubble ultrasound imaging (in which targeted proteins attach to cancer cells and microbubbles attached to the proteins make the cancerous cells visible via ultrasound; a 10-20x cheaper technology than the CT scan alternative), Raman spectroscopy (adding light-based imaging to endoscopes) and a new imaging strategy using photoacoustics (light in/sound out).

Tools: Cancer Genome Atlas and nextgen sequencing
As with other high-growth science and technology areas, tools and research findings evolve in lockstep. The next generation of tools for cancer detection includes a vast cataloging of baseline and abnormal data and a more detailed level of assaying and sequencing. In the U.S., the NIH’s Cancer Genome Atlas is completing a pilot phase and being expanded to include 50 tumor types (vs. the pilot phase’s three types: glioblastoma, ovarian and lung) and abnormalities in 25,000 tumors. The project performs a whole genomic scan of cancer tumors, analyzing mutations, methylation, coordination, pathways, copy number, miRNAs and expression. A key tool is sequencing technology itself, which is starting to broaden out from basic genomic scanning to targeted sequencing, whole RNA sequencing, methylome sequencing, histone modification sequencing, DNA methylation by arrays and RNA analysis by arrays. The next level would be adding another layer of detail, in areas such as acetylation and phosphorylation.

Future paradigm shifts: prevention, omnisequencing, nanoscience and synthetic biology
Only small percentages of annual cancer research budgets are spent on detection vs. treatment, but it is possible that the focus will shift further upstream to prevention and health maintenance as more is understood about the disease mechanisms of cancer. Life sciences technology is not just moving at a Moore’s Law pace; there are probably also some paradigm shifts coming.

The three most suggestive areas for coming life science discontinuities are genomic sequencing, nanoscience and synthetic biology.
Omnisequencing contemplates the routine scanning of each individual and tumor at multiple levels: genomic, proteomic, methylomic, etc. Nanoscience is the ability to design, construct and render mobile a large variety of molecular [biological] devices. Synthetic biology is designing new or modifying existing biological pathways in order to produce systems with superior or different properties, exercised by both traditional practitioners (recent conferences: Advances in Synthetic Biology, Synthetic Biology 4.0) and diybio’ers.

Sunday, May 03, 2009

Opportunities in level-two nanoscience

The April 20-24, 2009 Foundations of Nanoscience conference at Snowbird, UT provided an interesting look at the wide variety of subfields and applications for nanoscience in thirteen tracks roughly organized into five areas: principles, materials, nanostructures, components and processes (Taxonomy, Quick Reference Guide to Current Research). Many of the nanoscience subfields have been in existence for five to ten years; however, the different nanotechnology science and commercial efforts are still fairly isolated (for example, there could be an NNI roadmapping initiative). Nanoscience is largely still at the stage of experimental demos rather than quick advances to commercialization. The diversity of approaches demonstrates creativity, and the increasing complexity, refinement and sophistication signals that nanoscience could be moving into a more mature era.

Definition and applications
Nanoscience is the interdisciplinary nexus of several fields including chemistry, physics, electronics, biology and materials - a convergence hub between life and technology, organics and inorganics, biotic and abiotic, top-down engineering and bottom-up nature. Researchers exhibit substrate agnosticism as approaches, techniques, tools and applications may be organic, inorganic or synthetic; the focus is on properties, functionality and requirements.

Nanoscience also encompasses fundamental understandings such as the definition of life; for example, it can be argued that self-replicating crystals constitute a form of life. The potential uses of nanoscience are manifold, particularly in electronics, medicine, sensors and materials.

Drug delivery and bridging the gap from end-of-the-roadmap Moore’s Law computing to molecular electronics are the most urgent potential applications.
Figure 1: The End of Moore's Law and the gap between microprocessing and nanoprocessing

The central issue is taking today’s top-down engineered approaches, which are precise but limited, down to the molecular scale, by either extending existing methods or integrating or substituting them with molecular (organic) methods. Biology is a molecular system that works, in fact many interlinked systems. While it is messy to characterize and direct, it has tremendous potential both in its existing mechanisms and in novel constructions. However, new materials and processes could be challenging to bring into the existing electronics fabrication value chain.

Status: increasing complexity and working with trade-offs
Broadly, nanoscience is currently in the phase of building on basic configurations to achieve more complex design motifs, for example scaling up circuit arrays from single to double digits, generating 3D construction materials such as 3D nanocrystals, making molecular motors from biological parts, producing active vs. static building blocks and a variety of structurally strong shapes such as icosahedra and other polyhedra. In addition to increasing complexity, another major theme is the sophisticated design trade-offs amongst a variety of parameters such as chirality, charge, planarity, time scale dynamics, thermodynamics, binding, distance, solubility, aggregation, functionalization and materials.

Wonder tools: DNA and CNTs
DNA and CNTs are the most widely used materials in nanoscience. DNA is a tremendously versatile tool not just as an information carrier and material for building structures but also as an external tagging agent on particles and as a template for directing the growth of nanocrystals and metal wires. As has long been realized, carbon nanotubes have many desirable properties for a wide range of applications but still prove elusive to manufacture to spec in large quantities.

Conclusion: moving nanoscience to nanotechnology
Many fields of science now operate at the nano or molecular scale, and it is clearly useful to have a foundational characterization and established toolkit for molecular science. One next phase would be moving nanoscience to nanotechnology, establishing a tight linkage between the emerging novel materials, nanostructures and architectures and the engineering and realization of applications.