Author Topic: World’s first standard measurement for DNA created by NIST  (Read 2237 times)


Posted by Elaine Davis (Administrator)
« on: March 03, 2014, 17:59:19 PM »
World’s first standard measurement for DNA created by NIST

                 By Graham Templeton Mar. 2, 2014 10:02 am

DNA sequencing technology has been advancing at a lightning pace in the past few years, with upcoming commercial models now becoming so fast they require a new name: gene sequencers. However, as we start zooming out of the DNA molecule to read it at the level of the gene, we have to be careful that accuracy does not take a hit as a result. How can we tell? One measure is how well a particular sequencer’s results agree with each other over multiple runs — that’s reliability — but that still leaves the possibility that a sequence is incorrect in the same way each time. To measure that, you’d need some sort of genetic standard, and that’s precisely what the National Institute of Standards and Technology has devised.
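To make that distinction concrete, here is a toy Python sketch of reliability versus accuracy; the sequences and the systematic error are invented purely for illustration:

```python
# Toy illustration: reliability (run-to-run agreement) is not the same as
# accuracy (agreement with a trusted reference). All sequences are invented.

def concordance(a, b):
    """Fraction of positions at which two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

reference = "ACGTACGTAC"  # hypothetical "true" sequence
# Three runs that all carry the same systematic miscall at position 7:
runs = ["ACGTACGAAC", "ACGTACGAAC", "ACGTACGAAC"]

# The runs agree perfectly with each other (high reliability)...
reliability = min(concordance(runs[i], runs[j])
                  for i in range(3) for j in range(i + 1, 3))
# ...yet every run deviates from the reference in the same way (lower accuracy).
accuracy = concordance(runs[0], reference)

print(reliability)  # 1.0
print(accuracy)     # 0.9
```

The point is exactly the one the article raises: without an external standard, the 1.0 reliability score would hide the repeated error.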

James Watson has the distinction of being the first person with a fully sequenced genome, but the unnamed participant who donated the genetic material for this study is the first person to have their genome fully scoured for information by multiple techniques. The idea is that since every individual sequencing technology has reliable biases and errors, the “real” sequence can only be found by weighted analysis of different sequence reports for a single stretch of DNA. By using five different sequencing technologies over 14 separate sequencing runs, NIST has derived a sequence more accurate than any other in history.
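A minimal sketch of that weighted-analysis idea, with invented reads and weights (this is only loosely in the spirit of NIST's method, not a description of it):

```python
# Per-position weighted vote across aligned, equal-length reads from
# different (hypothetical) sequencing platforms. Weights are invented.
from collections import Counter

def weighted_consensus(reads, weights):
    """Return the weighted-majority base at each position."""
    consensus = []
    for column in zip(*reads):          # walk the alignment column by column
        votes = Counter()
        for base, weight in zip(column, weights):
            votes[base] += weight
        consensus.append(votes.most_common(1)[0][0])
    return "".join(consensus)

# Three hypothetical platforms; the second has a systematic error at position 2.
reads   = ["ACGTA", "ACCTA", "ACGTA"]
weights = [1.0, 0.5, 1.0]   # down-weight the noisier platform
print(weighted_consensus(reads, weights))  # ACGTA
```

Because each platform's biases differ, the consensus can be more accurate than any single platform's output.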

A vial of DNA fluoresces under black light.

The result is a world first: a genetic standard. This is the closest geneticists have come to the standard kilogram that has become so famous in recent years. The hope is that by introducing a known standard, NIST can help speed approval of sequencing technologies and push companies to compete for those last few fractions of a percentage point in accuracy. When a sequencer produces a result, a checking algorithm can quickly report its deviation from the standard; to the extent the standard itself is accurate, that deviation serves as an accuracy rating.
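The checking step could be as simple as the following sketch (sequences invented; real pipelines would first align the reads):

```python
# Minimal sketch of checking a sequencer's output against a standard:
# report the percentage of bases that deviate from the reference.

def percent_deviation(result, standard):
    """Percent of positions where the result disagrees with the standard."""
    mismatches = sum(r != s for r, s in zip(result, standard))
    return 100.0 * mismatches / len(standard)

standard = "ACGTACGTACGTACGTACGT"
result   = "ACGTACGTACGTACGAACGT"   # one miscalled base out of twenty
print(percent_deviation(result, standard))  # 5.0
```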

Gene-level sequence readers that arise from fast-scanning technologies like graphene nanopores can deliver enormous throughput. This fast, cheap, and (hopefully) accurate technique could allow the rise of the personalized medicine we’ve been waiting for since the unveiling of the Human Genome Project. Fast, efficient DNA sequencing has the potential to end seemingly random drug rejections and to spot upcoming problems sometimes decades in advance. Sequencing tumor cells could allow drug regimes personalized to every tumor, going below even the level of the patient, and sequencing of bulk bacterial populations for metagenomic analysis has already begun to revolutionize agriculture.

So, it makes sense that there is urgency in approving these technologies, and in making sure they really do measure up to expectations.

The NIST sub-group that took point on the project.

Early sequencing options like Sanger sequencing looked at single nucleotides, while later options could read hundreds or even thousands of bases, but those bases came out of order and required time-consuming reassembly. New technologies read a DNA strand in a linear fashion, returning an ordered sequence right off the bat with much or all of the speed of the so-called “shotgun” sequencing methods of the past. Nanopore sequencing basically works by feeding a molecule of DNA through a tiny pore, which is stretched a characteristic amount as each type of base passes through. The amount of stretch corresponds to the electrical conductance across that pore, which can be measured and taken as an indication of the base.
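The decoding idea described above can be sketched as nearest-level matching; note the conductance values here are invented placeholders, not real measurements:

```python
# Toy nanopore base-caller: each base is assumed to produce a characteristic
# conductance level, and a noisy reading is decoded by picking the closest
# level. The LEVELS values are hypothetical, chosen only for illustration.

LEVELS = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}

def call_base(measured):
    """Return the base whose characteristic level is closest to the reading."""
    return min(LEVELS, key=lambda base: abs(LEVELS[base] - measured))

# Noisy conductance readings as the strand translocates through the pore:
signal = [1.1, 2.9, 3.8, 1.9]
print("".join(call_base(s) for s in signal))  # AGTC
```

Real base-callers are far more sophisticated (multiple bases occupy the pore at once, so the signal depends on context), but the measure-and-classify loop is the core idea.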

This reference sequence would be used mostly to test accuracy ratings, but there’s no reason that researchers couldn’t create much larger databases of densely cross-referenced material like this. Such a library would allow highly accurate searching for so-called single nucleotide polymorphisms, or mutations in just a single letter of the DNA code. Right now, library sequences contain some predictable amount of error, and that error can lead to over- or under-estimation of mutation in a test sequence, or lead to false positives in searching.
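Against an error-free reference, SNP detection reduces to a position-by-position comparison, as in this sketch (sequences invented, and real pipelines must align reads first):

```python
# Hedged sketch of SNP detection: report every position where a test
# sequence differs from an (assumed error-free) reference by one base.

def find_snps(reference, test):
    """Return (position, ref_base, test_base) for each single-base difference."""
    return [(i, r, t)
            for i, (r, t) in enumerate(zip(reference, test))
            if r != t]

reference = "ACGTACGTAC"
test      = "ACGTTCGTAA"
print(find_snps(reference, test))  # [(4, 'A', 'T'), (9, 'C', 'A')]
```

This is exactly where reference error hurts: a miscalled base in the library sequence shows up as a spurious "SNP" in every sample compared against it.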