Since then-President Obama’s announcement in early 2015, precision medicine has been an even bigger buzzword in medical research. It’s one of those terms you hear frequently, but what exactly is it? And what makes it so important for future health care?
Precision or Personalized?
It’s both, but definitely not average. The term replaces the older description, personalized medicine. Although they mean roughly the same thing, precision medicine is the preferred term for “developing treatment and preventive medicine for individuals based on genetic, environmental and lifestyle factors.” In translation: instead of a one-size-fits-all approach designed for an average patient, the precision approach considers individual variability when mapping out the treatment plan with the best chance of success. Doctors can then base treatment choices on the individual patient’s genetic, environmental and lifestyle profile.
Precision for Cancer Treatment
So, apart from maximizing success, what makes precision medicine so important?
Consider cancer, one of the short-term goals outlined in Mr. Obama’s 2015 initiative. Characterizing a patient’s cancer helps clinicians design an effective treatment plan. Tissue profiling reveals cell markers that are useful when choosing chemotherapeutic drugs. For example, breast cancers that overexpress the HER2 receptor respond very well to trastuzumab (Herceptin) treatment, whereas those with abundant estrogen receptors respond better to hormone therapy. This kind of approach can also customize treatment for other conditions.
Enter the Omics Gang
How do doctors know what works?
Omics is shorthand for a suite of biotechnologies devoted to uncovering the secrets of the genome (DNA), the proteome (proteins), the transcriptome (the RNA transcripts that show which genes are being expressed) and more. Essentially, omics researchers study the basic machinery of the cell and how growth, aging, disease and nutrition affect it. These technologies underpin most research into precision medicine. By studying thousands of individuals, researchers build a picture of health, disease and risk.
Cataloging the genomic information from thousands of individuals in large population cohorts, and then matching it up with health, environmental and lifestyle records, reveals genes associated with specific diseases. In tumors, it can also indicate how sensitive the cancer is likely to be to particular chemotherapy drugs.
The Data Effect
Omics technology is advancing rapidly and generating vast amounts of data. Sequencing the first human genome took more than a decade and roughly $3 billion; current next-generation sequencing (NGS) instruments can work through around 18,000 genomes or more in a year. Proteomics technology is catching up rapidly.
One processed genome amounts to around 780 MB of data, distilled from roughly 30 terabytes of raw NGS output; typical proteome datasets run to many gigabytes. Studying thousands of individuals for population studies generates tens of terabytes of data (approximately 40, according to one article). Analyzing all of that for clinically relevant results takes more processing power than any manual effort could supply, so biomathematicians develop algorithms and other software tools to tease the answers from the digital soup. Bioinformatics for storing and accessing electronic health records is equally vital for precision medicine research. Furthermore, IT systems such as the Northrop Grumman-supported MedDRA initiative encode health information consistently, ensuring that data banks can talk to each other, with advances in cybersecurity protecting patient privacy despite the cross talk.
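As a rough illustration, the figures above can be combined in a quick back-of-envelope calculation. This is only a sketch: the 780 MB per-genome figure is the article's approximate value, and the 50,000-person cohort is a hypothetical chosen for illustration.

```python
# Back-of-envelope storage estimate for a population-scale genomics study.
# The per-genome size is the rough figure quoted above; the cohort size
# is a hypothetical example, not a real study.

MB_PER_GENOME = 780        # processed genome, in megabytes (approximate)
COHORT_SIZE = 50_000       # hypothetical number of individuals

total_mb = MB_PER_GENOME * COHORT_SIZE
total_tb = total_mb / 1_000_000   # 1 TB = 1,000,000 MB (decimal units)

print(f"{total_tb:.0f} TB of processed genomes")  # → 39 TB
```

A cohort of that size lands at roughly 39 terabytes of processed genomes alone, in line with the ~40 TB estimate, before counting the far larger raw sequencer output.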
… Is This Where the Robots Come In?
Yes. Just think of where all that data comes from.
Population studies are as big as they sound; the Million Veteran Program collects biosamples from U.S. veterans, around 400,000 so far. It aims to generate omics data that — in conjunction with information on health, lifestyle and environment — will translate into clinical practice. That’s a lot of samples to handle, store and analyze.
Furthermore, advances in microelectronics mean that omics instruments handle more samples at a faster rate. Next-generation sequencers such as the Illumina HiSeq (fluorescence-based) and the Thermo Fisher Ion Torrent (semiconductor-based) use chip technology to decode genomic material: a simple flash of fluorescence or a change in pH zaps DNA base information into a digital format much faster than old-school gel-based Sanger sequencing.
To exploit the speed of these tools, robotic handling manages everything from dispensing sample aliquots for biobank storage to wrangling 384-well assay plates. Automation brings faster results with fewer errors.
Robotic or automated workflows are also important for nanotechnology and microfluidics where the miniaturization that reduces instrument footprint and sample volume also precludes manual input. Even though they will benefit from precision medicine, our clumsy fingers and thumbs are not as welcome in the lab as they once were.
Are you interested in all things related to technology? We are, too. Check out Northrop Grumman career opportunities to see how you can participate in this fascinating time of discovery.