Richter Scale and Your Genomic Portfolio
The field of personal genomics needs a Richter scale. This scale would provide a mechanism for giving each new genomic association a score, maybe from 1 to 10, based on criteria such as penetrance, actionability, and validity. Existing genetic tests should be scored as well. Commercially available tests might have additional criteria, like whether there is an FDA-approved test or whether the test is reimbursed.
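To make the idea concrete, here is a minimal sketch of what one such scale might look like. Everything here is illustrative: the criteria names come from the list above, but the ratings, the equal weights, and the mapping onto a 1-10 range are all invented assumptions that a real scale would need experts to choose and calibrate.

```python
def genomic_score(criteria, weights):
    """Combine per-criterion ratings (each 0.0-1.0) into a single 1-10 score."""
    total_weight = sum(weights[name] for name in criteria)
    weighted = sum(criteria[name] * weights[name] for name in criteria)
    return round(1 + 9 * weighted / total_weight, 1)

# Hypothetical ratings for a newly reported association.
new_association = {
    "validity": 0.9,       # has it been replicated in independent cohorts?
    "penetrance": 0.4,     # how strongly does the variant predict disease?
    "actionability": 0.2,  # can anyone actually do anything with the result?
}
equal_weights = {"validity": 1.0, "penetrance": 1.0, "actionability": 1.0}

print(genomic_score(new_association, equal_weights))  # prints 5.5
```

Because the weights are a parameter rather than baked in, different groups (insurers, physicians, patients) could publish their own weightings and produce their own versions of the scale from the same underlying ratings.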
The higher the score the “better” the association or test. A low score might indicate that the association is very likely just “noise” regardless of the fact that it was all over your morning newspaper.
This scale will be very handy once you have a copy of your own genome. Let’s be honest, if you’re sipping on your morning cup of coffee, reading the paper, and see an article about a newly discovered “gene for Alzheimer’s” or “SNP for sudden stroke”…you’re going to be compelled to run over to your computer to see if your genome possesses that genetic variant. Without a good way to quickly judge the relevance of the news article, journalistic sensationalism may have you running over to your computer several times a day. That doesn’t sound like a very good use of time, does it?
Designing the scale
How should this scale be designed? Here are a few preliminary thoughts:
First of all, I don’t think there should be just one scale. There should be many different scales for different purposes. Some scales may become more prominent than others because they are more useful, or because they are more widely adopted and therefore become the de facto standard.
Second, the scale should be transparent so that everyone can see why a test is given a score. No secret sauce or private algorithms. An open source algorithm will allow people to refine the scale over time and even make their own scales for themselves or for small communities of users. An openly available scale will allow people to do innovative stuff with cool mash-ups etc.
Third, we should leverage existing scales. There has been tremendous brainpower invested in coming up with useful and practical technology assessment mechanisms. Health insurance companies and the federal government have whole buildings of people who dedicate their careers to the evaluation of technologies. A few examples are BlueCross BlueShield’s Technology Assessment Center and the CDC’s EGAPP (see the ACCE page).
While it might be important for there to be “official” scores from experts or bodies of experts, I think unofficial scores will be important to nourish and promote. The official channels rely on centralizing expert opinion, which can be expensive and time-consuming. Their throughput is less than stellar as a result (how many reports does EGAPP produce per year?), albeit of very high quality. If we lowered our standards a tad, could we ramp up the throughput? Social networks of professionals, like Sermo, have already started to chip away at bringing economies of scale to the codification and assimilation of medical knowledge. In the research community, there are glimmering stars like the Alzheimer’s Research Forum and the Schizophrenia Research Forum which could play an instrumental role.
I’ll happily join the “cult of the amateur” and promote the ability of people to join in on scoring as long as there is transparency about the origin of the score. Adding the voices of amateurs will inevitably generate a lot of nonsense (or spam scores). But the end result may start to look like a marketplace of scores, and I suspect a lot of instructive information would emerge, including mechanisms for separating the wheat from the chaff. Amazingly talented and industrious amateurs will emerge from all over the world, I’m sure. Not only that, but all of the PhDs who decided to start a bakery instead of working at a lab bench may be able to apply their skills in their spare time.
There will be a lot of variance in how tests are scored by different types or groups of people. These differences may not be an indication that the methodology is failing, but might reflect important differences between the people who generate the scores. The health economist working for a health insurance company may have a completely different view of clinical actionability than a patient (who might be receiving the action) or a physician (who would dispense the action).
I suspect that the most powerful and timely scores would be generated by a mix of expert and amateur effort. By inviting amateurs into the mix there could be a natural division of labor to exploit. Of the tremendous amount of work that goes into the evaluation of new technologies by official bodies like EGAPP, a significant percentage must be dedicated to the collection and organization of information. How many human subjects were enrolled in the study that discovered the genetic variant for Alzheimer’s predisposition? Has this discovery been confirmed or refuted by other studies? Some of this work could be distributed across a huge number of individuals, no PhD required.
If my family has a history of Alzheimer’s, I might be motivated to chip in and volunteer to do some grunt work so the Alzheimer’s experts can spend more time doing what they do best. The incentive of getting a good score in a timely fashion may be all that is needed. Well, almost. The experts have to cooperate and invite me and all the other amateurs out there into the big tent…
Vignette: Promote open access to scientific and medical literature
This is a plea to scientists and researchers to be more thoughtful about how and where they publish their scientific work. There are lots of interesting possibilities when people can actually access your published work. In return for sharing your data, there may be legions of people that can help ensure that your discoveries and insights can achieve maximum impact.
Since I brought it up, ethicists, legal scholars, policy-makers, public health people and many other stripes of experts should be more conscientious about where they publish their intellectual output. I’m not going to pay $25 per article so that I can get an education about issues that directly affect me. It is easy to attack journalists and bloggers for getting their facts wrong, but we’re living in an information wasteland without access to good scholarly work. Trash in, trash out. This has to change. [Warning: Melodramatic comment dead ahead] We need nothing less than a publishing perestroika.
Your Genomic Portfolio
The score of one genomic association will change over time. Scores will rise as studies reproduce the early findings, as therapeutics are developed for previously untreatable conditions, or as health insurance companies begin to reimburse. Scores will fall when the results of small-scale studies cannot be reproduced in larger cohorts.
In this way, the user interface for such a scoring system starts to look a lot like stock market websites, with a trend line showing dips and jumps in a score over time. Something similar in spirit to the Google Finance interface could be transformed quite naturally into a platform for your genomic portfolio.
Having a genomic scoreboard may actually enable you to put that article about the “gene for Alzheimer’s” in the morning newspaper in a helpful context. You might find that the association has a score of 5 on the Richter scale, which is below your threshold for caring (since by this time you may have learned that 90% of associations with a score of 5 or below get delisted from the genomic stock market within 2 years because they turned out to be noise).
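The “threshold for caring” could be sketched as a simple filter over an association’s score history. The scores, the 5-point threshold, and the function name below are all hypothetical; a real portfolio would pull its numbers from whatever scoring service eventually exists.

```python
CARING_THRESHOLD = 5  # ignore associations scoring at or below this (a made-up cutoff)

def worth_my_attention(score_history, threshold=CARING_THRESHOLD):
    """Return True if the latest score for an association clears the threshold."""
    latest = score_history[-1]
    return latest > threshold

# Score history for a hypothetical "gene for Alzheimer's" headline:
# it rose after a replication study but still sits at the threshold.
alzheimers_association = [3.1, 4.8, 5.0]

print(worth_my_attention(alzheimers_association))  # prints False: safe to skim past
```

A portfolio view would just run this filter over every association you track, so only the movers that cross your personal threshold interrupt your morning coffee.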
At the bottom of the interface there might be a little button that you can press that will automatically check to see whether your genome possesses the variant in question. After pushing the button, a little informed consent quiz pops up to make sure you really understand the implications of learning about this feature of your genome.
Is this vision achievable? Or any piece of it? What parts are worthwhile? What is the low hanging fruit?
A first step might be to develop a very basic scale that leverages existing filters such as reimbursement or FDA approval and then apply it to commercially available tests. Maybe all of the Gene Genie energy could be diverted for a few months (or a few years)? Since the Gene Genie project isn’t scheduled to finish its mission of blogging about the whole genome until 2082, what harm would a short digression cause?
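That first-step scale could be deliberately crude: just count the existing binary filters a test passes. The test names below are placeholders, and the two filters are the ones mentioned above; this is a sketch of the idea, not a proposed final design.

```python
def basic_scale(test):
    """Score a commercial test 0-2 by counting the binary filters it passes."""
    return int(test["fda_approved"]) + int(test["reimbursed"])

# Hypothetical commercial tests, named only for illustration.
tests = [
    {"name": "Test-A", "fda_approved": True,  "reimbursed": True},
    {"name": "Test-B", "fda_approved": True,  "reimbursed": False},
    {"name": "Test-C", "fda_approved": False, "reimbursed": False},
]

# Rank tests from most to least vetted.
for t in sorted(tests, key=basic_scale, reverse=True):
    print(t["name"], basic_scale(t))
```

New filters (replication status, professional-society guidelines, and so on) could be bolted on one at a time, which is exactly the kind of grunt work a distributed community could share.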
Anyone want to work on designing a scale? Build a user interface? Figure out how to get more scientific and medical literature published in open access journals? This would all take some serious planning, but it might be fun and rewarding.