As part of our Notable Books series, Nicholas Christakis will be reviewing the interesting relationship between our social networks and disease burden (and related phenomena). This is in the context of his recently published book: Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives. Time: December 10th, 4pm. Place: Countway Library of Medicine, 5th floor, Ballard auditorium.
Just as Open Access is a threat to an unsustainable publishing model, so, it seems, are open source medical applications to most of the closed-source offerings of the for-profit sector. In conjunction with the development of substitutable platforms, there is a new and widening opportunity for independent developers to innovate and disseminate their solutions and to let a larger and more diverse ecosystem of solutions be adopted. What is most promising in this regard is the growing acceptance of open source solutions for implementation.
Addendum: It's rewarding to see our efforts listed here as among the top 10 open source software projects changing medicine (i2b2 and indivohealth).
Who is more knowledgeable? The physician who remembers more diagnostic tests than any other physician or the physician who is the quickest and most savvy at online searching for the relevant tests? Who is the most technically expert surgeon? The one who has the most nimble fingers and the sharpest eyes or the one who can make herself most comfortable with robotically assisted micromanipulators? This story taken from athletics suggests that we are going to be uncomfortable with some of the answers to these questions for many years to come.
Last week, Gina Kolata of the New York Times summarized a growing controversy around the value of some of the types of medical screening tests currently employed. The salutary effect of this and related articles is a growing awareness of the tradeoff between sensitivity and specificity. It is also a shot across the bow as we contemplate the growth in the number of incidental findings that will occur as we test hundreds if not thousands of genetic variants today and in the future. Also today, Gina Kolata reviewed how little we know about diseases as extensively studied as cancer. Some of them do disappear. Do these spontaneously regressed tumors contribute to the surprisingly high false positive rates for screening?
An invitation to celebrate the life, the accomplishments, and the continuing relevance of the literary and scientific contributions of Dr. Oliver Wendell Holmes.
Oliver Wendell Holmes (1809–1894) spent parts of the nineteenth century as America’s best-known physician and best-selling author. Sir William Osler praised him as “the most successful combination which the world has ever seen, of the physician and man of letters.” Henry James, Sr., called him “intellectually the most alive man I ever knew.” Today, he is remembered as a physician for his investigation of the contagiousness of puerperal fever (two decades before the advent of the germ theory), his advocacy for therapeutic skepticism and rationalism, and for coining such terms as “anesthesia.” He is celebrated as a literary and cultural figure for such poems as “Old Ironsides” (considered responsible for saving the U.S.S. Constitution), for his early forays into what would be considered a new depth psychology, and for terming Boston the “Hub of the solar system” and describing its “Brahmin” caste.
Join us to help celebrate the life, the accomplishments, and the continuing relevance of the literary and scientific contributions of Dr. Oliver Wendell Holmes.
Dr. Oliver Wendell Holmes and the Spirit of Skepticism:
November 17, 2009, 1:00 PM - 5:00 PM; reception 5:00 - 6:30 PM
Location: Countway Library, 10 Shattuck St., Boston
Prof. Regina Herzlinger of the Harvard Business School will be teaching a free intensive seminar course, Innovations in Consumer-Driven Health Care, in January 2010. This is a one-week class, beginning on 1/11/10 and ending on 1/15/10.
She welcomes students from the various Harvard and MIT graduate schools, as well as undergraduates from both universities, to submit their resumes to her for consideration if they wish to enroll. These can be sent to firstname.lastname@example.org. The class itself will be held on the Harvard Business School campus and will likely run from 9:00am until 3:30pm with a lunch period from 11:30am-1:00pm. Students chosen to take part in this class will be notified in November.
Innovations in Consumer-Driven Health Care
Monday, January 11-Friday, January 15; 9-11:30am and 1-3:30pm
Health care industry
This seminar will focus on the creation of innovations in health care that better meet consumer needs.
Content and Organization
In the first two sessions on day one, students will examine three different national models for achieving universal coverage:
* Consumer-driven health care in Switzerland, in which consumers use their own funds to purchase insurance
* Single payer health care in the UK, in which the government controls the health care system
* Managed competition system in the Netherlands, in which the government creates a national health care market
On day two, the second two sessions will delineate the entrepreneurial opportunities and obstacles created by a consumer-driven health care system.
On days three, four, and five, students will discuss case studies of entrepreneurial, consumer-driven ventures in the following fields:
* Health insurance - innovative efforts that support health promotion and reward efficiency (two cases)
* Health services - focused, integrated care for chronic diseases; specialty hospitals; retail health care outlets; medical travel (four cases)
* Personalized diagnostics - tests for mutated genes; companies that offer genetic maps (two cases)
* Personalized medical devices - Proteus; Chronicle (two cases)
* Personalized drugs (one case)
* Personalized information (one case)
Erez Lieberman, of the HST Bioinformatics and Integrative Genomics program, has just published a provocative paper that uses moderate-resolution mapping (1 megabase) of the three-dimensional structure of the genome. The results are consistent with prior work suggesting that DNA maintains its function by packaging itself into a structure that is free of knots. Now, if I could only apply this to the collection of wires in my drawer.
Medicine always was a discipline of information processing. We took data (signs and symptoms) from the patient, matched them against our knowledge base (the hopefully updated residue of medical school) and then came up with an interwoven diagnostic and therapeutic plan. We then understood that this information processing could be automated. But at the time, there were no electronic medical record systems that could truly provide the data that such automation required. Decades later, the federal government is trying to make a concerted push into this arena, one that explicitly includes the patient (us) as an active participant in this information processing enterprise. Yesterday, we wrapped up an interesting meeting attended by representatives of government, academia, and industry to address some specific opportunities to catalyze successful deployment.
This article suggests that a new bill introduced in California may end up regulating the modern curators and interpreters of biomedical data (bioinformaticians), tarring them with the same brush as direct-to-consumer genetic testing companies.
“This law doesn’t just cover companies, it covers what’s done in academic institutions, too,” Butte said. “Nothing in this bill blocks that.”
More evidence, in any case, of the centrality of information processing to the biomedical enterprise.
In preparation for a conference on substitutable platforms in health IT, I was directed to an instance of a growing form of self-publication that we call the Never Ending STory (NEST). This instance of a NEST is the knol, which has become an increasingly popular venue for publications, including ones that look a lot like standard peer-reviewed journals. More generally, a NEST starts as an embryonic paper. With iteration, and with the help of co-author and reader suggestions, it incubates into a mature manuscript. Unlike a blog, it is not just a snapshot of a narrative perspective in a sequence of snapshots, but a single integrated document. Unlike a Wikipedia article, it does not claim encyclopedic authoritativeness (or at least sole authoritativeness, so that disagreeing contributors have to battle it out) but only the moderated perspective of the authors. Unlike a standard peer-reviewed article, its publication does not signify the end of its incubation and the hatching of a fully mature narrative. And it is timely and time-efficient to make NESTs more prevalent. How often have you read a scientific article from five years ago and wondered if more recent developments had influenced the authors' perspective on their prior results and/or conclusions? Would it not be more effective to allow authors to update their articles (while maintaining an archival history of all prior versions) so that they continue to be current? Or, if there were additional data that bolstered the case of the original article, the authors could add these data to that article without having to go through an entire new publication process just for the incremental data. That would reduce unnecessary publication noise and increase the value of the article to the reader.
Although, right now, we are using the knol as the infrastructure for our NESTs, we can hope that academic publishers will provide vehicles of similar functionality. Until then, we will just have to incubate our own.
We, as a nation, are in the process of investing several billion dollars in the implementation of electronic health records. If all goes well, there will be a lot of individual data buried in these care systems. This raises the question of what utility, if any, these data have for research, whether for genomics, comparative effectiveness research, pharmacovigilance, or public health. The NIH is hosting a conference at the end of October (entitled “Widening the Use of Electronic Health Record Data for Research”) to attempt to answer the question. All interested parties are invited.
This is not an abstract question about the future of libraries, although that is also an interesting question. It is a question about what the medical school accrediting organizations have determined. "The Liaison Committee on Medical Education (LCME) is the nationally recognized accrediting authority for medical education programs leading to the M.D. degree in U.S. and Canadian medical schools. The LCME is sponsored by the Association of American Medical Colleges and the American Medical Association." and this is what they had to say (the bold face is mine for emphasis):
D. Information Resources and Library Services
ER-11 The medical school must have access to well-maintained library and information facilities, sufficient in size, breadth of holdings, and information technology to support its education and
There should be physical or electronic access to leading biomedical, clinical, and other relevant periodicals, the current numbers of which should be readily available. The library and other learning resource centers must be equipped to allow students to access information electronically, as well as to use self-instructional
ER-12 The library and information services staff must be responsive to the needs of the faculty, residents and students of the medical school.
A professional staff should supervise the library and information services, and provide training in information management skills. The library and information services staff should be familiar with current regional and national information resources and data systems, and with contemporary information technology.
[Revised annotation approved by the LCME in October 2007 and effective immediately.]
Both school officials and library/information services staff should facilitate access of faculty, residents, and medical students to information resources, addressing their needs for information during extended hours and at dispersed sites.
(This is taken from: http://www.lcme.org/functions2008jun.pdf found at: http://www.lcme.org/standard.htm. Hat tip David Osterbur.)
These are important recommendations and ones which foreshadow trends from the very near future. We have embraced this educational mission, from access to electronic resources to teaching biomedical researchers how to perform bioinformatics-enabled research (see the bioinformatics nanocourses offered to all by Reddy Galli; details here). The central question is whether librarian training will embrace the information technology that will be required to keep libraries current and relevant to their patrons. The answer to that question will determine where future librarians are trained, and that will in turn determine how central libraries remain to the academic mission.
A Swiss-American collaboration makes vivid just how specific the effect of cellular/tissue context is upon the impact of genetic variation. We have understood for several years that each tissue has a different (if highly overlapping) mix of expressed genes (mRNA). However, this study of three different cell types in 75 individuals shows that the variation in expression between individuals that is attributable to genetic variation in control elements (i.e. regulatory SNPs) is highly tissue dependent. That is, over half of the regulatory variants have impact only in particular tissues. This suggests that understanding the impact of human genetic variation will take a lot more detailed study, and not only in one tissue (usually blood).
[Dimas, A.S., Deutsch, S., Stranger, B.E., Montgomery, S.B., Borel, C., Attar-Cohen, H., Ingle, C., Beazley, C., Arcelus, M.G., Sekowska, M., Gagnebin, M., Nisbett, J., Deloukas, P., Dermitzakis, E.T. and Antonarakis, S.E. (2009) Common Regulatory Variation Impacts Gene Expression in a Cell Type-Dependent Manner, Science.]
Vocabulary of citation distortions
Citation has both scholarly and social forms: the scholarly form connects statements to the broader medical literature; the social form (social citation) includes self serving and persuasive citation.
Self serving citation is always a distortion.
Persuasive citation may be necessary to communicate new, sound claims to the scientific community; it may, however, have distorted uses: citation bias, amplification, and invention.
* Citation bias - systematic ignoring of papers that contain content conflicting with a claim; it serves to bolster the claim (e.g. justifying animal models to provide opportunities to amplify the claim)
* Amplification - expansion of a belief system without data; citation made to papers that don't contain primary data, increasing the number of citations supporting the claim without presenting data addressing it
* Citation diversion - citing content but claiming it has a different meaning, thereby diverting its import
* Citation transmutation - the conversion of hypothesis into fact through the act of citation
* Back door invention - repeated misrepresentation of abstracts as peer reviewed papers to fool readers into believing that claims are based on peer reviewed published methods and data
* Dead end citation - support of a claim with citation to papers that do not contain content addressing the claim
* Title invention - reporting of “experimental results” in a paper’s title, even though the paper does not report the performance or results of any such experiments
I was recently reviewing research on the costs of recruiting subjects for population studies. I found that even with several million dollars of subscription costs paid by the University per year, about 1/3 of the articles I had identified were not covered by our subscriptions. These articles then required an additional $20 to $35 personal investment each if I wanted to see their details. For less well-endowed institutions, the challenge is even more significant for their faculty and students. In this context, it is heartening to read this report from the Public Library of Science organization. Not only have they published high quality articles but they have 5.4 million readers, 26,000 authors, 13,000 peer reviewers, over 11,000 articles submitted in 2008 and they project that 90% of operating expenses will be covered by the PLoS funding model by the end of 2009. Hats off to a daring and creative act of social and academic organization.
A recent issue of Perspectives in Psychological Science demonstrates the kind of full-throated give and take that makes for better science. An article by Ed Vul et al. questions the results of functional magnetic resonance imaging (fMRI) studies that relate imaged changes to personality traits, emotional states, and social interactions. The heart of the critique is how the correlations are calculated and how their statistical significance is reported. Their analysis is backed up with a very substantial survey of the investigators of the studies in question. In the same issue, the supporters and detractors of the critique are given their say. Most impressive is the statistical perspective of Lindquist and Gelman. It is even-handed and informative and should be required reading for any student about to engage in research with the increasingly familiar "high dimensionality" data sets (e.g. in genomics and imaging) that allow a multiplicity of questions to be asked or hypotheses to be tested.
Update: 7/7/2009: Nice piece on this topic from NPR.
Early on in the development of personal health records, we had many discussions about how to jump-start the critical first step to adoption: populating that personally controlled record with clinical data. We recognized that, especially in the early years (i.e. 1994), healthcare institutions might be loath to share their patients' data with those same patients (in electronic form) and even less so with other institutions delivering health care. So, it was with a slightly evil gleam that we considered the following opportunity when some of the local healthcare institutions developed patient portals to their healthcare systems (e.g. PatientSite and Patient Gateway). If patients could access their own medical record at one or more institutions, then a program working on their behalf (using their credentials) could automatically log in, scrape up whatever HTML-formatted data was available, and reformat it into a standardized data model within a truly portable and personally controlled health record. This solution would not require that any institution hew to a particular data standard or communication protocol (beyond HTTP and HTML), but it would require us to be very nimble in updating our translation programs. Specifically, if one of the hospitals changed the formatting of the screens on its patient portal, we would have to change the program that transforms those screens into a usable personal health database. That would certainly have been possible and legal, but we decided not to proceed, out of a sense of collegiality with our medical informatics colleagues, and instead worked to cultivate more explicit data sharing governance and agreements.
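A minimal sketch of that scrape-and-translate idea, using only the Python standard library; the page layout, column order, and target record format below are invented for illustration and are not any actual portal's:

```python
from html.parser import HTMLParser

class LabTableParser(HTMLParser):
    """Collects the text of each <td> cell, grouping cells by table row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

def scrape_results(portal_html):
    """Translate one portal's HTML layout into a standard record format.
    Each portal would need its own (fragile) version of this mapping,
    rewritten whenever the hospital changed its screens."""
    parser = LabTableParser()
    parser.feed(portal_html)
    return [{"test": r[0], "value": r[1], "units": r[2]} for r in parser.rows]

# A toy page in one hypothetical hospital's format:
page = "<table><tr><td>Glucose</td><td>95</td><td>mg/dL</td></tr></table>"
print(scrape_results(page))  # → [{'test': 'Glucose', 'value': '95', 'units': 'mg/dL'}]
```

The fragility is the point: nothing in this approach requires the institution's cooperation, but every screen redesign breaks the mapping.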
Recently, however, I was reminded by my colleague Jeff Behrens of a very successful model of data sharing in a personally controlled financial record: Mint.com. It's a remarkable site that allows anybody (and without additional cost) to develop a cohesive view of their finances drawn from myriad information sources including your bank account(s), your retirement accounts (e.g. 401K plan), your investment brokerage account, your credit/debit card(s), and your mortgage account, to name just a few. All this in the absence of any data interchange model across this multiplicity of institutions and functions. Once this aggregation is performed for you, many services become available, such as alerts for over-budget spending, due dates of credit card bills, changes in your spending mix, and decision support for improved financial performance (e.g. switching to lower-interest credit cards; Mint.com knows the interest rate on each card). There are a few remarkable properties of this site that may be quite instructive for healthcare information technology.
- Essential to this comfort, however, is the immediate value that the user obtains by participating. Not in the distant future, but today.
- Hundreds of data sources of relevance are unified without a single standard data model (unless you consider Mint.com's model the standard).
- The data source institutions did not enter into an agreement for data sharing. Only the consumer had to agree to let Mint.com automatically access these data sources with their borrowed credentials.
- A national data sharing infrastructure was created in 1 year from a startup-sized technology development and marketing budget.
So many of our colleagues appear to be doing their daily work holding onto an electronic device carefully positioned below eyesight (barely) that this direct comparison of two leading mobile platforms may be useful. Alternatively, there are ways to make meetings more efficient and personal.
I recall several earnest conversations with colleagues who insisted that by making all the projects I was involved in (e.g. i2b2 and Indivo) adhere to an Open Source license, I was being both naive and counterproductive. There was no way, I was told (as recently as last year) that "real" companies would get anywhere close to supporting products that had an Open Source license. This story about Microsoft's use of Open Source software in its search engine developments should settle those concerns, at least for the next few days.
When studies that helped fuel the sexual revolution become the subject of scholarly retrospectives, one has to wonder what is the current social aphrodisiacal locus?
Critically acclaimed biographer Thomas Maier will be at the Countway Library of Medicine for a presentation and signing of his newest release, The Masters of Sex, offering an unprecedented look at William Masters and Virginia Johnson, the nation’s top experts on sex, their pioneering studies of intimacy, and the sexual revolution they inspired.
Thursday, May 14, 2009, Minot room, Countway Library
4:30 pm: PRESENTATION
5:30 pm: BOOK SIGNING
(The Harvard Medical Coop will be selling the book at Countway)
5:30 RECEPTION: Lahey Room
There has been some concern articulated about the bias that could arise from author-fee-driven publications (which is one flavor of open access). Here is an example of bias within the closed-access framework that can arise almost completely undetected (thanks to The Scientist for the pointer), enabled by an industry-to-industry collaboration:
The Australasian Journal of Bone and Joint Medicine, which was published by Excerpta Medica, a division of scientific publishing juggernaut Elsevier, is not indexed in the MEDLINE database, and has no website (not even a defunct one). The Scientist obtained two issues of the journal: Volume 2, Issues 1 and 2, both dated 2003. The issues contained little in the way of advertisements apart from ads for Fosamax, a Merck drug for osteoporosis, and Vioxx. (Click here and here to view PDFs of the two issues.)
Hat tip Josh Parker
Addendum 5/8/2009 here.
5/10/2009 More from The Scientist.
We all pay a lot of money for the product of our own collective academic enterprise. With 2.5 million downloads of PDFs by Harvard University patrons from our top three publisher packages, I wondered what the costs might be. Well, thanks to Betsy Eggleston, we now have a better idea:
Elsevier package 2008 article downloads: ($.76/download)
Wiley package 2008 article downloads: ($1.52/download)
Springer package 2008 article downloads: ($2.98/download)
That is a lot of money per click but several questions pose themselves:
a) Do all libraries have a similar cost per download?
b) Is the relative cost per download similarly ordered for each of these three publishers in other libraries?
c) What is the equivalent cost for a circulated book/monograph per patron-use? Is that a fair comparator?
d) What is the equivalent cost per download for open access publications (including the author cost)?
I suspect that knowing the answers to these questions is a source of leverage and power. How can we make decisions with and on behalf of our researchers, faculty, and public without knowing these answers? Should we not insist on greater transparency in the relationship of academic value and cost? If you have any additional data, feel free to enter a comment regarding this post or send me an email and I will add it to this post.
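The arithmetic behind these figures is simply the annual package cost divided by the download count, so any library that knows its own totals can reproduce it; the numbers below are invented placeholders, not Harvard's actual contract terms:

```python
def cost_per_download(annual_package_cost, downloads):
    """Cost the library pays per article download for a publisher package."""
    return annual_package_cost / downloads

# Hypothetical figures for illustration only:
packages = {"PublisherA": (1_900_000, 2_500_000), "PublisherB": (750_000, 500_000)}
for name, (cost, n) in packages.items():
    print(f"{name}: ${cost_per_download(cost, n):.2f}/download")
```

Comparing these ratios across libraries (question b above) would only require each library to publish its two inputs.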
First there was the machine that went ping. Then there were a lot of pings because of all the machines around patients, and it became clear that most of them were not relevant (94% of alarms in the pediatric intensive care unit). Although there was some work done to attempt to reduce the noise, a lot of the monitors now have their alarms silenced. So, perhaps it is not too surprising that a recent study has shown that much of the electronic health record-driven advice, and many of the reminders and warnings regarding medication, are now being ignored. This raises the question of whether the solution is primarily addressable by technology or is really a matter of re-engineering the process of patient care.
(hat tip Ted Shortliffe)
Those of us who have worked with electronic healthcare data have long been aware of the limitations of billing data (aka claims data, aka administrative data) for research. They are often too coarse-grained for clinical research and are inherently biased to maximize income. Motivated by these limitations, Natural Language Processing (NLP) has become increasingly important in mining clinical records for research. What a doctor writes in her notes is much more revealing of her patient's state than what she bills for. Notwithstanding, there are some significant challenges in the de-identification of textual records and in transforming these records into standardized clinical categories (e.g. SNOMED). Yet the appeal of using the clinical narrative text rather than claims data is compelling. In our work in i2b2, we have seen significant overrepresentation of diagnostic codes where a diagnostic encounter to "rule out" a disease was codified as that disease in the claims data. For example, a radiologist asked to rule out rheumatoid arthritis based on an X-ray will often classify the X-ray with a billing code corresponding to rheumatoid arthritis, even when perusal of the full narrative text of the radiologist's note reveals that there were NO findings consistent with rheumatoid arthritis.
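As a toy illustration of why the narrative text matters: even a crude, NegEx-style lookback for negation cues can catch the "rule out" phrasing that a billing code discards. The cue list and report text below are invented for illustration and are far simpler than a production clinical NLP pipeline:

```python
# Crude NegEx-style check: is a diagnosis mentioned only in negated/uncertain contexts?
NEGATION_CUES = ["no findings consistent with", "rule out", "no evidence of", "negative for"]

def is_negated(report_text, diagnosis):
    """Return True if every mention of the diagnosis follows a negation cue."""
    text = report_text.lower()
    dx = diagnosis.lower()
    mentions, start = [], 0
    while (i := text.find(dx, start)) != -1:
        window = text[max(0, i - 40):i]  # look back a few words for a cue
        mentions.append(any(cue in window for cue in NEGATION_CUES))
        start = i + len(dx)
    return bool(mentions) and all(mentions)

report = ("Indication: rule out rheumatoid arthritis. "
          "No findings consistent with rheumatoid arthritis.")
print(is_negated(report, "rheumatoid arthritis"))  # → True: both mentions are negated
```

A claims-based view of this encounter would likely record the rheumatoid arthritis code with no qualification at all.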
A recent article in the Boston Globe points out additional challenges in using claims data for personal medical records. The same limitations of claims data for research appear to impinge on their utility for clinical care. My colleague John Halamka makes several useful suggestions on how to improve the use of such data, including recruiting patients themselves as collaborators in refining the categorization of their clinical records or even removing gross errors. Notwithstanding, a small number of codes are likely to be quite limiting and it may be that codifying the patient's record by using the entirety of their clinical documentation (i.e. what their care providers wrote about them) will ensure the most nuanced and most faithful representation available of what the clinician was thinking about in each clinical encounter with that patient.
One of the many reasons healthcare records do not have broad adoption is that they are burdened with a multitude of functions beyond "merely" serving as a communications vehicle among the members of the healthcare team. All these functions are then bundled into monolithic systems by single vendors. It is then typically arduous and expensive to substitute new functionality for existing functions. Perhaps one of the most onerous uses of these systems, one that is not tightly linked to quality of care, is clinical documentation for preemptive legal defense rather than for effective clinical communication and decision making. This brief article in the NY Times offers a path towards safety and rationality, so that patients who are injured are justly compensated and those who are not, are not compensated. It also offers, as an implicit side effect, changes in physician behavior and use of electronic medical record systems that would be focused on improving quality of care and communications (to and from patient as well as physician).
(Originally uploaded by otisarchives1.) An interesting example of how historical medical data can be shared more widely with scholars worldwide. If all medical libraries followed this lightweight formula for dissemination, we could avoid repeating many mistakes of the past and learn how better to deal with old challenges revisited (e.g. epidemics). Kudos to the Medical Museum.
Our very own John Brownstein, has just published an article in the Canadian Medical Association Journal which demonstrates the consequences of removing information bottlenecks between the public and population-level analytics. Some have expressed skepticism regarding the value of mining web-activity for "objective" reporting but, in this instance at least, it seems that a distributed network of non-experts can be effective in detecting outbreaks earlier than through standard surveillance methods.
We've read a lot about direct-to-consumer disclosure of genetic risk. Here is an opportunity to hear from Prof. Robert Green what he has learned from the methodological study of the process of disclosing genetic risk for a serious disease.
Translational Genomics Seminar Series
"Genetic Risk Assessment for Alzheimer's Disease: The REVEAL Study."
Robert C. Green, MD, MPH
Duncan Reid Conference Room, Brigham and Women's Hospital, Thursday, March 19th at 5pm.
A very nice account here of the work of our very own Alexa McCray and Scott Lapinski in leading the medical school to the implementation of an open access policy and compliance with the NIH's open access mandate for NIH-funded publications.
Those of us who are pediatricians cannot help but take this personal story (Hat tip Ted Shortliffe) about the effect of electronic health records very seriously. It does point out that we are still a long way from the day when the computer is not a distracting magic box but rather serves as an active vigilant partner in the healthcare of our patients.
The politics of openness are heating up. As per SPARC (Scholarly Publishing and Academic Resources Coalition):
"To accommodate widespread global interest in the movement toward Open Access to scholarly research results, October 19 – 23, 2009 will mark the first international Open Access Week. The now-annual event, expanded from one day to a full week, presents an opportunity to broaden awareness and understanding of Open Access to research, including access policies from all types of research funders, within the international higher education community and the general public.
Open Access Week builds on the momentum generated by the 120 campuses in 27 countries that celebrated Open Access Day in 2008. Event organizers SPARC (the Scholarly Publishing & Academic Resources Coalition), the Public Library of Science (PLoS), and Students for FreeCulture welcome key new contributors, who will help to enhance and expand the global reach of this popular event in 2009: eIFL.net (Electronic Information for Libraries), OASIS (the Open Access Scholarly Information Sourcebook); and the Open Access Directory (OAD).
For more information about Open Access Week and to register, visit http://www.openaccessweek.org."
Our very own Ben Reis appears to have touched the same nerve that turns baseball fans into obsessed statisticians. Speechwars is now generating secondary analyses about the meaning of presidential utterances from a purely lexical perspective. Who is to say this is any less insightful than allegedly semantically rich punditry? Some have argued that Google's success points to the strengths of this "low-level", data-driven perspective. Can we be equally effective in mining our clinical health records?
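The purely lexical perspective is easy to sketch: strip a text down to word counts and ask what the frequencies say. A minimal version in Python (the sample sentence is just a placeholder, not Speechwars data):

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count word occurrences, ignoring case and punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

speech = "We have nothing to fear but fear itself."
print(word_frequencies(speech).most_common(1))  # → [('fear', 2)]
```

Comparing such counts across speeches or speakers is the whole of the method; no semantic model is consulted at any point.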
We already know that television viewing is associated with inactivity, obesity and a variety of other problems. Now, apparently Internet-borne social networking is being billed as a risk to cognitive development and a stable affect (or not). Yet, here is a stunning example of an extremely tight and productive social network. Most of us probably would not want to be this networked but some apparently do.
In the context of all the hubbub around the monies for health care information technology in the stimulus package, how are we going to know whether we will have any lasting value from this investment? This reminded me to peruse again the best resource on how to think about evaluation: Friedman and Wyatt's book on the topic, "Evaluation Methods in Biomedical Informatics". Particularly useful is the chapter that describes approaches to evaluating systems when the conventional outcome metrics are either unavailable, premature, or inappropriate. Given that Chuck is in the Office of the National Coordinator for Health Information Technology, it's not unreasonable to suppose that some thought will be given to selecting those proposals for the stimulus that include worthy evaluations.
The American College of Physicians is promoting the virtues of electronic health records. Contrast this offering with this.
Hat tip Ted Shortliffe.
Privacy – Title XIII, Subtitle D
Provides individuals the right to obtain from a covered entity using an EHR a copy of their information in electronic format, allowing the individual to designate a 3rd party, such as a PHR, to receive a copy.
This has been a long time coming and it is a welcome development. It should further stimulate an industry working on the personal use of such data.
Boston University in its entirety has adopted an open access policy that includes both the undergraduate and graduate schools. This move signals the continued tectonic shift in faculty awareness of the importance of providing more efficient access to their scholarly output: for themselves, for their colleagues, and for their students. Details of the policy can be found here.
The recent brouhaha regarding the Facebook change of terms of service illustrates just how much many care about their personally placed public information. Do we, as patients, care any less about who gets to have our personal data in perpetuity? Should we advocate for tighter personal control of our health information?
If you have ever had a visit to the emergency room that was not a life-threatening emergency, you might have made an unfavorable comparison of queue management there to that in a supermarket. It is therefore all the more striking how innovative this site from Australia is in this regard. What will it take to make this functionality available routinely? It certainly would allow consumers to effectively spread the load across the local emergency rooms.
From the capacious open source code vaults at Google, here is an interesting application called gpeerreview. It allows reviews of any publication to be signed so that their source cannot be repudiated. In their own words:
- First, you read someone's paper.
- Next, write a review. (The review is just a simple text file that contains a few scores and your opinions about the paper.)
- Use GPeerReview to sign the review. (It will add a hash of the paper to your review, then it will use GPG to digitally sign the review.)
- Send the signed review to the author. If the author likes the review, he/she will include it with his/her list of published works.
- Prospective employers or other persons can easily verify that the reviews are valid.
- Peer reviews give credibility to an author's work.
- Journals and conferences can use this tool to indicate acceptance of a paper.
- Researchers can also give credibility to each other by reviewing each others' works.
- This enables researchers to publish first, and review later.
- It meshes seamlessly with existing publication venues. Even the credibility of works that have already been published can be enhanced by obtaining additional peer reviews.
- A decentralized social-network of reviewers and papers is naturally formed by this process. The structure of this network reflects that of the research community.
This is a nice idea that, if paired with the right social networking web infrastructure, could make for a strong peer review community. More evidence of the growing search for alternative modes of publication and peer review.
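The crucial step in the workflow above is binding a review to one exact version of a paper before signing it. Here is a minimal Python sketch of that hash-binding step only (the function names and review format are my own illustration, not GPeerReview's actual format); the subsequent signing step would use GPG itself, e.g. `gpg --clearsign review.txt`, which attaches the reviewer's digital signature to the hash-bearing review.

```python
import hashlib

def make_review(paper_bytes, scores, comments):
    """Compose a review that embeds a SHA-256 digest of the paper,
    so a later GPG signature is bound to this exact manuscript."""
    paper_hash = hashlib.sha256(paper_bytes).hexdigest()
    lines = [f"paper-sha256: {paper_hash}"]
    lines += [f"{name}: {value}" for name, value in scores.items()]
    lines += ["", comments]
    return "\n".join(lines)

def review_matches_paper(review_text, paper_bytes):
    """Check that a review refers to this exact version of the paper."""
    claimed_hash = review_text.splitlines()[0].split(": ", 1)[1]
    return claimed_hash == hashlib.sha256(paper_bytes).hexdigest()

paper = b"My groundbreaking manuscript..."
review = make_review(paper, {"novelty": 4, "rigor": 3},
                     "Solid methods; the evaluation could be broader.")
print(review_matches_paper(review, paper))        # True
print(review_matches_paper(review, b"tampered"))  # False
```

Because the hash is inside the signed text, an author cannot quietly revise the paper and keep the old favorable review: any change to the manuscript changes the digest, and verification fails.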
A sensible senior citizen has come up with an efficient and cost-effective portable medical record: his key medical facts stored on a commodity USB drive and hung on his person by lanyard or bracelet. Alas, the real question is, how many clinical practices have a care provider with the wherewithal and willingness to plug this drive into their office (or emergency room) computer to check the contents of the record? Perhaps the stimulus package can help increase this likely small number.
House Judiciary Committee Chairman John Conyers has introduced legislation to overturn the legislatively mandated open access publication of National Institutes of Health-supported biomedical research (i.e. taxpayer supported research).
The text of the introduced legislation currently states: "No Federal agency may, in connection with a funding agreement-- (A) impose or cause the imposition of any term or condition that--`(i) requires the transfer or license to or for a Federal agency of- `(I) any right provided under paragraph (3), (4) or (5) of section 106 in an extrinsic work; or..."
Peter Suber provides a sober analysis of this effort and makes the point very clearly that the current mandate does not in any way violate publishers' copyright. This is certainly an interesting time, made all the more vivid by the extent of the economic stakes.
Dennis Wall and Peter Tonellato are leading a genuinely innovative seminar in the biomedical application of cloud computing. Applications range from population simulations in healthcare to interpreting personal genomes. The seminar is available by Webex.
It remains a matter of debate whether or not iTunes from Apple has resulted in a greater diversity of musical offerings. It is less controversial that Apple has created a market force which has enabled consumers to buy affordable single tracks from an album without digital rights restrictions that are often inconvenient, if not onerous. This has caused a shift of power from the recording industry to Apple and perhaps the consumer. So, is there a player equivalent to Apple in the sphere of biomedical publication? Will it be one of the current open access publishers or a "player" as unanticipated as Apple was for music distribution?
For example, Google could host a full peer-review and dissemination/annotation publication ecosystem if it wished. Or the AAMC could host a distributed biomedical research press. Or the NIH could spend less than 5% of its total budget to create a high quality peer-reviewed system that would hew to the highest standards and bring the output of NIH-funded biomedical research directly to consumers without any additional cost.
In the current maelstrom of competing models for sustainable, affordable, or profitable publishing enterprises, an early pioneer of open access publication recently closed its doors. The Medscape Journal of Medicine (MJM) was a brave experiment, particularly in that authors were not charged the large fees levied by many of the current open access journals. Most relevant for future scholarship is how the back issues of MJM will be kept readily accessible. Most encouragingly, it seems that copies of MJM will be archived with PubMed Central, which constitutes one of the wiser investments of our tax dollars.
Concurrently, we see the emergence of a new model of peer review from the Journal of Biology (hat tip David Osterbur). In this model, authors whose manuscripts were accepted for review can choose not to have their manuscripts re-reviewed and instead appeal to the editors for publication without revision. The appeal may be denied (in which case the manuscript is rejected) or accepted (in which case the manuscript is published along with an editorial commentary). Regardless of the merits of this model, it does provide editorial added value to each publication, in contrast to many open and closed access journals.
This temporary proscription of large segments of the web to users of the Google search engine vividly illustrates the application of the power of ontologizing. Whole segments of our collective electronic corpus can be made obscure or brought to the top of our societal awareness merely by changing a few bits on an electronic tag. Librarians of the world unite! Yours is the power to [re-]organize.
This story at cnn.com is of the kind that usually makes me cringe because it clearly provides advertising for a new healthcare technology that has not been widely tested and uses a truly dire personal circumstance to illustrate that the new technology can almost magically save the day. This is a technique that has been repeatedly used to hype drugs, devices, or alternative medicines that ultimately have not proven to be helpful (or worse). Yet in this instance, it seems to be the real deal: a brain tumor that was effectively inoperable, a newly available technology that could do the job, and an application of that technology in the nick of time. Let's assume that this case in fact represents the appropriate application of technology. It might be interesting to view it as an opportunity to ask some questions of the current peer review and publication process in order to meet the following goal: timely communication of the best globally acquired knowledge for state-of-the-art treatment.
- Where can we find an authoritative system for timely reviews of new healthcare treatments on which experienced patients or physicians (in this instance, parents of children with an intracranial teratoma and neurosurgeons) can rely?
- How do we minimize the "gaming" of such an authoritative system?
- Is there a mechanism in the current academic publication pipeline that would support such a system or should we look to electronic social networks, mass media and word-of-mouth to provide the solution?
- Can we represent the degree of certainty we have about efficacy of a new treatment in a way that is both intuitive (i.e. actionable by the lay public) and that has solid statistical or evidentiary support?
- If we do not support timely dissemination of potentially useful technologies, is our ethical position suboptimal? Alternatively, does any premature knowledge dissemination fly in the face of the time-tested admonition to first do no harm?
- Can the federal government provide a transparent and timely vehicle of expert opinion in this regard?
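On the question of conveying certainty about efficacy in a way that is both intuitive and statistically grounded, one candidate answer (my illustration, not a proposal from the story) is to report an interval rather than a bare success rate. Here is a minimal Python sketch using the Wilson score interval, which behaves reasonably even at the very small case counts typical of a newly introduced treatment:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval (default 95%) for a
    success proportion; stays inside [0, 1] even for tiny n."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# Suppose nine of ten reported cases improved: the point estimate
# is 90%, but the interval shows how little ten cases really tell us.
low, high = wilson_interval(9, 10)
print(f"success rate plausibly between {low:.0%} and {high:.0%}")
```

A statement like "somewhere between 60% and 98% of reported cases improved" is arguably both actionable by the lay public and honest about the thinness of the evidence, which is exactly the tension the questions above raise.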
The rate-limiting step in the development, dissemination, and adoption of informatics technologies in the healthcare system is in large part the shortage of individuals with expertise in this domain. It therefore should be a source of considerable satisfaction (and some surprise) that among the first acts of the 111th U.S. Congress is likely to be a proposal that speaks to increasing the pool of informatics-trained professionals:
"The Secretary, in consultation with the Director of the National Science Foundation, shall provide assistance to institutions of higher education (or consortia thereof) to establish or expand medical health informatics education programs, including certification, undergraduate, and masters degree programs, for both health care and information technology students to ensure the rapid and effective utilization and development of health information technologies (in the United States health care infrastructure).
Activities for which assistance may be provided .... may include the following: (1) Developing and revising curricula in medical health informatics and related disciplines. (2) Recruiting and retaining students to the program involved. (3) Acquiring equipment necessary for student instruction in these programs, including the installation of testbed networks for student use. (4) Establishing or enhancing bridge programs in the health informatics fields between community colleges and universities."
Then a lot of detail around mechanisms of support. Quite impressive.
Thanks to Joe Barillari for the tip.
In this article in a new journal (GenomeMedicine), we learn of the current efforts at the Cardiff University Institute of Medical Genetics to annotate the clinical meaning of the human genome. This database (of "over 85,000 different lesions detected in 3,253 different genes, with new entries currently accumulating at a rate exceeding 9,000 per annum") is arguably the human genome project most relevant to medicine. It also seems quite akin to curation projects very familiar to librarians. Can we use the knowledge and experience of librarians to increase the quality and timeliness of such genomic annotations? Can librarians learn from the efficiencies of the relatively small team of curators at Cardiff?
Finally, in the spirit of open access, would public funding of the Human Gene Mutation Database enable full free access to all the annotations?
In the words of my colleague Ben Reis: "The days of being anonymous in public are officially over."
Go to the following crowd shot and realize that you can zoom in all the way. You can see the faces of thousands of people in detail.
With the inauguration of a new U.S. president, the focus has returned to the challenge of implementing an equitable, affordable and high quality healthcare system. In that perspective, three relatively recent books speak to the challenge, two of them written by our own colleagues. These are:
- "Who Killed HealthCare? : America's $2 Trillion Medical Problem - and the Consumer-Driven Cure" (Regina Herzlinger)
- "The Innovator's Prescription: A Disruptive Solution for Health Care" (Clayton M. Christensen, Jerome H. Grossman M.D., Jason Hwang M.D.)
- "Critical: What We Can Do About the Health-Care Crisis" (Tom Daschle)
I remember talking to Jerry Grossman, a year ago, about the future of healthcare and how optimistic he was that technology could serve to aggressively re-align our care delivery system. I hope history will vindicate him but I worry that we will have to first face much more honestly and transparently the conflicting drivers of maximization of societal health outcomes vs. maximization of personal health outcomes. These three books provide models of how to meet this inevitable tension and it is likely the President's advisors are fully aware of all of them.
It seems you do not need to be a seasoned academic to publish if you have something interesting to report. This is impressively illustrated by this new journal of undergraduate research that was recently launched. Many of these articles would do "first tier" journals proud, such as this one on a very finely tunable encapsulation technology for cells.
Joe Nocera appears to castigate Steve Jobs in this article regarding his secrecy about his medical condition. It is often argued that public figures both in government and publicly traded companies have to be transparent in disclosing their medical condition. Others have taken the further step of arguing that our perception of any privacy is sorely askew and therefore we should just accept that in cyberspace everyone knows you are a dog. Although I have argued for information altruism, most of us have some reasonable expectation of privacy. Disclosure as an act of altruism or even exhibitionism is not in the same category as disclosure compulsion. It's not just worrying about employment and insurability, but about stigmatization that may not even be widespread. For example, if a CEO has found his life to be at risk (and which of ours are not?) and does not want to tell his family about it yet, is it really acceptable that he should be compelled to do so on the behalf of some perceived public good?
Are we going to ask all CEOs to get full personalized medical and genomic screening to predict their functional lifespan because of the hoped-for financial risk perspective? Or only when they are going to die within the year? Or if their exercise capacity is down 20% from the year before? And what is the fundamental basis for arguing that such disclosure should be restricted to the CEO? How about other important jobs: CFO, CIO, the head of Human Resources? Corporate Counsel? Granting a "right-to-know" on any other person's health status is likely to be a very slippery slope. Does your information want to be free?
There is a growing array of patient/consumer-directed educational materials for a variety of medical conditions, surgical procedures, and risk assessments that probably exceeds what is available in many doctors' offices and is accessible wherever there is a web browser. Given the renewed national interest in harnessing automation to improve healthcare, a strategy to integrate these valuable resources into clinical practice and patient awareness would appear to be highly leveraged.
Perhaps it is completely analogous to the ascendancy of the blogosphere in news reporting, but it is nonetheless shockingly satisfying to see this article on the ascendancy of the R statistical programming language. As a former user of its commercial competitors, I became disenchanted with their licensing policies, not least of which was the hassle of reconfiguring the license manager every time I had to re-install the software on a new laptop. But when I jumped ship, it also quickly became apparent that, through the enthusiastic, almost cult-like devotion of legions of programmers and statisticians across the globe, the R code base provides a far richer set of functionality than any of its competitors. Particularly in rapidly evolving areas of quantitative science such as genomics, R software (especially the huge set of R modules collectively called Bioconductor) has provided the best and most up-to-date functionality. Moreover, the educational programs that have grown around R have been equally first rate. I have recommended this book on learning R for introductory statistics to many colleagues who have then confessed that it was the first time they had truly understood basic statistics.
I am quite sure that anybody presenting the R "business model" eight years ago would have been scoffed at by most experts in software distribution and dissemination. Yet it is but one example of what can happen when the producers and consumers of a knowledge or information product are the very same academics. Perhaps we can one day achieve the same efficiencies in disseminating our scholarly publications.
Nothing induces a deep anesthetic coma faster than yet another discussion about what the field of biomedical informatics comprises and what its expected competencies are. Ira Kalet has therefore done us all a favor by writing a book on Principles of Biomedical Informatics. This is not a book for those who primarily wish to approach medical informatics from the perspective of management, policy, or governance. It is a technical book that teaches by doing, much in the vein of some other very successful texts. And in biomedical informatics, doing often means programming.