Month: July 2016

DNAdigest interviews Phil Bourne from BD2K

Fiona Nielsen from DNAdigest interviewed Phil Bourne, Associate Director for Data Science at the National Institutes of Health, about the Big Data to Knowledge (BD2K) project. Photo credit: Wikipedia.

What is Big Data to Knowledge (BD2K)?

BD2K is an NIH program spanning 27 institutes with a budget of about 110 million dollars a year. The program focuses on the challenges emerging from data across the biosciences, and on leveraging the power of biomedical data to benefit the NIH. The impact of BD2K is felt across all biomedical data science research, supporting exciting science that would not be possible by traditional means. Take for example the Center for Predictive Phenotyping, which is mining electronic health records (EHRs) at scale. It combines computing power with the vast information captured in EHRs, such as ICD-9 codes associated with medical conditions, to reach the point of undertaking medical interventions for patients. Another example is a Stanford project on mobility data, which uses mobility information, body mass index, GPS coordinates, and more for research into gait rehabilitation and weight management.

What does BD2K provide for these projects?

BD2K provides the funding as well as an environment that supports big data research. This includes: – addressing […]
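The kind of EHR mining described for the Center for Predictive Phenotyping can be illustrated with a minimal sketch: selecting a patient cohort by diagnosis code. The field names, record layout, and ICD-9 codes below are hypothetical, chosen only for illustration, and are not taken from the actual project.

```python
# Hypothetical sketch of cohort selection from EHR rows by ICD-9 code.
# In practice this would run over millions of records in a clinical data warehouse.
records = [
    {"patient_id": 1, "icd9": "250.00"},  # diabetes mellitus (illustrative code)
    {"patient_id": 2, "icd9": "401.9"},   # essential hypertension (illustrative code)
    {"patient_id": 3, "icd9": "250.00"},
]

def cohort_by_icd9(rows, code):
    """Return the patient IDs whose record carries the given ICD-9 code."""
    return [r["patient_id"] for r in rows if r["icd9"] == code]

print(cohort_by_icd9(records, "250.00"))  # → [1, 3]
```

At scale, the same idea is expressed as a query against indexed diagnosis tables rather than a list comprehension, but the logic — group patients by shared codes, then study outcomes within the group — is the same.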

Get the most out of your impact data

It’s time to put our impact data to work to get a better understanding of the value, use and re-use of research. Published under a CC BY 3.0 license. Originally published by Liz Allen, PhD, on the London School of Economics and Political Science Blog.

If published articles and research data are subject to open access and sharing mandates, why not also the data on the impact-related activity of research outputs? Liz Allen argues that the curation of an open ‘impact genome project’ could go a long way towards remedying our limited understanding of impact. Of course there would be many variants in the type of impact ‘sequenced’, but the analysis of ‘big data’ on impact could facilitate the development of meaningful indicators of the value, use and re-use of research. We know that research impact takes many forms, has many dimensions and is not static, as knowledge evolves and the opportunities to do something with that knowledge expand. Over the last decade, research institutions and funding agencies have become good at capturing, counting and describing the outputs emerging from research. A lot of time and money has been invested by funding agencies to implement grant reporting platforms that capture the myriad outputs and products of research (e.g. […]

A new multi-centralised cryptocurrency: Coinami

Last year we interviewed Can Alkan, an Assistant Professor in the Department of Computer Engineering at Bilkent University, about Biopeer, a data-sharing tool for small- to medium-scale collaborative sequencing efforts. Today we are talking to Can about his new project, Coinami.

Please tell us more about Coinami. How did it start?

Coinami is essentially a volunteer grid-computing platform that generates a new multi-centralised cryptocurrency, using high-throughput sequencing (HTS) read mapping as its proof-of-work. After Bitcoin gained popularity, many different currencies, called “altcoins”, emerged around the same structure: decentralised, secure transactions recorded in a public ledger called the blockchain. All cryptocurrencies are basically composed of two parts: mining, which generates new coins (i.e. “printing banknotes”), and transactions, which is spending and receiving coins. To ensure integrity and prevent “overprinting”, a computationally intensive task, called proof-of-work, has to be performed. Different cryptocurrency systems use different proof-of-work schemes, but in all current systems, Bitcoin included, the proof-of-work serves no practical purpose other than maintaining the currency. Here, we suggest a different approach to proof-of-work: instead of impractical calculations, miners should use their computational power for scientific computing. The idea is very similar […]
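The “computationally intensive task with no practical purpose” that Can describes can be sketched in a few lines. Below is a generic, Bitcoin-style hash puzzle, a toy illustration of conventional proof-of-work, not Coinami’s actual read-mapping scheme: the miner searches for a nonce whose hash meets a difficulty target, work that is hard to produce but trivial for anyone to verify.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce) begins with
    `difficulty` hex zeros -- the classic hash-puzzle proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # finding this took work; checking it is one hash
        nonce += 1

# Verification is cheap: recompute one hash and compare against the target.
nonce = mine("example transactions", difficulty=3)
print(hashlib.sha256(f"example transactions{nonce}".encode()).hexdigest())
```

Raising `difficulty` by one multiplies the expected search time by 16, while verification stays a single hash. Coinami’s insight is to replace this deliberately useless search with HTS read mapping, so the expended computation also produces scientifically useful results.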