Several people have asked for an electronic copy of my poster, Maximizing Utility of Genome Sequence Data (pdf) (posted on the Internet Archive). As is hopefully clear from the poster, in addition to high-throughput sequencing, we now have high-throughput sequence analysis.

After listening to Lynda Chin's talk on the first evening of the conference, which described the arduous process of working out the function in the cell of a single putative cancer driver mutation, one can't help but feel we are just kicking the can down the road here. Alleviating one bottleneck simply creates another. This was the case with the PC: as CPUs became faster and faster, other components, e.g., memory, network, and disk I/O, became the bottlenecks. It has also been the case with high-throughput production sequencing: you buy more sequencers, then you need more disk, then more CPUs to analyze all the data, and then a network upgrade to move all the data around.

Now in genomics we have a situation where we can generate lots of data and lots of variants that may play a role in cancer. How will we determine the function of all these variants? What technologies are on the horizon that will enable high-throughput functional genomics?