We have a dimensional data mart running on SQL Server 2012. The larger fact tables range from 7 to 17 billion rows. No fancy data types, no LOB data types, and all the join keys are ints. There were already well-tuned conventional covering indexes on the fact tables. Clustered indexes and column statistics are kept up to date. Non-clustered indexes are disabled and rebuilt with each daily data load. Queries always have some sort of date range in the WHERE clause. In implementing columnstore indexes, we saw an average performance improvement of around 7:1. We were very happy with that, considering the level of tuning already in place.
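For anyone wanting to try the same setup, here’s a minimal sketch of the pattern we follow (table and column names are invented for illustration). On SQL Server 2012 columnstore indexes are non-clustered and make the table read-only, so like our other non-clustered indexes they get disabled before the daily load and rebuilt afterwards:

```sql
-- Hypothetical fact table and columns; adjust to your schema.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (DateKey, ProductKey, StoreKey, Quantity, SalesAmount);

-- Daily load pattern: a 2012 columnstore makes the table read-only,
-- so disable the index before the load and rebuild it afterwards.
ALTER INDEX NCCI_FactSales ON dbo.FactSales DISABLE;
-- ... daily fact load runs here ...
ALTER INDEX NCCI_FactSales ON dbo.FactSales REBUILD;
```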
We did find one query where performance actually worsened under the columnstore index. I dug into the query and found that it used a view that performed aggregations. It’s the classic anti-pattern of aggregating in an intermediate result set without using a temp table: there’s a later join against those aggregates, and the query optimizer can’t produce a useful cardinality estimate for them. The view met a legitimate business requirement, so we created a regular table that materializes the view and added a step to the nightly ETL process to load it. After that we pointed the query at the new table and tested both a conventional rowstore covering index and a columnstore index. Again, roughly a 7:1 performance improvement in favor of columnstore.
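The ETL step itself is nothing exotic. A minimal sketch of the idea, with all object names invented for illustration (vSalesAgg standing in for the original aggregating view, FactSalesAgg for the table that materializes it):

```sql
-- Rebuild the materialized aggregate table each night.
TRUNCATE TABLE dbo.FactSalesAgg;

INSERT INTO dbo.FactSalesAgg (DateKey, ProductKey, TotalQuantity, TotalSales)
SELECT DateKey, ProductKey, TotalQuantity, TotalSales
FROM dbo.vSalesAgg;

-- A real table carries its own statistics, so the later join gets a
-- proper cardinality estimate; refresh them after the load.
UPDATE STATISTICS dbo.FactSalesAgg;
```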