My 2 cents on SQL Server 2014

Yesterday during the opening keynote of TechEd North America Microsoft released the first official details about the next version of SQL Server: SQL Server 2014. There are huge improvements in the areas of High Availability, Performance, and Scalability, which I want to concentrate on in this weblog posting.

Disclaimer: This blog posting is based on the first official information that Microsoft released yesterday (June 3, 2013) about SQL Server 2014. It is not a complete feature reference for SQL Server 2014, nor can it claim to be 100% accurate, because very little information is currently available.

SQL Server AlwaysOn

With SQL Server 2012 Microsoft released the first implementation of AlwaysOn, essentially a completely new High Availability technology based on the principles of Database Mirroring. With the SQL Server 2014 release Microsoft now supports up to 8 different Secondaries, which gives you huge scale-out possibilities compared to the 4 Secondaries that are currently possible with SQL Server 2012. The question for me is: who will really pay for it? I’m working with a lot of different customers who are currently not able to migrate to AlwaysOn because of the additional licensing costs (AlwaysOn is only part of the Enterprise Edition of SQL Server). Scaling out read-only workloads to Secondaries is a perfect concept, but almost none of my customers will actually pay for that nice feature (as soon as you use a Secondary, you have to fully license it).
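
For reference, read-only offloading is configured per replica; a minimal sketch using the AlwaysOn DDL that already exists in SQL Server 2012 (the availability group and server names are placeholders):

```sql
-- Allow read-intent connections on a secondary replica:
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

Clients then reach the readable Secondary by adding ApplicationIntent=ReadOnly to their connection string, and that is exactly the moment when the licensing of the Secondary kicks in.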

Online Database Operations

For some SQL Server based shops, Online Database Operations are simply a baseline requirement. Unfortunately, Microsoft hasn’t made our lives particularly easy here in the past. SQL Server 2012 already made a huge step forward in this area. With SQL Server 2014 Microsoft takes an additional step and provides Online Index Operations at the Partition Level, which will be a very nice feature for VLDBs (Very Large Databases).
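
Based on what was announced, I would expect the existing partition-level rebuild syntax to simply accept ONLINE = ON; a hedged sketch (table and index names are made up):

```sql
-- Rebuild only partition 3 of the index, online; until now
-- ONLINE = ON was only possible for a rebuild of the whole index:
ALTER INDEX ix_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 3
WITH (ONLINE = ON);
```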

Backups to Azure

The first time I read about Backups to Azure, I thought Microsoft was kidding us. No, they really did it: you can now store your backup files directly in Azure. In my opinion this is just a crazy idea. It introduces a lot of latency overhead, and you no longer have direct access to your backup files. Just imagine your ISP (Internet Service Provider) goes offline, your connection to the cloud is cut off, and in the meantime your database crashes and you have to restore it from backup files that are currently not accessible. Just a horror story, nothing more. Not recommended from my side, because you no longer have direct control over your RTO (Recovery Time Objective).
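
If you want to try it anyway, it will presumably look like the backup-to-URL functionality that already shipped with SQL Server 2012 SP1 CU2; the storage account, container, and credential names below are placeholders:

```sql
-- The credential stores the storage account name and its access key:
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up straight into a blob container:
BACKUP DATABASE AdventureWorks
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
```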

Smart Backups

With Smart Backups Microsoft regularly takes backups of your databases or of your whole SQL Server instance to Windows Azure, whenever it is necessary. Yes, you read that correctly: whenever necessary. Normally when I’m doing consulting and training, I say that your backup strategy is only as good as your recovery strategy. This means you should define how long it may take until your database is back online after a crash/disaster, and derive your backup strategy from that period of time. With Smart Backups you are doing it exactly the other way around. Also not really recommended.
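
Microsoft hasn’t published the management interface yet, so the following is a purely hypothetical illustration of the idea; the procedure name and its parameters are invented:

```sql
-- Purely hypothetical, no such procedure has been documented yet.
-- The point is that you only state a retention window, and SQL Server
-- decides on its own when full and log backups are taken:
EXEC msdb.dbo.sp_enable_smart_backup
     @database_name   = N'AdventureWorks',
     @credential_name = N'AzureBackupCredential',
     @retention_days  = 30;
```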

SQL Server with Windows Azure Storage

This is a really funny feature which ensures that I can do a lot of consulting engagements over the next few years, which indirectly drives improvements to the realism of my Boeing 737-800 flight simulator ;-). With SQL Server 2014 you are now finally able to store your data and transaction log files directly in Windows Azure Storage. Very cool for performance, because you just introduce a huge latency overhead for every file in your SQL Server installation. Almost all of my customers already struggle with huge latency times to their locally attached SANs; imagine what happens when you go over the wire through the internet to the Cloud to access your data files, or to write out transaction log records for committing your in-flight transactions. I hope sys.dm_io_virtual_file_stats uses BIGINT data types for the stall time columns; I have to check that afterwards 😉
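
For the record, the stall columns are already BIGINT. A quick query to watch your per-file latencies, today against the SAN and tomorrow against the Cloud:

```sql
-- Accumulated IO stall times per database file since instance start:
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.io_stall          -- BIGINT, so no overflow even in the Cloud ;-)
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall DESC;
```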

Hekaton

With Hekaton SQL Server 2014 provides us with a true Multi Version Concurrency Control (MVCC) system, integrated directly into the good old traditional relational engine of SQL Server. The main goal of MVCC is to avoid traditional locking and latching in the main memory of the SQL Server process. With Hekaton you can completely avoid locking when accessing and changing data. Yes, even the Exclusive Locks are completely gone when changing data. Really awesome work!

In addition, Hekaton completely avoids the traditional latching architecture that relational databases must use to synchronize concurrent resource access in main memory. Because of that big shift, parts of the relational and storage engine of SQL Server were completely rewritten to take advantage of these new approaches for dealing with large-scale OLTP workloads. Hekaton can be interesting, but only once you actually hit the limits of SQL Server. In my opinion customers first hit limits in SQL Server because of a bad indexing strategy, or because the transaction log can’t cope with the current amount of transaction log records (just think about Page Splits). Over the years I have seen only a few customers who really hit the limits of SQL Server because of Latch Contention. If you are hitting those limits, Hekaton can be interesting for you. But on the other hand, there is also a long list of restrictions that you have to keep in mind:

  • Row sizes can’t be larger than 8060 bytes (incl. Variable Length Columns!)
  • LOB data types are not supported
  • No Foreign Key Constraints

When I look into database designs today, most of them don’t qualify for Hekaton because of the restrictions listed above. Just imagine you have a table with 2 VARCHAR(4500) columns: in sum you are over the 8060-byte limit, and you can’t use Hekaton for that specific table without first reengineering it.
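
Based on the early CTP information, the DDL for a qualifying table might look roughly like this; syntax and names are provisional, and the database first needs a special MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Provisional sketch of a memory-optimized table. Note that the row
-- must fit into the 8060-byte limit, LOB types are not allowed,
-- and there are no Foreign Key Constraints:
CREATE TABLE dbo.ShoppingCart
(
    CartID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName VARCHAR(100) NOT NULL,
    CreatedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```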

ColumnStore Indexes

Yes, they did it in SQL Server 2014: Updateable Clustered ColumnStore Indexes! Nothing more to say, just WELL DONE! 🙂
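
The announced DDL should be as simple as this (the table name is a placeholder):

```sql
-- Turn the whole table into a, now updateable, ColumnStore:
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales;
```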

Statistics Improvements

Statistics are now maintained at the partition level, but they are still created at the table level. Maintaining at the partition level means that once 20% of the data in the relevant column has changed within a partition, an update of the underlying statistics object is triggered. Also very nice, but I would have preferred real Partition Level Statistics…
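
From what has been shown so far, I would expect something along these lines; the INCREMENTAL option and all object names are provisional:

```sql
-- Statistics object that tracks modifications per partition:
CREATE STATISTICS stat_Sales_OrderDate
ON dbo.Sales (OrderDate)
WITH INCREMENTAL = ON;

-- Refresh only the partitions that actually changed:
UPDATE STATISTICS dbo.Sales (stat_Sales_OrderDate)
WITH RESAMPLE ON PARTITIONS (3, 4);
```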

Resource Governor

Up to SQL Server 2012 we were only able to throttle CPU and Memory (Query Memory). With SQL Server 2014 they finally added IO Throttling: you can now specify the maximum IOPS for a specific Resource Pool. Also a very nice addition 🙂
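
A hedged sketch of how configuring it might look; the IOPS option is based on the announcement, the pool name is made up:

```sql
-- Cap the physical IO a pool may issue per disk volume:
CREATE RESOURCE POOL ReportingPool
WITH (MAX_IOPS_PER_VOLUME = 500);

ALTER RESOURCE GOVERNOR RECONFIGURE;
```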

Conclusion

As you can see there are a lot of new enhancements in the areas of performance, scalability, and high availability in SQL Server 2014. But for some of these enhancements you really have to think about whether they make sense for your environment, and whether you can live with the side effects they introduce, like higher latency times. All in all I’m really looking forward to the first public CTP version to try out all that nice stuff.

Call to action: What do you think about the various enhancements in SQL Server 2014, and do you think you will migrate to it after the RTM release? Please feel free to leave a comment with your feelings/concerns about it…

Thanks for reading

-Klaus

5 thoughts on “My 2 cents on SQL Server 2014”

    1. Hello Allan,

      Thanks for the correction, it’s nice to know that they have finally done the renaming 🙂

      -Klaus

  1. Hey Klaus, great roundup, and I concur with much of your sentiment. For me, probably the biggest error is in the name. We have known for quite some time that it was never going to be called R2, but I think naming the product 2014 will end up killing off adoption of 2012. I know of very few companies who have adopted it yet, and those companies that were currently planning it are dropping those plans (from my observations). With 2008/R2 (from my experience), companies didn’t really seem to distinguish between the two versions when adopting: one version was used for completely new independent projects and the other to keep new servers (for scale-out deployments) in sync. In real terms there is mostly no big difference between then and now, but psychologically, to the people that control the budgets, it is huge.

  2. Hey,

    Do you know if SQL Server 2014 supports multi-column partitioning? Just wondering.

    Thanks
    John

