
How to Sync SQL Server Data with Azure Search


A common request we receive from Azure Search customers is how to get data from their SQL Server databases into Azure Search.  This may be because they wish to offload the full-text workload from their on-premises SQL Server database to the cloud, or because they are simply looking to make use of Azure Search’s advanced search capabilities.

In many circumstances, data changes need to be reflected in the search engine at close to real-time levels.  This can be challenging because tracking changed rows can be computationally expensive if not done properly.  In this blog post, I want to explain how to use a SQL Server capability called Integrated Change Tracking to efficiently sync data changes from SQL Server to Azure Search. Change tracking is an internal capability of SQL Server that records the changes (inserts, updates, and deletes) made to user tables.  It is also an incredibly efficient method of tracking and finding changes, with very low impact on the performance of your database.

In this blog post, we will make use of the sample found on CodePlex.

 

Requirements

This tutorial assumes you have access to:

  • SQL Server 2008 or higher
    • NOTE:  If you are using the default database configuration and SQL Server Management Studio, connect to the server instance as (LocalDB)\v11.0
  • An Azure Search Service (learn more here)
  • Visual Studio 2012 or higher
  • Source Code for sample

 

Configuring the SQL Server to Azure Search Sample

At this point you should have downloaded the sample project and opened it up in Visual Studio.  In the sample Visual Studio project, you will need to add the connection information for your Azure Search service.   Please also make note of the connection information for your SQL Server database in case it needs to be modified.

Open up app.config and change the SearchServiceName and SearchServiceApiKey values to reflect your Azure Search service and Azure Search Service API Key, which can be found in the Azure Portal.

 

Adding Change Tracking to Your SQL Server Database

When you launch this application, a new database called “AzureSearchSyncTest” is created along with a table called Products, which is populated with data.  Once this table is created, Change Tracking for the table is enabled.

OPTIONAL: If you wish to try this in your own SQL Server database, there are two scripts in the \sql folder that you can use. Or simply change the connection information in the app.config file to point to your SQL Server instance.

One file that is worth reviewing is the add_change_tracking.sql file located in the \sql folder. Note that only two SQL commands are required to turn on Integrated Change Tracking for this table:

ALTER DATABASE AzureSearchSyncTest SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE Products ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);

The first command turns on change tracking in the database and tells SQL Server to retain change information for 2 days, after which the change data is deleted to avoid consuming too much database space.  The second command tells SQL Server which table to track.  For this demo, we have told SQL Server to track only which rows were changed, not column-specific updates.

An alternative approach is to enable change tracking on columns. You should do this if it makes more sense to send only the columns that have changed rather than the entire changed row.  Keep in mind that this increases the amount of storage allocated for change tracking; however, it might be worth it if you make a lot of data changes and the changes are usually limited to a few columns in a row.
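As a rough sketch, enabling column-level tracking and then testing whether a specific column changed could look like the following; the @lastVersion variable is assumed to hold the Change Version saved from your previous sync:

ALTER TABLE Products DISABLE CHANGE_TRACKING;
ALTER TABLE Products ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Returns 1 if the Color column changed for the row, 0 otherwise
SELECT CT.ProductID,
       CHANGE_TRACKING_IS_COLUMN_IN_MASK(
           COLUMNPROPERTY(OBJECT_ID('Products'), 'Color', 'ColumnId'),
           CT.SYS_CHANGE_COLUMNS) AS ColorChanged
FROM CHANGETABLE(CHANGES Products, @lastVersion) AS CT;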

 

How it Works

Let’s take a closer look at this console application that does the synchronization from SQL Server to Azure Search.  In the previous step, you would have opened the application in Visual Studio.  If you have not done so, please open it now.

 

Detecting Changes

The ability to detect the changes efficiently in SQL Server is key to this application.  Open the Program.cs file and move to the Main(string[] args) function.  The first line we want to make note of is:

_lastVersion = -1;

SQL Server Integrated Change Tracking uses a Change Version which gets incremented every time a change is made to one of the tracked tables.  Using this Change Version, you can ask SQL Server to send back the changes that have occurred since a specific Change Version.  This _lastVersion variable is used to track the Change Version that was used when the previous sync successfully completed.  In this case, we are running the app for the first time, so we set it to -1 which will tell the application to sync all of the data for the first synchronization.

An optional enhancement you could make to this application is to store this _lastVersion value somewhere and load it when the application runs.  That way you can pick up where you left off even if you close the application.
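One possible sketch of this enhancement in T-SQL, using a hypothetical SyncState table to persist the version between runs:

CREATE TABLE SyncState
(
    TableName sysname NOT NULL PRIMARY KEY,
    LastVersion bigint NOT NULL
);

-- After each successful sync, save the Change Version that was used
MERGE SyncState AS target
USING (SELECT 'Products' AS TableName, @lastVersion AS LastVersion) AS source
    ON target.TableName = source.TableName
WHEN MATCHED THEN
    UPDATE SET LastVersion = source.LastVersion
WHEN NOT MATCHED THEN
    INSERT (TableName, LastVersion) VALUES (source.TableName, source.LastVersion);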

The next lines of this function initialize change tracking in the SQL Server database and create the Azure Search index that will receive the data.

If we move to the while (true) loop we can see that this application will check for changes every 5 seconds [Thread.Sleep(5000)].

Within this loop we can see that a ChangeEnumeratorSql is created with the query that will be used to do a full upload of the data from SQL Server to Azure Search the first time through.

Next, changeEnumerator.ComputeChangeSet(_lastVersion) is called.  This call does quite a bit of work which you can see if you open the ChangeEnumerator.cs file and move to the ComputeChangeSet(Int64 lastVersion) function.

First, it calls GetChangeSetVersion(con), which will ask SQL Server what the most recent Change Version is.  We will remember this, because the next time we run through this, we want to use this value to find any changes that have happened since this point.
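SQL Server exposes this value through the CHANGE_TRACKING_CURRENT_VERSION function, which you can try yourself against the sample database:

SELECT CHANGE_TRACKING_CURRENT_VERSION() AS CurrentVersion;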

Next, we see a call to EnumerateUpdatedDocuments(con, lastVersion), which gathers all of the data changes into an IEnumerable<Dictionary>.  If you drill into this EnumerateUpdatedDocuments function, you will notice that the first time through (where _lastVersion is -1) it does a full select of the data.  If _lastVersion is > -1, it gets only the changes.  You can see that CHANGETABLE is key to being able to get the changes; it is maintained internally by SQL Server.  You might also notice the following lines, which say we only want to get the Inserts and Updates that have happened to the Products table:

sqlCmd += "and (CT.SYS_CHANGE_OPERATION = 'U'  ";

sqlCmd += "or CT.SYS_CHANGE_OPERATION = 'I') ";

If you want to add the ability to sync deletes, you can query this table in a similar way, requesting CT.SYS_CHANGE_OPERATION = 'D'.
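For reference, a minimal sketch of such a query is shown below; the Name and Color columns are used for illustration, and for deleted rows the non-key columns come back as NULL since the row no longer exists in Products:

DECLARE @lastVersion bigint = 0;  -- the Change Version saved from the previous sync

SELECT CT.ProductID,
       CT.SYS_CHANGE_OPERATION,   -- 'I' = insert, 'U' = update, 'D' = delete
       P.Name,
       P.Color
FROM CHANGETABLE(CHANGES Products, @lastVersion) AS CT
LEFT OUTER JOIN Products AS P
    ON P.ProductID = CT.ProductID;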

SQL Azure does not currently support Integrated Change Tracking, so you would need to alter this application to implement a different change tracking method (e.g., rowversions + tombstones).

 

Pushing changes into Azure Search

Now that we have reviewed the method used for getting changes, let’s go back to the while (true) loop in the Main(string[] args) function of Program.cs.  Picking up where we left off, we can see a call to ApplyChanges(changes), which takes the ChangeSet containing the data to be uploaded to Azure Search.  If you drill into this function, you will see that it uploads the changes in batches of 999.  It also uses an action called “mergeOrUpload”.  This tells Azure Search that the data being received should be inserted if the document key does not exist, and that the values in the corresponding document should be updated if the key does exist.  A key field is used to uniquely identify a document in Azure Search; in our case, the productID field is used as our key field.

One optional enhancement: if you are uploading new rows and you are sure they are new, the upload action should be faster than merge or mergeOrUpload.

 

Scheduling of Sync

This is a very simple console application that uses a while loop to check for changes every 5 seconds.  In a production environment on-premises, you would want to consider making this a Windows Service or implementing some sort of scheduled job.  If you are running this in the cloud (say, against a SQL Server VM), a WebJob or a web role is probably the best way to implement this functionality.

 

Running the Application to Upload Data Changes from SQL Server to Azure Search

At this point we are ready to launch the application.  You may wish to add a breakpoint in the Main function located within the Program.cs file so that you can step through the application.  You should see text in the Console as follows:

Sync Processing Started…

Creating SQL Server database with Products table…

Enabling change tracking for Products table…

Uploading 294 changes…

Sync Complete, waiting 5 seconds…

Sync Complete, waiting 5 seconds…

Notice how the first execution of the application uploaded all 294 rows.  At intervals of 5 seconds, the application will check for changes and then upload them.

Let’s make a change to one of the rows and see what happens.  While keeping the application running, connect to the SQL Server database AzureSearchSyncTest and execute:

UPDATE [Products] set Color = 'Green' where ProductID = 680

Go back to the console window and you should see the following message:

Uploading 1 changes…

The application found the one update and uploaded it to Azure Search.  This will also work if you insert a new row.  Please note, Integrated Change Tracking will also track row deletions; however, that has not been added to this sample.
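For example, a hypothetical insert such as the following should show up in the console as one uploaded change (the column list here is illustrative; adjust it to the actual Products schema):

INSERT INTO [Products] (ProductID, Name, Color)
VALUES (9999, 'Test Product', 'Blue');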

Verify Data in Azure Search Index

Now that we have data synchronizing to your Azure Search index, let’s query the index and make sure all of the data is there.  To do this, we will use Fiddler.
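If you have not queried Azure Search from Fiddler before, a query is simply an HTTP GET request of the form https://<your-service>.search.windows.net/indexes/<your-index>/docs?search=*&api-version=<version>, with your query api-key supplied in a request header; the service name, index name, and API version shown here are placeholders for your own values.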

[Screenshot: querying the index in Fiddler]

In the left sidebar you should see a row as follows:

[Screenshot: the request row in Fiddler]

Double click on this row and you should see a window open displaying the JSON result for this row.

Notice that this row exists and that the Color value of Green has been uploaded.

[Screenshot: the JSON result in Fiddler]

At this point you have synchronization running between SQL Server and Azure Search.

Optional Enhancements to Sample

Here are some enhancements you may wish to make to this sample:

  1. Add support for uploading deletes to Azure Search.  For more details on how to execute deletes in Azure Search, please visit the Azure Search API Docs for this topic.
  2. Every time the application starts, the _lastVersion value is set to -1.  This tells the application to execute a full upload of data to Azure Search.  You might want to store the last successful Change Version in your database and retrieve it when the application starts so that the application can pick up where it left off.
  3. Consider modifying this console application into a Windows Service.

 

Please keep the feedback coming.

Liam Cavanagh can be contacted at his blog or through Twitter.


Running Critical Application Workloads on Microsoft Azure D-Series Virtual Machines


On the Azure Customer Advisory Team (AzureCAT), we’ve been testing the performance of the D-Series, one of the latest generations of hardware now being introduced into our public cloud. What’s especially cool is how much these VMs can boost performance for critical workload applications compared to the earlier VM series. This is extremely important for solutions based on Microsoft SQL Server, as we described in a previous white paper. Our new findings extend those tests.

Customers told us that they wanted a straightforward way to transition their applications from traditional data centers to Microsoft Azure virtual machines (VMs), but performance is key with their critical workloads, and they weren’t always getting it.

The D-Series offers two key performance-related features, neither of which requires you to make any particular application changes:

  • Local storage (temporary) based on solid-state drives (SSDs)
  • Higher number of attached data disks (up to 32 for D14 VMs)

In our performance tests, we used these new features to tune applications and saw gains in performance. For example:

  • Placing the tempdb files on local SSD storage on a D13 VM gave approximately 4.5 times the throughput of an A7 VM with attached data disks, at a fraction of the previous latency for the same SQL Server-generated IO patterns.
  • D14 VMs with 32 attached disks can provide up to 85 percent more write IOPS and bandwidth compared to an A7 VM with 16 attached disks.

We documented four scenarios in which the D-Series made a significant difference for our customers in the white paper, Running Critical Application Workloads on Microsoft Azure Virtual Machine.

It describes:

  • How persistent disk latency can directly impact application response times.
  • How limited throughput from persistent disks can impact application performance when SQL Server tempdb use is significant.
  • How to use SSD-based fast storage in the application tier to speed temporary file processing.
  • How to reduce compile and startup time for a large ASP.NET web application by moving the %temp% folder to a temporary drive in a D-Series VM.

In essence, the new D-Series VMs in Azure can help run performance-critical workloads on both the data tier and the application tier, offering better overall performance for CPU, storage, and networking, with a price/performance ratio that compares favorably to other VM series.

Certain application scenarios, such as OLTP database servers, benefit mainly from local SSD-based temporary storage for extending buffer pools and hosting temporary operations. Application servers benefit from faster, lower-latency local storage, and also from the increased CPU performance provided by this new generation of VMs.
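As a sketch of the tempdb placement described above, relocating tempdb to the local SSD (the D: drive on D-Series VMs) is a simple file move; the folder path here is illustrative, and because the temporary drive can be reinitialized, you need to ensure the target folder exists before the SQL Server service starts:

-- Point tempdb at the local SSD; the change takes effect at the next SQL Server restart
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');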

Download the white paper for details and suggestions for improving application performance. And check out our performance expectations article for the D-Series.

Migration cookbook now available for the latest Azure SQL Database Update (V12)


We are delighted to announce the availability of a migration cookbook for the latest Azure SQL Database Update (V12)! The cookbook describes various approaches you can use to migrate an on-premises SQL Server database to the latest Azure SQL Database Update (V12). The cookbook can be viewed and downloaded here.

This latest Azure SQL Database Update (V12) brings near-complete SQL Server engine compatibility, even more Premium performance, and marks the first step in delivering the next generation of the SQL Database service. Improved T-SQL compatibility with SQL Server 2014 makes it easier to migrate most on-premises databases to the latest Azure SQL Database Update (V12) with a single click, using tools that you already use today (e.g., SQL Server Management Studio). The cookbook has details on using these updated tools and other approaches based on the features you use in your on-premises database.

In order to use the recipes in the migration cookbook, please make sure you’ve downloaded and installed the latest tools from the links below:

SQL Server Management Studio 2014 Cumulative Update 5

SQL Server Database Tooling Preview for the latest Azure SQL Database Update V12

SQL Database Migration Wizard

Download the migration cookbook and updated tools today and let us know what you think!

Automated Everything with SQL Server on IaaS VMs


Today, we are excited to announce support for automated backup and automated patching, available directly in the portal for SQL Server Azure Virtual Machines. Both of these features are built with the new SQL Server IaaS Agent, an Azure VM extension that combines the power and management ease of SQL Server with the agility offered by extensions on Azure Virtual Machines, enabling single-click configuration and management of backup and patching.

Using automated backup on SQL Virtual Machines deployed in Azure, you can configure a scheduled backup on SQL Server 2014 Enterprise. With a few clicks in the portal, you can control the retention period, the storage account for the backup, and the security/encryption policies of the database.

[Screenshot: automated backup configuration for a SQL Server VM in the portal]

In addition to automated backup, we are also announcing automated patching for SQL Server VMs. This new solution allows you to define the maintenance window directly from the portal. The SQL Server IaaS Agent will configure Windows running on your Virtual Machine with your preferred maintenance settings, including the day for maintenance, the start time of the window and the proposed duration.

[Screenshot: automated patching configuration in the portal]

This is an exciting set of new capabilities that continues to build out the integrated experience of running SQL Server on fast scale-out and scale-up Azure Virtual Machines. We will continue to focus on these integrated experiences over the next few months.

For more details, check out the SQL Server blog post here. Go ahead, try these features out for yourself at https://portal.azure.com. Scale-out a bit.

Application-Aware Availability Solutions with Azure Site Recovery


Azure Site Recovery enables customers to deploy application-aware, on-demand availability solutions. Whether you run Windows Server or Linux based applications, Microsoft first-party enterprise applications, or offerings from other vendors, you can use Azure Site Recovery to enable disaster recovery, deploy on-demand dev/test environments, or migrate workloads to Azure. ASR replication technologies can protect an entire virtual machine with all of its disks and data, which makes ASR compatible with any application running on the machine.

Microsoft has deep expertise and experience in developing best-in-class enterprise applications such as SharePoint, Exchange, Dynamics, and SQL Server. Over the last few months we have worked in close partnership with these application groups at Microsoft to enable you to deploy customized disaster recovery and availability solutions with ASR. Azure Site Recovery solutions have been tested and are now supported for SharePoint, Dynamics AX, Exchange 2013, Remote Desktop Services, SQL Server, IIS applications, and the System Center family, such as Operations Manager.  In addition to Microsoft applications, we have also extensively tested and now support third-party applications like SAP, as well as applications running on different distributions of Linux.

Azure Site Recovery features have been designed with application level protection/recovery in mind:

  • Near-sync replication with RPO as low as 30 seconds, which meets the needs of most critical applications.
  • App-consistent snapshots for single or N-tier applications.
  • Flexibility to choose and integrate with app-level replication. Leverage best-in-class application-level offerings like AD replication, SQL Always On, and Exchange Database Availability Groups when applicable, and use ASR’s in-built replication for other tiers.
  • Extensible recovery plans to model an entire application and organize application-aware recovery/migration workflows. Trigger single-click, end-to-end application recovery when needed.
  • Advanced network management in ASR and Azure. Automate all networking configurations specific to your application: reserve IP addresses, configure load balancers, or use Traffic Manager to achieve low-RTO switch-over.
  • A rich automation library that provides production-ready, application-specific scripts. Download them and integrate them into your ASR-based solutions.

 

If you are managing a SharePoint or Dynamics AX deployment, you can use ASR to eliminate the cost and overhead of maintaining a stand-by deployment for DR or DevTest. ASR can replicate the entire farm and bring it up on-demand for disaster recovery or for creating a production-like test copy. Any new application or configuration changes deployed on the primary farm will be automatically replicated within minutes so you do not have to invest in complex processes to keep the secondary farm up to date.

SQL Server is the foundation of many critical enterprise applications. With ASR, you can easily replicate and recover SQL Server to another site or to Azure. ASR can also integrate with best-in-class native disaster recovery solutions such as SQL Always On Availability Groups and manage their failover operations as part of ASR recovery plans.

For Exchange servers, DAGs provide a best-in-class disaster recovery solution and are the recommended deployment option. ASR recovery plans can be integrated with DAGs to orchestrate DAG failovers across sites via scripted actions. For small Exchange 2013 deployments, such as a single server or non-clustered servers, customers can use Azure Site Recovery to protect the servers to a secondary on-premises site.

Windows Server’s Remote Desktop Services provides technologies that enable users to access session-based desktops, virtual machine-based desktops, or applications. With ASR you can deploy a disaster recovery or cloud-bursting solution for your RDS deployments and keep high-fidelity desktop experiences always available to your customers.

We will take a deep-dive look at these application-specific solutions and supported configurations at the Ignite conference. We recommend you check out ASR-related sessions at Ignite this week. We will also share detailed technical guidance via our documentation sites in the coming weeks to help you easily deploy these solutions into your environments.  This is an ongoing journey, and we will keep investing in covering more applications and lighting up richer application-aware experiences with Azure Site Recovery.

Check out the recording from our Ignite session on best practices for deploying disaster recovery with ASR.

Azure IT Workload: SharePoint Server 2013 with SQL Server AlwaysOn Availability Groups


The Azure IT workload for SharePoint with SQL Server AlwaysOn Availability Groups has been published. Although Microsoft recommends using SharePoint Online with Office 365 for SharePoint sites in the cloud, if you need your own SharePoint 2013 Server farm, you can deploy it in an Azure cross-premises virtual network.

SharePoint with SQL Server AlwaysOn Availability Groups in Azure guides you through the end-to-end process to:

  • Understand the value of the SharePoint farm in Azure IT workload.
  • Create a proof-of-concept configuration or a dev/test environment for SharePoint application development.
  • Configure the production workload in a cross-premises virtual network.

The result of this process is a functional, high-availability intranet SharePoint farm that is accessible to on-premises users.

[Diagram: SharePoint farm in a cross-premises virtual network connected by a site-to-site VPN]

The end-to-end configuration of the production workload consists of five phases, which are designed to align with IT departments or typical areas of expertise. For example:

  • Phase 1 can be done by networking infrastructure staff.
  • Phase 2 can be done by identity management staff.
  • Phases 3 and 5 can be done by database administrators.
  • Phase 4 can be done by SharePoint administrators.

To make the Azure configuration foolproof, Phases 1 and 2 contain configuration tables for you to fill out with all of the required settings. For example, here is Table V for the cross-premises virtual network settings from Phase 1.

[Image: Table V – cross-premises virtual network settings]

To make the configuration of the Azure elements as fast as possible, the phases use PowerShell command blocks and prompt you to insert the configuration table settings as variables. Here is an example of the PowerShell command block for creating the first replica domain controller.

# Create the first domain controller
$vmName="<Table M – Item 1 - Virtual machine name column>"
$vmSize="<Table M – Item 1 - Minimum size column, specify one: Small, Medium, Large, ExtraLarge, A5, A6, A7, A8, A9>"
$availSet="<Table A – Item 1 – Availability set name column>"

# Select the most recent Windows Server 2012 R2 Datacenter image
$image= Get-AzureVMImage | where { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } | sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1
$vm1=New-AzureVMConfig -Name $vmName -InstanceSize $vmSize -ImageName $image -AvailabilitySetName $availSet

# Prompt for the local administrator credentials
$cred=Get-Credential -Message "Type the name and password of the local administrator account for the first domain controller."
$vm1 | Add-AzureProvisioningConfig -Windows -AdminUsername $cred.GetNetworkCredential().Username -Password $cred.GetNetworkCredential().Password

# Add an additional data disk with host caching disabled
$diskSize=<size of the additional data disk in GB>
$diskLabel="<the label on the disk>"
$lun=<Logical Unit Number (LUN) of the disk>
$vm1 | Add-AzureDataDisk -CreateNew -DiskSizeInGB $diskSize -DiskLabel $diskLabel -LUN $lun -HostCaching None

# Place the VM on its subnet with a static IP address
$subnetName="<Table S – Item 1 – Subnet name column>"
$vm1 | Set-AzureSubnet -SubnetNames $subnetName
$vm1 | Set-AzureStaticVNetIP -IPAddress <Table V – Item 6 – Value column>

# Create the VM in the cloud service and virtual network
$serviceName="<Table C – Item 1 – Cloud service name column>"
$vnetName="<Table V – Item 1 – Value column>"
New-AzureVM -ServiceName $serviceName -VMs $vm1 -VNetName $vnetName

This new content set is designed to make it easy for you to understand, test, and deploy your first or next SharePoint 2013 farm in Azure.

If you have any feedback on this new content set or this approach on documenting Azure IT workloads, please comment on this blog post or leave Disqus comments on the individual topics.

Thank you.

July 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our July top-10 contributors! Alberto Morillo and Hilary Cotter maintain their top positions.

[Image: top-10 leaderboard for July 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]

August 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our August top-10 contributors! Alberto Morillo maintains his first position in the cloud ranking, while Erland Sommarskog climbs to the top in the All Databases ranking.

[Image: top-10 leaderboard for August 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]


Introducing SQL Vulnerability Assessment for Azure SQL Database and on-premises SQL Server!


I am delighted to announce the public preview of our latest security development from the Microsoft SQL product team, the new SQL Vulnerability Assessment (VA). SQL Vulnerability Assessment is your one-stop-shop to discover, track, and remediate potential database vulnerabilities. The VA preview is now available for Azure SQL Database and for on-premises SQL Server, offering you a virtual database security expert at your fingertips.

What is VA?

SQL Vulnerability Assessment (VA) is a new service that provides you with visibility into your security state, and includes actionable steps to investigate, manage, and resolve security issues and enhance your database fortifications. It is designed to be usable by non-security experts. Getting started and seeing an initial actionable report takes only a few seconds.

[Screenshot: Vulnerability Assessment report in the Azure portal]

This service truly enables you to focus your attention on the highest-impact actions you can take to proactively improve your database security posture! In addition, if you have data privacy requirements, or need to comply with data protection regulations like the EU GDPR, then VA is your built-in solution to simplify these processes and monitor your database protection status. For dynamic database environments where changes are frequent and hard to track, VA is invaluable in detecting the settings that can leave your database vulnerable to attack.

VA offers a scanning service built into the Azure SQL Database service itself, and is also available via SQL Server Management Studio (SSMS) for scanning SQL Server databases. The service employs a knowledge base of rules that flag security vulnerabilities and deviations from best practices, such as misconfigurations, excessive permissions, and exposed sensitive data. The rule base is founded on intelligence accrued from analyzing millions of databases, and extracting the security issues that present the biggest risks to your database and its valuable data. These rules also represent a set of requirements from various regulatory bodies to meet their compliance standards, which can contribute to compliance efforts. The rule base grows and evolves over time, to reflect the latest security best practices recommended by Microsoft.

Results of the assessment include actionable steps to resolve each issue and provide customized remediation scripts where applicable. An assessment report can be customized for each customer environment and tailored to specific requirements. This process is managed by defining a security baseline for the assessment results, such that only deviations from the custom baseline are reported.

How does VA work?

We designed VA with simplicity in mind. All you need to do is to run a scan, which will scan your database for vulnerabilities. The scan is lightweight and safe. It takes a few seconds to run, and is entirely read-only. It does not make any changes to your database!

When your scan is complete, your scan report will be automatically displayed in the Azure Portal or in the SSMS pane:

[Screenshot: Vulnerability Assessment report in SSMS, currently available in limited preview]

The scan results include an overview of your security state, and details about each security issue found. You will find warnings on deviations from security best practices, as well as a snapshot of your security-related settings, such as database principals and roles, and their associated permissions. In addition, scan results provide a map of sensitive data discovered in your database with recommendations of the built-in methods available to protect it.

For all the issues found, you can view details on the impact of the finding, and you will find actionable remediation information to directly resolve the issue. VA will focus your attention on security issues relevant to you, as your security baseline ensures that you are seeing relevant results customized to your environment. See “Getting Started with Vulnerability Assessment” for more details.

You can now use VA to monitor that your database maintains a high level of security at all times, and that your organizational policies are met. In addition, if your organization needs to meet regulatory requirements, VA reports can be helpful to facilitate the compliance process.

Get started today!

We encourage you to try out Vulnerability Assessment today, and start proactively improving your database security posture. Track and monitor your database security settings, so that you never again lose visibility and control of potential risks to the safety of your data.

Check out “Getting Started with Vulnerability Assessment” for more details on how to run and manage your assessment.

Try it out, and let us know what you think!

September 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our September top-10 contributors! Alberto Morillo maintains his first position in the cloud ranking while Olaf Helper climbs to the top in the All Databases ranking.

[Image: top-10 leaderboard for September 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]

October 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our October top-10 contributors! Alberto Morillo maintains the first position in the cloud ranking while Visakh Murukesan climbs to the top in the All Databases ranking.

[Image: top-10 leaderboard for October 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]

November 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our November top-10 contributors! Alberto Morillo maintains the first position in the cloud ranking, while Visakh Murukesan maintains the top position in the All Databases ranking.

[Image: top-10 leaderboard for November 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]

December 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our December top 10 contributors! Alberto Morillo and Visakh Murukesan maintain their top positions.

[Image: top-10 leaderboard for December 2017]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]

Migrating to Azure SQL Database with zero downtime for read-only workloads


Special thanks to MSAsset engineering team’s Peter Liu (Senior Software Engineer), Vijay Kannan (Software Engineer), Sathya Muhandiramalage (Senior Software Engineer), Bryan Castillo (Principal Software Engineer) and Shail Batra (Principal Software Engineering Manager) for sharing their migration story with the Azure SQL Database product team.

Microsoft uses an internally written service called MSAsset to manage all Microsoft data center hardware around the world. MSAsset is used for tracking Microsoft’s servers, switches, storage devices, and cables across the company and requires 24/7 availability to accommodate break-fix requirements.

Before migrating to Azure SQL Database last year, MSAsset’s data tier consisted of a 107 GB database with 245 tables on SQL Server. The database was part of a SQL Server Always On Availability Group topology used for high availability and the scaling out of read-activity.

The MSAsset engineering team faced the following issues:

  • Aging hardware was not keeping up with stability and scale requirements.
  • There was an increase in high severity data-tier incidents and no database administrator on staff to help with troubleshooting, mitigation, root cause analysis and ongoing maintenance.
  • MSAsset’s database ran on SQL Server 2012. Developers and internal customers were increasingly requesting access to new SQL Server functionality.

After exploring various options and weighing several factors, the MSAsset engineering team decided that Azure SQL Database was the appropriate data tier for their future investment and would address all of their key pain points. With the move to Azure SQL Database, they gained increased scalability, built-in manageability and access to the latest features.  

With 24/7 availability requirements, the engineering team needed to find a way to migrate from SQL Server to Azure SQL Database without incurring downtime for read-only activity. MSAsset is a read-heavy service, with a much smaller percentage of transactions involving data modifications. Using a phased approach, they were able to move to Azure SQL Database with zero downtime for read-only traffic and less than two hours of downtime for read-write activity. This case study will briefly describe how this was accomplished.

The original MSAsset architecture

The original MSAsset application architecture consisted of a web tier with read-write access to the primary database located on a SQL Server 2012 instance. The database was contained within an Always On Availability Group with one synchronous read-only secondary replica and three read-only asynchronous secondary replicas. The application used an availability group listener to direct incoming write traffic to the primary replica. To accommodate the substantial amount of read-only reporting traffic, a proprietary load balancer was used to direct requests across the read-only secondary replicas using a round-robin algorithm.

[Diagram: the original MSAsset architecture]


When planning the move to Azure SQL Database, as with the legacy SQL Server solution, the proposed new solution needed to accommodate one read-write database and, depending on the final migrated workload volume and associated Azure SQL Database resource consumption, one or more read-only replicas.

Using a phased migration approach

The MSAsset engineering team used a phased incremental approach for moving from SQL Server to Azure SQL Database.  This incremental approach helped reduce the risk of project failure and allowed the team to learn and adapt to the inevitable unexpected variables that arise with complex application migrations.

The migration phases were as follows:

  1. Configure hybrid SQL Server and Azure SQL Database read-only activity, while keeping all read-write activity resident on the legacy SQL Server database.
    • Set up transactional replication from SQL Server to Azure SQL Database, for use in accommodating read-only activity.
    • Monitor the replication topology for stability, performance, and convergence issues. 
    • As needed, create up to four active geo-replication readable secondary databases in the same region to accommodate read-only traffic scale requirements.
    • Once it is confirmed the topology is stable for a sustained period of time, use load-balancing to direct read-only activity to Azure SQL Database, beginning with 25 percent of the read-only traffic. Over a period of weeks, increase to 50 percent, and then 75 percent. For load balancing, the MSAsset engineering team uses a proprietary application-layer library.
    • Along the way, use Query Performance Insight to monitor overall resource consumption and top queries by CPU, duration, execution count. MSAsset also monitored application metrics, including API latencies and error rates.
    • Adjust the Azure SQL Database service tiers and performance levels as necessary.
    • Move or redirect any high-resource-consuming, unnecessary legacy traffic to bulk access endpoints.
  2. After stabilizing in the prior phase of 75 percent read-only activity on Azure SQL Database, move 100 percent of the read-only traffic to Azure SQL Database.
    • Again, use Query Performance Insight to monitor overall resource consumption and top queries by CPU, duration, execution count. Adjust the Azure SQL Database service tiers and performance levels as necessary and create up to four active geo-replication readable secondary databases in the same region to accommodate read-only traffic.
  3. Prior to the final cut-over to Azure SQL Database, develop and fully test a complete rollback plan. The MSAsset team used SQL Server Data Tools (SSDT) data comparison functionality to collect the delta between Azure SQL Database and a four-day-old backup, and then applied the delta to the SQL Server database.
  4. Lastly, move all read-write traffic to Azure SQL Database. In MSAsset’s case, in preparation for the final read-write cutover, they reseeded, via transactional replication, a new database in Azure SQL Database for read-write activity moving forward. The steps they followed:
    • After the full reseeding, wait for remaining transactions on SQL Server to drain before removing the transactional replication topology.
    • Change the web front-end configuration to use the Azure SQL Database primary database for all read-write activity. Use read-only replicas for read-only traffic.
    • After a full business cycle of monitoring, de-commission the SQL Server environment.

This phased approach allowed the MSAsset team to incur no downtime for read-only activity and also helped them minimize risk, allowing enough time to learn and adapt to any unexpected findings without having to revert to the original environment. 

The final MSAsset architecture uses one read-write Azure SQL Database replica and four active geo-replication readable secondary databases. 

[Diagram: the final MSAsset architecture]

The remaining sections will talk about key aspects and lessons learned from the migration effort.

Creating a read-only Azure SQL Database using Transactional Replication

The first phase involved setting up transactional replication from SQL Server to Azure SQL Database, ensuring a stable replication topology with no introduced performance or convergence issues. 

The MSAsset engineering team used the following process for setting up transactional replication:

  • They first reviewed the existing SQL Server database against the requirements for replication to Azure SQL Database. These requirements are detailed in the Replication to SQL Database documentation. For example, a small number of the legacy tables for MSAsset did not have a primary key, and so a primary key had to be added in order to be supported for transactional replication. Some of the tables were no longer being used, and so it was an opportunity to clean up stale objects and associated code.
  • Since the MSAsset publication was hosted on an Always On Availability Group, the MSAsset team followed the steps for configuring transactional replication detailed in Configure Replication for Always On Availability Groups (SQL Server). A minimal sketch of adding the Azure SQL Database subscriber appears after this list.
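As a minimal sketch (not MSAsset’s actual configuration), adding an Azure SQL Database as a push subscriber from the publisher looks roughly like this; the publication, server, database, and login names below are hypothetical:

EXEC sp_addsubscription
    @publication = N'MSAssetPub',
    @subscriber = N'yourserver.database.windows.net',
    @destination_db = N'MSAssetReadOnly',
    @subscription_type = N'Push',
    @sync_type = N'automatic';

EXEC sp_addpushsubscription_agent
    @publication = N'MSAssetPub',
    @subscriber = N'yourserver.database.windows.net',
    @subscriber_db = N'MSAssetReadOnly',
    @subscriber_security_mode = 0,  -- SQL Server authentication, required for Azure SQL Database
    @subscriber_login = N'<login>',
    @subscriber_password = N'<password>';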

For an overview of two primary methods for migrating from SQL Server to Azure SQL Database, see SQL Server database migration to SQL Database in the cloud.

Once transactional replication was configured and fully synchronized, read-only traffic was first directed to both SQL Server and Azure SQL Database with read-write activity continuing to go just against the SQL Server-resident database.

[Diagram: hybrid configuration with read-only traffic split between SQL Server and Azure SQL Database]

The read-only traffic against Azure SQL Database was incrementally increased over time to 25 percent, 50 percent, and 75 percent, with careful monitoring along the way to ensure sufficient query performance and DTU availability. The MSAsset team used a proprietary load balancing application library to distribute load across the various read-only databases. Once stabilized at 75 percent, the MSAsset team moved 100 percent of read-only activity to Azure SQL Database and continued with the other phases described earlier.

Cleanup opportunities

The MSAsset team also used this as an opportunity to clean up rogue reporting processes. This included in-house Microsoft reporting tools and applications that, while permitted to access the database, had other data warehouse options that were more appropriate for ongoing use than MSAsset. When encountering rogue processes, the MSAsset team reached out to the owners and had them re-route to appropriate data stores. Disused code and objects, when encountered, were also removed.

Redesigning around compatibility issues

The MSAsset team discovered two areas that required re-engineering prior to migration to Azure SQL Database:

  • Change Data Capture (CDC) was used for tracking data modifications on SQL Server. This process was replaced with a solution that leverages temporal tables instead; a minimal sketch follows this list.
  • SQL Server Agent Jobs were used for executing custom T-SQL scheduled jobs on SQL Server. All SQL Server Agent Jobs were replaced with Azure worker roles that invoked equivalent stored procedures instead.
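As a sketch of the temporal-table approach that replaced CDC (the table and column names here are hypothetical, not MSAsset’s actual schema):

CREATE TABLE dbo.Asset
(
    AssetID int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AssetHistory));

-- SQL Server maintains dbo.AssetHistory automatically; past states are queryable with FOR SYSTEM_TIME
SELECT * FROM dbo.Asset FOR SYSTEM_TIME AS OF '2018-01-01T00:00:00' WHERE AssetID = 1;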

The team used Data Migration Assistant to detect compatibility issues and also used the following reference, Resolving Transact-SQL differences during migration to SQL Database.

Microsoft is also introducing a new deployment option, Azure SQL Database Managed Instance which will bring increased compatibility with on-premises SQL Server. An expanded public preview is coming soon.

Understanding networking and connectivity with Azure SQL Database

With an array of services requiring access to MSAsset’s data tier, the engineering team had to familiarize themselves with Azure SQL Database networking and connectivity requirements as well as fundamentals. Having this background was a critical aspect of the overall effort and should be a core focus area of any migration plan to Azure SQL Database.

To learn about Azure SQL Database connection fundamentals and connection troubleshooting essentials, see Azure SQL Database Connectivity Architecture and Troubleshoot connection issues to Azure SQL Database.

Modernizing the platform and unlocking cloud scalability

The original MSAsset SQL Server hardware was powerful, but old. Before moving to Azure SQL Database, the MSAsset engineering team considered replacing the servers, but they were concerned about the projected cost and the hardware’s ability to keep up with MSAsset’s projected workload growth over the next five years. The MSAsset engineering team was also concerned about keeping up with the latest SQL Server versions and having access to the latest features.

Moving to Azure SQL Database means that the MSAsset team can scale resources much more easily and no longer have to worry about outgrowing their existing hardware. They can also now access new features as they become available in Azure SQL Database without having to explicitly upgrade. They are also now able to leverage built-in capabilities unique to Azure SQL Database like Threat Detection and Query Performance Insight.

Reducing high severity issues and database management overhead

The MSAsset engineering team has no database administrator on staff, and this, coupled with aging hardware and standard DBA maintenance requirements, was a major contributor to increasingly frequent high-severity incidents.

Moving to Azure SQL Database, the MSAsset team no longer worries about ongoing database server patching, backups, or complex high availability and disaster recovery topology configuration. Since moving to Azure SQL Database, the MSAsset engineering team has seen an 80 percent reduction in high severity issues for their data tier.

Next Steps

Learn more about Azure SQL Database and building scalable, low-maintenance cloud solutions in What is SQL Database?, the introduction to the SQL Database documentation.

Want to get started but don’t know where to begin? Create your first SQL Database in Azure with your free Azure account.

Introducing SQL Information Protection for Azure SQL Database and on-premises SQL Server!


We are delighted to announce the public preview of SQL Information Protection, introducing advanced capabilities built into Azure SQL Database for discovering, classifying, labeling, and protecting the sensitive data in your databases. Similar capabilities are also being introduced for on-premises SQL Server via SQL Server Management Studio.

Discovering and classifying your most sensitive data, including business, financial, healthcare, and PII data, can play a pivotal role in your organizational information protection posture. It can serve as infrastructure for:

  • Helping meet data privacy standards and regulatory compliance requirements, such as GDPR.
  • Data-centric security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data.
  • Controlling access to and hardening the security of databases containing highly sensitive data.

What is SQL Information Protection?

SQL Information Protection (SQL IP) introduces a set of advanced services and new SQL capabilities, forming a new information protection paradigm in SQL aimed at protecting the data, not just the database:

  • Discovery and recommendations – The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you an easy way to review and apply the appropriate classification recommendations via the Azure portal.
  • Labeling – Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.
  • Monitoring/Auditing – Sensitivity of the query result set is calculated in real time and used for auditing access to sensitive data (currently in Azure SQL DB only).
  • Visibility – The database classification state can be viewed in a detailed dashboard in the portal. Additionally, you can download a report, in Excel format, to be used for compliance and auditing purposes, as well as other needs.

Additional SQL IP capabilities will continue rolling out throughout 2018 – Stay tuned!

How does SQL Information Protection work?

We designed SQL IP with the goal of streamlining the process of discovering, classifying, and labeling sensitive data in your database environment.

Our built-in automated classification engine identifies columns containing potentially sensitive data, and provides a list of classification recommendations, which can be easily applied as sensitivity metadata on top of columns, using new column sensitivity attributes that have been added to the SQL engine. You can also manually classify and label your columns.

[Screenshot: classification recommendations in the Azure portal]

Once you classify and label your data, our detailed overview dashboard provides you visibility into the classification state of your database, as well as the ability to export and download a classification report in Excel format:

[Screenshot: the classification overview dashboard]

Finally, the SQL engine utilizes the column classifications to determine the sensitivity of query result sets. Combined with Azure SQL Database Auditing, this enables you to audit the sensitivity of the actual data being returned by queries:

[Screenshot: auditing the sensitivity of query results]

Get started today!

We encourage you to try out SQL Information Protection today for improved visibility into your database environment, as well as for monitoring access to your sensitive data.

More details on using SQL Information Protection can be found below:

 

Regards,

SQL Security team



Accelerate real-time big data analytics with Spark connector for Microsoft SQL Databases


Apache Spark is a unified analytics engine for large-scale data processing. Today, you can use the built-in JDBC connector to connect to Azure SQL Database or SQL Server to read or write data from Spark jobs.

The Spark connector for Azure SQL Database and SQL Server enables SQL databases, including Azure SQL Database and SQL Server, to act as an input data source or output data sink for Spark jobs. It allows you to utilize real-time transactional data in big data analytics and persist the results for ad hoc queries or reporting.

Compared to the built-in Spark connector, this connector provides the ability to bulk insert data into SQL databases. It can outperform row-by-row insertion with 10x to 20x faster performance. The Spark connector for Azure SQL Database and SQL Server also supports Azure Active Directory (AAD) authentication, allowing you to connect securely to your Azure SQL database from Azure Databricks using your AAD account. The connector provides interfaces similar to the built-in JDBC connector, making it easy to migrate your existing Spark jobs to this new connector.

The Spark connector for Azure SQL Database and SQL Server utilizes the Microsoft JDBC Driver for SQL Server to move data between Spark worker nodes and SQL databases:

  1. The Spark master node connects to SQL Server or Azure SQL Database and loads data from a specific table or using a specific SQL query.
  2. The Spark master node distributes data to worker nodes for transformation.
  3. The worker nodes connect to SQL Server or Azure SQL Database and write data to the database. The user can choose whether to use row-by-row insertion or bulk insert.

[Diagram: how the Spark connector moves data between Spark worker nodes and SQL databases]


To get started, visit the azure-sqldb-spark repository on GitHub. You can also find sample Azure Databricks notebooks and sample Scala scripts in the same repository, as well as more details in the online documentation.

You might also want to review the Apache Spark SQL, DataFrames, and Datasets Guide and the Azure Databricks documentation to learn more about Spark and Azure Databricks.

Azure Backup for SQL Server on Azure now in public preview


Earlier this week, Corey Sanders announced the preview of a new Azure Backup capability to back up SQL workloads running in Azure Virtual Machines in his post about why you should bet on Azure for your infrastructure needs, today and in the future. In this blog, we will elaborate on how this enterprise backup capability provides a new breakthrough in backup that differentiates Azure from any other public cloud. This workload backup capability is built as an infrastructure-less, Pay as You Go (PAYG) service that leverages native SQL backup and restore APIs to provide a comprehensive solution for backing up SQL Servers running in Azure IaaS VMs.

[Image: Azure Backup for SQL Server]

Key benefits

  • Zero-infrastructure backup: Freedom from managing backup infrastructure (e.g., backup server, agents, or backup storage) or writing complex backup scripts.
  • Centrally manage and monitor all backups using a Recovery Services vault:
    • Create policies to specify the backup schedule and retention for both short-term and long-term retention needs using grandfather-father-son style retention schemes. Re-use these policies across multiple databases across servers.
    • Configure email notification for any backup or restore failure.
    • Monitor the backup jobs using the Recovery Services vault dashboard for all workloads, including Azure IaaS VMs, Azure Files, and SQL Server databases.
  • Restore to any point in time, up to a specific second: Restore databases to any date and time, up to a specific second. Azure Backup provides a graphical overview of the recovery point availability for the selected date, which helps users choose the right recovery time. In the backend, the solution figures out the appropriate chain of full, differential, and log backups corresponding to the selected time that need to be restored.
  • 15-minute Recovery Point Objective (RPO): Configure transaction log backup every 15 minutes to meet the backup SLA needs of the organization.
  • PAYG service: No upfront payment is needed. Billing is based on consumption each month.
  • Native SQL API integration: Azure Backup uses native SQL APIs so that customers get the benefit of SQL backup compression and full-fidelity backup and restore, including full, differential, and log backups. Customers can monitor their backup jobs using SSMS.
  • Support for Always On Availability Groups: Azure Backup protects databases in an Availability Group so that data protection continues seamlessly even after failover, while honoring the Availability Group backup preference.

Get started

The video below walks you through the steps to configure backup for your SQL Servers running in IaaS VMs. You can refer to the documentation for more details.

Upcoming planned enhancements

Below are some of the key features planned for general availability, which we plan to stage through the rest of the year. Please follow us on @AzureBackup and look out for a Twitter poll shortly to share your feedback on them.

  • Central customizable backup reports using Power BI.
  • Central customizable monitoring using OMS Log Analytics.
  • Automatic protection of newly added databases (auto-protect).
  • Support for PowerShell and Azure CLI.

Additional resources

Lift SQL Server Integration Services packages to Azure with Azure Data Factory


Data is vital to every app and experience we build today. With increasing amounts of data, organizations do not want to be tied down by the increasing infrastructure costs that come with it. Data engineers and developers are realizing the need to start moving their on-premises workloads to the cloud to take advantage of its massive scale and flexibility. Azure Data Factory capabilities are generally available for SQL Server Integration Services (SSIS) customers to easily lift SSIS packages to Azure, gaining scalability, high availability, and lower TCO, while ADF manages resources for them.

[Diagram: lifting SSIS packages to Azure Data Factory]

Using the code-free ADF UI/app, data engineers and developers can now provision and monitor the Azure-SSIS Integration Runtime (IR), dedicated ADF servers for SSIS package executions. This capability now comes with amazing new features.

Data engineers and developers can continue to use familiar SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) to design, deploy, configure, execute, and monitor SSIS packages in the cloud. All of these capabilities are now generally available. Modernize and extend ETL workflows to automatically provision Azure-SSIS IR on-demand, just-in-time, inject built-in data transformations, and much more with SSIS in Azure Data Factory.

    Get started

    Simplify modern data warehousing with Azure SQL Data Warehouse and Fivetran


    Gaining insights rapidly from data is critical to being competitive in today’s business world. With a modern data warehouse, customers can bring together all their data at any scale into a single source of truth for use cases such as business intelligence and advanced analytics.

    A key component of successful data warehousing is replicating data from diverse data sources into the canonical data warehousing database. Ensuring that data arrives in your data warehouse consistently and reliably is crucial for success. Data integration tools ensure that users can successfully connect to their critical data sources while moving data between source systems and their data warehouse in a timely yet reliable fashion.

    Introducing Fivetran

    We’re excited to announce that Fivetran has certified its zero-maintenance, zero-configuration data pipeline product for Azure SQL Data Warehouse. Fivetran is a simple-to-use system that enables customers to load data from applications, file stores, databases, and more into Azure SQL Data Warehouse.

    fivetran-logo"Azure is our fastest-growing customer base now that we support SQL Data Warehouse as a destination for Fivetran users. We're excited to be a part of the Microsoft ecosystem."

    - George Fraser, CEO and Co-Founder at Fivetran

    We’re also pleased to announce Azure SQL Data Warehouse’s presence in Fivetran’s Cloud Data Warehouse Benchmark, which compares cloud providers’ performance on the 1 TB TPC-DS benchmark.


    With Fivetran’s automated, replicate-all data connectors, our customers can:

      • Bring together diverse sources into SQL DW as normalized, ready-to-query schemas.

      • Avoid complex customization and get started quickly.

      • Automatically adjust to source changes so that their solutions are never interrupted.

      • Deliver data reliably without coding or regular maintenance.

      Here are a few sources that Fivetran supports today:

      • Application APIs: Salesforce, Marketo, AdWords, Mixpanel, DoubleClick, LinkedIn Ads, NetSuite.
      • Databases: Oracle, SQL Server, Postgres.
      • Files: Azure Blob Storage, FTPS, Amazon S3, CSV Upload, Google Sheets.
      • Events: Google Analytics 360, Snowplow, Webhooks.

      For a more comprehensive listing, please visit their connectors page.

      Custom connector support

      While Fivetran supports many data connectors today, sometimes the connector you need isn’t available. If that is the case, you can use Fivetran’s Azure Functions connector to create a simple custom pipeline.

      How it works:

      • Write a small function to fetch data from your custom source, along with state logic to handle incremental updates.
      • Host your function on Azure Functions.
      • Connect Fivetran and let it handle the rest. Fivetran calls your function as often as every five minutes to fetch new data, loads that data into your warehouse, deduplicates it, and incrementally updates it.

      Next steps

      To learn how to get started with Fivetran data connectors for Azure SQL Data Warehouse, visit their documentation or get started with a free 14-day trial.

      Learn more about SQL DW and stay up-to-date with the latest news by following us on Twitter @AzureSQLDW.

      Azure Database Migration Service and tool updates – Ignite 2018


      In mid-July, I blogged about the exciting updates and additions we had made to the Azure Database Migration Service (DMS) and our data migration tools. Since that time, we have noticed increased usage of our database migration offerings. In August 2018, we helped migrate over 15,000 databases and since January 2018, we have assisted with the migration of more than 107,000 databases to Azure. We have also been hard at work in the interim, continuing to deliver functionality to address customer feedback and enhance the value of our database migration service and tools. Below is information about our latest updates.

      Azure Database Migration Service (DMS)

      Azure DMS is a fully managed service designed to enable seamless migrations from multiple database sources to Azure Data platforms with minimal downtime. In recent months, we have added the following improvements:

      • Online (minimal downtime) migrations. Customers can now use Azure DMS preview support for online migrations of:
        • SQL Server databases running on-premises or on virtual machines to Azure SQL Database and Azure SQL Database Managed Instance.
        • MySQL databases running on-premises or on virtual machines to Azure Database for MySQL.
        • PostgreSQL databases running on-premises or on virtual machines to Azure Database for PostgreSQL.
      • SKU changes and additions. The Basic 1 and 2 vCore SKUs for the Azure Database Migration Service have been renamed to General Purpose 1 and 2 vCore, and a General Purpose 4 vCore SKU has been added. In addition, a new Business Critical 4 vCore SKU is available for migrating business critical workloads.
      • Save Project and Run Activity: Azure DMS now supports the ability to create a migration project and perform a specific migration activity in a single workflow. This ensures that customers can perform migrations with fewer clicks, making the migration process more efficient.
      • Using existing backup files for migration: Azure DMS now supports using existing SQL Server backup files for migrations from SQL Server to Azure SQL Database Managed Instance; the manual equivalent of this native restore is sketched below.
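      For context, here is a hedged sketch of the manual counterpart: a native restore from Azure Blob Storage on a Managed Instance. The container URL, SAS secret, and database name are placeholders.

      -- Create a SAS credential for the blob container that holds the backup files.
      -- The container URL and the SAS secret below are placeholders.
      CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
      WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
           SECRET = '<SAS token, without the leading question mark>';

      -- Native restore from URL on Azure SQL Database Managed Instance.
      RESTORE DATABASE MyDatabase
      FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak';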

      Data Migration Assistant (DMA)

      DMA enables you to upgrade to a modern data platform by detecting compatibility issues and feature parity gaps between source and target database environments that can impact database functionality on your new version of SQL Database. It allows you to move not only your schema and data, but also logins, from your source server to your target server.

      We have recently released DMA v4.0 and v4.1.

      What's new in v4.1?

      Version 4.1 provides preview assessment support for migrating on-premises SQL Server databases to Azure SQL Database Managed Instance. Customers can now use DMA to assess SQL Server on-premises, or use a provided PowerShell script to collect metadata about their database schema, detect blocking issues as well as partially supported or unsupported features that affect migration to Azure SQL Database Managed Instance, and gain detailed guidance on how to resolve those issues.

      What's new in v4.0?

      Version 4.0 introduces the Azure SQL Database SKU Recommendations feature, which allows users to identify the minimum recommended Azure SQL Database SKU based on performance counters that are collected from the computer(s) hosting the source databases. This feature provides recommendations related to pricing tier, compute level, and max data size, as well as estimated cost per month. It also offers the ability to provision all your databases to Azure in bulk.

      This functionality is currently available only via the Command Line Interface (CLI). Support for this feature via the DMA user interface is planned for delivery later this year.

      SQL Server Migration Assistant (SSMA)

      SSMA for Oracle, MySQL, SAP ASE (formerly SAP Sybase ASE), DB2, and Access allow users to convert a database schema to a Microsoft SQL Server schema, upload the schema, and then migrate data to the target SQL Server.

      We have recently released SSMA v7.9 and v7.10.

      What's new in v7.10?

      Version 7.10 introduces the following updates:

      • Each flavor of SSMA 7.10 has been enhanced with targeted fixes designed to provide additional security and privacy protections to meet changes in global requirements.
      • SSMA 7.10 for Oracle includes a conversion improvement related to hierarchical queries.
      • SSMA 7.10 for DB2 includes a fix for conversion of begin-end blocks.
      • SSMA 7.10 for MySQL includes a fix for conversion of spaces between function name and arguments list.

      What's new in v7.9?

      Version 7.9 brings a variety of updates, including:

      • Each flavor of SSMA has been enhanced with targeted fixes that improve quality and auto conversion rates. Some of the changes that were implemented with this release include:
        • Partial support for migrating spatial data types from MySQL to Azure SQL Database.
        • Support for migrating "Continue" statements from Oracle to SQL Server.
        • Support in SSMA command line to alter Data Type mapping and Project Preferences.
      • SSMA 7.9 for Oracle, MySQL, SAP ASE, and DB2 also provide the option to migrate data by using SQL Server Integration Services (SSIS). After converting the schema, it will be possible to create an SSIS package by using the right-click context menu option Save as SSIS package.
      • The Azure SQL Database connection dialog in SSMA has also been altered to accept the fully qualified server name. In previous versions of SSMA, the Azure SQL Database prefix had to be explicitly specified in the project settings.

      Database Experimentation Assistant (DEA)

      DEA is an A/B testing solution for SQL Server upgrades that assists in evaluating a target version of SQL Server for a given workload. Customers upgrading from SQL Server 2005 or later to any newer version of SQL Server can use the analysis metrics it provides to build higher confidence in a successful upgrade/migration experience.

      We have recently released DEA v2.6.

      What's new in v2.6?

      The v2.6 release of DEA has the following improvements:

      • Capture and replay of production database workloads through automated setup.
      • Support for server-side traces and XEvents (a sketch of a capture session follows this list).
      • Statistical analysis of traces and/or XEvents collected from both the old and new instances.
      • Data visualization through an analysis report with a rich user experience.
      • SQL Authentication for both capture and replay.
      • An inbuilt replay tool for simple workloads, in addition to the already supported SQL Server Distributed Replay.
      • Removal of the dependencies on R and R-Interop.
      • Capture and replay workloads to Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Linux.
      • Reporting enhancements:
        • New error categorization chart to easily find upgrade/migration blockers.
        • New error pie chart grouped by error id to easily identify the root cause of the errors.
      • Bug fixes and other performance improvements.
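      To ground what a capture involves, below is a minimal, hypothetical Extended Events session of the sort such capture tooling builds on. DEA creates and manages its own sessions, so this is illustrative only; the session name and file path are placeholders.

      -- Hypothetical capture session recording completed batches and RPC calls.
      CREATE EVENT SESSION [WorkloadCaptureDemo] ON SERVER
      ADD EVENT sqlserver.sql_batch_completed
          (ACTION (sqlserver.database_name, sqlserver.sql_text)),
      ADD EVENT sqlserver.rpc_completed
          (ACTION (sqlserver.database_name, sqlserver.sql_text))
      ADD TARGET package0.event_file
          (SET filename = N'E:\captures\WorkloadCaptureDemo.xel')
      WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);

      ALTER EVENT SESSION [WorkloadCaptureDemo] ON SERVER STATE = START;
      -- ...run the workload you want to capture...
      ALTER EVENT SESSION [WorkloadCaptureDemo] ON SERVER STATE = STOP;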

      Azure Database Migration Guide (DMG)

      The Azure Database Migration Guide is a one-stop shop that provides step-by-step guidance for modernizing data assets. We recently announced a new, intuitive UX that helps customers more easily choose source/target pairs to define their migration scenario. We have also onboarded several specialty partners who can help with assessments and migrations.

      Additional resources for database migration tools and services

      If you are currently working with one of these tools or services, you might find the following links useful:

      Summary

      I hope that you enjoy working with the latest features and functionality available in our migration tools and services. Please share your impressions through User Voice: Azure Database Migration Service, by using the feedback links at the bottom of each article in our documentation, or by reaching out to the Data Migration Team directly. Also be sure to follow us on Twitter @Data_Migrations, #msdatamigration, for the latest news and announcements.
