SQL Server 2008 Load Balancing

Service Broker provides support for load balancing. Load balancing is a technique for spreading tasks among available resources to avoid some resources being idle while others have tasks queued for execution. In the context of SQL Server 2008 and Service Broker, the resource is a Service Broker service that is hosted on different SQL Server instances. In this case, the service is referred to as a load-balanced Service Broker service. A load-balanced Service Broker service provides you with the following advantages:

• High availability: If one of the SQL Server instances hosting your Service Broker service is offline (for example, because of maintenance), your Service Broker application keeps working, because another running SQL Server instance can handle and execute the request.

• Scale-out: If you have several thousand concurrent clients communicating with your service, a load-balanced Service Broker service can spread out the Service Broker requests. Each deployed service then has to handle and execute only a few hundred requests instead of all several thousand client requests. This approach helps you scale out your Service Broker application.

The same Service Broker service is deployed on two or more SQL Server instances; deploying the identical service to different instances is the only requirement for supporting a load-balancing scenario with Service Broker. Deploying a load-balanced Service Broker application involves the following two steps:

1. Deploying the service: You have to deploy the identical Service Broker service to different SQL Server instances. This involves the creation of the used message types, contracts, and associated queue objects.

2. Configuring the initiator: Once you deploy your Service Broker service to different SQL Server instances, you must configure the load-balanced scenario. On the initiator’s side of the Service Broker conversation, you just have to create a dedicated route for each deployed Service Broker service, and you also have to configure at least transport security between the Service Broker services.

As you can see, it’s easy to set up a load-balanced scenario for Service Broker. Once you’ve set up the required routes to the different deployed target services on the initiator’s side, Service Broker randomly picks a route from the sys.routes catalog view and forwards the request to the chosen target service. As soon as Service Broker receives the first acknowledgment message for a conversation, it extracts the Service Broker identifier from that acknowledgment and uses it to route all subsequent messages in the conversation.

In this way, you can ensure that for each conversation a target service is randomly picked from the available services (as configured in the sys.routes catalog view) and that the conversation remains bound to the originally chosen SQL Server instance. This makes sense: as soon as you start to process messages from a conversation on a SQL Server instance, you have an affinity to that instance, because it stores the data generated through message processing. The instance might also store conversation state information, depending on your service implementation details. Let’s have a look at how to set up a load-balanced scenario with Service Broker.
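You can verify which routes the classifier has to choose from by querying the sys.routes catalog view on the initiator. This is a sketch; the service and route names match the examples configured later in this chapter:

```sql
-- Run in the initiator database (ClientService in the later examples).
-- Each row is one candidate target the classifier can randomly pick.
SELECT name, remote_service_name, address
FROM sys.routes
WHERE remote_service_name = 'OrderService'
```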

Service Deployment
The first step in setting up a load-balanced scenario with Service Broker is to deploy the load-balanced Service Broker service to two or more different SQL Server instances. You have to deploy the following Service Broker objects:

• Message types
• Contracts
• Queue
• Service
• Service program

As you can see, the deployment of a load-balanced Service Broker service isn’t very different from the normal deployment process. The only difference is that the service is deployed to two or more SQL Server instances.
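The target-side deployment can be sketched as follows. The message type URI is the one used later in this chapter; the contract and queue names are illustrative assumptions, and the service program (for example, an activated stored procedure) is omitted:

```sql
-- Run identically on every target instance
-- (OrderServiceInstance1 and OrderServiceInstance2 in this chapter).
CREATE MESSAGE TYPE [http://ssb.csharp.at/SSB_Book/c11/OrderRequestMessage]
VALIDATION = WELL_FORMED_XML

-- Contract name is an assumption for this sketch.
CREATE CONTRACT [OrderContract]
(
    [http://ssb.csharp.at/SSB_Book/c11/OrderRequestMessage] SENT BY INITIATOR
)

CREATE QUEUE OrderQueue

CREATE SERVICE OrderService
ON QUEUE OrderQueue ([OrderContract])
```

Because the deployment is identical on every instance, the same script can be reused verbatim for each additional instance you add.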

The OrderService is deployed to two different SQL Server instances: OrderServiceInstance1 and OrderServiceInstance2. When you deploy OrderService to the two instances, you have to set up at least transport security between ClientService and the two instances of OrderService. When you compare the two OrderService deployments, you’ll see that they are identical. This makes it easy to scale out a Service Broker application with load balancing: you just deploy an additional instance of the required Service Broker service on another SQL Server instance. You don’t have to do anything else on the target side.
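The target-side transport security setup can be sketched as follows, assuming certificate-based endpoint authentication. The listener ports (4741/4742) and certificate file names match the initiator configuration shown later; the certificate and login names on the target side are assumptions for this sketch:

```sql
-- Sketch for OrderServiceInstance1; on OrderServiceInstance2 use
-- port 4742 and the corresponding *2 certificate names.
USE master

-- Private certificate that authenticates this endpoint.
CREATE CERTIFICATE OrderServiceCertPrivate1
WITH SUBJECT = 'For Service Broker authentication - OrderServiceCertPrivate1',
START_DATE = '01/01/2007'

CREATE ENDPOINT OrderServiceEndpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 4741)
FOR SERVICE_BROKER
(
    AUTHENTICATION = CERTIFICATE OrderServiceCertPrivate1
)

-- Export the public key so the initiator can import it
-- (this is the OrderServiceCertPublic1.cert file used on the initiator).
BACKUP CERTIFICATE OrderServiceCertPrivate1
TO FILE = 'c:\OrderServiceCertPublic1.cert'

-- Import the initiator's public certificate and allow it to connect.
CREATE LOGIN ClientServiceLogin WITH PASSWORD = 'password1!'
CREATE USER ClientServiceUser FOR LOGIN ClientServiceLogin

CREATE CERTIFICATE ClientServiceCertPublic
AUTHORIZATION ClientServiceUser
FROM FILE = 'c:\ClientServiceCertPublic.cert'

GRANT CONNECT ON ENDPOINT::OrderServiceEndpoint TO ClientServiceLogin
```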

From a Service Broker perspective, load balancing would also work between different SQL Server instances hosted on the same machine. As you can probably guess, however, it wouldn’t make much sense in this case, because it’s not a real load-balancing scenario. Therefore, I refer to a SQL Server instance instead of a different SQL Server machine.

Initiator Configuration
As soon as you deploy OrderService, you’re ready to deploy ClientService, which communicates with OrderService. The difference here is that you have to configure two routes: one to OrderServiceInstance1 and the other to OrderServiceInstance2. Finally, you also have to set up transport security on the initiator’s side for both deployed OrderService instances.

You configure two routes to OrderService: OrderServiceRoute1 and OrderServiceRoute2. Therefore, Service Broker now has two different options to forward your message to the final destination OrderService. The Service Broker classifier randomly picks one of the two routes and forwards the message on the chosen route to the target service.

Configuration of the OrderService for Load Balancing
USE ClientService
CREATE ROUTE OrderServiceRoute1
WITH SERVICE_NAME = 'OrderService',
ADDRESS = 'TCP://OrderServiceInstance1:4741'

-- The route to the second load-balanced OrderService
CREATE ROUTE OrderServiceRoute2
WITH SERVICE_NAME = 'OrderService',
ADDRESS = 'TCP://OrderServiceInstance2:4742'

USE master

CREATE CERTIFICATE ClientServiceCertPrivate
WITH SUBJECT = 'For Service Broker authentication - ClientServiceCertPrivate',
START_DATE = '01/01/2007'

-- The listener port (4740 here) is illustrative; use the port your
-- initiator instance should listen on.
CREATE ENDPOINT ClientServiceEndpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 4740)
FOR SERVICE_BROKER
(
    AUTHENTICATION = CERTIFICATE ClientServiceCertPrivate
)

BACKUP CERTIFICATE ClientServiceCertPrivate
TO FILE = 'c:\ClientServiceCertPublic.cert'

CREATE LOGIN OrderServiceLogin WITH PASSWORD = 'password1!'

CREATE USER OrderServiceUser FOR LOGIN OrderServiceLogin

-- The imported public certificates must be owned by the user mapped to
-- the remote login for certificate-based endpoint authentication.
CREATE CERTIFICATE OrderServiceCertPublic1
AUTHORIZATION OrderServiceUser
FROM FILE = 'c:\OrderServiceCertPublic1.cert'

CREATE CERTIFICATE OrderServiceCertPublic2
AUTHORIZATION OrderServiceUser
FROM FILE = 'c:\OrderServiceCertPublic2.cert'

GRANT CONNECT ON ENDPOINT::ClientServiceEndpoint TO OrderServiceLogin

As soon as you configure the routing information and transport security between the Service Broker services, you’re able to send a request message (of the [http://ssb.csharp.at/SSB_Book/c11/OrderRequestMessage] message type) to OrderService.

Again, there is almost no difference in the message-sending code when you send a message to a load-balanced Service Broker service. The only difference lies in the configuration of the sys.routes catalog view. As soon as the Service Broker classifier finds more than one route to a target service, newly opened conversations are dispatched randomly among the available Service Broker services. When you connect to each deployed OrderService, you’ll see that the received messages are processed and that the state information for each individual conversation is stored in the ApplicationState table. If you have more SQL Server instances available, you can add additional OrderService instances to the load-balancing scenario. Easy, isn’t it?
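The sending code itself can be sketched as follows; it is exactly what you would write for a single, non-load-balanced target. The contract name OrderContract and the initiator service name ClientService are assumptions for this sketch, and the XML payload is illustrative:

```sql
-- Run in the initiator database. Nothing here mentions the target
-- instances; the classifier picks a route from sys.routes at dialog start.
DECLARE @ch UNIQUEIDENTIFIER

BEGIN DIALOG CONVERSATION @ch
    FROM SERVICE [ClientService]
    TO SERVICE 'OrderService'
    ON CONTRACT [OrderContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @ch
    MESSAGE TYPE [http://ssb.csharp.at/SSB_Book/c11/OrderRequestMessage]
    (N'<OrderRequest><Product>Book</Product></OrderRequest>');
```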

Source of Information : Apress Pro SQL Server 2008 Service Broker

