The problem for ESBs is that they usually only connect
internal services and internal clients together. It’s hard to publish a service
you don’t control to your own bus, so external dependencies end up wrapped in a
service you own and published to your ESB as an internal service. Although this
sidesteps the problem of attaching external services directly to your ESB, it
introduces a new one: yet more code to manage and secure.
Source of Information: Manning Azure in Action 2010
If you wanted to expose a service to several vendors, or if
you wanted a field application to connect to an internal service, you’d have to
resort to all sorts of firewall tricks. You’d have to open ports, provision DNS,
and do many other things that give IT managers nightmares. Another challenge is
the effort it takes to make sure that an outside application can always connect
and use your service.
To go one step further, it’s an even bigger challenge to
connect two outside clients together. The problem comes down to the variety of
firewalls, NATs, proxies, and other network shenanigans that make
point-to-point communication difficult. Take an instant messaging client, for example.
When the client starts up, and the user logs in, the client creates an outbound,
bidirectional connection to the chat service somewhere. This is always allowed
across the network (unless the firewall is configured to explicitly block that type
of client), no matter where you are. An outbound connection, especially over port
80 (where HTTP lives) is rarely a problem. Inbound connections, on the other hand,
are almost always a problem.
Both clients have these outbound connections, and they’re
used for signaling and commanding. If client A wants to chat with client B, a
message is sent up to the service. The service uses the service registry to
figure out where client B’s inbound connection is in the server farm, and sends
the request to chat down client B’s link. If client B accepts the invitation to
chat, a new connection is set up between the two clients with a predetermined
rendezvous port. In this sense, the two clients are bouncing messages off a
satellite in order to always connect, because a direct connection, especially an
inbound one, wouldn’t be possible. This strategy gets the traffic through a
multitude of firewalls—on the PC, on the servers, on the network—on both sides
of the conversation.
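The signaling flow described above can be sketched in a few lines. This is a hypothetical, in-memory model (the class and names are invented for illustration, not any real chat service's API): each client keeps one outbound link open to the service, and the service's registry maps a user to that link so it can push invitations down it.

```python
# Hypothetical in-memory model of the "bounce it off a satellite" relay.
# Each client keeps one outbound connection open to the service; the
# service's registry maps a user name to that connection.

class RelayService:
    def __init__(self):
        self.registry = {}            # user -> inbox (their open link)

    def connect(self, user):
        # Client dials OUT to the service; the service records the link.
        self.registry[user] = []
        return self.registry[user]

    def send(self, sender, recipient, message):
        # Look up the recipient's link in the registry and push the
        # message down it -- no inbound connection to the recipient needed.
        inbox = self.registry.get(recipient)
        if inbox is None:
            return False              # recipient isn't connected
        inbox.append((sender, message))
        return True

svc = RelayService()
a_inbox = svc.connect("clientA")
b_inbox = svc.connect("clientB")
svc.send("clientA", "clientB", "want to chat?")
# clientB receives the invite over the link it opened outbound:
assert b_inbox == [("clientA", "want to chat?")]
```

The key point is that neither client ever accepts an inbound connection; both sides only ever dial out, which is exactly the kind of traffic firewalls allow.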
There is also NATing (network address translation) going on.
A network will use private IP addresses internally (usually in the 10.x.x.x
range), and will only translate those to an IP address that works on the
internet if the traffic needs to go outside the network. It’s quite common for
all traffic coming from one company or office to have the same source IP address,
even if there are hundreds of actual computers. The NAT device keeps a table of
which internal addresses are communicating with the outside world. The table
maps each connection’s address and port pairs (which are carried in every
network message) back to the individual computer that asked for it, so inbound
replies can be routed correctly.
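The translation table works roughly like this sketch. Everything here is illustrative (real devices do this in kernel or hardware code, and the addresses are made up): an outbound connection allocates a public port, and only traffic arriving on an allocated port can be mapped back to an internal host.

```python
# Illustrative sketch of the mapping a NAT (strictly, NAPT) device keeps.
# Addresses and port ranges are hypothetical.

nat_table = {}               # public_port -> (internal_ip, internal_port)
next_public_port = 50000
PUBLIC_IP = "203.0.113.7"    # the one address the whole office shares

def outbound(internal_ip, internal_port):
    """An internal host dials out; allocate a public port for it."""
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    nat_table[public_port] = (internal_ip, internal_port)
    return (PUBLIC_IP, public_port)   # what the outside world sees

def inbound(public_port):
    """Route a reply back to whichever internal host asked for it."""
    return nat_table.get(public_port)  # None -> unsolicited, dropped

src = outbound("10.0.0.42", 51515)
assert inbound(src[1]) == ("10.0.0.42", 51515)
assert inbound(9999) is None   # unsolicited inbound traffic has no mapping
```

This is also why unsolicited inbound connections fail: with no prior outbound packet, there is no table entry, so the NAT device has nowhere to send the traffic.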
The “bounce it off a satellite” approach bypasses this
problem by having both clients dial out to the service. The Service Bus is here to give you all of
that easy messaging goodness without all of the work. Imagine if the makers of Skype or
Yahoo Messenger could just write a cool application that helped people communicate,
instead of spending all of that time and effort figuring out how to always
connect with someone, no matter where they are. The first step in connecting is
knowing who you can connect with, and where they are. To determine this, you
need to register your service on the Service Bus.
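The registration step can be pictured as a simple lookup table. This is only a conceptual sketch (the names and the address format are invented, not the actual Service Bus API): the service announces itself over its own outbound connection, and clients then ask the bus where it lives.

```python
# Conceptual sketch of service registration and discovery on a relay bus.
# Names and address format are hypothetical, not the real Service Bus API.

service_bus_registry = {}

def register(service_name, relay_address):
    """A service dials out to the bus and records where it can be reached,
    so no inbound firewall holes are needed on the service's side."""
    service_bus_registry[service_name] = relay_address

def discover(service_name):
    """A client asks the bus who it can connect with, and where."""
    return service_bus_registry.get(service_name)

register("orders-service", "sb://contoso.example/orders")
assert discover("orders-service") == "sb://contoso.example/orders"
```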