CLOUD COMPUTING IN A NUTSHELL

When plugging an electric appliance into an outlet, we care neither about how electric power is generated nor about how it gets to that outlet. This is possible because electricity is virtualized; that is, it is readily available from a wall socket that hides power generation stations and a huge distribution grid. When extended to information technologies, this concept means delivering useful functions while hiding how their internals work. Computing itself, to be considered fully virtualized, must allow computers to be built from distributed components such as processing, storage, data, and software resources.

Technologies such as cluster, grid, and now, cloud computing, have all aimed at allowing access to large amounts of computing power in a fully virtualized manner, by aggregating resources and offering a single system view. In addition, an important aim of these technologies has been delivering computing as a utility. Utility computing describes a business model for on-demand delivery of computing power; consumers pay providers based on usage (“pay-as-you-go”), similar to the way in which we currently obtain services from traditional public utilities such as water, electricity, gas, and telephony.

Cloud computing has been coined as an umbrella term to describe a category of sophisticated on-demand computing services initially offered by commercial providers, such as Amazon, Google, and Microsoft. It denotes a model in which a computing infrastructure is viewed as a “cloud,” from which businesses and individuals access applications from anywhere in the world on demand. The main principle behind this model is offering computing, storage, and software “as a service.”

Many practitioners in the commercial and academic spheres have attempted to define exactly what “cloud computing” is and what unique characteristics it presents. Buyya et al. have defined it as follows: “Cloud is a parallel and distributed computing system consisting of a collection of inter-connected and virtualised computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements (SLA) established through negotiation between the service provider and consumers.” Vaquero et al. have stated “clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized Service Level Agreements.”

A recent McKinsey and Co. report claims that “Clouds are hardware based services offering compute, network, and storage capacity where: Hardware management is highly abstracted from the buyer, buyers incur infrastructure costs as variable OPEX, and infrastructure capacity is highly elastic.”

A report from the University of California Berkeley summarized the key characteristics of cloud computing as: “(1) the illusion of infinite computing resources; (2) the elimination of an up-front commitment by cloud users; and (3) the ability to pay for use . . . as needed . . .”

The National Institute of Standards and Technology (NIST) characterizes cloud computing as “. . . a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

In a more generic definition, Armbrust et al. define cloud as the “data center hardware and software that provide services.” Similarly, Sotomayor et al. point out that “cloud” is more often used to refer to the IT infrastructure deployed on an Infrastructure as a Service provider data center. While there are countless other definitions, there seem to be common characteristics among the most notable ones listed above, which a cloud should have: (i) pay-per-use (no ongoing commitment, utility prices); (ii) elastic capacity and the illusion of infinite resources; (iii) self-service interface; and (iv) resources that are abstracted or virtualised.

In addition to raw computing and storage, cloud computing providers usually offer a broad range of software services. They also include APIs and development tools that allow developers to build seamlessly scalable applications upon their services. The ultimate goal is allowing customers to run their everyday IT infrastructure “in the cloud.”

A lot of hype surrounded cloud computing in its infancy, when it was often called the most significant shift in the IT world since the advent of the Internet. In the midst of such hype, a great deal of confusion arises when trying to define what cloud computing is and which computing infrastructures can be termed “clouds.”

Indeed, the long-held dream of delivering computing as a utility has been realized with the advent of cloud computing. Over the years, however, several technologies have matured and contributed significantly to making cloud computing viable. To that end, this introduction traces the roots of cloud computing by surveying the main technological advancements that significantly contributed to the advent of this emerging field. It also explains concepts and developments by categorizing and comparing the most relevant R&D efforts in cloud computing, especially public clouds, management tools, and development frameworks. The most significant practical cloud computing realizations are listed, with special focus on architectural aspects and innovative technical features.

Source of Information : Wiley - Cloud Computing Principles and Paradigms 2011

Why an ESB is a good idea in the cloud

The problem for ESBs is that they usually only connect internal services and internal clients together. It’s hard to publish a service you don’t control to your own bus. External dependencies end up getting wrapped in a service you own and published to your ESB as an internal service. Although this avoids the first problem of attaching external services to your ESB, it introduces a new problem, which is yet more code to manage and secure.


If you wanted to expose a service to several vendors, or if you wanted a field application to connect to an internal service, you’d have to resort to all sorts of firewall tricks. You’d have to open ports, provision DNS, and do many other things that give IT managers nightmares. Another challenge is the effort it takes to make sure that an outside application can always connect and use your service.



To go one step further, it’s an even bigger challenge to connect two outside clients together. The problem comes down to the variety of firewalls, NATs, proxies, and other network shenanigans that make point-to-point communication difficult. Take an instant messaging client, for example. When the client starts up and the user logs in, the client creates an outbound, bidirectional connection to the chat service somewhere. This is always allowed across the network (unless the firewall is configured to explicitly block that type of client), no matter where you are. An outbound connection, especially over port 80 (where HTTP lives), is rarely a problem. Inbound connections, on the other hand, are almost always a problem.



Both clients have these outbound connections, and they’re used for signaling and commanding. If client A wants to chat with client B, a message is sent up to the service. The service uses the service registry to figure out where client B’s inbound connection is in the server farm, and sends the request to chat down client B’s link. If client B accepts the invitation to chat, a new connection is set up between the two clients with a predetermined rendezvous port. In this sense, the two clients are bouncing messages off a satellite in order to always connect, because a direct connection, especially an inbound one, wouldn’t be possible. This strategy gets the traffic through a multitude of firewalls—on the PC, on the servers, on the network—on both sides of the conversation.
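To make the idea of both clients dialing out concrete, here is a minimal sketch in Python of a toy relay: each client opens an outbound connection to the relay, and the relay simply copies bytes between them. The host, port, and overall design are hypothetical illustrations of the pattern only, not how the Service Bus or any chat service is actually implemented.

# Conceptual sketch of the "bounce it off a satellite" relay pattern.
# Both clients dial OUT to the relay, so no inbound firewall rules or
# NAT port mappings are needed on either side. (Toy illustration only.)
import socket
import threading

HOST, PORT = "0.0.0.0", 8080   # hypothetical relay endpoint

def pump(src, dst):
    # Copy bytes from one client's outbound connection to the other's.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def main():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((HOST, PORT))
        listener.listen(2)
        # Each client establishes an outbound connection to the relay.
        client_a, _ = listener.accept()
        client_b, _ = listener.accept()
        # Relay traffic in both directions between the two connections.
        threading.Thread(target=pump, args=(client_a, client_b), daemon=True).start()
        pump(client_b, client_a)

if __name__ == "__main__":
    main()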



There is also NAT (network address translation) going on. A network will use private IP addresses internally (usually in the 10.x.x.x range), and will only translate those to an IP address that works on the internet if the traffic needs to go outside the network. It’s quite common for all traffic coming from one company or office to have the same source IP address, even if there are hundreds of actual computers. The NAT device keeps a list of which internal addresses are communicating with the outside world. This list is keyed by the connection’s addresses and ports (carried in each network packet), which is how the device routes inbound traffic back to the individual computer that asked for it.



The “bounce it off a satellite” approach bypasses this problem by having both clients dial out to the service. The Service Bus is here to give you all of that easy messaging goodness without all of the work. Imagine if Skype or Yahoo Messenger could just write a cool application that helped people communicate, instead of spending all of that hard work and time figuring out how to always connect with someone, no matter where they are. The first step in connecting is knowing who you can connect with, and where they are. To determine this, you need to register your service on the Service Bus.

Source of Information : Manning Azure in Action 2010

Connecting with the Service Bus

The second major piece of Windows Azure platform AppFabric is the Service Bus. As adoption of service-oriented architecture (SOA) increases, developers are seeking better ways of connecting their services together. At the simplest level, the Service Bus does this for any service out there. It makes it easy for services to connect to each other and for consumers to connect to services. In this section, we’re going to look into what the Service Bus is, why you’d use a bus, and, most importantly, how you can connect your services to it. You’ll see how easy it is to use the Service Bus.


What is a Service Bus?
Enterprise service buses (ESBs) have been around for years, and they’ve grown out of the SOA movement. As services became popular, and as the population of services at companies increased, companies found it harder and harder to maintain the infrastructure. The services and clients became so tightly coupled that the infrastructure became very brittle. This was the exact problem services were created to avoid. ESBs evolved to help fix these problems.

ESBs have several common characteristics, all geared toward building a more dynamic and flexible service environment:

- ESBs provide a service registry—Developers and dynamic clients needed ways to find available services, and to retrieve the contract and usage information they needed to consume them.

- ESBs provide a way to name services—This involves creating a namespace around services so there isn’t a conflict in the service names and the message types defined.

- ESBs provide some infrastructure for security—Generally, this includes a way to allow or deny people access to a service, and a way to specify what they’re allowed to do on that service.

- ESBs provide the “bus” part of ESB—The bus provides a way for the messages to move around from client to service, and back. The important part of the bus is the instrumentation in the endpoints that allows IT to manage the endpoint. IT can track the SLA of the endpoint, performance, and faults on the service.

- ESBs commonly provide service orchestration—Orchestration is the concept of composing several services together into a bigger service that performs some business process.

A common model for ESBs is similar to the typical n-tier architecture model, where each tier relies on the abstractions provided by the layer below it. Orchestration is not only a way to have lower-level services work together; it also provides a layer of indirection on top of those services. In the orchestration layer you can route messages based on content, policy, or even service version. This is important as you connect services together, and as they mature.
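As a rough illustration of the registry and naming ideas listed above, the following Python sketch (with made-up namespaces, endpoints, and contract names) shows how a namespaced registry keeps service names from colliding and lets clients look up endpoint and contract information. It is not modeled on any particular ESB product.

# Toy sketch of two ESB ideas: a service registry and namespaced service names.
class ServiceRegistry:
    def __init__(self):
        self._services = {}   # fully qualified name -> endpoint and contract info

    def register(self, namespace, name, endpoint, contract):
        # Namespacing avoids collisions between services with the same short name.
        self._services[f"{namespace}/{name}"] = {"endpoint": endpoint,
                                                 "contract": contract}

    def lookup(self, namespace, name):
        return self._services[f"{namespace}/{name}"]

registry = ServiceRegistry()
registry.register("sb://contoso/erp", "Invoices",
                  "https://erp.contoso.example/invoices", "IInvoiceService")
print(registry.lookup("sb://contoso/erp", "Invoices")["endpoint"])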

Source of Information : Manning Azure in Action 2010

The road AppFabric has traveled

AppFabric is arguably the most mature part of Windows Azure, at least if you measure by how long it has been publicly available, if not broadly announced. AppFabric started life as BizTalk Services. It was seen as a complementary cloud offering to BizTalk Server. BizTalk is a high-end enterprise-grade messaging and integration platform, and indeed the services fit into that portfolio well. Some joke that it was called BizTalk Services as a clever way to keep it a secret, because BizTalk is one of the most underestimated products Microsoft has. Just ask a BizTalk developer.
When Windows Azure was announced at PDC 2008, the BizTalk Services were renamed to .NET Services. Over the following year, there was a push to get developers to work with the services and put the SDK through its paces. Out of that year of real-world testing came a lot of changes.

When Windows Azure went live in early 2010, the services were renamed again to Windows Azure platform AppFabric to tie it more closely to the Windows Azure platform. Some people were confused by the older .NET Services name, thinking it was just the runtime and base class library running in the cloud, which makes no sense whatsoever.


The two AppFabrics
Don’t confuse the AppFabric we’ll be covering in this chapter with the new Windows Server AppFabric product. They’re currently related by name alone. Over time they’ll merge to become the same product, but they aren’t there quite yet.

Windows Server AppFabric is essentially an extension to Windows Activation Service (WAS) and IIS that makes it easier to host WCF and Windows Workflow Foundation (WF)-based services in your own data center. It supplies tooling and simple infrastructure to provide a base-level messaging infrastructure. It doesn’t supply a local instance of the Access Control Service (ACS) or Service Bus service at this time. Likewise, Windows Azure platform AppFabric doesn’t provide any of the features that Windows Server AppFabric does, at least today. In early CTPs of Windows Azure platform AppFabric, there was the ability to host WF workflows in the cloud, but this was removed as it moved toward a production release.

The AppFabric we’re going to cover in this chapter makes two services available to you: Access Control Service and the Service Bus.


Two key AppFabric services
AppFabric is a library of services that focus on helping you run your services in the cloud and connect them to the rest of the world.

Not everything can run in the cloud. For example, you could have software running on devices out in the field, a client-side rich application that runs on your customers’ computers, or software that works with credit card information that can’t be stored off-premises. The two services in AppFabric are geared to help with these scenarios.

- Access Control Service (ACS)—This service provides a way to easily add claims-based access control to REST services. This means that it abstracts away authentication and the role-based minutiae of building an authorization system. Several of Azure’s parts use ACS for their access control, including the Service Bus service in AppFabric.

- Service Bus—This service provides a bus in the cloud, allowing you to connect your services and clients together so they can be loosely coupled. A bus is simply a way to connect services together and route messages around. An advantage of the Service Bus is that you can connect it to anything, anywhere, without having to figure out the technology and magic that goes into making that possible.

As we look at each of these services, we’ll cover some basic examples. All of these examples rely on WCF. The samples will run as normal local applications, not as Azure applications. We did it this way to show you how these services can work outside of the cloud, but also to make the examples easier to use.

Each example has two pieces that need to run: a client and a service. You can run both simultaneously when you press F5 in Visual Studio by changing the startup projects in the solution configuration.

Source of Information : Manning Azure in Action 2010

Common SQL Azure scenarios

People are using SQL Azure in their applications in two general scenarios: near data and far data. These terms refer to how far away the code that’s calling into SQL Server is from the data. If it’s creating the connection over what might be a local network (or even closer with named pipes or shared memory), that’s a near-data scenario. If the code opening the connection is anywhere else, that’s a far-data scenario.

Far-data scenarios
The most common far-data scenario is when you’re running your application, perhaps a web application, in an on-premises data center, but you’re hosting the data in SQL Azure. This is a good choice if you’re slowly migrating to the cloud, or if you want to leverage the amazing high availability and scale SQL Azure has to offer without spending $250,000 yourself. Picture a web server using SQL Azure in a far-data scenario: the web server is on-premises, the data is in the cloud, and the data is far away from the code that’s using it. In a far-data scenario, the client doesn’t have to be a web browser over the internet. It might be a desktop WPF application in the same building as the web server, or any number of other scenarios. The one real drawback to far data is the processing time and latency of not being right next to the data. In data-intensive applications this would be a critical flaw, whereas in other contexts it’s no big deal.

Far data works well when the data in the far server doesn’t need to be accessed in real time. Perhaps you’re offloading your data to the cloud as long-term storage, and the real processing happens onsite. Or perhaps you’re trying to place the data where it can easily be accessed by many different types of clients, including mobile public devices, web clients, desktop clients, and the like.


Near-data scenarios
A near-data scenario would be doing calculations on the SQL Server directly, or executing a report on the server directly. The code using the data runs close to the data.
This is why the SQL team added the ability to run managed code (with CLR support) to the on-premises version of SQL Server. This feature isn’t yet available in SQL Azure.

One way to convert a far-data application to a near-data one is to move the part of the application that accesses the data as close to the data server as possible. With SQL Azure, this means creating a services tier and running that in a role in Azure. Your clients can still be web browsers, mobile devices, and PCs, but they will call into this data service to get the data. This data service will then call into SQL Server. This encapsulates the use of SQL Azure, and helps you provide an extra layer of logic and security in the mix.
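As a rough sketch of that services tier, the following Python example uses Flask and pyodbc purely as stand-ins; in the book’s context this would be a WCF service running in an Azure role. The server, database, credentials, and schema are placeholders, and the point is only the shape of the idea: clients call a thin data service, and only the data service talks to SQL Azure.

# Sketch of a thin data-service tier that encapsulates SQL Azure access.
import pyodbc
from flask import Flask, jsonify

app = Flask(__name__)

CONN_STR = (
    "Driver={SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"   # SQL Azure listens on 1433 only
    "Database=mydb;Uid=myuser@myserver;Pwd=...;Encrypt=yes;"
)

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    # Clients (browsers, mobile devices, PCs) call this service instead of
    # talking to SQL Azure directly.
    with pyodbc.connect(CONN_STR) as conn:
        row = conn.cursor().execute(
            "SELECT CustomerId, Name FROM Customers WHERE CustomerId = ?",
            customer_id).fetchone()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(customer_id=row.CustomerId, name=row.Name)

if __name__ == "__main__":
    app.run()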


SQL Azure versus Azure Tables
SQL Azure and the Azure Table service have some significant differences. These differences help make it a little easier to pick between SQL Azure and Azure Tables, and the deciding factor usually comes down to whether you already have a database to migrate or not.

If you do have a local database, and you want to keep using it, use SQL Azure. If moving it to the cloud would require you to refactor some of the schema to support partitioning or sharding, you might want to consider some options.

If size is the issue, that would be the first sign that you might want to consider Azure Tables. Just make sure the support Tables has for transactions and queries meets your needs. At 100 TB, the size limit will surely be sufficient.

If you’re staying with SQL (versus migrating to Azure Tables) and are going to upgrade your database schema to be able to shard or partition, take a moment to think about also upgrading it to support multitenant scenarios. If you have several copies of your database, one for each customer that uses the system, now would be a good time to add the support needed to run those different customers on one database, but still in an isolated manner.

If you’re building a new system that doesn’t need sophisticated transactions, or a complex authorization model, then using Azure Tables is probably best. People tend to fall into two groups when they think of Tables. They’re either from “ye olde country” and think of Tables as a simple data-storage facility that’ll only be used for large lookup tables and flat data, or they’re able to see the amazing power that a flexible schema model and distributed scale can give them. Looking at Tables without the old blinders on is challenging. We’ve been beaten over the head with relational databases for decades, and it’s hard to consider something that deviates from that expected model. The Windows Azure platform does a good job of providing a platform that we’re familiar and comfortable with, while at the same time giving us access to the new paradigms that make the cloud so compelling and powerful.

The final consideration is cost. You can store a lot of data in Azure Tables for a lot less money than you can in SQL Azure. SQL Azure gives you a lot more features to use (joins, relationships, and so on), but it does cost more.

Source of Information : Manning Azure in Action 2010

Limitations of SQL Azure

Although SQL Azure is based on SQL Server, there are some differences and limitations that you’ll need to be aware of.

The most common reason for any limitation is the services layer that sits on top of the real SQL Servers and simulates SQL Server to the consumer. This abstraction away from the physical implementation, or the routing engine itself, is usually the cause. For example, you can’t use the USE command in any of your scripts. To get around this limitation, you’ll need to make a separate connection for each database you want to connect with. You should assume that each of your databases is on a different server.
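A minimal sketch of that workaround, assuming pyodbc and placeholder server, database, and table names: rather than issuing USE, the client holds one connection per database.

# Since USE isn't available, open a separate connection per database instead
# of switching databases on one connection. (pyodbc and the names here are
# placeholders; the same pattern applies with any client library.)
import pyodbc

BASE = ("Driver={SQL Server};"
        "Server=tcp:myserver.database.windows.net,1433;"
        "Uid=myuser@myserver;Pwd=...;Encrypt=yes;")

# Instead of: USE SalesDb; ... USE ReportingDb; ...
sales = pyodbc.connect(BASE + "Database=SalesDb;")
reporting = pyodbc.connect(BASE + "Database=ReportingDb;")

# Treat each database as if it lived on a different physical server,
# because in SQL Azure it very well might.
sales.cursor().execute("SELECT COUNT(*) FROM Orders")
reporting.cursor().execute("SELECT COUNT(*) FROM DailySummaries")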

Any T-SQL command that refers to the physical infrastructure is also unsupported. For example, some of the CREATE DATABASE options that configure which filegroup will be used aren’t supported, because as a SQL Azure user, you don’t know where the files will be stored, or even how they will be named. Some commands, such as BACKUP, aren’t supported at all.

You can only connect to SQL Azure over port 1433. You can’t reconfigure the servers to receive connections over any other port or port range.

You can use transactions with SQL Azure, but you can’t use distributed transactions, which enlist several different systems in a single transaction. SQL Azure doesn’t support the network ports that are required to allow this to happen. Be aware that if you’re using a .NET 2.0 TransactionScope, a normal transaction may be elevated to a distributed transaction in some cases. This will cause an error, and you won’t know where it’s coming from.

Each table in your database schema must have a clustered index. Heap tables (a fancy DBA term for a table without a clustered index) aren’t supported. If you import a table without a clustered index, you won’t be able to insert records into that table until one has been created.
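A short sketch of satisfying the clustered-index requirement, again through pyodbc and with a placeholder connection string and hypothetical table and column names:

# Every SQL Azure table needs a clustered index before rows can be inserted.
import pyodbc

conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=tcp:myserver.database.windows.net,1433;"
                      "Database=mydb;Uid=myuser@myserver;Pwd=...;")
cursor = conn.cursor()

# A PRIMARY KEY CLUSTERED constraint satisfies the requirement at creation time.
cursor.execute("""
    CREATE TABLE Orders (
        OrderId  INT      NOT NULL,
        PlacedOn DATETIME NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
    )""")

# For a table imported without one, add a clustered index before inserting rows.
cursor.execute("CREATE CLUSTERED INDEX IX_Staging_RowId ON StagingData (RowId)")
conn.commit()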

All commands and queries must execute within 5 to 30 minutes. Currently the system-wide timeout is 30 minutes. Any request taking longer than that will be cancelled, and an error code will be returned. This limit might change in the future, as Microsoft tunes the system to their customers’ needs.

There are some limitations that are very niche in nature, and more commands are supported with each new release. Please read the appropriate MSDN documentation to get the most recent list of SQL Azure limitations.


Why you can’t use USE
You can’t use the USE command in SQL Azure because the routing layer is stateful: the underlying TDS protocol is session-based. When you connect to a server, a session is created, which then executes your commands. When you connect in SQL Azure you still have this session, and the fabric routes your commands to the physical SQL Server that’s hosting the lead replica for your database. If you called the USE command to switch to a different database, that database might not be on the same physical server as the database you’re switching from. To avoid this problem, the USE command isn’t allowed.

Source of Information : Manning Azure in Action 2010

How SQL Azure works

Although we say that a SQL Azure database is just a SQL Server database in the sky, that’s not entirely accurate. Yes, SQL Server and Windows Server are involved, but not like you might think. When you connect to a SQL Azure server and your database, you aren’t connecting to a physical SQL Server. You’re connecting to a simulation of a server. We’d use the term virtual, but it has nothing to do with Hyper-V or application virtualization.


SQL Azure from a logical viewpoint
The endpoint that you connect to with your connection string is a service that’s running in the cloud, and it mimics SQL Server, allowing for all of the TDS and other protocols and behavior you would expect to see when connecting to SQL Server. This “virtual” server then uses intelligence to route your commands and requests to the backend infrastructure that’s really running SQL Server. This intermediate virtual layer is how the routing works, and how replication and redundancy are provided, without exposing any of that complexity to the administrator or developer. It’s this encapsulation that provides much of the benefit of the Azure platform as a whole, and SQL Azure is no different.

As a rule of thumb, any command or operation that affects the physical infrastructure isn’t allowed. The encapsulation layer removes the concern of the physical infrastructure. When creating a database, you can’t set where the files will be, or what they will be called, because you don’t know any of those details. The services layer manages these details behind the scenes.


SQL Azure from a physical viewpoint
The data files that represent your database are stored on the infrastructure as a series of replicas. The SQL Azure fabric controls how many replicas are needed, and creates them when there aren’t enough available. There’s always one replica that’s elected the leader. This is the replica that will receive all of the connections and execute the work. The SQL Azure fabric then makes sure any changes to the data are distributed to the other replicas using a custom replication fabric. If a replica fails for any reason, it’s taken out of the pool, a new leader is elected, and a new replica is created on the spot.

When a connection is made, the routing engine looks up where the current replica leader is located and routes the request to the correct server. Because all connections come through the router, the lead replica can change and the requests will be rerouted as needed.
The fabric can also move a replica from one server to another for performance reasons, keeping the load smooth and even across the farm of servers that run SQL Azure.
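As a purely conceptual toy model in Python (not SQL Azure’s actual implementation, and with made-up server names), the following sketch shows the routing behavior just described: every request goes to whichever server currently holds the lead replica, and when that replica fails a new leader is chosen and requests are rerouted.

# Toy model of leader lookup and rerouting. Conceptual illustration only.
class ReplicaRouter:
    def __init__(self, replicas):
        self.replicas = replicas          # e.g. {"mydb": ["srv-04", "srv-17", "srv-23"]}
        self.leader = {db: servers[0] for db, servers in replicas.items()}

    def route(self, database):
        # Every connection is routed to the current lead replica.
        return self.leader[database]

    def fail(self, database, server):
        # A failed replica is dropped, a new leader is elected, and the
        # fabric would create a fresh replica elsewhere.
        self.replicas[database].remove(server)
        if self.leader[database] == server:
            self.leader[database] = self.replicas[database][0]

router = ReplicaRouter({"mydb": ["srv-04", "srv-17", "srv-23"]})
print(router.route("mydb"))      # srv-04
router.fail("mydb", "srv-04")    # the leader fails...
print(router.route("mydb"))      # ...and requests are rerouted to srv-17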

What’s really happening behind this encapsulation is quite exciting. The infrastructure layer contains the physical disks and networks needed to redundantly and reliably store the bits that are part of your database. This is similar to the common storage area network (SAN) that many database server infrastructures use. The redundancy of the disks and the tight coupling of the networks provide both performance and reliability for your data.

Sitting on top of this infrastructure layer is a series of servers. Each server runs a set of management services, SQL Server itself, and the SQL Azure fabric. The SQL Azure fabric is the component that communicates with the other servers in this layer to help them communicate with one another. The fabric provides the replication, load balancing, and failover features for the platform.

On top of the servers is a series of services that manages the connection routing (including the firewall features), billing, and provisioning. This services layer is the layer that you connect with and the layer that hides all of the magic.

Deep down under all of these covers, SQL Server really is running. Microsoft has added these layers to provide an automated and redundant platform that’s easily managed and reliable.

Source of Information : Manning Azure in Action 2010

WCF Data Services and AtomPub

WCF Data Services (formerly known as Astoria) is a data-access framework that allows you to create and consume data via REST-based APIs from your existing data sources (such as SQL Server databases) using HTTP.

Rather than creating a whole new protocol for the Table service API, the Windows Azure team built the REST-based APIs using WCF Data Services. Although not all aspects of the Data Services framework have been implemented, the Table service supports a large subset of the framework.

One of the major advantages of WCF Data Services is that if you’re already familiar with the framework, getting started with the Windows Azure Table service is pretty easy. Even if you haven’t used WCF Data Services previously, any knowledge gained from developing against Windows Azure storage will help you with future development that may use the framework.


WCF DATA SERVICES CLIENT LIBRARIES
WCF Data Services provides a set of standard client libraries that abstract away the complexities of the underlying REST APIs and allow you to interact with services in a standard fashion regardless of the underlying service. Whether you’re using WCF Data Services with the Windows Azure Table service or SQL Server, your client-side code will be pretty much the same. Using these libraries to communicate with the Table service allows you to develop simple standard code against the Table service quickly.


ATOMPUB
Clients interact with the Windows Azure Table service using the WCF Data Services implementation of the Atom Publishing Protocol (AtomPub). AtomPub is an HTTP-based, REST-like protocol that allows you to publish and edit resources. AtomPub is often used by blog services and content management systems to allow the editing of resources (articles and blog postings) by third-party clients. Windows Live Writer is a well-known example of a blog client that uses AtomPub to publish articles to various blog platforms (Blogspot, WordPress, Windows Live Spaces, and the like). In the case of Windows Azure storage accounts, tables and entities are all considered resources.

Although WCF Data Services can support other serialization formats (such as JSON), the Table service implementation of WCF Data Services supports only AtomPub.

If you’re interested in reading more about the AtomPub protocol (RFC 5023) you can read the full specification here: http://bitworking.org/projects/atom/rfc5023.html.

Now that you have a basic awareness of AtomPub, we can look at how the AtomPub protocol and the Atom document format are used to create a table using the Table service REST API.
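As a rough sketch of what that looks like on the wire, the following Python example builds an Atom entry for a new table and POSTs it to the Tables resource. The storage account and table names are placeholders, the x-ms-version value reflects the API version of this era, and the SharedKey Authorization header that the real service requires has been omitted here for brevity.

# Sketch of creating a table through the Table service REST API using AtomPub.
import requests
from datetime import datetime, timezone

ACCOUNT = "myaccount"   # placeholder storage account

ATOM_ENTRY = f"""<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <title />
  <updated>{datetime.now(timezone.utc).isoformat()}</updated>
  <author><name /></author>
  <id />
  <content type="application/xml">
    <m:properties>
      <d:TableName>Customers</d:TableName>
    </m:properties>
  </content>
</entry>"""

response = requests.post(
    f"https://{ACCOUNT}.table.core.windows.net/Tables",
    data=ATOM_ENTRY,
    headers={
        "Content-Type": "application/atom+xml",
        "x-ms-version": "2009-09-19",
        # "Authorization": "SharedKeyLite ..."  (required by the real service; omitted here)
    },
)
print(response.status_code)   # 201 Created on success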

Source of Information : Manning Azure in Action 2010

As when describing what to look for in selecting or hiring a good ScrumMaster, I've culled the long list of desirable product owner traits down to five must-have attributes.

Available. By far the most frequent complaint I hear from teams about their product owners is that they are unavailable when needed. When a fast-moving team needs an answer to a question, waiting three days for an answer is completely disruptive to the rhythm it has established. By being available to the team, a product owner demonstrates commitment to the project. The best product owners demonstrate their commitment by doing whatever is necessary to build the best product possible. On some projects this includes doing things like assisting in test planning, performing manual tests, and being actively engaged with other team members.

Business-savvy. It is essential that the product owner understand the business. As the decision maker regarding what is in or out of the product, the product owner must have a deep understanding of the business, market conditions, customers, and users. Usually this type of understanding is built over years of working in the domain, perhaps as a past user of the type of product being developed. This is why many successful product owners come from product manager, marketing, or business analyst roles.

Communicative. Product owners must be good communicators and must be able to work well with a diverse set of stakeholders. Product owners routinely interact with users, customers, management within the organization, partners, and, naturally, others on the team. Skilled product owners will be able to deliver the same information to each of these different audiences while at the same time tailoring their message to best match the audience. A good product owner must also listen to users, customers, and perhaps most important the team. Especially as team members learn more about the product and market (as they should over time, especially on a Scrum project), they will be able to offer valuable suggestions about the product. Additionally, all teams will have much to say to the product owner about the technical risks and challenges of the project. Although it is true that the product owner prioritizes all work for the team, the wise product owner will listen to her team when it recommends some adjustments in those priorities based on technical factors.

Decisive. Another common complaint teams make about their product owners is their lack of decisiveness. When team members go to the product owner with an issue, they want a resolution. Scrum puts a lot of pressure on teams to produce functionality as quickly as possible. Teams are frustrated when a product owner responds to a question with, "Let me call a meeting or convene a task force to work on that." A good team will understand that this is sometimes necessary, but teams are very perceptive at knowing when a product owner is actually just trying to avoid making a hard decision. Just as bad as a product owner who won't make a decision is the product owner who makes the same decision over and over but with different answers. A good product owner will not reverse prior decisions without a good reason.

Empowered. A good product owner must be someone empowered with the authority to make decisions and one who is held accountable for those decisions. The product owner must be sufficiently high up in the organization to be given this level of responsibility. If a product owner is consistently overruled by others in the organization, team members will learn to go to those others with their important questions.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Today's surgeons are highly trained and skilled individuals who have had years of formal education followed by extensive internships. This was not always the case. Pete Moore has written that "the first surgeons had little anatomical knowledge, but plied their trade because they had sharp instruments and strong arms. They often did surgery in their spare time while working as the local barber or blacksmith".

Many organizations choose their first ScrumMasters in much the same way; but instead of seeking sharp instruments and strong arms, they look for management or leadership experience. As they become more experienced with Scrum, organizations eventually realize there are many more factors to consider in selecting ScrumMasters. To help save you from picking a ScrumMaster whose sole qualifications are strong arms and sharp instruments, I have listed the six attributes I have found to be common among the best ScrumMasters I've worked with.


Responsible
A good ScrumMaster is able and willing to assume responsibility. That is not to say that ScrumMasters are responsible for the success of the project; that is shared by the team as a whole. However, the ScrumMaster is responsible for maximizing the throughput of the team and for assisting team members in adopting and using Scrum. As noted earlier, the ScrumMaster takes on this responsibility without assuming any of the authority that might be useful in achieving it. Think of the ScrumMaster as similar to an orchestra conductor. Both must provide real-time guidance and leadership to a talented collection of individuals who come together to create something that no one of them could create alone. Boston Pops conductor Keith Lockhart has said of his role, "People assume that when you become a conductor you're into some sort of a Napoleonic thing— that you want to stand on that big box and wield your power. I'm not a power junkie, I'm a responsibility junkie". In an identical manner, a good ScrumMaster thrives on responsibility—that special type of responsibility that comes without power.


Humble
A good ScrumMaster is not in it for her ego. She may take pride (often immense pride) in her achievements, but the feeling will be "look what I helped accomplish" rather than the more self-centered "look what I accomplished." A humble ScrumMaster is one who realizes the job does not come with a company car or parking spot near the building entrance. Rather than putting her own needs first, a humble ScrumMaster is willing to do whatever is necessary to help the team achieve its goal. Humble ScrumMasters recognize the value in all team members and by example lead others to the same opinion.


Collaborative
A good ScrumMaster works to ensure a collaborative culture exists within the team. The ScrumMaster needs to make sure team members feel able to raise issues for open discussion and that they feel supported in doing so. The right ScrumMaster helps create a collaborative atmosphere for the team through words and actions. When disputes arise, collaborative ScrumMasters encourage teams to think in terms of solutions that benefit all involved rather than in terms of winners and losers. A good ScrumMaster models this type of behavior by working with other ScrumMasters in the organization. However, beyond modeling a collaborative attitude, a good ScrumMaster establishes collaboration as the team norm and will call out inappropriate behavior (if the other team members don't do it themselves).


Committed
Although being a ScrumMaster is not always a full-time job, it does require someone who is fully committed to doing it. The ScrumMaster must feel the same high level of commitment to the project and the goals of the current sprint as the team members do. As part of that commitment, a good ScrumMaster does not end very many days with impediments left unaddressed. There will, of course, be times when this is inevitable, as not all impediments can be removed in a day. For example, convincing a manager to dedicate a full-time resource to the team may take a series of discussions over several days. On the whole, however, if a team finds that impediments are often not cleared quickly, team members should remind their ScrumMaster about the importance of being committed to the team. One way a ScrumMaster can demonstrate commitment is by remaining in that role for the full duration of the project. It is disruptive for a team to change ScrumMasters mid-project.


Influential
A successful ScrumMaster influences others, both on the team and outside it. Initially, team members might need to be persuaded to give Scrum a fair trial or to behave more collaboratively; later, a ScrumMaster may need to convince a team to try a new technical practice, such as test-driven development or pair programming. A ScrumMaster should know how to exert influence without resorting to a dictatorial "because I say so" style. Most ScrumMasters will also be called upon to influence those outside the team. For example, a ScrumMaster might need to convince a traditional team to provide a partial implementation to the Scrum team. Or, a ScrumMaster might need to prevail upon a QA director to dedicate full-time testers to the project. Although all ScrumMasters should know how to use their personal influence, the ideal one will come with a degree of corporate political skill. The term "corporate politics" is often used pejoratively; however, a ScrumMaster who knows who makes decisions in the organization, how those decisions are made, which coalitions exist, and so on can be an asset to a team.


Knowledgeable
Beyond having a solid understanding of and experience with Scrum, the best ScrumMasters also have the technical, market, or other specialized knowledge to help the team pursue its goal. LaFasto and Larson have studied successful teams and their leaders and have concluded that "an intimate and detailed knowledge of how something works increases the chance of the leader helping the team surface the more subtle technical issues that must be addressed". Although ScrumMasters do not necessarily need to be marketing gurus or programming experts, they should know enough about both to be effective in leading the team.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Much has already been written about the job of the ScrumMaster in removing impediments to the team's progress (Schwaber and Beedle 2001, Schwaber 2004). Most ScrumMasters quickly grasp that part of their job. Where many falter— especially during the critical first 6 to 12 months of using Scrum—is in their relationships to their teams, which is why we will focus on that topic here.

Many who are new to the ScrumMaster role struggle with the apparent contradiction of the ScrumMaster as both a servant-leader to the team and also someone with no authority. The seeming contradiction disappears when we realize that although the ScrumMaster has no authority over Scrum team members, the ScrumMaster does have authority over the process. Although a ScrumMaster may not be able to say, "You're fired," a ScrumMaster can say, "I've decided we're going to try two-week sprints for the next month." Ideally, the ScrumMaster tries to get team members to decide this on their own. But, if they do not, the ScrumMaster's authority over the process allows for this decision.

The ScrumMaster is there to help the team in its use of Scrum. Think of the help from a ScrumMaster as similar to a personal trainer who helps you stick with an exercise regimen and perform all exercises with the correct form. A good trainer will provide motivation while at the same time making sure you don't cheat by skipping a hard exercise. The trainer's authority, however, is limited. The trainer cannot make you do an exercise you don't want to do. Instead, the trainer reminds you of your goals and how you've chosen to meet them. To the extent that the trainer does have authority, it has been granted by the client. ScrumMasters are much the same: They have authority, but that authority is granted to them by the team.

A ScrumMaster can say to a team, "Look, we're supposed to deliver potentially shippable software at the end of each sprint. We didn't do that this time. What can we do to make sure we do better the next sprint?" This is the ScrumMaster exerting authority over the process; something has gone wrong with the process if the team has failed to deliver something potentially shippable. But because the ScrumMaster's authority does not extend beyond the process, the same ScrumMaster should not say, "Because we failed to deliver something potentially shippable the last sprint, I want Tod to review all code before it gets checked in." Having Tod review the code might be a good idea, but the decision is not the ScrumMaster's to make. Doing so goes beyond authority over the process and enters into how the team works.

With authority limited to ensuring the team follows the process, the ScrumMaster's role can be more difficult than that of a typical project manager. Project managers often have the fallback position of "do it because I say so." The times when a ScrumMaster can say that are limited and restricted to ensuring that Scrum is being followed.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Like diehards, followers are more opposed to changing the status quo than they are opposed to adopting Scrum in particular. Unlike diehards, however, followers present passive resistance to the change. Dexter, a mid-level programmer at an ecommerce company, was a follower. He asked questions like a skeptic but always with an undercurrent implying that he knew Scrum was a bad thing. Where a skeptic would ask, "How does Scrum work on projects where getting the user experience perfect is absolutely critical?" Dexter would ask, "Scrum doesn't work when getting the user experience perfect is critical, does it?"

I remember one conversation with Dexter in which he asked how many times I would be back to visit his company. "I'm scheduled back in July and October," I said. This was June.

"Nothing after that?" he asked.
"Maybe, but we haven't scheduled anything past October."
"Good. This will be done by the end of the year, then."

I was impressed by his enthusiasm, but I thought his timeline for adopting Scrum was a little aggressive considering the size of his company. "Well, probably not," I cautioned. "There will probably still be some work next year. Not everyone has even started running sprints. But you probably won't need me next year."

"Oh," Dexter replied, "I didn't mean it that way. I meant we'll be onto our next new process by then. After the Christmas shopping season is over, we always change our process."

No one had told me about these annual process changes prior to my first visit with this company, but considering the company's history of adopting a new process every January, it wasn't surprising that Dexter would take a wait-it-out approach to Scrum. In fact, many followers adopt this approach, reasoning that this change will be followed by some later change and they might as well skip a few along the way.

On his own Dexter didn't present a significant hurdle to a successful transition. But, have enough Dexters in your organization, and they can impede a successful transition. Fortunately, followers are not usually very vigorous in their resistance. They will put up minor, passive resistance, mostly hoping that the change goes away. In addition to some of the tools described already, there are a few more tools that can be useful in dealing with followers:

• Change the composition of the team. Some coworkers bring out the best in us; others bring out the worst. Changing the composition of the team will undoubtedly change the nature of resistance. Replacing a grumbling, always-negative saboteur with a skeptic may remove a follower's motivation for resisting.

• Praise the right behavior. Rather than focusing on changing the behavior of the followers, praise some aspects of appropriate behavior whether you observe it in a detractor or supporter. Followers will notice and resistance in some will weaken.

• Involve them. A great way to reduce the resistance of a fence-sitting follower is to involve her in the design of the new process. For example, you might ask a follower to join an improvement community figuring out how to do automated unit testing on your challenging legacy application or to work with others putting together a presentation for the sales group on how Scrum impacts your ability to put dates in contracts.

• Model the right behaviors yourself. Followers need someone to follow. Increase the odds that they follow someone who is exhibiting the right agile behavior by modeling those behaviors yourself. For example, given that collaboration is an essential part of Scrum, strive to demonstrate this in your interactions with others.

• Identify the true barrier. As described in "ADAPTing to Scrum," determine whether a follower is resisting because she lacks the awareness, desire, or ability to use Scrum. Then provide the appropriate support to break through that barrier. If she isn't aware of the reasons for transitioning to Scrum, have a private conversation in which you share them. If she currently lacks the ability to be agile, look for an opportunity to pair her with someone who can help her learn those skills.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Katherine worked as the director of metrics and measurement for a large division of a financial data provider. I had been told she was a supporter of the division's shift toward Scrum but that she had a few questions for me so that she could more effectively do her job of collecting process and product metrics. I have a natural interest in this subject, and such discussions are usually a great chance for me to learn something new. I was looking forward to meeting with Katherine as a chance to discuss some creative, innovative metrics.

Was I ever wrong! Katherine had mastered the art of appearing to support the transition to Scrum while trying to hold onto the status quo. Three years prior to our meeting, software development within this organization had been characterized by missed deadlines and buggy software that didn't meet customer expectations. At that time, Katherine was the newly hired test manager. She instituted some new procedures that dramatically improved things. As a result, teams seemed to be meeting their deadlines (mainly because schedules were padded by what I considered astounding amounts) and quality improved (by creating a separate test group that would spend months testing after a product was handed over to them).

For her efforts in solving these problems, Katherine had been promoted and was now running what was essentially a project management office (PMO). As she told me more about her background and about how she had previously helped her company by introducing various process improvements, I was sure I had found an ally in transitioning her division to Scrum. Instead, what I found was someone who had built herself a very nice empire (through good effort directed at earlier company goals). She was now so enamored of her current status, the number of people reporting to her, and her level of prestige that she was unwilling to consider further changes. Moses could have come down from the mountaintop with the ideal process engraved on stone tablets, and Katherine would have resisted.

Katherine, like other diehards, was opposed to Scrum not because of anything inherent in it but because she did not want to let go of the current state. She was very actively resisting the change but always in ways that allowed her to claim to be supporting it.

A common technique of diehards, and one Katherine employed, is to stall the transition by controlling resources. This is possible because diehards are often found at the middle and upper levels of management where they have enough status to want to keep it. In Katherine's case, she controlled a shared pool of testers. This allowed her to harm the transition by profligately moving testers between projects. There were always plausible reasons: A critical project needed an additional tester, another project needed the expertise of a specific tester, and so on. Katherine's tactics had the effect of ensuring that no team retained the same personnel from start to finish and that many Scrum teams didn't have a tester for the first few sprints.

Many of the tools appropriate for overcoming the resistance of the saboteur will work with the diehard as well. Some additional tools you may want to employ with diehards include

• Align incentives. Diehards are tied to the status quo because of the benefits (either tangible or intangible) that it brings them. If you find a lot of resistance from diehards, consider all incentives that exist in the organization and make sure each aligns well with being agile. I am not referring solely to financial incentives. Nonfinancial incentives such as who gets promoted or otherwise recognized should also be reviewed. If having a large number of people reporting to you creates clout in your organization, for example, you shouldn't be surprised when people resist losing their direct reports.

• Create dissatisfaction with the status quo. Diehards like the status quo. They are not opposed to Scrum because of what it is; they are opposed to it because they like how things are. So, try to create dissatisfaction with the current state. I don't mean to go create a crisis, but if one looms, point it out. If market share is declining, make sure people know. If calls to tech support are on the rise, show people. If an industry newsletter recently heaped praise on a competitor's product, hang copies of the article where everyone can see them. This is consistent with the advice of Stewart Tubbs, author of a textbook on small-group interaction: "A prescient manager is always looking for ways for the organization to improve continuously. She or he is constantly on the lookout for ways to make the organization more effective, and looks to communicate these ideas as a way to generate dissatisfaction with the status quo".

• Acknowledge and confront fear. Diehards resist in part because of the uncertainty of what their jobs will look like with Scrum. They are usually very happy with their current positions. Fear of an uncertain future can be very powerful. How will my role change? How will I be evaluated? What will come next in my career? These are all powerful questions often in the mind of the diehard. If you know the answers and are in a position to give them, do so. If the answers are unknown, say so but commit—if you can and if you value the work of the diehard—to working with him to find the answers. You can also help calm these fears by clarifying what is expected not just of the diehard but of others with whom he may work.

In Katherine's case, her vice president (Christine) and I sought to find the right role for her in the new organization. We talked with her about our confidence that her past experience in guiding the company toward dramatic process improvements put her in a key position for helping the company again. Christine clarified Katherine's role in the new organization. Unfortunately, Katherine's sense of identity and self-worth were so tightly coupled to the process that she had helped put in place that she could not help the company move beyond it. In the end, she left the company.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

It can be easy to mistake a saboteur for a skeptic—after all, some amount of uncertainty about any change can be a good thing. I made the mistake of confusing a saboteur with a skeptic while teaching a class at a search engine company. Elena, a participant in the class, was asking a lot of good, challenging questions. I didn't know her role in the organization, but because many class participants were deferential to her, I figured she was important in one sense or another, and so I spent a lot of time answering her questions. If I was right and she was an opinion leader, and if I could convert her by overcoming her objections one by one, I knew that would be a big step forward for this company.

At the end of the day, I met with the director who had invited me to teach that class in her company. We talked about how the class went, and I told her how I hoped I'd made progress helping Elena to see the light. The director said, "I should have warned you about her. She hates Scrum. She runs a shared user experience design group and is completely opposed to everything about Scrum. She's been fighting it since we started six months ago. I was surprised to see that she'd signed up for your class."

Elena was a saboteur—opposed to Scrum and actively resisting it. Like most saboteurs, she had been soliciting others to her cause. Despite mounting evidence within her company that Scrum was helping create better products more quickly, she continued to argue that it would not. I asked Elena directly why she was so strongly opposed. She said, "I have the best stateroom on the Titanic and I'm not moving!"

In addition to some of the tools offered for overcoming the resistance of skeptics, the following tools have proven useful with saboteurs:

• Success. As long as there is any doubt about whether Scrum is the appropriate approach, saboteurs will use those doubts to spread resistance. "Yes, it worked on our web projects," they may grudgingly offer, "but, it won't work on our back-end projects." Success on many different types of projects is a surefire way of weakening those arguments.

• Reiterate and reinforce the commitment. Saboteurs need to know that the company is committed to the transition. Any sign of weakness and—like a lion eyeing a tasty-looking antelope—the saboteur will attack. Faced with a large number of saboteurs, a strong message from as high up the executive chain as possible will at least let them know resistance is futile.

• Move them. If possible, find another team, project, or division and move the saboteur there. Unless you are a small organization or are doing an all-in transition, it is quite likely that a saboteur can continue to be a productive team member elsewhere—until Scrum starts to permeate that team, project, or division, that is.

• Fire them. This is the extreme end of moving someone. But if someone is opposed to a stated corporate direction and is actively resisting it, then this is quite possibly the appropriate action.

• Be sure the right people are talking. A thriving set of communities focused around topics of special interest can be invaluable in producing enough momentum to overcome resistance. Hearing how others within a community of practice are succeeding with Scrum can lessen a saboteur's resolve to continue resisting.

Elena was fortunate to work in a large organization in which she could be moved to a different department that was still taking a wait-and-see attitude toward Scrum. She eventually came around to the point where she is again a productive team member, though even today she will admit she is secretly waiting for a change back to the old way of working.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Thad had no choice but to adopt Scrum. His company had been acquired and was being told by the new owners to begin using Scrum immediately. This wasn't a direction Thad would have chosen himself, and he had serious concerns about it. Would the daily scrums add value, especially with a product owner who worked from her home 600 miles away? How could a new product as complicated, large, and novel as theirs be done without a lengthy up-front design phase? He could see the value of iterating through the construction phase, but surely an up-front design was still needed.

Thad was a skeptic. I knew this from his willingness to admit that Scrum was fine for other domains, technologies, or environments, just not his. Thad openly acknowledged the appropriateness of Scrum for web development but questioned it for his company's scientific applications.

As the most experienced member on his team and one of the longest-tenured developers in the organization, Thad was an opinion leader. Others looked to him to see how he would behave under the mandate to adopt Scrum. Thad exhibited a healthy amount of doubt; people should not be expected to change how they work without the opportunity to ask hard questions or be expected to fully embrace Scrum until they've worked on a Scrum team and experienced the benefits for themselves. Thad's uncertainty, however, went beyond doubt to the point where he was resisting the transition in small but important ways.

Because he didn't see the benefit of daily scrums, Thad consistently pushed to skip them. At the end of one meeting he said, "It sounds like we're all on stuff that will take at least today to finish. So let's skip tomorrow's daily scrum and just meet again the day after. Every other day is probably good enough anyway." Sometimes his ScrumMaster could successfully counter these arguments, but not always. After all, the ScrumMaster was new to Scrum, too.

Additionally, like many skeptics, Thad would sometimes claim to support a Scrum practice but would then continue to work as he always had. For instance, he said that he supported working iteratively and claimed to understand the value of having a potentially shippable product at the end of each sprint. In truth, though, Thad didn't believe that all parts of their product could be designed, coded, and tested within a single sprint. Consequently, he habitually pushed the team to bring more work than it could handle into each sprint. Overcommitting was his way of making sure that some features were worked on over at least two sprints.

Some of the tools that are useful in overcoming the resistance presented by skeptics include

• Let time run its course. If you can keep the transition effort moving forward, evidence of the benefits of Scrum will start to accumulate. Even if this evidence is merely anecdotal, it lessens the amount of resistance a skeptic can put up.

• Provide training. Some of a skeptic's resistance is a result of not having done something or not having seen it done before. Training—whether formal classroom training or as provided by an external coach brought in to work with the team—helps by giving the skeptic the experience of seeing firsthand how it can work.

• Solicit peer anecdotes. If you've never experienced something yourself but your friends or those you relate to have, their personal stories will resonate with you. If there are Scrum success stories from other teams in your organization, make sure the skeptics hear them. If Scrum is new to your organization, invite experienced agile outsiders in. Inviting a local software architect to speak at lunch about her company's success with Scrum will do wonders in persuading your own skeptical architects.

• Appoint a champion skeptic. In their book Fearless Change, Mary Lynn Manns and Linda Rising suggest designating someone as the company's "champion skeptic" (2004). The champion skeptic should be influential, respected, and well connected but should not be openly hostile to the change. The champion skeptic is invited to all meetings and is given a chance to point out problems. Use this information to sincerely address the concerns the champion skeptic brings up. Doing so demonstrates open-mindedness and prevents any one concern from escalating into a crisis.

• Push the issue. Put the skeptic in charge of some part of the transition. Suppose you are struggling with a skeptical tester who does not believe testing can be done in the same sprint as the design and programming of a feature. Challenge that tester to identify five ways to help bring the team closer to the goal of testing within the same sprint. The tester won't want to come up empty, for fear that the next person handed the task will successfully identify five ways. Then ask the team either to try all five things or to select the one or two ideas that seem most promising initially.

• Build awareness. Presumably you have chosen to do something as difficult as introduce Scrum because there is a compelling need to do so. Perhaps a new competitor has entered your space, perhaps your last product took a year too long to release, or perhaps you have any of a number of similar reasons. Make sure that those involved in the transition are aware of the better future that will follow a successful transition.

In Thad's case, we were able to overcome his skepticism by pushing the issue. We put a stop to his passive resistance to iterating by switching to shorter sprints. The team had been using four-week sprints but was bringing in about six weeks' worth of work in each sprint planning meeting. I told them we were going to try two-week sprints until they got a handle on how much could actually be completed in a sprint. Thad didn't like this idea. In the next sprint planning meeting, to point out the foolishness of working in such short sprints, Thad pushed the team to commit to what he thought was a ridiculously small amount of work. It turned out to be the right amount; for the first time the team finished all its work inside one sprint. As team members came to see the value of completing what they committed to, Thad's subtle efforts to force the team to overcommit were thwarted by the team's new insistence that it bring into the sprint only what it could handle.

Although pushing the issue helped in Thad's case, the biggest factor in eradicating his resistance was time. It just took time (and a mounting pile of anecdotal evidence that it could be done) to sway Thad.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

People resist changing to Scrum for many different reasons. Some may resist because they are comfortable with their current work and colleagues. It has taken years to get to their current levels in the organization, to be on this team, to work for that manager, or to know exactly how to do their jobs each day. Others may resist changing to Scrum because of a fear of the unknown. "Better the devil you know than the devil you don't" is their mantra. Still others may resist due to a genuine dislike or distrust of the Scrum approach. They may be convinced that building complex products iteratively without significant up-front design will lead to disaster.

Just as there are many reasons why some people will resist Scrum, there are many ways someone might resist. One person may resist with well-reasoned logic and fierce arguments. Another may resist by quietly sabotaging the change effort. "You think no documentation is a good idea? I'll show you no documentation," the passive resister may think, proceeding to write nothing down, even bug reports the team has agreed should continue to be stored in the defect tracking system. Another may resist by quietly ignoring the change, working the old way as much as possible, and waiting for the next change du jour to come along and sweep Scrum away.

Each act of resistance carries with it information about how people feel about adopting Scrum. As a change agent or leader in the organization, your goal should be to understand the root cause of an individual's resistance, learn from it, and then help the person overcome it. There are many techniques you can use for doing this. But unless the technique is carefully chosen, it is unlikely to have the desired effect. To help select the right technique, I find it useful to think about how and why someone is resisting. We can group the reasons why someone is resisting Scrum into two general categories:

• They like the status quo.
• They don't like Scrum.

Reasons for resistance fall into the first category if they are actually a defense of the current approach. This type of resistance to changing to Scrum would likely result no matter what type of change was being contemplated. Reasons fall into the second category if they are arguments against the specific implications of beginning to work in an agile manner. Tables 6.2 and 6.3 provide some examples of different reasons for resistance and how each would be categorized.

Categorizing how individuals resist is even simpler: Is the resistance active or passive? Active resistance occurs when someone takes a specific action intended to impede or derail the transition to Scrum. Passive resistance occurs when someone fails to take a specific action, usually after saying he will. Combining the two general reasons people may resist Scrum with the two ways in which they will do it leads to the standard two-by-two matrix.

TABLE 6.2 People may resist Scrum because they like how things are today.
Examples of Liking the Status Quo
I like who I work with.
I like the power or prestige that comes with my current role.
This is the way I was trained to do it and the only way I know how.
I don't like change of any sort.
I don't want to start another change initiative because they always fail anyway.

TABLE 6.3 People may resist because they don't like Scrum.
Examples of Not Liking Scrum
I think Scrum is a fad and we'll just have to switch back in three years.
Scrum is a bad idea for our products.
I got into this field so that I could put headphones on and not talk to people.
Scrum doesn't work with distributed teams like ours.

Each quadrant of the matrix is given a name descriptive of the person who resists in the way indicated by the labels on the axes. A skeptic is someone who does not agree with the principles or practices of Scrum but who only passively resists the transition. Skeptics are the ones who politely argue against Scrum, forget to attend the daily scrum a little too often, and so on. I am referring here to individuals who are truly trying to stop the transition, not people with the healthy attitude of "this sounds different from anything I've done before, but I'm intrigued. Let's give it a try and see if it works."

Above the skeptics in the matrix are the saboteurs. Like skeptics, saboteurs resist the transition more from a dislike of Scrum than from support for whatever software development process exists currently. Unlike a skeptic, a saboteur provides active resistance by trying to undermine the transition effort, perhaps by continuing to write lengthy up-front design documents, and so on.

On the left side of the matrix are those who resist because they like the status quo. They are comfortable with their current activities, prestige, and coworkers. In principle, these individuals may not be opposed to Scrum; they are, however, opposed to any change that puts their current situation at risk. Those who like the status quo and who actively resist changing from it are known as diehards. They often attempt to prevent the transition by rallying others to their cause.

The bottom left of the matrix shows the followers, who like the status quo and resist changing from it passively. Followers are usually not enraged by the prospect of change, so they do little more than hope it passes like a fad. They need to be shown that Scrum has become the new status quo.
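
Putting the two axes together gives the four labels just described. The snippet below is only an illustrative sketch of that categorization: the quadrant names come from the text above, but the function and its parameter names are my own and not taken from the source.

```python
def classify_resister(likes_status_quo: bool, resists_actively: bool) -> str:
    """Map the two resistance dimensions onto the four quadrant names.

    likes_status_quo: True if the resistance is really a defense of the
        current approach; False if it is aimed at Scrum itself.
    resists_actively: True for active resistance, False for passive.
    """
    if likes_status_quo:
        return "diehard" if resists_actively else "follower"
    return "saboteur" if resists_actively else "skeptic"

# Someone who dislikes Scrum but resists only passively is a skeptic.
print(classify_resister(likes_status_quo=False, resists_actively=False))
```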

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Many of the specific arguments you'll hear against Scrum are predictable and common across many organizations. Others, of course, will be unique to your organization.
You can often anticipate the arguments you'll hear by thinking through the challenges presented by your organization, domain, technologies, products, culture, and people. In doing so, you'll find that many of the objections (both the universal and the specific ones) can be categorized as either waterfallacies or agile phobias. A waterfallacy is a mistaken belief or idea about agile or Scrum created from working too long on waterfall projects. Examples include

• Scrum teams don't plan, so we're unable to make commitments to customers.

• Scrum requires everyone to be a generalist.

• Our team is spread around the world. Self-organization clashes with some cultures, so we can't be agile.

• Our team is spread around the world, and Scrum requires face-to-face communication.

• Scrum ignores architecture, which would be disastrous for the type of system we build.

• Scrum is OK for simple websites, but our system is too complicated.

An agile phobia is a strong fear or dislike of agile practices, usually due to the uncertainty of change. Some of the agile phobias you are likely to encounter include the following:

• I'm afraid I'll have nothing to do.
• I'm afraid I'll be fired if the decisions we make don't work out.
• I'm afraid of conflict and of trying to reach consensus.
• I'm afraid people will see how little I really do.
• It's so much easier and safer when someone tells me exactly what to do.
• It's so much easier and safer when I can tell people exactly what to do.

Although a waterfallacy can often be countered with rational arguments, anecdotes, and evidence, an agile phobia is usually much more personal and emotional. Sometimes people just need to know that their objections have been heard.

Throughout this book I have tried to preempt as many waterfallacies and agile phobias as possible. Many chapters include "Objection" sidebars, which provide my advice on how to address common questions and misunderstandings about Scrum.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

The intersection of the four factors and the discussion of timing leave out probably the most important factor in the success of a pilot project—the individuals involved. I deliberately chose to leave people out of the discussion of selecting the right pilot project under the assumption that we can select the project and team independently. That is, we can select the best project as our Scrum pilot and can then look around and assemble the right team for that project. I understand this is an uncommon luxury in many organizations—the project and the team often come as a package, just like the ham and eggs in a Scrum team's favorite breakfast. If you cannot separate the decisions of the ideal pilot project and the ideal pilot team, simply consider all factors together in selecting the best available pilot.

Put initial teams together with an eye toward compatibility, constructive dissension among team members, willingness and ability to learn and adapt, technical skills, communication skills, and so on. Of these, the most important consideration in selecting a pilot team is the willingness of the individuals to try something different. Ideally, all will have moved through the awareness and desire steps of ADAPT. When presented with the opportunity to influence who will be on the pilot team, I look to create a combination of the following types of individuals:

• Scrum lobbyists. The project may not be big enough to include everyone who has been lobbying to adopt Scrum, but I want to be biased toward including as many of these individuals on the project as I can. It would be painful for them to have to sit on the sidelines, even though they'd still be hopeful for the project's success.

• Willing optimists. These individuals understand that a new development approach is needed but didn't go so far as to actively argue for a change to Scrum in the past. Knowing what they now do about Scrum, they believe it sounds promising and want to see it succeed.

• Fair skeptics. I don't want someone on the project who will work to sabotage the pilot or the teamwork necessary to become a Scrum team, but this does not mean I want to avoid all skeptics. It can be very beneficial to include a well-respected, vocal skeptic as long as the skeptic has demonstrated a past willingness to admit being wrong or change an opinion. These individuals can become some of the transition's strongest supporters when convinced of the benefits through hands-on experience.

Of course, all of this must be mixed with an eye toward combining the right set of skills for the project. If your pilot project's goal is to develop a video game, you had better put an animator on the team. I also look for individuals who have a track record of working together successfully. Sometimes you find an existing entire team that can become the pilot team. Other times, you can think back over the past few years and put together people who worked together well on past projects.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

I was about to start this section with something like, "Scrum pilot projects have become more and more rare over the past four years. The benefits of Scrum have become so recognized that companies are now forgoing pilot projects and jumping right in." And then I decided that perhaps I should look up the definition of pilot project. Perhaps, like "inconceivable" to Vizzini in The Princess Bride, it did not mean what I thought it meant. What I found was that there are indeed two slightly different meanings. One is that a pilot project is a test, with the results used to determine if more of whatever is being tested will be done. This is the type of pilot project that most companies now bypass: they know they want to use Scrum; they don't need to "pilot it" to verify that.

The other definition I found is that a pilot project is undertaken to provide guidance to subsequent projects; it pilots the way in doing something new. It is this second meaning that I'm interested in—the pilot that leads the way rather than the one that is conducted as a test. As an industry we have enough evidence that Scrum works; what individual organizations need to learn is how to make Scrum work inside their organizations. So, they often conduct one or more pilots as learning projects.


Four Attributes of the Ideal Pilot Project
Selecting the right project as a pilot can be challenging. Jeff Honious, vice president in charge of innovation at Reed Elsevier, led his company's transition to Scrum. He and colleague Jonathan Clark wrote of their struggle to select the right pilot.

Finding the right project was the most critical and challenging task. We needed a meaty project that people would not dismiss as being a special case, yet we did not want a project to fill every possible challenge—too much was riding on its success. (2004)

Not every project is equally suited to be your first. The ideal pilot project sits at the confluence of project size, project duration, project importance, and the engagement of the business sponsor. You may find it impossible to identify the "perfect" pilot project. That's OK. Consider the projects you do have and make appropriate trade-offs between the four factors. It is far better to pick a project that is close enough and get started than it is to delay six or more months waiting for the perfect pilot to present itself.

Duration. If you select a project that is too short, skeptics will claim that Scrum works only on short projects. At the same time, if you select a project that is too long, you risk not being able to claim success until the project is over. Many traditionally managed projects claim to be on track 9 months into a 12-month schedule, yet in the end are over budget and late, so a Scrum project proclaiming the same may not be very convincing. What I find best is to select a project whose length is near the middle of what is normal for an organization. Ideally and frequently this is around three or four months. This gives a team plenty of time to start getting good at working within sprints, to enjoy it, and to see the benefits for the team and for the product. A three- or four-month project is also usually sufficient for claiming that Scrum will lead to similar success on longer projects.

Size. Select a project that can be started with one team whose members are all collocated, if at all possible. Start with one team, even if the pilot project will grow to include more teams. Try to select a pilot project that will not grow to more than five or so teams, even if such projects will be common in your organization. Not only is coordinating work among that many Scrum teams more than you want to bite off initially, but you also probably wouldn't have time to grow from one team to more than five anyway if you are also looking for a project that can be completed in three or four months.

Importance. It can be tempting to select a low-importance, low-risk project. If things go badly, not much will be lost. And people may not even notice a failure on a low importance project. Don't give in to this temptation. Instead, pick an important project. An unimportant project will not get the necessary attention from the rest of the organization. Additionally, some of the things required of a team transitioning to Scrum are difficult; if the project isn't important, people may not do all that is required of them. Early agilist and inventor of the Adaptive Software Development process Jim Highsmith advises, "Don't start with an initial 'learning project' that is of marginal importance. Start on a project that is absolutely critical to your company; otherwise it will be too difficult to implement all the hard things Scrum will ask of you".

Business sponsor engagement. Adopting Scrum requires changes on the business side of the development equation, not just the technical side. Having someone on the business side who has the time and inclination to work with the team is critical. An engaged business sponsor can help the team if it needs to push against entrenched business processes, departments, or individuals. Similarly, there is no one more useful in promoting the success of the project afterward than a sponsor who got what was expected. One sponsor commenting to another that a recent project tried Scrum and delivered more than past projects did will do wonders in getting other sponsors to ask their teams to also try the new approach.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

Getting started with Scrum is one thing; spreading it across the organization is another. Unless you have chosen an all-in transition, you will need to build upon the successes of the first few teams as you move Scrum into other teams. There are three general patterns you can use for spreading Scrum beyond the initial teams. The first two patterns involve taking a team that has begun to be successful with Scrum and then using its members to seed new teams. The third pattern takes a different approach and involves spreading Scrum using internal coaches.


Split and Seed
The split-and-seed pattern is typically put into use after the first couple of teams have adopted Scrum and run at least a handful of sprints. By that point, team members are beginning to understand what it is like to work on a Scrum team. They certainly won't have figured everything out, but sprints should be ending with working software, and team members should be working together well. In short, the team probably has a long way to go to get good, but Scrum is starting to feel natural.

It is at this unlikely point that we split the team up.

In the split-and-seed pattern, one functioning Scrum team is split in two, with each half of the original team forming the basis of a new team. New people are then added to these splinter teams to form new Scrum teams. A large initial team could be used to seed as many as four new teams, especially if the initial team included some members with previous Scrum experience or a natural aptitude for it.

The new team members can be either newly hired employees or existing employees moving onto their first Scrum projects. The idea behind the split-and-seed pattern is that newly formed, second-generation Scrum teams will have an easier time learning the mechanics and practices of Scrum because they will have guidance from the experienced members of the team. The new teams are left together for a few sprints until each team begins to jell and its new members have developed a feel for Scrum. Then, again, the functioning teams are broken up into smaller teams and new members are added to fill out the teams. This cycle is repeated until Scrum has been fully introduced.

In a large, enterprise rollout of Scrum, you do not need to leave each generation of teams together for the same number of sprints. You can instead split each team whenever it's ready.


Grow and Split
The grow-and-split pattern is a variation of the split-and-seed approach. It involves adding team members until the team is large enough that it can be comfortably split in two. Immediately after splitting, each of the new teams will probably be on the small end of the desirable size range of five to nine members. After allowing the new teams one sprint at this reduced size, new members are added until each team becomes large enough that it can also be split. This pattern repeats until the entire project or organization has transitioned.


Internal Coaching
Philips Research's Scrum adoption is an example of the third pattern for spreading Scrum: internal coaching. Philips had begun adopting Scrum and was facing a problem. Like many organizations, it had some teams that were excelling with their new agile approach and others that were struggling. Philips' Christ Vriens solved the problem by using internal coaching. On each team that was doing well, he identified one person who truly understood what it meant to be agile and designated that person as a coach to another team that had not yet progressed as far in its understanding and use of Scrum.

Coaches were given specific responsibilities, such as attend sprint planning, review, and retrospective meetings; attend one daily scrum each week; and be available for two hours each week to provide other assistance to the mentored team as needed. Coaches were not excused from their responsibilities on their original teams, but it was acknowledged that each coach would have fewer hours to contribute to those teams.



Reasons to Prefer Split and Seed
The split-and-seed pattern's advantages are rooted in its quick-spreading nature.

• You can add teams more quickly than with most other approaches. Each new team should ideally include at least 2 members of the previous team. This means that, possibly as soon as after 2 or 3 sprints, a team of 8 people could conceivably be split into four 2-person groups used to seed a second set of teams. If each of those 4 teams had 8 people, you would have 32 Scrum team members. A few sprints later these 32 people could be used to seed 16 more teams, each with 8 team members, for a total of over 100 Scrum-experienced people after only 5 or 6 sprints. (The short sketch after this list works through the same arithmetic.)

• Each team has someone with Scrum experience to help guide them. Only the very first teams to transition will be forced to do so without someone on the team with Scrum experience. All subsequent teams will benefit from having at least two (and hopefully three or four) team members with at least a couple of sprints of experience under their belts. This can help reduce the discomfort some people will feel about transitioning to something new and unfamiliar.
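
For anyone who wants to check the growth arithmetic in the first point above, here is a minimal sketch. It assumes, as that example does, eight-person teams, two-person seed groups, and a split of every team at the end of each generation of a few sprints; those numbers are only the figures from the illustration, not rules.

```python
# Rough sketch of the split-and-seed growth described above.
# Assumptions (taken from the example, not rules): every team has 8 members,
# each team splits into 2-person seed groups, and every team splits at the
# end of each generation (roughly every 2 or 3 sprints).

TEAM_SIZE = 8
SEED_SIZE = 2

def split_and_seed_growth(generations: int, initial_teams: int = 1) -> list[int]:
    """Return the number of Scrum-experienced people after each generation."""
    teams = initial_teams
    totals = []
    for _ in range(generations):
        # Each existing team yields TEAM_SIZE // SEED_SIZE seed groups,
        # and each seed group is filled out to a full team.
        teams = teams * (TEAM_SIZE // SEED_SIZE)
        totals.append(teams * TEAM_SIZE)
    return totals

print(split_and_seed_growth(2))  # [32, 128] -- matches the "over 100" figure
```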



Reasons to Prefer Grow and Split
The grow-and-split pattern spreads Scrum a bit more slowly than does the split-and-seed approach but comes with some key advantages.

• You don't have to destroy any existing teams. The primary problem with the split-and-seed strategy is that teams who are just starting to jell and get a handle on Scrum are demolished to form the basis of new teams. Breaking up a good team is always something that should be done with caution. Growing the team before splitting it overcomes this shortcoming because the team is kept together until it is large enough to form two complete teams, each with agile experience.

• Team members feel more continuity from sprint to sprint. When using the split-and-seed pattern, teams are constantly being split and reformed before a true sense of team camaraderie is established. Because the grow-and-split approach divides a team only when it has gotten too big, members can stay together longer, and there is less feeling of disruption.



Reasons to Prefer Internal Coaching
Internal coaching is generally my preferred approach. Not surprisingly, there is a strong set of advantages to it, including the following:

• Well-running teams do not need to be split. A drawback to the prior patterns is that functioning teams are split to form the foundations of new teams. When using internal coaches, teams stay intact with only the minor disruption of an occasional outsider (the coach) joining the team.

• Coaches can be hand-selected for new teams. An approach like the split-and-seed pattern takes a whole-team approach to coaching: The new team is coached collectively by the seeding team members. Some of those individuals will be good in that role; some will not. With internal coaching, the most appropriate coach can be selected for each new team.

• Coaches can be moved from team to team. After a while a team and its coach become stale. A fresh pair of eyes can be helpful in identifying new ways to improve. When internal coaches move from team to team, they act like bees, pollinating each team with new ideas.



Choosing Your Approach
There are two driving factors in choosing among these three patterns for spreading Scrum: How quickly do we need to spread Scrum to additional teams, and do we have good internal coaches who can assist the new teams? The answers to these questions will be key to helping you choose the pattern that best fits your organization.

In general, consider using split and seed when you are in a hurry. The split-and-seed approach can be one of the fastest ways to spread Scrum through an organization. The approach can be accelerated in a couple of different ways: First, you can split teams a bit earlier than might be ideal. Second, you can split teams into more new teams than might be ideal, perhaps four new teams instead of two, even if this means that some new teams get some less-than-ideal coaches from the earlier teams.

Be cautious, though, about using split and seed if the technology and domain cannot support moving people among teams. Changing team membership is always detrimental to productivity. That loss can be offset, however, by the benefits of quickly spreading Scrum through a large project or organization. However, in some cases, it is just not practical to move people between teams. For example, seeding a .NET team with Java programmers just because they have three sprints of Scrum experience would not be a good idea.

The grow-and-split pattern is perhaps the most natural approach, as it mirrors what would probably happen if no one intervened to help the spread of Scrum. In most organizations, people move between projects, carrying good practices with them. The grow-and-split approach is simply a more directed approach than letting this happen naturally, which would take much, much longer.

Consider using grow and split when there is not enough urgency to push you to the split-and-seed approach. Because growing and splitting a team is a less aggressive (and less risky) approach than splitting and seeding a team, it is often used in similar situations but when there is a bit less urgency. Also consider using grow and split when the team size is growing anyway. True to its name, the grow-and-split approach works best when teams are expanding.

Internal coaching can be used as a spreading strategy on its own, or it can be used to augment either of the other approaches. This approach works best under certain conditions:

• When the group is large enough that good practices won't fully spread on their own. One of the strengths of this pattern is that coaches can move from one team to another, spreading good practices as they do so. If your organization is small enough that sharing good practices won't be a problem, then you may not need this approach.

• When splitting teams is not practical for your projects. If any of the drawbacks to splitting teams concern you, the internal coaching approach is a good antidote.

• When you have enough internal coaches or can bring in outside help. An ideal coach is someone who fundamentally understands Scrum and has probably worked in an agile way for years before even hearing the word. These individuals can be hard to identify in advance; they aren't necessarily the most experienced team members. If you don't have enough good coaches, consider using one of the other patterns initially. After enough teams have run a few sprints, you can begin to augment a seeding approach with internal coaches. You can also spread the coaches you do have out a bit more by having each coach assist more than one team. If budget allows, you can also bring in outside consultants until you have built up your internal coaching corps.

Iterating Toward Agility
Historically, when an organization needed to change, it undertook a "change program." The change was designed, had an identifiable beginning and ending, and was imposed from above. This worked well in an era when change was necessary only once every few years. Christopher Avery has written, "I think in the 1960s and 1970s this approach was probably more frequently successful than it has been in the 1990s and today because the frequency of change has intensified as competition has become global, and the model has broken down". Avery continues by saying that "if the changes are coming so fast and furious that programmed change won't work, perhaps we have to arrange ourselves (organizationally speaking) to digest many more smaller changes on a continual basis".

Whether you are just starting to adopt Scrum or you are at the point where you are ready to fine-tune your use of Scrum, you should manage the effort in an agile way. Following an iterative transition process—making small changes on a continual basis—is a logical way to adopt a development process that is itself iterative. Doing so will be much more likely to result in a successful and sustainable transition. This is why I believe that the effort of adopting Scrum is best managed using Scrum itself. With its iterative nature, fixed timeboxes, and emphasis on teamwork and action, it seems best suited to manage the enormous project of becoming and then growing agile with Scrum.

In 2004, the leaders of Shamrock Foods realized that change was coming too quickly in their industry. As one of the ten largest food distributors in the United States, Shamrock had for 20 years used a conventional, top-down strategic planning process, dedicating months each year to creating a 5-year plan that was out of date before the ink dried. To address this problem, CEO Kent McClelland abandoned the company's 20-year-old approach and began to apply a Scrum-based iterative strategic planning process.

Shamrock's process revolved around quarterly strategic "scrums" [sprints]: Team members met at an offsite location for a day to evaluate the company's performance against the action plans from the previous quarter. We asked them to identify the most important things they had learned about the company's strategy since the previous meeting and to suggest how those insights should be integrated in the strategy going forward. The group created new action plans for the upcoming period. In addition to the quarterly scrums [sprints], the participants met every year for three days, during which time people were asked to step further back and revisit the company's strategic assumptions.

Forty-five managers and employees participated in these sprints and were chosen to represent each division and functional area. At the start of each quarterly sprint, this group selected up to a handful of key areas in which they agreed the company should improve. These were referred to as themes. Because Shamrock was applying Scrum to an organizational improvement effort rather than software development, the themes represented broad business goals. Examples included increasing revenue on Shamrock's house brands, improving how it serviced large customers like Burger King, and improving the company's ability to recruit, retain, and develop good talent.

Many corporate improvement initiatives fail because plans are not made specific and actionable. Because they were using Scrum, Shamrock employees went beyond just identifying themes for improvement: "Planning participants created and prioritized a handful of specific and measurable strategic initiatives that would advance each strategic theme. Then they built detailed action plans and set measurable outcomes they thought could be achieved within 90 days".

Not only does the Shamrock story illustrate the broad applicability of Scrum, it serves as an example of how Scrum can be used to manage an organizational improvement effort. In this chapter, we look at how to use Scrum first to adopt Scrum and then to continuously improve by engaging communities of like-minded employees, such as the 45 people who guided Shamrock's improvement effort.

Source of Information : Pearson - Succeeding with Agile Software Development Using Scrum 2010

