SQL Server 2008 Security Catalog Views

SQL Server 2008 provides several security catalog views and a dynamic management view, all of which can be used to retrieve information about encryption functionality. The following views are available in SQL Server 2008:

• sys.asymmetric_keys: This catalog view returns information about the asymmetric key pairs installed in the current database. The information returned by this view includes the name, asymmetric key ID, private key encryption type, encryption algorithm used, public key, and additional information about each installed asymmetric key pair.

• sys.certificates: This catalog view returns information about the certificates installed in the current database. The information returned by this view is similar to that returned by the sys.asymmetric_keys view. It includes the name, certificate ID, private key encryption type, name of the certificate’s issuer, certificate serial number, and additional certificate-specific information (such as subject, start date, and expiration date).

• sys.crypt_properties: This catalog view returns a row for each cryptographic property associated with a securable in the database. The information returned about each securable includes the class of the securable, ID of the securable, encryption type used, and SHA-1 hash of the certificate or asymmetric key used to encrypt the securable.

• sys.dm_database_encryption_keys: This dynamic management view returns information about the encryption state of a database and the encryption keys used in the database. The encryption_state column returns 0 when no database encryption key is present, 1 when the database is unencrypted, and 3 when the database is encrypted; other values indicate that an encryption, decryption, or key-change operation is currently in progress.

• sys.key_encryptions: This catalog view returns a row for every key encryption, as specified by the CREATE SYMMETRIC KEY statement’s ENCRYPTION BY clause. Information returned includes the ID of the encrypted key, encryption type, and thumbprint of the certificate or symmetric key used to encrypt the key. A thumbprint, in terms of SQL Server 2008 security catalog views, is a SHA-1 hash of a certificate or an asymmetric key, or a GUID for a symmetric key. Several of the security catalog views return a thumbprint of certificates, asymmetric keys, or symmetric keys.

• sys.master_key_passwords: This catalog view returns a row for each DMK password added with the sp_control_dbmasterkey_password stored procedure. Each row returns an ID of the credential to which the password belongs and a GUID of the original database at creation time. The GUID is used by SQL Server to identify credentials that may contain passwords that protect the DMK in the event that automatic decryption fails. Passwords used to protect the DMKs are stored in the credential store.

• sys.openkeys: This catalog view returns information about all open encryption keys in the current session. Information returned includes the ID and name of the database that contains the key; IDs, names, and GUIDs of each open key; and the date and time the key was opened.

• sys.symmetric_keys: This catalog view returns a row for each symmetric key in the database. Information returned includes the name, ID, GUID, length, and algorithm of the symmetric key. Also returned are the ID of the principal who owns the key and the dates that the symmetric key was first created and last modified.

Securables in SQL Server 2008 are resources and objects for which the SQL Server database engine regulates authorization and access. Securables are divided into three scopes for which SQL Server can regulate access: Server, Database, and Schema. The Server scope includes securables like endpoints, logins, and databases. The Database scope includes users, roles, certificates, asymmetric key pairs, symmetric keys, schemas, and other Database-scoped securables. The Schema scope contains tables, views, functions, procedures, constraints, and other objects. Not all securables have cryptographic properties, but the sys.crypt_properties security catalog view returns information for those that do.
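As a quick sketch of how these views fit together, the following queries (run in the context of the database you are inspecting, and assuming you have permission to view the relevant metadata) list the installed certificates and check each database's encryption state:

```sql
-- List certificates installed in the current database
SELECT name, certificate_id, pvt_key_encryption_type_desc, expiry_date
FROM sys.certificates;

-- Check the encryption state of every database that has an encryption key
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,        -- e.g., 1 = unencrypted, 3 = encrypted
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```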

Source of Information : Apress Accelerated SQL Server 2008

Blogging has become an important way for people to share experiences with friends or the world at large. Web sites that host blogs provide word processing capabilities, but they are often simplistic. That’s where Word 2007 can come in. It may seem like a strange idea to use Word to write your blog posts, but there are a few reasons to consider it. If you are new to blogging and familiar with Word, you can stay with the application you know. Also, Word offers many more formatting features than are normally available in an online editor. Finally, Word lets you focus on what you want to say with a minimum of coding concerns.

Create A Post
Click the Office button, click New, and double-click New Blog Post. A blank document appears. If you haven’t already registered Word with a blog account, then the Register A Blog Account dialog box will open. You can choose Register Now or Register Later, but for now, click the Office Online link to see how the supported blog providers work with Word 2007. Word 2007 explicitly supports: WordPress (free; www.wordpress.com), Blogger (free; www.blogger.com), Windows Live Spaces (free; spaces.live.com), Microsoft Windows SharePoint Services (for businesses), Community Server (prices vary; www.communityserver.com), and TypePad (prices vary; www.typepad.com). If your blog provider isn’t listed, it still may work with Word 2007. Word supports most sites that use either the Atom or MetaWeblog API (application programming interface). We will look at setting up the provider later. Return to Word and click Register Later.

Blog Tabs
A blank page appears with the field Enter Post Title Here at the top. Notice that the Ribbon contains Blog Post and Insert tabs. The Blog area of the Blog Post tab contains the functions needed to publish a blog post created in Word and to create and manage your blog provider accounts. The Insert tab contains a few areas, the largest of which is Illustrations. Of the functions on this tab, you’ll probably most often use Picture, to insert images, and Hyperlink, to create links. Click in the Enter Post Title Here area and type a title for the blog post. Now click below the horizontal line and compose your blog. Use any of the formatting tools available on the Ribbon.

Post Your Blog
Now that you have something to post, you will need to upload it to your blog host. If you don’t have an account, we recommend WordPress or Blogger to get started, because both are free and easy to set up. We will use WordPress as an example, but the process is similar for all the services. You will need to know your blog’s username and password and, in some cases, the URL of your blog.

Click the Blog Post tab on the Ribbon, click the arrow under the Publish button, and select Publish As Draft. Publishing as a draft lets you preview the blog entry before it goes live. (You can use Publish to make the blog post live right away.) Click Register An Account in the Registering A Blog Account dialog box. Click the drop-down list, choose WordPress (or another provider you already have set up), and then click Next. The New WordPress Account dialog box opens. In the top box, type or paste the URL of the blog between the brackets and then delete the brackets. For example, if your blog address is http://betterblogss.wordpress.com, the entry should look like this: http://betterblogss.wordpress.com/xmlrpc.php. Type in your username and password and click the Picture Options button. Choose My Blog Provider from the drop-down list and click OK. Click OK again.

Click Yes to confirm that you’d like to post the entry, and you should see a message at the top of your entry saying it has been posted. Now log on to your blog site and preview the post. It may look slightly different than it did in Word, because the templates that blog providers use have presentation rules that override Word’s. Make any necessary adjustments and then publish your post for the world to see.

Source of Information : Smart Computing / January 2009

MySQL Security - The Grant Tables

MySQL uses a series of grant tables to store users and their privileges. The tables are ordinary MyISAM tables that live in the mysql database. Storing the security information in grant tables makes a lot of sense, but it also means that if the server isn’t configured correctly, any user can make security changes by altering the data in those tables!

MySQL’s grant tables are the heart of its security system. MySQL now gives you nearly full control of security with the GRANT, REVOKE, and DROP USER commands. However, manipulating the grant tables used to be the only means of performing certain tasks. For instance, in old MySQL versions, the only way to remove a user completely was to DELETE that user from the user table, then FLUSH PRIVILEGES.

We don’t recommend changing the grant tables directly, but you should still understand how they work so you can debug unexpected behavior. We encourage you to examine the grant table structures with DESCRIBE or SHOW CREATE TABLE, especially after you use GRANT and REVOKE to change privileges. You’ll learn more from doing that than from reading about them. Here are the grant tables in the order in which MySQL consults them when checking whether a user is authorized to do a specific operation:

user
Each row contains a user account (the username, hostname, and encrypted password) and the user’s global privileges. MySQL 5.0 adds optional per-user limits, such as the number of connections the user is allowed to have.

db
Each row contains database-level privileges for a specific user.

host
Each row contains privileges to one database for a user connecting from a given host. The entries are “merged” with entries in db when checking database-level access. Though we list it as a grant table, the GRANT and REVOKE commands never modify the host table. You can only add and remove entries manually. We recommend that you do not use this table to prevent maintenance problems and confusing behavior later.

tables_priv
Each row contains table-level privileges for a specific user and table. It also contains view privileges.

columns_priv
Each row contains column-level privileges for a specific user and column.

procs_priv (new in MySQL 5.0)
Each row contains privileges for a specific user and stored routine (stored procedure or function).
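To see the grant tables for yourself, you can run something like the following from a MySQL client (read-only inspection; as noted above, avoid modifying these tables directly):

```sql
USE mysql;
SHOW TABLES;                   -- includes user, db, host, tables_priv, columns_priv, procs_priv
DESCRIBE user;                 -- per-account columns: User, Host, Password, global privilege flags
SELECT User, Host FROM user;   -- which accounts exist, and from which hosts they may connect
```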

Source of Information : O'Reilly High Performance MySQL, Second Edition

MySQL Security - Privileges

MySQL uses your account information (username, password, and location) to authenticate you. Once it has done so, it must decide what you’re allowed to do. It does this by consulting your privileges, which are usually named after the SQL queries they let you execute. For example, you need the SELECT privilege on a table to retrieve data from it.

There are two kinds of privileges: those that are associated with objects (such as tables, databases, and views), and those that aren’t. Object-specific privileges grant you access to specific objects. For example, they control whether you can retrieve data from a table, alter a table, create a view in a database, or create a trigger. MySQL 5.0 and newer have many additional object-specific privileges, because of the introduction of views, stored procedures, and other new features.

Global privileges, on the other hand, let you perform functions such as shutting down the server, executing FLUSH commands, running various SHOW commands, and viewing other users’ queries. In general, the global privileges let you do things to the server, and the object-based privileges let you do things to the server’s contents (although that distinction is not always sharply defined). Each global privilege has far-reaching security implications, so be very cautious when granting any of them!

MySQL privileges are Boolean: a privilege is either granted or not. Unlike some other database systems, MySQL doesn’t have the notion of explicitly denied privileges. Revoking a privilege doesn’t forbid the user to perform an action; it merely removes the privilege to perform the action if it exists. MySQL’s privileges are also hierarchical, with a twist or two.
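A brief sketch of the distinction (the database, user, and host names here are illustrative):

```sql
-- Object-specific privileges: apply to particular databases, tables, or columns
GRANT SELECT, INSERT ON shop.* TO 'app'@'localhost';

-- Global privileges: apply to the server as a whole
GRANT PROCESS, RELOAD ON *.* TO 'admin'@'localhost';

-- Revoking removes a granted privilege; it does not create an explicit denial
REVOKE INSERT ON shop.* FROM 'app'@'localhost';

SHOW GRANTS FOR 'app'@'localhost';
```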

Source of Information : O'Reilly High Performance MySQL, Second Edition

IIS 7 Managed Modules

In addition to native modules, IIS 7.0 ships with several modules developed using managed code. Some of the managed modules, such as UrlAuthorization, have a native module counterpart that provides a native alternative to the managed module. Although modules developed using native code are generally faster and more efficient with memory and other system resources, managed code modules can often be a less time-consuming and more flexible alternative to develop. Note that managed modules require that the ManagedEngine module be installed.

The following table lists the managed modules that ship with IIS 7.0:

AnonymousIdentification. Manages anonymous identifiers, which are used by features that support anonymous identification such as ASP.NET profile.

DefaultAuthentication. Provides an authentication object to the context when no other authentication method succeeds.

FileAuthorization. Verifies that a user has permission to access the requested file.

FormsAuthentication. Supports authentication by using Forms authentication.

OutputCache. A managed code alternative to the native HttpCacheModule.

Profile. Manages user profiles by using ASP.NET profile, which stores and retrieves user settings in a data source such as a database.

RoleManager. Manages a RolePrincipal instance for the current user.

Session. Supports maintaining the session state, which enables storage of data specific to a single client within an application on the server. Note that without this module, the session state will be unavailable in your applications.

UrlAuthorization. Determines whether the current user is permitted access to the requested URL, based on the user name or the list of roles that a user is a member of.

UrlMappingsModule. Supports configurable mapping of a real URL to a more user-friendly URL (that is, URL Rewrite).

WindowsAuthentication. Sets the identity of the user for an ASP.NET application when Windows authentication is enabled.

Almost all of the feature set of IIS that was implemented as part of the core web server system in previous versions is now delivered as a set of modular plug-in components.

Components can be installed or removed as needed, thus streamlining the server workload and customizing to the specific application. Since this modular structure is based on the worker process object, this customization can be applied to any level, from discrete applications to the global web server environment.
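Because the module list is configurable, removing a managed module you don't need is a one-line configuration change. A minimal web.config sketch, assuming an application that never uses session state:

```xml
<configuration>
  <system.webServer>
    <modules>
      <!-- Session state will be unavailable to the application after this -->
      <remove name="Session" />
    </modules>
  </system.webServer>
</configuration>
```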

Source of Information : Wrox Professional IIS 7

IIS 7 Native Code Modules

HTTP Modules
The following modules provide HTTP-specific tasks related to client–server interaction.

HttpRedirectionModule. Supports configurable redirection for HTTP requests to a local resource.

ProtocolSupportModule. Performs protocol-related actions (such as setting response headers and redirecting headers), implements the Trace and Options HTTP verbs, and manages keep-alive support via configuration controls.

Security Modules
The following lists the modules that perform security-related functions. A separate module exists for each authentication mechanism, allowing you to select which authentication mechanisms are supported on your server and to remove those that you don’t need.

Note that you must install at least one authentication module. Without at least one authentication module, IIS cannot determine whether the request is authorized to access the relevant system resources. IIS checks for a valid user object after the authentication phase and returns a 401.2 error if it doesn’t find one.

AnonymousAuthenticationModule. Performs Anonymous authentication when no other authentication method succeeds, or if no other authentication module is present. Typically, this module would be removed for an intranet or secured membership application.

BasicAuthenticationModule. Performs Basic authentication as described in RFC 2617.

DigestAuthenticationModule. Performs Digest authentication as described in RFC 2617. The IIS host must be part of an Active Directory domain.

IISCertificateMappingAuthenticationModule. Maps SSL client certificates to a Windows account. SSL must be enabled with the requirement to receive client certificates for this module to work.

CertificateMappingAuthenticationModule. Similar to the previous module, but performs Certificate Mapping authentication using Active Directory. SSL must be configured for this module to work, and the IIS host must be a member of an Active Directory domain. Caution: Requests may be allowed if Active Directory Certificate Mapping is configured to protect a directory but the module is removed!

RequestFilteringModule. Performs URLScan tasks such as configuring allowed verbs and file extensions, setting limits, and scanning for bad character sequences. This module is the successor of the ISAPI filter UrlScan.DLL that shipped with IIS 5.0 and 6.0.

UrlAuthorizationModule. Performs URL authorization based on configuration rules.

WindowsAuthenticationModule. Performs Windows authentication (NTLM or Kerberos).

IpRestrictionModule. Restricts access to IPv4 clients based on a list of addresses in the IIS configuration.

Content Modules
The following modules provide functionality related to static web-site content, such as images and plain HTML:

DefaultDocumentModule. Displays a default document from a list of default files in the configuration when no explicit document has been identified in the request. If a matching default document is not found, a 404 result will be returned.

DirectoryListingModule. Lists the contents of a directory if no file is explicitly requested—for example, when the request is something like http://www.server1.com/path/ or just http://www.server1.com. Note that if the DefaultDocumentModule is installed, a default document match will be attempted first. If this module is not installed, and either the default document module is not installed or there is no matching default document found, a 404 (not found) error will result.

ServerSideIncludeModule. Implements server-side includes for those requests ending in .stm, .shtm, or .shtml.

StaticFileModule. Delivers static file content such as plain HTML and images. The list of file extensions supported is determined by the staticContent/mimeMap configuration collection. If this module is not present, requests for static content return an HTTP 200 (OK) response, but the entity body (page) will be blank.
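As a sketch of how the content modules are driven by configuration, the following web.config fragment sets the default-document list used by the DefaultDocumentModule and controls the DirectoryListingModule's browsing behavior (file names are illustrative):

```xml
<configuration>
  <system.webServer>
    <defaultDocument enabled="true">
      <files>
        <clear />
        <add value="index.html" />
        <add value="default.htm" />
      </files>
    </defaultDocument>
    <!-- Leave directory listings off unless you intend to expose folder contents -->
    <directoryBrowse enabled="false" />
  </system.webServer>
</configuration>
```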

Compression Modules
The following two modules perform gzip compression in the request-processing pipeline. Most modern web browsers and search engine indexers support this compression technique.

DynamicCompressionModule. Applies gzip compression to outbound responses produced by applications.

StaticCompressionModule. Performs gzip compression of static content in memory as well as persistent in the file system.

Caching Modules
The following modules manage caching of responses to requests. Note that for user mode caching, the cache resources are defined under the user account mapped to the request, whereas the kernel mode cache is handled by the Http.sys identity.

FileCacheModule. Provides user mode caching for files and handles on files opened by the server engine and modules, reducing file access overheads and improving request delivery times.

HTTPCacheModule. Provides kernel mode and user mode caching in Http.sys and manages the output cache, including cache size and cache profiles as defined via configuration controls.

TokenCacheModule. Caches Windows security tokens for password-based authentication schemes (Anonymous, Basic, IIS client certificate). For example, a password-protected HTML page that references 50 images that are also protected would normally result in 51 logon calls to the local account database or, even worse, to an off-box domain controller. Using the TokenCacheModule, only one logon event is called and the result is cached, with the remaining reference requests authorized through that cached authentication token.

UriCacheModule. Provides user mode caching of URL information, such as configuration settings. With this module, the server will read configuration data only for the first request for a particular URL, and reuse it on subsequent requests until it changes.

Logging and Diagnostics Modules
The following modules provide support for functions related to web-site and web application diagnostics and logging. Logging includes ordinary web request logs, as well as application execution logging during run time or failure.

CustomLoggingModule. Provided for legacy support of custom logging modules such as ODBC support. This module also supports the ILogPlugin COM interface, but you should use the new Http Module API for any new development.

FailedRequestsTracingModule. Implements tracing of failed requests, taking definitions and rules for failed requests via configuration.

HttpLoggingModule. Implements the standard web-site logging functions performed by Http.sys.

RequestMonitorModule. Implements the IIS 7.0 Runtime State and Control Interface (RSCA). RSCA allows its consumers to query for runtime information such as currently executing requests, the start/stop state of a web site, or currently executing application domains.

TracingModule. Reports events to Microsoft Event Tracing for Windows (ETW).

CustomErrorModule. Sends rich HTML content to the client on server error, and allows you to customize that default content. Without this module, IIS will send blank pages with minimal information on any server error, including 404.

ConfigurationValidationModule. Validates configuration issues, such as when an application is running in Integrated mode but has handlers or modules declared in the system.web section, and displays relevant error information if a problem is detected.

Extensibility Support Modules
The following modules support extending the web server platform to produce dynamic content and special functionality:

IsapiModule. Implements functionality for ISAPI extensions mapped in the handlers configuration section (modules="IsapiModule") or invoked by a URL request made directly to the .dll.

IsapiFilterModule. Implements ISAPI filter functionality, such as legacy mode ASP.NET or SharePoint.

ManagedEngine. Provides integration of managed code modules in the IIS request-processing pipeline. If this module is not installed, then managed code modules will not work, even if they are installed.

CgiModule. Implements the Common Gateway Interface (CGI) to allow the execution of external programs like Perl, PHP, and console programs to build response output.

FastCgiModule. Supports FastCGI, which provides a high-performance alternative to CGI.

Source of Information : Wrox Professional IIS 7

MySQL Security Terminology

Authentication
Who are you? MySQL authenticates you with a username, a password, and the host from which you are connecting. Knowing who you are is a prerequisite to determining your privileges.

Authorization
What are you allowed to do? Shutting down the server, for example, requires that you have the SHUTDOWN privilege. In MySQL, authorization applies to global privileges, which aren’t associated with any specific schema objects (such as tables or databases).

Access control
What data are you allowed to see and/or manipulate? When you try to read or modify data, MySQL checks to see that you’ve been granted permission to see or change the columns you are selecting. Unlike global privileges, access controls apply to specific data, such as a particular database, table, or column.

Privileges and permissions
These terms mean roughly the same thing—a privilege or permission is how MySQL represents an authorization or access right.
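Pulling these terms together in one illustrative snippet (the account name and schema are made up):

```sql
-- Authentication: the account is identified by username AND host
CREATE USER 'jim'@'192.168.1.%' IDENTIFIED BY 'example-password';

-- Access control: permission to see and manipulate specific data
GRANT SELECT, UPDATE ON shop.orders TO 'jim'@'192.168.1.%';

-- Authorization: a global privilege not tied to any schema object
GRANT SHUTDOWN ON *.* TO 'jim'@'192.168.1.%';
```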

Source of Information : O'Reilly High Performance MySQL, Second Edition

IIS Core Server and Modules

Unlike previous versions, with core server implementations that were successively larger and more complex, IIS 7.0 provides a sleek core server system cut down to the bare bones of a high-performance and robust processing engine. The basic functionality is broken into three basic components that act as the foundation of arguably the most innovative and flexible web server system ever released:

• Http.sys — The HTTP Listener does nothing more than listen on port 80 for inbound web requests and then pass those requests to the IIS 7.0 core.

• svchost.exe — The WWW Publishing Service provides the basic web server functions of identifying the specific web-site destination and managing resources and execution of the relevant worker process(es) used to handle the request.

• w3wp.exe — One or more worker processes that handle all the remaining request processing.

At first glance, the IIS 7.0 core server appears similar to previous versions. IIS 6.0 introduced the concept of independent worker processes to isolate independent applications running on the same physical server and thereby prevent a failure in one application from affecting any other.

If you are familiar with the structure of IIS 6.0, you may recognize the same basic structure of HTTP Listener, WWW Publishing Service, and application pool; but the major difference in IIS 7.0 lies in the implementation of the worker processes. Although IIS 7.0 also executes each request inside independent application pools, the way these processes handle the request is an entirely new approach.

The application pool process itself can now be considered as a simple workflow or processing pipeline. Each stage in the request life cycle is referred to as an event, and modules provide the relevant functionality for the event processing. For example, when the worker process reaches the authenticateRequest event stage, it will hand off processing to any active module providing that function.

HTTP Modules
Out of the box, IIS 7.0 ships with more than 40 individual modules. The default installation activates many of these, which, as you will discover later in this chapter, are not all required. In fact, you can obtain a perfectly functional web server using only a handful of the default modules. To understand how the core server works, it is useful to take a closer look at each of the modules that ship with IIS 7.0. There are two basic categories:

• Native Code Modules — Generally a binary .dll file, developed using unmanaged languages such as C and C++.

• Managed Code Modules — Developed using managed .NET languages such as C# or VB.NET and executed by the .NET runtime (for example, ASP.NET modules).

Source of Information : Wrox Professional IIS 7

Using SQL Server XML Datatype

SQL Server has an XML datatype you can use to natively store XML data in SQL Server databases. If you are still using SQL Server 2000, you can store XML, but it must be in a string-based column, or you must shred the data into relational columns using OPENXML or BulkLoad. By using a native XML type, SQL Server 2005 and 2008 can support richer operations against your XML data, such as constraints, cross-domain queries that combine relational data and XQuery, and XML indexes.

Another benefit of using the native XML datatype is that XML data is inherently different from relational data in its structure. XML data is in a hierarchical structure that can be recursive, and XML supports a different query language than relational systems.

There are many scenarios where using relational modeling is a better choice than XML, and vice versa. For example, if you have data that is very interrelated, such as customers, their orders, the products in the orders, and the stores that sell those products, you could try to implement a solution using XML, but it would be quite challenging. How do you structure your hierarchy? Do you want a customer to be a top-level node and then have orders for each customer appear underneath? How do you write a query that returns all customers with at least five orders, where each order is greater than $1,000, and the name of the store where the customers purchased the products? Another problem is that you will repeat data throughout the hierarchy, such as product names, product prices, and so on, because of the hierarchical nature of XML. Plus, if you want to delete a customer but not the products or orders under that customer, you can’t do so, because the orders and products are children under the customer element.

On the other hand, using a relational model as a solution, you can quickly model your data and query the information. You may be thinking that in this scenario, you should just shred your XML data into the relational database, as you saw in the previous chapter. However, shredding has its own issues, in that you do not always get back what you put in, since you are not guaranteed the same XML when you reconstitute the shredded data. Shredding adds another layer of complexity in terms of code creation and maintenance. Also, any reasonably complex XML document will need to be shredded across many tables, requiring extensive join operations across those tables to reconstitute the XML. You’ll also end up with a complex, annotated schema full of tables and many foreign key relations into which to shred that XML.

Now, there are scenarios where modeling your data using XML is very useful. First, XML can be more flexible than relational models. So, if you need a free-form structure to store data, XML can be a good choice. Also, XML is self-describing and easily portable across applications or even platforms. Plus, if your data has sparse entries or needs rich multivalue functionality, XML is a good choice as your data format. Finally, if you truly have document-centric data such as Microsoft Office documents, you will want to store this information as XML, since Microsoft Office documents lack rigid structures. XML provides the flexibility to store and query the information in the documents in a rich way.

Even if you choose XML as the format for your data, you will need to decide between using the XML datatype, shredding your XML into relational columns, and storing the XML using the (n)varchar(max) or varbinary(max) type. If you care about the order of elements, and you want the ability to use XML programming paradigms such as XPath and XQuery, you will want to use the XML datatype to store your XML data. If your XML data is best represented using a relational model, you can shred your data into relational columns using annotated schemas, just as you could in SQL Server 2000. Finally, if you need to preserve the XML data exactly as it was created, including whitespace and attribute ordering, then you will want to store the XML in an (n)varchar(max) or a varbinary(max) column. Some scenarios (such as legal documents) may require this.

Finally, SQL Server can support a hybrid model, whereby you may use the XML datatype but promote certain properties—for example, key document properties such as author, last modification time, or last save time—into relational columns, or you may shred your XML into relational columns but keep the original copy in an nvarchar column. SQL Server provides the flexibility to meet the needs of your application when working with XML data.
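A minimal T-SQL sketch of the hybrid approach described above (the table and column names are hypothetical):

```sql
-- An xml column alongside promoted relational properties
CREATE TABLE Docs (
    DocID        int IDENTITY PRIMARY KEY,
    Author       nvarchar(100),   -- promoted from the XML for fast relational filtering
    LastModified datetime,
    Body         xml NOT NULL
);

INSERT INTO Docs (Author, LastModified, Body)
VALUES (N'Jane Doe', GETDATE(),
        N'<doc><title>Quarterly Report</title><section>Numbers go here.</section></doc>');

-- XQuery against the native xml column
SELECT DocID,
       Body.value('(/doc/title)[1]', 'nvarchar(200)') AS Title
FROM Docs
WHERE Author = N'Jane Doe';
```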

We want to make one thing very clear, though, since this will cause you issues in the long term if you do not remind yourself of it regularly: if your data is quite structured, in that your XML does not look hierarchical and is normalized, you should use the relational model. Do not use XML. XML is targeted at semistructured or unstructured data. If you need to dynamically add schemas or data on the fly that you never expected, XML is your best choice. Do not make the mistake of thinking everything is a nail to bang with the XML hammer in SQL Server.

Source of Information : Apress Accelerated SQL Server 2008

Evolution of SQL Server

SQL Server has evolved over the years into the product it is today. The following is a summary of that process:

1988 - SQL Server - Joint application built with Sybase for use on OS/2.

1993 - SQL Server 4.2, a desktop database - A low-functionality, desktop database, capable of meeting the data storage and handling needs of a small department. The concept of a database that was integrated with Windows and had an easy-to-use interface proved popular.

1994 - Microsoft splits from Sybase.

1995 - SQL Server 6.05, a small business database - Major rewrite of the core database engine. First “significant” release. Improved performance and significant feature enhancements. Still a long way behind in terms of the performance and feature set of later versions, but with this version, SQL Server became capable of handling small ecommerce and intranet applications, and was a fraction of the cost of its competitors.

1996 - SQL Server 6.5 - SQL Server was gaining prominence such that Oracle brought out version 7.1 on the NT platform as direct competition.

1998 - SQL Server 7.0, a web database - Another significant rewrite to the core database engine. A defining release, providing a reasonably powerful and feature-rich database that was a truly viable (and still cheap) alternative for small-to-medium businesses, between a true desktop database such as MS Access and the high-end enterprise capabilities (and price) of Oracle and DB2. Gained a good reputation for ease of use and for providing crucial business tools (e.g., analysis services, data transformation services) out of the box, which were expensive add-ons with competing databases.

2000 - SQL Server 2000, an enterprise database - Vastly improved performance, scalability, and reliability saw SQL Server become a major player in the enterprise database market (now supporting the online operations of businesses such as NASDAQ, Dell, and Barnes & Noble). A big increase in price (although still reckoned to be about half the cost of Oracle) slowed initial uptake, but the excellent range of management, development, and analysis tools won new customers. In 2001, Oracle (with 34% of the market) finally ceded its No. 1 position in the Windows database market (worth $2.55 billion in 2001) to SQL Server (with 40% of the market). In 2002, the gap had grown, with SQL Server at 45% and Oracle slipping to 27%.

2005 - SQL Server 2005 - Many areas of SQL Server have been rewritten, such as the ability to load data via a utility called Integration Services, but the greatest leap forward was the introduction of the .NET Framework. This allowed .NET SQL Server–specific objects to be built, giving SQL Server the flexible functionality that Oracle had with its inclusion of Java.

2008 - SQL Server 2008 - The aim of SQL Server 2008 is to deal with the many different forms that data can now take. It builds on the infrastructure of SQL Server 2005 by offering new data types and the use of Language Integrated Query (LINQ). It also deals with data that resides in many different places and forms, such as XML data, data on compact devices, and massive database installations. Also, it offers the ability to set rules within a framework to ensure databases and objects meet defined criteria, and to report when these objects do not meet those criteria.

Source of Information : Apress Beginning SQL Server 2008 for Developers From Novice to Professional

Why SQL Server 2008?

SQL Server faces competition from other databases, not only from other Microsoft products such as Microsoft Access and Microsoft Visual FoxPro, but also from competitors such as Oracle, Sybase, DB2, and Informix, to name a few. Microsoft Access is found on a large number of PCs. The fact that it is packaged with some editions of Office and has been around for a number of years in different versions of Office has helped make this database ubiquitous; in fact, a great number of people actually do use the software. Unfortunately, it does have its limitations when it comes to scalability, speed, and flexibility, but for many small, in-house systems, these areas of concern are not an issue, as such systems do not require major database functionality.

Now we come to the serious competition: Oracle and Sybase. Oracle is seen as perhaps the market leader in the database community, and it has an extremely large user base. There is no denying it is a great product to work with, if somewhat more complex to install and administer than SQL Server; it fits well with large companies that require large solutions. There are many parts to Oracle, which make it a powerful tool, including scalability and performance. It also provides flexibility in that you can add on tools as you need them, making Oracle more accommodating in that area than SQL Server. For example, SQL Server 2008 forces you to install the .NET Framework on your server whether you use the new .NET functionality or not. However, Oracle isn’t as user friendly from a developer’s point of view in areas like its ad hoc SQL Query tool and its XML and web technology tools, as well as in how you build up a complete database solution; other drawbacks include its cost and the complexity involved in installing and running it effectively. However, you will find that it is used extensively by web search engines, although SQL Server could work just as effectively. With the new functionality in SQL Server 2008, Oracle will be under pressure to expand its existing functionality to meet this challenge. SQL Server has always been a one-purchase solution, such that (providing you buy the correct version) tools that allow you to analyze your data or copy data from one data source such as Excel into SQL Server will all be “in the box.” With Oracle, on the other hand, for every additional feature you want, you have to purchase more options.

Then there is Sybase. Yes, it is very much like SQL Server with one major exception: it has no GUI front end. Sybase iAnywhere, which is mainly used for small installations, does have a front end, but the top-of-the-range Sybase does not. To purists, there is no need for one, as GUI front ends are for those who don’t know how to code in the first place—well, that’s their argument, of course, but why use 60+ keystrokes when a point, click, and drag is all that is required? Sybase is also mainly found on Unix, although there is a Windows version around. You can get to Sybase on a Unix machine via a Windows machine using tools to connect to it, but you still need to use code purely to build your database solution. It is very fast and very robust, and it is only rebooted about once, maybe twice, a year. Another thing about Sybase is that it isn’t as command- and feature-rich as SQL Server, which has a more powerful programming language and broader functionality.

Each database has its own SQL syntax, although they all will have the same basic SQL syntax, known as the ANSI-92 standard. This means that the syntax for retrieving data, and so on, is the same from one database to another. However, each database has its own special syntax to maintain it, and trying to use a feature from this SQL syntax in one database may not work, or may work differently, in another.

So SQL Server seems to be the best choice in the database marketplace, and in many scenarios it is. It can be small enough for a handful of users, or large enough for the largest corporations. It doesn’t need to cost as much as Oracle or Sybase, but it does have the ability to scale up and deal with terabytes of data without many concerns. As you will see, it is easy to install, as it comes as one complete package for most of its functionality, with a simple install to be performed for the remaining areas if required.

Now that you know the reasons behind choosing SQL Server, you need to know which versions of SQL Server are out there to purchase, what market each version is aimed at, and which version will be best for you, including which version can run on your machine.

Source of Information : Apress Beginning SQL Server 2008 for Developers From Novice to Professional

IIS 7.0 Key Characteristics

IIS 7.0 was built with the following characteristics in mind:

• Simple to use—As previously mentioned, IIS 7.0 does not use the same metabase scheme as previous versions of IIS, but rather uses a series of plain-text XML files for configuration. With IIS 7.0, the state is in the files, thus a change to the files results in an immediate change to the server, site, or application configuration.

• Securely built—The default configuration is set to allow only the system administrator to configure the server, sites, and applications. By using Feature Delegation, system administrators can securely make site and application administration available to down-level administrators without giving more permissions on the server than necessary. The system by default does not store sensitive information like passwords. However, if there is a need to store sensitive information, it is encrypted on the hard disk. In addition to these security features, applications can be isolated to prevent other applications from sharing or reading the settings.

• Extensible — Just as the IIS Manager is extensible, so is the IIS configuration. This is made easy because the schema of IIS is contained in XML files. To extend the schema, just place an XML file in the schemas folder. You’ll see below how the settings are arranged in “sections” within the configuration files.

• Low TCO—By changing to the XML file-based schema, IIS 7.0 is easier to deploy and manage. The file-based schema allows for web settings to be published in the same files as the web-site content. Used in conjunction with Feature Delegation, this means the system administrator doesn’t have to be as involved with every site change made on the server. The web.config file can contain both the IIS settings and the ASP.NET settings for a web site, permitting centralized control over the site settings. The file-based structure also makes it possible to use standard file system-based tools for maintenance (backup and restore) and security.

• Compatible with previous versions — Applications created for IIS 6.0 will continue to run on IIS 7.0 by calling interfaces such as Admin Base Objects (ABO), the ADSI provider, and the IIS 6.0 WMI provider. Because .NET Framework 2.0 was built into IIS 7.0 from the ground up, current .NET applications will continue to work by calling System.Configuration and System.Web.Configuration. The config files will continue to follow the structure of the web.config and machine.config files from IIS 6.0, as well as add IIS configuration settings to the files.

Source of Information : Wrox Professional IIS 7


The SHOW STATUS command shows server status variables in a two-column name/value table. Unlike the server variables we mentioned in the previous section, these are read-only. You can view the variables by either executing SHOW STATUS as a SQL command or executing mysqladmin extended-status as a shell command. If you use the SQL command, you can use LIKE and WHERE to limit the results; the LIKE does a standard pattern match on the variable name. The commands return a table of results, but you can’t sort it, join it to other tables, or do other standard things you can do with MySQL tables.

The behavior of SHOW STATUS changed greatly in MySQL 5.0, but you might not notice unless you’re paying close attention. Instead of just maintaining one set of global variables, MySQL now maintains some variables globally and some on a per-connection basis. Thus, SHOW STATUS contains a mixture of global and session variables. Many of them have dual scope: there’s both a global and a session variable, and they have the same name. SHOW STATUS also now shows session variables by default, so if you were accustomed to running SHOW STATUS and seeing global variables, you won’t see them anymore; now you have to run SHOW GLOBAL STATUS instead.

In MySQL 5.1 and newer, you can also select status values directly from the INFORMATION_SCHEMA.GLOBAL_STATUS and INFORMATION_SCHEMA.SESSION_STATUS tables.

There are hundreds of status variables in a MySQL 5.0 server, and newer versions include even more. Most either are counters or contain the current value of some status
metric. Counters increment every time MySQL does something, such as initiating a full table scan (Select_scan). Metrics, such as the number of open connections to the server (Threads_connected), may increase and decrease. Sometimes there are several variables that seem to refer to the same thing, such as Connections (the number of connection attempts to the server) and Threads_connected; in this case, the variables are related, but similar names don’t always imply a relationship.

Counters are stored as unsigned integers. They use 4 bytes on 32-bit builds and 8 bytes on 64-bit builds, and they wrap back to 0 after reaching their maximum values. If you’re monitoring the variables incrementally, you might need to watch for and correct the wrap; you should also be aware that if your server has been up for a long time, you might see lower values than you expect simply because the variable’s values have wrapped around to zero. (This is very rarely a problem on 64-bit builds.) The best way to look at many of these variables is to see how much their values change over the course of a few minutes. You can use mysqladmin extended-status -r -i 5 or innotop.
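The wrap correction amounts to simple modular arithmetic. Here is a minimal shell sketch with hypothetical sample values around the 32-bit limit:

```shell
# Correct a counter delta across a 32-bit wrap (sample values are hypothetical)
MAX=4294967296           # 2^32, the wrap point on 32-bit builds
prev=4294967290          # previous sample of a counter
curr=10                  # current sample is smaller, so the counter wrapped
delta=$(( curr >= prev ? curr - prev : curr + MAX - prev ))
echo "$delta"            # the true number of events between the two samples
```

On a 64-bit build, the same formula works with MAX set to 2^64, though in practice the wrap almost never occurs there.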

The following is an overview—not an exhaustive list—of the different categories of variables you’ll see in SHOW STATUS. For full details on a given variable, you should consult the MySQL manual, which helpfully documents them at http://dev.mysql.com/doc/en/mysqld-option-tables.html.

Source of Information : O'Reilly High Performance MySQL Second Edition

InnoDB Concurrency Tuning

InnoDB is designed for high concurrency, but it’s not perfect. The InnoDB architecture still shows its roots in limited memory, single-CPU, single-disk systems. Some aspects of InnoDB’s performance degrade badly in high-concurrency situations, and your only recourse is to limit concurrency. You can often see whether InnoDB is having concurrency issues by inspecting the SEMAPHORES section of the SHOW INNODB STATUS output.

InnoDB has its own “thread scheduler” that controls how threads enter its kernel to access data, and what they can do once they’re inside the kernel. The most basic way to limit concurrency is with the innodb_thread_concurrency variable, which limits how many threads can be in the kernel at once. A value of 0 means there is no limit on the number of threads. If you are having InnoDB concurrency problems, this variable is the most important one to configure.

It’s impossible to name a good value for any given architecture and workload. In theory, the following formula gives a good value:

concurrency = Number of CPUs * Number of Disks * 2

But in practice, it can be better to use a much smaller value. You will have to experiment and benchmark to find the best value for your system.

If more than the allowed number of threads are already in the kernel, a thread can’t enter the kernel. InnoDB uses a two-phase process to try to let threads enter as efficiently as possible. The two-phase policy reduces the overhead of context switches caused by the operating system scheduler. The thread first sleeps for innodb_thread_sleep_delay microseconds, and then tries again. If it still can’t enter, it goes into a queue of waiting threads and yields to the operating system.

The default sleep time in the first phase is 10,000 microseconds. Changing this value can help in high-concurrency environments, when the CPU is underused with a lot of threads in the “sleeping before entering queue” status. The default value can also be much too large if you have a lot of small queries, because it adds 10 milliseconds to query latency.

Once a thread is inside the kernel, it has a certain number of “tickets” that let it back into the kernel for “free,” without any concurrency checks. This limits how much work it can do before it has to get back in line with other waiting threads. The innodb_concurrency_tickets option controls the number of tickets. It rarely needs to be changed unless you have a lot of extremely long-running queries. Tickets are granted per-query, not per-transaction. Once a query finishes, its unused tickets are discarded.

In addition to the bottlenecks in the buffer pool and other structures, there’s another concurrency bottleneck at the commit stage, which is largely I/O-bound because of flush operations. The innodb_commit_concurrency variable governs how many threads can commit at the same time. Configuring this option may help if there’s a lot of contention among committing threads. The InnoDB team is working on solving these issues, and there were major improvements in MySQL 5.0.30 and 5.0.32.
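The InnoDB options discussed above are set in the server configuration file. A minimal sketch might look like the following; the values shown are illustrative starting points, not recommendations:

```ini
# my.cnf sketch: InnoDB concurrency options (illustrative values only)
[mysqld]
innodb_thread_concurrency  = 8      # 0 means unlimited; start near CPUs * disks * 2
innodb_thread_sleep_delay  = 10000  # microseconds to sleep before queuing (default)
innodb_concurrency_tickets = 500    # free kernel entries per query before requeueing
innodb_commit_concurrency  = 0      # threads allowed to commit at once; 0 = unlimited
```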

Source of Information : O'Reilly High Performance MySQL Second Edition

MyISAM Concurrency Tuning

Simultaneous reading and writing has to be controlled carefully so that readers don’t see inconsistent results. MyISAM allows concurrent inserts and reads under some conditions, and it lets you “schedule” some operations to try to block as little as possible.

Before we look at MyISAM’s concurrency settings, it’s important to understand how MyISAM deletes and inserts rows. Delete operations don’t rearrange the entire table; they just mark rows as deleted, leaving “holes” in the table. MyISAM prefers to fill the holes if it can, reusing the spaces for inserted rows. If there are no holes, it appends new rows to the end of the table.

Even though MyISAM has table-level locks, it can append new rows concurrently with reads. It does this by stopping the reads at the last row that existed when they began. This avoids inconsistent reads.

However, it is much more difficult to provide consistent reads when something is changing the middle of the table. MVCC is the most popular way to solve this problem: it lets readers read old versions of data while writers create new versions.

MyISAM doesn’t support MVCC, so it doesn’t support concurrent inserts unless they go at the end of the table.

You can configure MyISAM’s concurrent insert behavior with the concurrent_insert variable, which can have the following values:

• 0 — MyISAM allows no concurrent inserts; every insert locks the table exclusively.

• 1 — This is the default value. MyISAM allows concurrent inserts, as long as there are no holes in the table.

• 2 — This value is available in MySQL 5.0 and newer. It forces concurrent inserts to append to the end of the table, even when there are holes. If there are no threads reading from the table, MySQL will place the new rows in the holes. The table can become more fragmented than usual with this setting, so you may need to optimize your tables more frequently, depending on your workload.

You can also configure MySQL to delay some operations to a later time, when they can be combined for greater efficiency. For instance, you can delay index writes with the delay_key_write variable. This involves the familiar tradeoff: write the index right away (safe but expensive), or wait and hope the power doesn’t fail before the write happens (faster, but likely to cause massive index corruption in the event of a crash because the index file will be very out-of-date). You can also give INSERT, REPLACE, DELETE, and UPDATE queries lower priority than SELECT queries with the low_priority_updates option. This is equivalent to globally applying the LOW_PRIORITY modifier to UPDATE queries.
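A configuration-file sketch of the MyISAM options mentioned above follows; the values are illustrative, and delay_key_write in particular trades crash safety for speed, as described:

```ini
# my.cnf sketch: MyISAM concurrency options (illustrative values only)
[mysqld]
concurrent_insert    = 1    # allow concurrent inserts when the table has no holes
delay_key_write      = ON   # buffer index writes; risks index corruption on a crash
low_priority_updates = 1    # give INSERT/UPDATE/DELETE lower priority than SELECT
```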

Finally, even though InnoDB’s scalability issues are more often talked about, MyISAM has also had problems with mutexes for a long time. In MySQL 4.0 and earlier, a global mutex protected any I/O to the key buffer, which caused scalability problems with multiple CPUs and multiple disks. MySQL 4.1’s key buffer code is improved and doesn’t have this problem anymore, but it still holds a mutex on each key buffer. This is an issue when a thread copies key blocks from the key buffer into its local storage, rather than reading from the disk. The disk bottleneck is gone, but there’s still a bottleneck when accessing data in the key buffer. You can sometimes work around this problem with multiple key buffers, but this approach isn’t always successful. For example, there’s no way to solve the problem when it involves only a single index. As a result, concurrent SELECT queries can perform significantly worse on multi-CPU machines than on a single-CPU machine, even when these are the only queries running.

Source of Information : O'Reilly High Performance MySQL Second Edition

FTP IIS Status Codes

1xx — Positive Preliminary Reply
These status codes indicate that an action has started successfully, but the client expects another reply before it continues with a new command.
• 110 — Restart marker reply.
• 120 — Service ready in nnn minutes.
• 125 — Data connection already open; transfer starting.
• 150 — File status okay; about to open data connection.

2xx — Positive Completion Reply
An action has successfully completed. The client can execute a new command.
• 200 — Command okay.
• 202 — Command not implemented, superfluous at this site.
• 211 — System status, or system help reply.
• 212 — Directory status.
• 213 — File status.
• 214 — Help message.
• 215 — NAME system type, where NAME is an official system name from the list in the Assigned Numbers document.
• 220 — Service ready for new user.
• 221 — Service closing control connection. Logged out if appropriate.
• 225 — Data connection open; no transfer in progress.
• 226 — Closing data connection. Requested file action successful (for example, file transfer or file abort).
• 227 — Entering passive mode (h1,h2,h3,h4,p1,p2).
• 230 — User logged in, proceed.
• 250 — Requested file action okay, completed.
• 257 — “PATHNAME” created.

3xx — Positive Intermediate Reply
The command was successful, but the server needs additional information from the client to complete processing the request.
• 331 — User name okay, need password.
• 332 — Need account for login.
• 350 — Requested file action pending further information.

4xx — Transient Negative Completion Reply
The command was not successful, but the error is temporary. If the client retries the command, it may succeed.
• 421 — Service not available, closing control connection. This may be a reply to any command if the service knows it must shut down.
• 425 — Cannot open data connection.
• 426 — Connection closed; transfer aborted.
• 450 — Requested file action not taken. File unavailable (for example, file busy).
• 451 — Requested action aborted: Local error in processing.
• 452 — Requested action not taken. Insufficient storage space in system.

5xx — Permanent Negative Completion Reply
The command was not successful, and the error is permanent. If the client retries the command, it receives the same error.
• 500 — Syntax error, command unrecognized. This may include errors such as command line too long.
• 501 — Syntax error in parameters or arguments.
• 502 — Command not implemented.
• 503 — Bad sequence of commands.
• 504 — Command not implemented for that parameter.
• 530 — Not logged in.
• 532 — Need account for storing files.
• 550 — Requested action not taken. File unavailable (for example, file not found, no access).
• 551 — Requested action aborted: Page type unknown.
• 552 — Requested file action aborted. Exceeded storage allocation (for current directory or dataset).
• 553 — Requested action not taken. File name not allowed.

Source of Information : Wrox Professional IIS 7

HTTP IIS Status Codes

1xx — Informational
These status codes indicate a provisional response. The client should be prepared to receive one or more 1xx responses before receiving a regular response.
• 100 — Continue.
• 101 — Switching protocols.

2xx — Success
This class of status codes indicates that the server successfully accepted the client request.
• 200 — OK. The client request has succeeded.
• 201 — Created.
• 202 — Accepted.
• 203 — Non-authoritative information.
• 204 — No content.
• 205 — Reset content.
• 206 — Partial content.

3xx — Redirection
The client browser must take more action to fulfill the request. For example, the browser may have to request a different page on the server or repeat the request by using a proxy server.
• 301 — Moved permanently.
• 302 — Object moved.
• 304 — Not modified.
• 307 — Temporary redirect.

4xx — Client Error
An error occurs, and the client appears to be at fault. For example, the client may request a page that does not exist, or the client may not provide valid authentication information.
• 400 — Bad request.
• 401 — Access denied. IIS defines a number of different 401 errors that indicate a more specific cause of the error. These specific error codes are displayed in the browser but are not displayed in the IIS log:
• 401.1 — Logon failed.
• 401.2 — Logon failed due to server configuration.
• 401.3 — Unauthorized due to ACL on resource.
• 401.4 — Authorization failed by filter.
• 401.5 — Authorization failed by ISAPI/CGI application.
• 401.7 — Access denied by URL authorization policy on the Web server. This error code is specific to IIS 6.0.
• 403 — Forbidden. IIS defines a number of different 403 errors that indicate a more specific cause of the error.
• 403.1 — Execute access forbidden.
• 403.2 — Read access forbidden.
• 403.3 — Write access forbidden.
• 403.4 — SSL required.
• 403.5 — SSL 128 required.
• 403.6 — IP address rejected.
• 403.7 — Client certificate required.
• 403.8 — Site access denied.
• 403.9 — Too many users.
• 403.10 — Invalid configuration.
• 403.11 — Password change.
• 403.12 — Mapper denied access.
• 403.13 — Client certificate revoked.
• 403.14 — Directory listing denied.
• 403.15 — Client Access Licenses exceeded.
• 403.16 — Client certificate is untrusted or invalid.
• 403.17 — Client certificate has expired or is not yet valid.
• 403.18 — Cannot execute requested URL in the current application pool. This error code is specific to IIS 6.0.
• 403.19 — Cannot execute CGIs for the client in this application pool. This error code is specific to IIS 6.0.
• 403.20 — Passport logon failed. This error code is specific to IIS 6.0.
• 404 — Not found.
• 404.0 — (None) — File or directory not found.
• 404.1 — Web site not accessible on the requested port.
• 404.2 — Web service extension lockdown policy prevents this request.
• 404.3 — MIME map policy prevents this request.
• 405 — HTTP verb used to access this page is not allowed (method not allowed).
• 406 — Client browser does not accept the MIME type of the requested page.
• 407 — Proxy authentication required.
• 412 — Precondition failed.
• 413 — Request entity too large.
• 414 — Request-URI too long.
• 415 — Unsupported media type.
• 416 — Requested range not satisfiable.
• 417 — Expectation failed.
• 423 — Locked error.

5xx — Server Error
The server cannot complete the request because it encounters an error.
• 500 — Internal server error.
• 500.12 — Application is busy restarting on the Web server.
• 500.13 — Web server is too busy.
• 500.15 — Direct requests for Global.asa are not allowed.
• 500.16 — UNC authorization credentials incorrect. This error code is specific to IIS 6.0.
• 500.18—URL authorization store cannot be opened. This error code is specific to IIS 6.0.
• 500.100 — Internal ASP error.
• 501 — Header values specify a configuration that is not implemented.
• 502 — Web server received an invalid response while acting as a gateway or proxy.
• 502.1 — CGI application timeout.
• 502.2 — Error in CGI application.
• 503 — Service unavailable. This error code is specific to IIS 6.0.
• 504 — Gateway timeout.
• 505 — HTTP version not supported.

Source of Information : Wrox Professional IIS 7

Administering MySQL - Transferring Large Files

Copying, compressing, and decompressing huge files (often across a network) are common tasks when administering MySQL, initializing servers, cloning slaves, and performing backups and recovery operations. The fastest and best ways to do these jobs are not always the most obvious, and the difference between good and bad methods can be significant. It’s common to begin with an uncompressed file, such as one server’s InnoDB tablespace and log files. You also want the file to be decompressed when you finish copying it to the destination, of course. The other common scenario is to begin with a compressed file, such as a backup image, and finish with a decompressed file. If you have limited network capacity, it’s usually a good idea to send the files across the network in compressed form. You might also need to do a secure transfer, so your data isn’t compromised; this is a common requirement for backup images.

Copying Files
The task, then, is to do the following efficiently:
1. (Optionally) compress the data.
2. Send it to another machine.
3. Decompress the data into its final destination.
4. Verify the files aren’t corrupted after copying.
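Step 4, verifying the copy, can be done by comparing checksums on both sides. Here is a minimal local sketch using sha1sum; the file names are hypothetical stand-ins for a real source file and its copy:

```shell
# Verify a copy by comparing checksums (hypothetical file names)
printf 'table data\n' > /tmp/mytable.MYD        # stand-in for the source file
cp /tmp/mytable.MYD /tmp/mytable.MYD.copy       # stand-in for the network copy
src=$(sha1sum < /tmp/mytable.MYD)
dst=$(sha1sum < /tmp/mytable.MYD.copy)
[ "$src" = "$dst" ] && echo "copy verified"
```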

A Naive Example
We begin with a naïve example of how to send an uncompressed file securely from one machine to another, compress it en route, and then decompress it. On the source server, which we call server1, we execute the following:

server1$ gzip -c /backup/mydb/mytable.MYD > mytable.MYD.gz
server1$ scp mytable.MYD.gz root@server2:/var/lib/mysql/mydb/

And then, on server2:

server2$ gunzip /var/lib/mysql/mydb/mytable.MYD.gz

This is probably the simplest approach, but it’s not very efficient because it serializes the steps involved in compressing, copying, and decompressing the file. Each step also requires reads from and writes to disk, which is slow. Here’s what really happens during each of the above commands: the gzip performs both reads and writes on server1, the scp reads on server1 and writes on server2, and the gunzip reads and writes on server2.

A One-Step Method
It’s more efficient to compress and copy the file and then decompress it on the other end in one step. This time we use SSH, the secure protocol upon which SCP is based. Here’s the command we execute on server1:

server1$ gzip -c /backup/mydb/mytable.MYD | ssh root@server2 "gunzip -c - > /var/lib/mysql/mydb/mytable.MYD"

This usually performs much better than the first method, because it significantly reduces disk I/O: the disk activity is reduced to reading on server1 and writing on server2. This lets the disk operate sequentially.
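The pipe mechanics are easy to try locally without a second server; this sketch simply replaces the ssh hop with a direct pipe, using hypothetical file names:

```shell
# Compress and decompress in one pipeline (local stand-in for the ssh hop)
printf 'some table data\n' > /tmp/src.MYD
gzip -c /tmp/src.MYD | gunzip -c - > /tmp/dst.MYD
cmp /tmp/src.MYD /tmp/dst.MYD && echo "files match"
```

Because the data flows through the pipe, nothing compressed ever touches disk, which is the source of the I/O savings described above.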

You can also use SSH’s built-in compression to do this, but we’ve shown you how to compress and decompress with pipes because they give you more flexibility. For example, if you didn’t want to decompress the file on the other end, you wouldn’t want to use SSH compression.

You can improve on this method by tweaking some options, such as adding -1 to make the gzip compression faster. This usually doesn’t lower the compression ratio much, but it can make it much faster, which is important. You can also use different compression algorithms. For example, if you want very high compression and don’t care about how long it takes, you can use bzip2 instead of gzip. If you want very fast compression, you can instead use an LZO-based archiver. The compressed data might be about 20% larger, but the compression will be around five times faster.

Avoiding Encryption Overhead
SSH isn’t the fastest way to transport data across the network, because it adds the overhead of encrypting and decrypting. If you don’t need encryption, you can just copy the “raw” bits over the network with netcat. You invoke this tool as nc for noninteractive operations, which is what we want.

Here’s an example. First, let’s start listening for the file on port 12345 (any unused port will do) on server2, and uncompress anything sent to that port to the desired data file:
server2$ nc -l -p 12345 gunzip -c - > /var/lib/mysql/mydb/mytable.MYD
On server1, we then start another instance of netcat, sending to the port on which the destination is listening. The -q option tells netcat to close the connection after it sees the end of the incoming file. This will cause the listening instance to close the destination file and quit:

server1$ gzip -c /backup/mydb/mytable.MYD | nc -q 1 server2 12345

An even easier technique is to use tar, so filenames are sent across the wire, eliminating another source of errors and automatically writing the files to their correct locations. The z option tells tar to use gzip compression and decompression. Here’s the command to execute on server2:

server2$ nc -l -p 12345 | tar xvzf -

And here’s the command for server1:

server1$ tar cvzf - /backup/mydb/mytable.MYD | nc -q 1 server2 12345

You can assemble these commands into a single script that will compress and copy lots of files into the network connection efficiently, then decompress them on the other side.

Other Options
Another option is rsync. rsync is convenient because it makes it easy to mirror the source and destination and because it can restart interrupted file transfers, but it doesn’t tend to work as well when its binary difference algorithm can’t be put to good use. You might consider using it for cases where you know most of the file doesn’t need to be sent—for example, for finishing an aborted nc copy operation.

You should experiment with file copying when you’re not in a crisis situation, because it will take a little trial and error to discover the fastest method. Which method performs best will depend on your system. The biggest factors are how many disk drives, network cards, and CPUs you have, and how fast they are relative to each other. It’s a good idea to monitor vmstat -n 5 to see whether the disk or the CPU is the speed bottleneck.

If you have idle CPUs, you can probably speed up the process by running several copy operations in parallel. Conversely, if the CPU is the bottleneck and you have lots of disk and network capacity, omit the compression. As with dumping and restoring, it’s often a good idea to do these operations in parallel for speed. Again, monitor your servers’ performance to see if you have unused capacity. Trying to over-parallelize may just slow things down.
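The parallel-copy idea can be sketched with Python’s thread pool (the file names and directories below are invented for the example; in practice each worker would be an scp, nc, or rsync job):

```python
import concurrent.futures
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

# Create a few sample "table files" to copy.
for name in ("t1.MYD", "t2.MYD", "t3.MYD"):
    with open(os.path.join(src_dir, name), "wb") as f:
        f.write(os.urandom(1024))

def copy_one(name):
    # Stand-in for one scp/nc transfer.
    shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))
    return name

# Several copy operations in flight at once; worthwhile only when the
# CPU or a single network stream, not the disk, is the bottleneck.
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    for name in pool.map(copy_one, sorted(os.listdir(src_dir))):
        print("copied", name)
```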

Source of Information: O’Reilly High Performance MySQL, Second Edition

The MySQL Query Cache

Many database products can cache query execution plans, so the server can skip the SQL parsing and optimization stages for repeated queries. MySQL can do this in some circumstances, but it also has a different type of cache (known as the query cache) that stores complete result sets for SELECT statements.

The MySQL query cache holds the exact bits that a completed query returned to the client. When a query cache hit occurs, the server can simply return the stored results immediately, skipping the parsing, optimization, and execution steps.

The query cache keeps track of which tables a query uses, and if any of those tables changes, it invalidates the cache entry. This coarse invalidation policy may seem inefficient—because the changes made to the tables might not affect the results stored in the cache—but it’s a simple approach with low overhead, which is important on a busy system.
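The table-based invalidation policy can be modeled in a few lines of Python. This is a conceptual sketch of the policy, not MySQL’s implementation:

```python
class QueryCache:
    """Toy model of MySQL's query cache: results are stored per query
    text and invalidated whenever any referenced table changes."""

    def __init__(self):
        self.results = {}    # query text -> cached result
        self.by_table = {}   # table name -> set of query texts

    def store(self, query, tables, result):
        self.results[query] = result
        for t in tables:
            self.by_table.setdefault(t, set()).add(query)

    def lookup(self, query):
        return self.results.get(query)  # None on a cache miss

    def table_changed(self, table):
        # Any write to the table invalidates every query that used it,
        # even if the change could not have affected the cached result.
        for query in self.by_table.pop(table, set()):
            self.results.pop(query, None)

cache = QueryCache()
cache.store("SELECT COUNT(*) FROM t1", ["t1"], 42)
print(cache.lookup("SELECT COUNT(*) FROM t1"))  # hit: 42
cache.table_changed("t1")
print(cache.lookup("SELECT COUNT(*) FROM t1"))  # miss: None
```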

The query cache is designed to be completely transparent to the application. The application does not need to know whether MySQL returned data from the cache or actually executed the query. The result should be the same either way. In other words, the query cache doesn’t change semantics; the server appears to behave the same way with it enabled or disabled.

Source of Information: O’Reilly High Performance MySQL, Second Edition

Alternatives to MySQL

MySQL is not necessarily the solution for every need. It’s often much better to do some work completely outside MySQL, even if MySQL can theoretically do what you want.

One of the most obvious examples is storing data in a traditional filesystem instead of in tables. Image files are the classic case: you can put them into a BLOB column, but this is rarely a good idea. The usual practice is to store images or other large binary files on the filesystem and store the filenames inside MySQL; the application can then retrieve the files from outside of MySQL. In a web application, you accomplish this by putting the filename in the img element’s src attribute.
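A minimal sketch of the pattern in Python, with SQLite from the standard library standing in for MySQL (the table name, file name, and directory are hypothetical):

```python
import os
import sqlite3
import tempfile

db = sqlite3.connect(":memory:")  # stand-in for a MySQL connection
db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT)")

# "Upload": write the binary data to the filesystem, store only its path.
image_bytes = b"\x89PNG...fake image data"
image_dir = tempfile.mkdtemp()
path = os.path.join(image_dir, "avatar_1.png")
with open(path, "wb") as f:
    f.write(image_bytes)
db.execute("INSERT INTO images (path) VALUES (?)", (path,))

# "Download": look up the path, then read the file outside the database.
(stored_path,) = db.execute("SELECT path FROM images WHERE id = 1").fetchone()
with open(stored_path, "rb") as f:
    assert f.read() == image_bytes
print(stored_path)  # this is the value that would go in the img src attribute
```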

Full-text searching is something else that’s best handled outside of MySQL—MySQL doesn’t perform these searches as well as Lucene or Sphinx. The NDB API can also be useful for certain tasks. For instance, although MySQL’s NDB Cluster storage engine isn’t (yet) well suited for storing all of a high-performance web application’s data, it’s possible to use the NDB API directly for storing web site session data or user registration information. You can learn more about the NDB API at http://dev.mysql.com/doc/ndbapi/en/index.html. There’s also an NDB module for Apache, mod_ndb, which you can download at http://code.google.com/p/mod-ndb/.

Finally, for some operations—such as graph relationships and tree traversals—a relational database just isn’t the right paradigm. MySQL isn’t good for distributed data processing, because it lacks parallel query execution capabilities. You’ll probably want to use other tools for this purpose (possibly in combination with MySQL).

Source of Information: O’Reilly High Performance MySQL, Second Edition

Extending MySQL

If MySQL can’t do what you need, one possibility is to extend its capabilities. When we say “MySQL can’t do what you need,” we mean one of two things: MySQL can’t do it at all, or MySQL can do it, but in a slow or awkward way that’s not good enough. Either is a reason to look at extending MySQL. The good news is that MySQL is becoming more and more modular and general-purpose. For example, MySQL 5.1 has a lot of useful plug-in functionality; it even allows storage engines to be plug-ins, so you don’t need to compile them into the server. If you’re interested in exploring any of these avenues further, there are good resources online, and books are available on many of the topics.

Storage engines are a great way to extend MySQL for a special purpose. Brian Aker has written a skeleton storage engine and a series of articles and presentations about how to get started writing your own storage engine. This has formed the basis for several of the major third-party storage engines. Many companies are writing their own internal storage engines now, as you’ll see if you follow the MySQL internals mailing list. For example, Friendster uses a special storage engine for social graph operations, and we know of another company that built a custom engine for fuzzy searches. A simple custom storage engine isn’t very hard to write. You can also use a storage engine as an interface to another piece of software. A good example of this is the Sphinx storage engine, which interfaces with the Sphinx full-text search software.

MySQL 5.1 also allows full-text search parser plug-ins, and you can write UDFs, which are great for CPU-intensive tasks that have to run in the server’s thread context and are too slow or clumsy in SQL. You can use them for administration, service integration, reading operating system information, calling web services, synchronizing data, and much more.

MySQL Proxy is another option that’s great if you want to add your own functionality to the MySQL protocol. And Paul McCullagh’s scalable blob-streaming infrastructure project (http://www.blobstreaming.org) opens up a range of new possibilities for storing large objects in MySQL.

Because MySQL is free, open source software, you can even hack the server itself if it doesn’t do what you need. We know of companies that have extended the server’s parser grammar, for example. Third parties have submitted many interesting MySQL extensions in the areas of performance profiling, scalability, and new features in recent years. The MySQL developers are very responsive and helpful when people want to extend MySQL. They’re available via the mailing list internals@lists.mysql.com (see http://lists.mysql.com to subscribe), MySQL forums, or the #mysql-dev IRC channel on freenode.

Source of Information: O’Reilly High Performance MySQL, Second Edition

Why C# ?

Practice Makes Perfect
The designers of C# were able to draw upon the lessons learned from other OOPLs that preceded it. They borrowed the best features of C++, Java, Eiffel, and Smalltalk, and then added some capabilities and features not found in those languages. Conversely, the features that proved to be most troublesome in earlier languages were eliminated. As a result, C# is a powerful programming language that is also easy to learn. This is not to say that C# is a perfect language—no language is!—but simply that it has made some significant improvements over many of the languages that have preceded it.

C# Is Part of an Integrated Application Development Framework
The C# language is integrated into Microsoft’s .NET Framework—Microsoft’s powerful, comprehensive platform for developing applications and managing their runtime environment. The .NET Framework primarily supports the C#, C++, J#, and Visual Basic programming languages, but also provides a functionality called cross-language interoperability that allows objects created in different programming languages to work with each other. A core element of the .NET Framework is the common language runtime (CLR) that is responsible for the runtime management of any .NET Framework program. The CLR takes care of loading, running, and providing support services for the .NET Framework program.

The .NET Framework provides a high degree of interoperability between the languages it supports—C#, C++, Visual Basic, and JScript—through a Common Language Specification (CLS) that defines a common set of types and behaviors that every .NET language is guaranteed to recognize. The CLS allows developers to seamlessly integrate C# code with code written in any of the other .NET languages.

The .NET Framework also contains a vast collection of libraries called the .NET Framework Class Library (FCL) that provides almost all the common functionality needed to develop applications on the Windows platform. You’ll find that with the FCL a lot of programming work has already been done for you on topics ranging from file access to mathematical functions to database connectivity. The C# language and the .NET Framework provide one-stop shopping for all your programming needs. You can find out more about the .NET Framework here: http://msdn.microsoft.com/en-us/library/default.aspx.

C# Is Object-Oriented from the Ground Up
Before newer OOPLs such as C# and Java arrived on the scene, one of the most widely used OOPLs was C++, which is actually an object-oriented extension of the non-OOPL C. As such, C++ provides a lot of “back doors” that make it very easy to write decidedly “un-OO” code. In fact, many proficient C programmers transitioned to C++ as a “better C” without properly learning how to design an object-oriented application, and hence wound up using C++ for the most part as a procedural (non-OO) language. In contrast, C# was built from the ground up to be a pure OOPL:

• Primitive value types, such as int and double, inherit from the Object class.

• All the graphical user interface (GUI) building blocks—windows, buttons, text input fields, scroll bars, lists, menus, and so on—are objects.

• All functions are attached to objects and are known as methods—there can be no free-floating functions as there are in C/C++.

• Even the entry point for a C# program (now called the Main method) no longer stands alone, but is instead bundled within a class.

Because of this, C# lends itself particularly well to writing applications that uphold the object-oriented paradigm.

C# Is Free
One last valuable feature of C# that we’ll mention is that it’s free! You can download the C# compiler and all other libraries and utilities you’ll need from the Microsoft Developer Network (MSDN) web site at no cost.

Source of Information: Apress Beginning C# 2008 Objects: From Concept to Code


MySQL Tuning MyISAM I/O

Let’s begin by considering how MyISAM performs I/O for its indexes. MyISAM normally flushes index changes to disk after every write. If you’re going to make many modifications to a table, however, it may be faster to batch these writes together.

One way to do this is with LOCK TABLES, which defers writes until you unlock the tables. This can be a valuable technique for improving performance, as it lets you control exactly which writes are deferred and when the writes are flushed to disk.

You can defer writes for precisely the statements you want. You can also defer index writes by using the delay_key_write variable. If you do this, modified key buffer blocks are not flushed until the table is closed. The possible settings are as follows:

OFF
MyISAM flushes modified blocks in the key buffer (key cache) to disk after every write, unless the table is locked with LOCK TABLES.

ON
Delayed key writes are enabled, but only for tables created with the DELAY_KEY_WRITE option.

ALL
All MyISAM tables use delayed key writes.

Delaying key writes can be helpful in some cases, but it doesn’t usually create a big performance boost. It’s most useful with smaller data sizes, when the key cache’s read hit ratio is good but the write hit ratio is bad. It also has quite a few drawbacks:

• If the server crashes and the blocks haven’t been flushed to disk, the index will be corrupt.

• If many writes are delayed, it’ll take longer for MySQL to close a table, because it will have to wait for the buffers to be flushed to disk. This can cause long table cache locks in MySQL 5.0.

• FLUSH TABLES can take a long time, for the reason just mentioned. This in turn can increase the time it takes to run FLUSH TABLES WITH READ LOCK for an LVM snapshot or other backup operation.

• Unflushed dirty blocks in the key buffer might not leave any room in the buffer for new blocks to be read from disk. Therefore, queries might stall while waiting for MyISAM to free up some space in the key buffer.

In addition to tuning MyISAM’s index I/O, you can configure how MyISAM tries to recover from corruption. The myisam_recover option controls how MyISAM looks for and repairs errors. You have to set this option in the configuration file or at the command line. You can view, but not change, the option’s value with this SQL statement (this is not a typo—the system variable has a different name from the corresponding command-line option):

mysql> SHOW VARIABLES LIKE 'myisam_recover_options';

Enabling this option instructs MySQL to check MyISAM tables for corruption when it opens them, and to repair them if problems are found. You can set the following values:

DEFAULT (or no setting)
MySQL will try to repair any table that is marked as having crashed or not marked as having been closed cleanly. The default setting performs no other actions upon recovery. In contrast to how most variables work, this DEFAULT value is not an instruction to reset the variable to its compiled-in value; it essentially means “no setting.”

BACKUP
Makes MySQL write a backup of the data file into a .BAK file, which you can examine afterward.

FORCE
Makes recovery continue even if more than one row will be lost from the .MYD file.

QUICK
Skips recovery unless there are delete blocks. These are blocks of deleted rows that are still occupying space and can be reused for future INSERT statements. This can be useful because MyISAM recovery can take a very long time on large tables.

You can use multiple settings, separated by commas. For example, BACKUP,FORCE will force recovery and create a backup.

We recommend that you enable this option, especially if you have just a few small MyISAM tables. Running a server with corrupted MyISAM tables is dangerous, as they can sometimes cause more data corruption and even server crashes. However, if you have large tables, automatic recovery might be impractical: it causes the server to check and repair all MyISAM tables when they’re opened, which is inefficient. During this time, MySQL tends to block connections from performing any work. If you have a lot of MyISAM tables, it might be a good idea to use a less intrusive process that runs CHECK TABLES and REPAIR TABLES after startup. Either way, it is very important to check and repair the tables.

Enabling memory-mapped access to data files is another useful MyISAM tuning option. Memory mapping lets MyISAM access the .MYD files directly via the operating system’s page cache, avoiding costly system calls. In MySQL 5.1 and newer, you can enable memory mapping with the myisam_use_mmap option. Older versions of MySQL use memory mapping for compressed MyISAM tables only.

Source of Information: O’Reilly High Performance MySQL, Second Edition

MySQL Tuning Memory Usage

Configuring MySQL to use memory correctly is vital to good performance. You’ll almost certainly need to customize MySQL’s memory usage for your needs. You can think of MySQL’s memory consumption as falling into two categories: the memory you can control, and the memory you can’t. You can’t control how much memory MySQL uses merely to run the server, parse queries, and manage its internals, but you have a lot of control over how much memory it uses for specific purposes. Making good use of the memory you can control is not hard, but it does require you to know what you’re configuring.

You can approach memory tuning in steps:
1. Determine the absolute upper limit of memory MySQL can possibly use.

2. Determine how much memory MySQL will use for per-connection needs, such as sort buffers and temporary tables.

3. Determine how much memory the operating system needs to run well. Include memory for other programs that run on the same machine, such as periodic jobs.

4. Assuming that it makes sense to do so, use the rest of the memory for MySQL’s caches, such as the InnoDB buffer pool.
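The four steps above amount to simple subtraction. A back-of-the-envelope sketch in Python, with purely illustrative numbers:

```python
def mysql_cache_budget(total_ram, per_connection, max_connections, os_reserve):
    """Steps 1-4: what is left over for caches such as the InnoDB buffer
    pool after per-connection needs and the OS are accounted for."""
    upper_limit = total_ram                               # step 1
    connection_memory = per_connection * max_connections  # step 2
    available = upper_limit - connection_memory - os_reserve  # steps 3-4
    return max(available, 0)

GB = 1024 ** 3
MB = 1024 ** 2

# Hypothetical server: 16 GB RAM, 200 connections at ~5 MB each, 2 GB for the OS.
budget = mysql_cache_budget(16 * GB, 5 * MB, 200, 2 * GB)
print(f"{budget / GB:.1f} GB available for caches")  # about 13 GB here
```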

How much memory can MySQL use?
There is a hard upper limit on the amount of memory that can possibly be available to MySQL on any given system. The starting point is the amount of physically installed memory. If your server doesn’t have it, MySQL can’t use it.

You also need to think about operating system or architecture limits, such as restrictions 32-bit operating systems place on how much memory a given process can address. Because MySQL runs in a single process with multiple threads, the amount of memory it can use overall may be severely limited by such restrictions—for example, 32-bit Linux kernels limit the amount of memory any one process can address to a value that is typically between 2.5 and 2.7 GB. Running out of address space is very dangerous and can cause MySQL to crash.

There are many other operating system-specific parameters and oddities that must be taken into account, including not just the per-process limits, but also stack sizes and other settings. The system’s glibc libraries can also impose limits per single allocation. For example, you might not be able to set innodb_buffer_pool larger than 2 GB if that’s all your glibc libraries support in a single allocation.

Even on 64-bit servers, some limitations still apply. For example, many of the buffers, such as the key buffer, are limited to 4 GB on a 64-bit server. Some of these restrictions are lifted in MySQL 5.1, and there will probably be more changes in the future because MySQL AB is actively working to make MySQL take advantage of more powerful hardware. The MySQL manual documents each variable’s maximum values.

Per-connection memory needs
MySQL needs a small amount of memory just to hold a connection (thread) open. It also requires a certain base amount of memory to execute any given query. You’ll need to set aside enough memory for MySQL to execute queries during peak load times. Otherwise, your queries will be starved for memory, and they will run poorly or fail.

It’s useful to know how much memory MySQL will consume during peak usage, but some usage patterns can unexpectedly consume a lot of memory, which makes this hard to predict. Prepared statements are one example, because you can have many of them open at once. Another example is the InnoDB table cache.

You don’t need to assume a worst-case scenario when trying to predict peak memory consumption. For example, if you configure MySQL to allow a maximum of 100 connections, it theoretically might be possible to simultaneously run large queries on all 100 connections, but in reality this probably won’t happen. If you set myisam_sort_buffer_size to 256M, your worst-case usage across those 100 connections is at least 25 GB, but this level of consumption is highly unlikely to actually occur.
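The worst-case arithmetic from that example is easy to check:

```python
max_connections = 100
myisam_sort_buffer_size = 256 * 1024 ** 2  # 256M

# Theoretical worst case: every connection allocates a full sort buffer at once.
worst_case = max_connections * myisam_sort_buffer_size
print(worst_case / 1024 ** 3, "GiB")  # 25.0 GiB, but real usage will be far lower
```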

Rather than calculating worst cases, a better approach is to watch your server under a real workload and see how much memory it uses; you can do this by watching the process’s virtual memory size. In many Unix-like systems, this is reported in the VIRT column in top, or VSZ in ps. The next chapter has more information on how to monitor memory usage.

Reserving memory for the operating system
Just as with queries, you need to reserve enough memory for the operating system to do its work. The best indication that the operating system has enough memory is that it’s not actively swapping (paging) virtual memory to disk.

You should not need to reserve more than a gigabyte or two for the operating system, even for machines with a lot of memory. Add in some extra for safety, and add in some more if you’ll be running periodic memory-intensive jobs on the machine (such as backups). Don’t add any memory for the operating system’s caches, because they can be very large. The operating system will generally use any leftover memory for these caches, and we consider them separately from the operating system’s own needs in the following sections.

Allocating memory for caches
If the server is dedicated to MySQL, any memory you don’t reserve for the operating system or for query processing is available for caches.

MySQL needs more memory for caches than anything else. It uses caches to avoid disk access, which is orders of magnitude slower than accessing data in memory. The operating system may cache some data on MySQL’s behalf (especially for MyISAM), but MySQL needs lots of memory for itself too.

The following are the most important caches to consider for the majority of installations:
• The operating system caches for MyISAM data
• MyISAM key caches
• The InnoDB buffer pool
• The query cache

There are other caches, but they generally don’t use much memory. It is much easier to tune a server if you’re using only one storage engine. If you’re using only MyISAM tables, you can disable InnoDB completely, and if you’re using only InnoDB, you need to allocate only minimal resources for MyISAM (MySQL uses MyISAM tables internally for some operations). But if you’re using a mixture of storage engines, it can be very hard to figure out the right balance between them. The best approach we’ve found is to make an educated guess and then benchmark.

Source of Information: O’Reilly High Performance MySQL, Second Edition

MySQL Components

The MySQL database system consists of several components. You should be familiar with what these components are and the purpose of each, so that you understand both the nature of the system you’re administering and the tools available to help you do your job. If you take the time to understand what you’re overseeing, your work will be much easier.
To that end, you should acquaint yourself with the following aspects of MySQL.

The MySQL server. The server, mysqld, is the hub of a MySQL installation; it performs all manipulation of databases and tables. On Unix, several related scripts are available to assist in server startup. mysqld_safe is a related program used to start the server, monitor it, and restart it in case it goes down. The mysql.server script is useful on versions of Unix that use run-level directories for starting system services. If you run multiple servers on a single host, mysqld_multi can help you manage them more easily. On Windows, you have the choice of running the server from the command line or as a Windows service.

The MySQL clients and utilities. Several MySQL programs are available to help you communicate with the server. For administrative tasks, some of the most important ones are listed here:

• mysql—An interactive program that enables you to send SQL statements to the server and to view the results. You can also use mysql to execute batch scripts (text files containing SQL statements).

• mysqladmin—An administrative program for performing tasks such as shutting down the server, checking its configuration, or monitoring its status if it appears not to be functioning properly.

• mysqldump and mysqlhotcopy—Tools for backing up your databases or copying databases to another server.

• mysqlcheck and myisamchk—Programs that help you perform table checking, analysis, and optimization, as well as repairs if tables become damaged. mysqlcheck works with MyISAM tables and, to some extent, with tables for other storage engines. myisamchk is for use only with MyISAM tables.

The server’s language, SQL. You should be able to talk to the server in its own language. As a simple example, you might need to find out why a user’s privileges aren’t working the way you expect them to work. There is no substitute for being able to go in and communicate with the server directly, which you can do by using the mysql client program to issue SQL statements that let you examine the grant tables.

If you don’t know any SQL, be sure to acquire at least a basic understanding of it. A lack of SQL fluency will only hinder you in your administrative tasks, whereas the time you take to learn will be repaid many times over. A real mastery of SQL takes some time, but the basic skills, including use of the mysql command-line client, can be attained quickly.

The MySQL data directory. The data directory is where the server stores its databases and status files. It’s important to understand the structure and contents of the data directory so that you know how the server uses the filesystem to represent databases and tables, as well as where the server logs are located and what they contain. You should also know your options for managing allocation of disk space across filesystems should you find that the filesystem on which the data directory is located is becoming too full.

Source of Information: MySQL (4th Edition)

Symmetric keys are at the bottom of the SQL Server encryption key hierarchy. A symmetric key is used to encrypt other symmetric keys or data. Because symmetric key encryption is so much faster than asymmetric encryption and does not suffer the same data-length limitations as SQL Server’s asymmetric encryption implementations, Microsoft recommends encrypting your data exclusively with symmetric keys.

While asymmetric encryption requires two keys (a public key/private key pair), symmetric encryption requires only a single key to both encrypt and decrypt your data. Symmetric encryption is performed using block cipher algorithms, which encrypt your data in blocks of a constant size, and stream cipher algorithms, which encrypt your data in a continuous stream. Block cipher algorithms have a set encryption key size and encryption block size.

You can calculate the size of the cipher text based on the length of the plain text using one of the following formulas:

• For 8-byte block ciphers like the Data Encryption Standard (DES) family, use length of ciphertext = 8 * ( ( length of plaintext + 8 ) / 8 ) + 36.

• For 16-byte block ciphers like the Advanced Encryption Standard (AES), use length of ciphertext = 16 * ( ( length of plaintext + 16 ) / 16 ) + 44.

For either formula, add 20 bytes to the total length if you use an authenticator.
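Assuming the divisions above are integer divisions (which rounds the plain text up to a whole number of cipher blocks before the fixed overhead is added), the formulas can be written as a small helper function. This is a sketch of the published formulas, not code from SQL Server:

```python
def ciphertext_length(plaintext_len, block_size, with_authenticator=False):
    """Estimated SQL Server ciphertext size for an 8-byte (DES-family)
    or 16-byte (AES) block cipher, per the formulas above."""
    if block_size == 8:
        length = 8 * ((plaintext_len + 8) // 8) + 36
    elif block_size == 16:
        length = 16 * ((plaintext_len + 16) // 16) + 44
    else:
        raise ValueError("block_size must be 8 or 16")
    if with_authenticator:
        length += 20  # authenticator adds 20 bytes for either formula
    return length

print(ciphertext_length(100, 8))                            # DES family
print(ciphertext_length(100, 16))                           # AES
print(ciphertext_length(100, 16, with_authenticator=True))  # AES + authenticator
```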

SQL Server provides the following statements to manage symmetric keys:

• CREATE SYMMETRIC KEY: Creates a symmetric key to be used for encryption. Symmetric keys can be encrypted by certificates, asymmetric keys, passwords, or even other symmetric keys.

• ALTER SYMMETRIC KEY: Allows you to change the method of securing your symmetric keys.

• DROP SYMMETRIC KEY: Drops a symmetric key from the database. Symmetric keys cannot be dropped while they are open.

• OPEN SYMMETRIC KEY: Opens and decrypts a symmetric key for use.

• CLOSE SYMMETRIC KEY: Closes a symmetric key that was previously opened.

• CLOSE ALL SYMMETRIC KEYS: Closes all symmetric keys currently open in the current session.

SQL Server does not provide backup or restore statements for symmetric keys. Because symmetric keys are stored in the current database, they are backed up during the normal database backup process. You can also re-create a symmetric key from scratch with the CREATE SYMMETRIC KEY statement. In order to re-create a symmetric key from scratch, you must supply a KEY_SOURCE and IDENTITY_VALUE. The KEY_SOURCE is a value SQL Server hashes and performs bitwise manipulations on to generate a symmetric encryption key. If not specified, SQL Server randomly generates a KEY_SOURCE. The IDENTITY_VALUE is a value SQL Server uses to generate a key GUID. Copies of the key GUID are stored with the data the key is used to encrypt. In order to re-create a symmetric key, you must supply the same KEY_SOURCE and IDENTITY_VALUE originally used to create the key. SQL Server guarantees that supplying duplicate IDENTITY_VALUE and KEY_SOURCE values will generate an identical key.
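Conceptually, re-creatable keys work because deriving key material by hashing a fixed seed is deterministic: the same seed always yields the same key. The Python sketch below illustrates only that principle; SQL Server’s actual derivation algorithm is internal and not what is shown here:

```python
import hashlib

def derive_key(key_source: str, length: int = 16) -> bytes:
    """Hash a KEY_SOURCE-style seed into key material (illustrative only)."""
    return hashlib.sha256(key_source.encode("utf-8")).digest()[:length]

def derive_identifier(identity_value: str) -> str:
    """Derive a stable identifier from an IDENTITY_VALUE-style seed,
    analogous to the key GUID stored alongside encrypted data."""
    return hashlib.md5(identity_value.encode("utf-8")).hexdigest()

# Supplying the same seeds always regenerates identical key material,
# which is what makes a symmetric key re-creatable from scratch.
assert derive_key("my key source") == derive_key("my key source")
assert derive_identifier("my identity") == derive_identifier("my identity")
print(derive_key("my key source").hex())
```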

The following example creates a symmetric key and then drops it (the key name SymTest and the TRIPLE_DES algorithm shown here are illustrative):

CREATE SYMMETRIC KEY SymTest
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY PASSWORD = '$#ad%61*(;dsPSlk';

DROP SYMMETRIC KEY SymTest;


Of course, creating a symmetric key is not very useful if you can’t use it to encrypt things. And as we mentioned, symmetric keys in SQL Server can be used to protect other symmetric keys or data. To protect a symmetric key with another symmetric key, use the ENCRYPTION BY SYMMETRIC KEY clause of the CREATE SYMMETRIC KEY statement. To encrypt and decrypt data, use the EncryptByKey and DecryptByKey functions. The following example creates a symmetric key, which is used to encrypt another symmetric key, which is used in turn by EncryptByKey and DecryptByKey to encrypt and decrypt some data. (The key-encrypting key name SymKey and the TRIPLE_DES algorithm are illustrative; the data key must be named SymData, since that name is passed to Key_GUID below.)

USE AdventureWorks;

-- Create a symmetric key to encrypt a symmetric key
CREATE SYMMETRIC KEY SymKey
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY PASSWORD = '$#ad%61*(;dsPSlk';

-- Open the key-encrypting key
OPEN SYMMETRIC KEY SymKey
DECRYPTION BY PASSWORD = '$#ad%61*(;dsPSlk';

-- Create a symmetric key to encrypt data
CREATE SYMMETRIC KEY SymData
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY SYMMETRIC KEY SymKey;

-- Open the data-encrypting key
OPEN SYMMETRIC KEY SymData
DECRYPTION BY SYMMETRIC KEY SymKey;

-- Initialize the plain text
DECLARE @plain_text NVARCHAR(512);
SET @plain_text = N'"Those who would give up Essential Liberty to purchase a ' +
N'little Temporary Safety, deserve neither Liberty nor Safety." - Ben Franklin';
PRINT @plain_text;

-- Encrypt the data
DECLARE @cipher_text VARBINARY(1024);
SET @cipher_text = EncryptByKey(Key_GUID(N'SymData'), @plain_text);
PRINT @cipher_text;

-- Decrypt the data
SET @plain_text = CAST(DecryptByKey(@cipher_text) AS NVARCHAR(512));
PRINT @plain_text;

-- Close the data-encrypting key
CLOSE SYMMETRIC KEY SymData;

-- Close the key-encrypting key
CLOSE SYMMETRIC KEY SymKey;

-- Drop the symmetric keys
DROP SYMMETRIC KEY SymData;
DROP SYMMETRIC KEY SymKey;
The EncryptByKey function requires the key GUID of the symmetric key to encrypt your data. The symmetric key GUID can be retrieved by passing the name of the key to the Key_GUID function. The plain_text passed into the function is char, varchar, nchar, nvarchar, binary, or varbinary data. The return value of EncryptByKey is varbinary(8000). Block mode ciphers on SQL Server, like Triple DES and AES, automatically use an encryption mode known as Cipher Block Chaining (CBC) mode and random initialization vectors (IVs) to further obfuscate your encrypted data. In addition, the EncryptByKey function also accepts an optional authenticator value to help defeat whole value substitutions of your data. The authenticator value passed in is a sysname, which is synonymous with nvarchar(128). When an authenticator value is provided, it is encrypted together with the plain text to even further obfuscate your data. The authenticator value can be used to “tie” your encrypted data to a specific row. If you do use an authenticator, the add_authenticator parameter to EncryptByKey must be set to 1.
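The purpose of an authenticator is to bind a ciphertext to a particular row, so an attacker cannot simply copy an encrypted value from one row into another. The following Python sketch illustrates the idea using an HMAC over the value plus a row identifier; note that SQL Server actually encrypts the authenticator together with the plain text rather than using a separate MAC as shown here:

```python
import hashlib
import hmac

KEY = b"shared secret key"  # illustrative key material

def protect(value: bytes, row_id: str) -> bytes:
    # Bind the value to its row by MACing the value and authenticator together.
    tag = hmac.new(KEY, value + row_id.encode(), hashlib.sha256).digest()
    return tag + value

def recover(blob: bytes, row_id: str) -> bytes:
    tag, value = blob[:32], blob[32:]
    expected = hmac.new(KEY, value + row_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authenticator mismatch: value moved between rows?")
    return value

alice = protect(b"salary=90000", "employee:alice")
# Reading the value back with the correct authenticator succeeds...
print(recover(alice, "employee:alice"))
# ...but substituting it into another row is detected.
try:
    recover(alice, "employee:bob")
except ValueError as e:
    print(e)
```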

The DecryptByKey function accepts your encrypted data as a varbinary(8000), and returns the decrypted plain text as a varbinary(8000). If your original data was varchar or nvarchar, then you will need to CAST or CONVERT the result back to its original datatype. If you used an authenticator value when you encrypted the plain text, you must supply the same authenticator value to decrypt your cipher text. Note that you don’t need to supply the Key_GUID when you call DecryptByKey. This is because SQL Server stores the key GUID with the encrypted data during the encryption process.

When you use the EncryptByKey and DecryptByKey functions, and the symmetric key you are using to encrypt or decrypt data is protected by another key, you must explicitly open the symmetric key with the OPEN SYMMETRIC KEY statement. SQL Server provides the following additional functions that automatically open and decrypt your symmetric key before decrypting your data:

• DecryptByKeyAutoAsymKey: Decrypts your data with a symmetric key that is protected by an asymmetric key. This function automatically opens and decrypts your symmetric key with its associated asymmetric key.

• DecryptByKeyAutoCert: Decrypts your data using a symmetric key that is protected by a certificate. This function automatically opens and decrypts your symmetric key with its associated certificate.

Source of Information: Apress Accelerated SQL Server 2008
