Thursday, March 5, 2015

Handling NULL Character \x00 when Exporting to File Using BCP

This article discusses the behavior of the BCP utility when extracting data from SQL Server to files and more specifically to CSV files.

Imagine that you have the below table:

As you can see, records 2 and 3 contain null values.

Then you export the table contents to CSV using BCP:

bcp "SELECT [code],[description],[price] FROM [TestDB1].[dbo].[product]" queryout "c:\tmp\testExtract.csv" -c -C 1253 -t ; -S ".\SQL2K14" -T


As you can see on the above screenshot, the output seems correct. The records' NULL values have been replaced with empty strings.

Now consider that the original table, instead of NULL values has empty strings:

Let's try again to export the new table contents to CSV using BCP:

bcp "SELECT [code],[description],[price] FROM [TestDB1].[dbo].[product]" queryout "c:\tmp\testExtract.csv" -c -C 1253 -t ; -S ".\SQL2K14" -T


As you can see, the empty strings have been replaced by NUL characters, i.e. the hexadecimal character \x00.

From the MSDN Library article on BCP, we read the following:

  • out copies from the database table or view to a file. If you specify an existing file, the file is overwritten. When extracting data, note that the bcp utility represents an empty string as a null and a null string as an empty string.
It seems from the above example that the queryout keyword has the same behavior regarding null and empty strings.

Now, if you are going to feed the CSV file to another application for parsing and processing, and that application does not explicitly handle possible occurrences of the NULL character, then most probably an error will occur.

Therefore, always keep this behavior in mind when extracting data to files using BCP.
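If the consuming application cannot tolerate \x00 in the file, one defensive option (a sketch, not from the documented BCP behavior itself; table and column names follow the example above) is to normalize the values in the extraction query before BCP writes the file:

```sql
-- Sketch: convert empty strings to NULL inside the query, so that BCP's own
-- substitution rule (NULL -> empty string in the file) produces '' instead of \x00.
-- Assumes the [TestDB1].[dbo].[product] table from the example above.
SELECT [code],
       NULLIF([description], '') AS [description],  -- '' becomes NULL, written out as ''
       [price]
FROM [TestDB1].[dbo].[product];
```

Using this SELECT statement in the bcp queryout command should keep NUL characters out of the exported CSV file.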


Wednesday, February 18, 2015

Searching for Keywords in SQL Server Jobs

Hi friends all around the globe! It's been a while since my last article, but this is a very busy period with many community activities whose outcome you will soon enjoy. These activities involve heavy software development as well as the authoring of my third eBook, titled "Developing SQL Server". More updates on everything will be provided along the way.

This article is actually a T-SQL tip that can come in quite handy. It tackles the issue of finding which SQL Server Agent jobs are affected by schema changes (e.g. a table rename).

Consider the following example where you have three tables:

You also have a SQL Server Agent job that references the above tables.

Then, for whatever reason, you want to rename 'table2' to 'table2New'.

What happens to the SQL Server Agent job the next time it runs? Of course it fails.

When you rename an object in SQL Server using the sp_rename stored procedure, the Database Engine warns you with the below message:
Caution: Changing any part of an object name could break scripts and stored procedures.
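For reference, the rename that triggers this warning can be sketched as follows (object names as in the example above):

```sql
-- Rename dbo.table2 to table2New; SQL Server prints the caution message shown above.
EXEC sp_rename 'dbo.table2', 'table2New';
```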

To avoid problems with such actions, especially in the case of large databases, you must first check whether there are any functions, stored procedures, jobs, etc. that might be affected.

The below script will return all jobs that include one or more steps that reference a given string pattern (this can be a table name, stored procedure name, function name, etc.). In this case we search for 'table2':

declare @pattern nvarchar(max)
set @pattern='table2'

select j.[job_id],j.[name],j.[enabled],j.[description],s.command,j.[date_created],j.[date_modified]
from msdb.dbo.sysjobs j
inner join msdb.dbo.[sysjobsteps] s on j.job_id=s.job_id 
where s.command like '%'+@pattern+'%'
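As a sketch (not part of the original script), the same LIKE pattern can also be checked against the job name and description, which sometimes embed object names too:

```sql
declare @pattern nvarchar(max)
set @pattern='table2'

-- Also match the pattern in the job's name and description, not only in step commands
select j.[job_id],j.[name],s.step_id,s.command
from msdb.dbo.sysjobs j
inner join msdb.dbo.sysjobsteps s on j.job_id=s.job_id
where s.command like '%'+@pattern+'%'
   or j.[name] like '%'+@pattern+'%'
   or j.[description] like '%'+@pattern+'%'
```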

A subsequent article will contain a master procedure that can be used for searching for a given pattern in all the programmability objects in SQL Server.


Wednesday, December 31, 2014

Selected Tweets of Year 2014!

So here I am, writing the last post for 2014! Don't worry, new posts for 2015 are coming up soon! :) With only a few hours left to welcome the new year, my thoughts go back deep into 2014, trying to recollect the highlights of this year from my personal perspective. Beyond the personal highlights, I would like to add a note regarding the Community-related ones.

In 2014 I launched Artemiou Data Tools: a brand new community initiative aiming to provide high-quality, simple and still straight-to-the-point tools. All tools have, and will always have, a Community Edition free for all! New websites, new content, new articles, new software and much more is what I recollect from 2014. Another major highlight was my participation in the MS Conference in Cyprus on March 21, 2014 as an invited speaker, where I talked about #SQLServer 2014 and the In-Memory OLTP Engine - that session was amazing!

Throughout all these years, the interaction with you, the community, has grown even stronger. This interaction really excites me and gives me the strength to move on and come up with more initiatives! The discussions after my sessions at conferences or via your comments on my articles, the email exchanges with your questions and topics for discussion, and all the other methods of communication are a unique experience for me, and I am proud to be part of this great SQL Server community.

The year 2015 will be even more powerful! I am planning to release the In-Memory OLTP Simulator (stay tuned on this one - it is something really amazing), a new eBook, a website, other new tools, articles and much more!

Wishing you a Happy New Year 2015 always in good health and a constant quest for knowledge!

Below are my selected Tweets for 2014!

Blog Posts/Articles and Announcements

  • The mobile version of my official website is now live! Check it out via a mobile device at:  #MVPBuzz #Community
  • Join me at MS Conference on March 21 where I will talk about #SQLServer 2014 and the In-Memory OLTP Engine - #MVPBuzz
  • Artemiou Data Tools just got its own Twitter page! Feel free to follow and support my initiative! @ArtDataTools #MVPBuzz #Software #fb
  • My new website is now live! Its purpose is to be used as a hub for SQL Server knowledge and data tools.  #MVPBuzz #fb

  • [News] I am launching soon two sub-page systems within my official website: (i) Artemiou SQL Books,  (ii) Artemiou Data Tools #Community #fb
  • Congratulations to the winners of CY Windows Hackathon for the amazing apps and congrats to all for the great ideas and effort! Keep it up!
  • Today it's my #twitterversary! 5 years of tweeting!
  • The data export tool just became a multi-exporter! New release date: Thursday, June 26!
  • Tomorrow, @ArtDataTools releases a brand new free data tool that allows the user to easily export data from #SQLServer to Excel! #MVPBuzz
  • Are you ready for tomorrow's public release of #SQLServer 2014? ;) Are you ready for the In-Memory OLTP Engine? :) #MVPBuzz
  • Written 10K lines of C# / SQL code so far...the In-Memory OLTP Simulator for #SQLServer started taking form! More news soon! #MVPBuzz #fb
  • Working on my session for #Microsoft Conference 2014 in Cyprus...with 4 hours of sleep and my 5th cup of coffee :) #MVPBuzz #SQLServer
  • [News] My personal website enriched with my SQL Books and Data Tools launches next week on Feb 5! #MVPBuzz #SQLServer #Community #fb
  • Artemiou Data Tools: My collection of S/W tools that I develop and have to do with data processing and SQL Server interaction #MVPBuzz #fb
  • Artemiou SQL Books: My current and upcoming books on different SQL Server topics #Community #fb #MVPBuzz

  • UI design takes so much time! Can't I just write the code? :)
  • Sometimes, exception handling in software development takes much, but much longer than the actual development! :) #MVPBuzz
  • Setting up a Win Srv 2K12 R2 VM in Hyper-V Manager on my Windows 8.1 box while developing! Extremely fast, extremely smooth! #MVPBuzz
  • Just used Telerik's free tool JustDecompile. Cool tool! #Community
  • Doing some serious development in Visual Studio 2013 running on Windows 8.1 x64 with Update 1. This is smooth! :) #MVPBuzz
  • Into deep development on In-Memory OLTP Simulator. Developing a plug-in engine... #MVPBuzz #SQLServer #fb
  • #Microsoft Conference was a big success today! Thank you for attending my session on #SQLServer In-Memory OLTP Engine! #MVPBuzz
  • Everything is ready for my session at the MS Conference 2014! My In-Mem OLTP Simulator for #SQLServer is ready for some demos! #MVPBuzz
  • Just backed up and restored a SQL 2014 database from Windows Azure storage to my local SQL Server installation! Pretty cool! :) #MVPBuzz


Friday, December 12, 2014

How to Import and Export Unstructured Data in SQL Server - FileTables: The Movie!

A few weeks ago I was contacted by Webucator, an online training provider that requested my permission to include one of my articles in a free video-training series called "SQL Server Solutions from the Web". This is one of my articles on FileTables, titled “How to Import and Export Unstructured Data in SQL Server – FileTables”. As I really liked this initiative by Webucator, I gave them my permission to turn my article into a video and include it in the series, and here's the result, which I find very cool!

I really liked the video, which by the way is a high-quality, clean video with straight-to-the-point explanations, useful annotations and, above all, content that highlights the most important points of my blog article.
If you want to see FileTables in action, then this is a good opportunity!

The free SQL Server Solutions from the Web course is listed on Webucator's SQL Server training page where you can find the full list of available SQL Server-related training classes provided by Webucator.

Last but not least, a big thank you to Webucator for being an active supporter of the technical community. Keep it up guys!


Monday, December 1, 2014

CY Hackathon Nov 29-30 2014: A Short Review

On the weekend of Nov 29-30, the Windows Hackathon took place in Cyprus. The contestants had almost two days to develop a universal application for Windows Store and Windows Phone. They were given an open data API that they had to use. Beyond that, everything was up to their imagination, which, as their presentations proved, has no limits!

I had the joy of participating in the three-judge committee, where we had to judge the apps on four aspects:
  • Functionality
  • Idea
  • User Interface
  • Universal nature (whether the app could run on both Windows Phone and Windows 8.1 PCs)
The event was a huge success. With more than 50 participants, many teams were formed and developed some really great apps.

The three winning teams presented three very interesting apps that have a great potential ahead. In addition, all the participants’ ideas were very good and proved that the level of software development in Cyprus is very high.

It is really exciting to see that so many people stayed for almost two days without sleep just to follow their passion, to put their ideas in motion and develop cool applications.

My advice to all of them, and in the end of the day to all of us -including me-, is to follow their passion, chase their ideas, make a plan and develop that great app that will change the world!

Once again congratulations to all participants as well as to the Computer Science Department for hosting the event and a huge bravo to Microsoft Cyprus for organizing and sponsoring such an exciting event.


Thursday, October 30, 2014

Listing all Tables of a Linked Server's Database

There are many scenarios in the DBMS world where you might need to use a linked server between two different SQL Server instances in order to execute direct queries between them. However, the majority of T-SQL auto-complete tools have difficulty listing the objects (i.e. tables) of the linked server's database.

One way to see these objects is to browse via SSMS's Object Explorer. But what if you want a quick view of all the available tables in the linked server's database while writing your T-SQL script?

The answer is easy! You can do this by using the Information Schema Views. Below you will find a stored procedure that I created which can help you list all the tables of a linked server's database.

Here's the code:

-- Sample Code for Blog Post: Listing all Tables of a Linked Server's Database
-- Blog URL:
-- Author: Artemakis Artemiou, SQL Server MVP
-- Email Address:
-- Official Website:
-- Disclaimer: This is a sample code. Do not use it in Production Systems before properly
-- testing it. You bear sole responsibility for usage of any content of this T-SQL code. 
-- You are responsible for any loss of data, loss of peace or any damages due to usage of 
-- this code. Always take backup of your data.

CREATE PROCEDURE [dbo].[DBTableInfo] (@LinkedServerName varchar(50), @DBName varchar(50))
AS
BEGIN

--variables declaration
DECLARE @LinkedServerNameFiltered VARCHAR(50)
DECLARE @LinkedServerNameFound int
DECLARE @DBsFound int
DECLARE @DBNameFiltered varchar(50)
DECLARE @Command nvarchar(500)

--check if specified linked server exists
SET @LinkedServerNameFiltered=QUOTENAME(@LinkedServerName);
SET @LinkedServerNameFound=0;
SET @LinkedServerNameFound=(SELECT COUNT(*) FROM sys.servers s WHERE s.name=@LinkedServerName);

--report findings and exit if the linked server was not found
IF @LinkedServerNameFound=0
BEGIN
  PRINT 'Error: Linked server ' + @LinkedServerNameFiltered + ' not found.';
  RETURN;
END

--check if specified database exists
DECLARE @QUERY nvarchar(500);
SET @DBNameFiltered = QUOTENAME(@DBName,'''');
SET @DBsFound=0;
SET @QUERY='SELECT @DBs=COUNT(*) FROM ' + @LinkedServerNameFiltered + '.[master].sys.sysdatabases s WHERE s.name=' + @DBNameFiltered;
EXEC sp_executesql @QUERY, N'@DBs int OUTPUT', @DBs=@DBsFound OUTPUT;

--report findings and exit if the database was not found
IF @DBsFound=0
BEGIN
  PRINT 'Error: Database ' + @DBNameFiltered + ' not found.';
  RETURN;
END

--construct dynamic T-SQL statement
SET @DBNameFiltered = QUOTENAME(@DBName);

SET @Command= 'SELECT TABLE_SCHEMA as TableSchema, TABLE_NAME as TableName, (''SELECT TOP 10 * FROM '+ @LinkedServerNameFiltered +'.'+ @DBNameFiltered +'.'' + QUOTENAME(TABLE_SCHEMA) + ''.'' + QUOTENAME(TABLE_NAME)) as SampleQuery
FROM ' + @LinkedServerNameFiltered+'.' + @DBNameFiltered+'.INFORMATION_SCHEMA.TABLES i WHERE i.TABLE_TYPE LIKE ''%TABLE%'' ORDER BY 1';

--execute the command
EXEC sp_executesql @Command;

END


You can call the stored procedure as follows:
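For example (the linked server and database names below are placeholders):

```sql
-- List all tables of database [MyDatabase] on linked server [MyLinkedServer]
EXEC [dbo].[DBTableInfo] @LinkedServerName = 'MyLinkedServer', @DBName = 'MyDatabase';
```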



The stored procedure displays three columns for each record: (i) Table Schema, (ii) Table Name, (iii) A sample query that can be executed and returns the top 10 rows for the specific table.

Sample output of the stored procedure:

Figure 1: Sample Output of the DBTableInfo Stored Procedure.

You can also download the stored procedure using the following link.


Sunday, August 31, 2014

In-Memory Optimization in SQL Server: A Simple Experiment

In one of my previous articles I briefly introduced In-Memory Optimization in SQL Server 2014. In this article we will conduct a simple experiment utilizing this new technology and discussing the benefits. The experiment will be organized in the following steps:
  1. Create the In-Memory processing enabled database
  2. Create the tables (disk-based, memory-enabled)
  3. Populate the tables with sample data
  4. Prepare the processing code for three scenarios (execution modes):
    • Disk-based processing
    • In-Memory optimization
    • In-Memory optimization with natively compiled stored procedure
  5. Run all three execution modes and measure execution times
  6. Discuss findings
Enough words, let’s proceed with some T-SQL code!

1. Create the In-Memory processing enabled database

--Creates a memory-enabled database by adding a memory-optimized filegroup with a specified directory.
--Note: This assumes that directory c:\tmp exists (the subdirectory is created by SQL Server).
--Note: The database name [InMemOLTP] is illustrative.
CREATE DATABASE [InMemOLTP];
GO
ALTER DATABASE [InMemOLTP]
ADD FILEGROUP [inMemOLTP_fg] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE [InMemOLTP]
ADD FILE ( name = [inMemOLTP_fg_dir],
   filename= 'c:\tmp\InMemOLTPDir')
TO FILEGROUP [inMemOLTP_fg];
GO

2. Create the tables (disk-based, memory-enabled)
--Create sample tables representing a scenario involving 200000 Products

--Create traditional table (disk-based)
CREATE TABLE tblDiskBased(
code VARCHAR(50),
descr VARCHAR(250),
price FLOAT
);

--Create a durable (data will be persisted) memory-optimized table
--Alternative choice: DURABILITY=SCHEMA_ONLY = Data will be lost if the server turns off unexpectedly
--Note: Memory-optimized tables require at least one index; in SQL Server 2014,
--character index key columns also require a BIN2 collation
CREATE TABLE tblMemOptimized(
code VARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL
  PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
descr VARCHAR(250),
price FLOAT
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA);

--Create a durable (data will be persisted) memory-optimized table
--Alternative choice: DURABILITY=SCHEMA_ONLY = Data will be lost if the server turns off unexpectedly
--Note: This table will be used by the natively-compiled stored procedure
CREATE TABLE tblMemOptimized2(
code VARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL
  PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
descr VARCHAR(250),
price FLOAT
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA);

So far: We have created a memory enabled database and data structures. Now, it's time to populate the tables with sample data.

3. Populate the tables with sample data
--Populate the tables with sample data (200K records)
DECLARE @i int;
SET @i=0;
WHILE (@i<200000)
BEGIN
  INSERT INTO tblMemOptimized
  SELECT ('Code '+ (CAST (@i AS varchar(6)))) as code, ('Description '+ (CAST (@i AS varchar(6)))) as descr, (10 + (500-10)*RAND()) as price;
  SET @i=@i+1;
END

--Populate the tables with sample data (200K records)
INSERT INTO tblDiskBased
SELECT * FROM tblMemOptimized;

INSERT INTO tblMemOptimized2
SELECT * FROM tblMemOptimized;

All three tables now have the same data (200K rows each). You can easily check it out!

4. Prepare the processing code for three scenarios (execution modes):
We will wrap each code block into a stored procedure for easier execution.
The scenario is to decrease each product's price by 5%.

a. Disk-based processing
--Disk-based processing
CREATE PROCEDURE updateDiskBased as
UPDATE dbo.tblDiskBased
SET price=price-(price*0.05)

b. In-Memory optimization
CREATE PROCEDURE updateMemoryOptimized as
UPDATE dbo.tblMemOptimized
SET price=price-(price*0.05)

c. In-Memory optimization with natively compiled stored procedure
--Natively-Compiled Stored Procedure
CREATE PROCEDURE updateMemoryOptimizedNatively
with native_compilation, schemabinding, execute as owner
as
begin atomic with (transaction isolation level = snapshot, language = N'English')
UPDATE dbo.tblMemOptimized2
SET price=price-(price*0.05)
end

So far: We have created a memory-enabled database and data structures, populated the tables with sample data (all three tables have identical data for experiment consistency), and prepared the stored procedures that each target one of the three tables and perform the processing, i.e. the update of the 'price' column by decreasing it by 5%.

Now it's time to run each stored procedure and measure the execution times.

But before executing the stored procedures let's check a sample (TOP 5) from each table:

Figure 1: Top 5 Records from each table before update.

5. Run all three execution modes and measure execution times
OK, now it's time to execute the three stored procedures, thus updating all tables, and measure the execution times. Here they are:
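The three runs can be sketched as follows, with SET STATISTICS TIME reporting the elapsed and CPU times (a sketch; the original measurements may have been taken differently):

```sql
-- Report CPU and elapsed time for each statement in this session
SET STATISTICS TIME ON;

EXEC updateDiskBased;              -- disk-based processing
EXEC updateMemoryOptimized;        -- in-memory optimization
EXEC updateMemoryOptimizedNatively;-- in-memory + natively compiled procedure

SET STATISTICS TIME OFF;
```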

Figure 2: Execution times for three modes.

Additionally, let's check once more some records just to confirm that the output is the same in all three tables:

Figure 3: Top 5 Records from each table after update.

6. Discussion
First of all, from the above figure we can see that the output of the three update operations is exactly the same, which validates the correctness of the three different execution modes.

As you can see from the screenshot with execution times (Figure 2), the slowest execution was the one that involved the disk-based table (elapsed time: 1924 ms) having also the highest CPU time (328 ms).

The fastest execution was the one of the memory-optimized table with the natively compiled stored procedure where the elapsed time was 329 ms and the CPU 93 ms!

The memory-optimized execution (404 ms elapsed time, 250 ms CPU time) was faster than the disk-based one, but not as fast as the memory-optimized scenario combined with the natively compiled stored procedure.

In-Memory optimization in SQL Server is a great new technology which can significantly improve performance in many cases. Especially in cases of certain workloads such as staging processes in Data Warehouse systems, high data insert rates (i.e. smart metering) and low latency execution scenarios (i.e. investment trading), In-Memory optimization can boost performance. Of course, before applying it on Production systems, a proper analysis is required along with testing.

During the last six months I have been working on developing a simulator for In-Memory optimization in SQL Server. I expect to make it publicly available by the end of September 2014. So make sure you check my official website often for updates!


Thursday, July 31, 2014

The Role of Technical Communities

Everything in our world is governed by technology. From large corporations to small stores and simple individuals, everyone uses a computational system for performing the most complex or even the simplest task. Even though the level of technology’s involvement in our lives can be characterized by many as a controversial topic, it is a fact that technology is everywhere.

The creation of new technology, as well as the deployment of existing technology, is in many cases not trivial. To this end, we often find ourselves in the position of seeking deep technical information on how to perform certain, rather complex tasks or solve difficult technical issues. The “traditional” way of getting access to technology-related information is through the various user manuals, white papers and related documentation issued by the creator of the original technology. However, most of the time, we are looking for something more. Even though the documentation can be comprehensive, it is never enough. What we really seek is the experience and feedback of another person who achieved the same, or at least a similar, task before us. That’s where the role of technical communities comes in. But before discussing the role of technical communities we should first try to define them.

So what is a technical community, really? Is it a mailing list on which you can post questions about various technical issues you might be dealing with and wait for answers? Is it a platform on which you collaborate by answering various technical questions? Is it a forum, a blog maybe?

Well, a Technical Community is all of the above plus much more. First of all, it is not just the platform that makes a technical community. The platforms and related tools make it easier to exchange knowledge and they have a significant role within a community. However, what really makes a technical community is the people. It is the people who share a passion for technology and have the willingness to exchange knowledge without requesting anything in return. It is the people who spend hours and days and nights of their free time helping others just because it feels really good doing so. It is the people who constantly educate themselves in their area of expertise and present at events, trying to help other people who are interested in those topics to learn even more, answering their questions, discussing with them and guiding them.

The role of the community is to serve as an “environment” that helps its members to develop their skills through knowledge exchange via presentations, hands-on labs, forums, online and offline discussions and many other activities. It helps its members to catch up with the latest technologies of common interest, easier and more efficiently. It is a “place” where each individual puts effort and time for helping others because he believes in people. A community is all about people and their need to belong to a broader group of people within which they can share knowledge, exchange ideas, and share the passion and excitement on technology.

Now imagine a world without technical communities. Imagine searching the Web for a solution to a specific problem and ending up only with sets of technical documentation and nothing from fellow people who might have experienced the same problem and found a solution “that just worked”. Would this be enough? For a limited number of cases yes, but for the majority no, at least not that easily. This is the role of technical communities: to complete the global quest for knowledge by enabling their members to share knowledge just the way each one of them perceives it. This exchange of knowledge creates interesting discussions via which all the participants end up understanding the solutions to different technical problems and challenges.

Tuesday, July 22, 2014

Introducing In-Memory Optimization in SQL Server

In-Memory OLTP (codenamed “Hekaton”) is the most significant feature shipped with SQL Server 2014. It is Microsoft’s new technology which is fully integrated into SQL Server’s Database Engine. In-Memory OLTP introduces new data structures for optimizing the performance of OLTP workloads. These data structures are called “memory-optimized” tables and along with natively compiled stored procedures, they offer a huge performance boost when it comes to processing large amounts of information, especially in data environments with high levels of concurrency. In certain cases, the execution time can be up to 50 times faster.

As the name of the technology implies, the In-Memory OLTP Engine is optimized for OLTP workloads. Some examples of best-suited workloads are:

  • Staging tables (i.e. DW)
  • High data insert rate (i.e. smart metering)
  • Low latency execution (i.e. investment trading)

When used in such scenarios, the performance gain of using the In-Memory OLTP Engine can be huge.

Memory-optimized tables store their data into memory using multiple versions of each row’s data. This technique is characterized as “Non-blocking multi-version optimistic concurrency control” and eliminates both Locks and Latches thus achieving breakthrough performance.

The main features of memory-optimized tables are:

  • Rows in the table are read from and written to memory
  • The entire table resides in memory
  • Non-blocking multi-version optimistic concurrency control
  • Durable & non-durable
  • A second copy maintained on disk for durability (if enabled)
  • Data in memory-optimized tables is only read from disk during database recovery
  • Interoperable with disk-based tables

Another feature of the In-Memory OLTP Engine is “natively compiled stored procedures”. A natively-compiled stored procedure is a SQL Server object that can access only memory-optimized data structures such as memory-optimized tables, table variables, etc. The main features of a natively-compiled stored procedure are:

  • It is compiled to native code (DLL) upon its creation (the interpreted stored procedures are compiled at first execution)
  • Aggressive optimizations are performed at compile time
  • Can only interact with memory-optimized tables

The call to a natively compiled stored procedure is actually a DLL Entry Point which is something that offers very fast execution times!

When you properly combine memory-optimized tables with natively compiled stored procedures for heavy workloads like DW ETL processes, high data insert rate processes and processes that demand low latency execution you can expect some serious speedups.
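As a minimal sketch of how the two features combine (illustrative names; SQL Server 2014 syntax):

```sql
-- Memory-optimized table: requires at least one index,
-- here a nonclustered hash primary key
CREATE TABLE dbo.Ticks (
    tickId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    price  FLOAT NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure: compiled to a DLL at creation time,
-- may only access memory-optimized structures
CREATE PROCEDURE dbo.InsertTick @tickId INT, @price FLOAT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    INSERT INTO dbo.Ticks (tickId, price) VALUES (@tickId, @price);
END
```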

Subsequent posts will dive into the In-Memory OLTP Engine of SQL Server 2014 and illustrate via different examples the benefits of using this technology.

Monday, June 30, 2014

New Artemiou Data Tools Release: SQL Multi-Exporter

Today I released a brand new data tool via my "Artemiou Data Tools" initiative. The tool is called SQL Multi-Exporter.

SQL Multi-Exporter allows the user to connect to a SQL Server instance, issue a query against a database, and export the results to an XML, CSV or HTML file with a single click.

SQL Multi-Exporter (Community Edition) Main Screen

Currently, in the Community Edition of the tool, there are three available export formats for the retrieved information:
  • Export to XML file.
  • Export to CSV file (multiple delimiters available).
  • Export to HTML.
Some more screenshots:
Figure 2: Connecting to a SQL Server Instance and Running a Query.

Figure 3: Preview the Retrieved Information and then Export.

For all the data-junkies out there, an Ultimate Edition of the tool is coming out soon with a richer set of features that will skyrocket your interaction with data!
For more information and the download link please visit the tool's page.
Feel free to use the tool and provide your valuable feedback!

Tuesday, June 17, 2014

Modelling Database Creation with the Model System Database in SQL Server

The Model database in SQL Server is used as the template for all user databases created on a SQL Server instance. Most of the time we do not modify the Model database; however, there are cases where it might come in handy.

For example, consider the following scenario: for audit purposes, you want your SQL Server instance, each time a user database is created, to automatically create an audit-related table as well as a relevant stored procedure, so that each application using a database can store (in each database) the log-in and log-out times for each user.

To do this, we can create the relevant table and stored procedure in the Model database:

USE [model]

CREATE TABLE [dbo].[AuditLog](
[userID] [int] NOT NULL,
[loginTime] [datetime] NOT NULL,
[logoutTime] [datetime] NOT NULL
);
GO

CREATE PROCEDURE [dbo].[logUser]
  @userID int,
  @loginTime datetime,
  @logoutTime datetime
AS
  INSERT INTO dbo.AuditLog
  VALUES (@userID, @loginTime, @logoutTime);
GO

Let's take a look at the Model database:

Figure 1: The Model database after adding the table and stored procedure.

OK, so we have added the table and stored procedure in the Model database. Now let's create three empty sample databases and check if the same tables and stored procedures are created automatically upon their creation:
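The three sample databases can be created with plain CREATE DATABASE statements (names matching the figures below):

```sql
-- Each new database is created from the modified Model template
CREATE DATABASE SampleDB1;
CREATE DATABASE SampleDB2;
CREATE DATABASE SampleDB3;
```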


Time to check the three newly created databases and see if the "AuditLog" table and "logUser" stored procedure were automatically created:

Figure 2: SampleDB1
Figure 3: SampleDB2

Figure 4: SampleDB3

As you can see, all three databases were created fully based on the (modified) Model system database, thus the "AuditLog" table and "logUser" stored procedure were automatically created.

You can use the Model database for other things as well; the above was just a simple example. However, always keep in mind that any objects you create in the Model database will be automatically created in all user databases subsequently created on that instance, so be careful!
For more info on the Model database, please visit the following MSDN Library link.

P.S. Many thanks to my buddy Nakis for this great discussion. Always a pleasure man! :)
