Monday, December 14, 2009

Data Types in SQL Server 2008


I recently read a great article on Database Journal about data types in SQL Server 2008, so I am sharing some of the basic information here.

Numeric data types 


An Integer is a counting number with no decimal point or fractional piece. All negative numbers, positive numbers, and zero are integers. SQL Server breaks integers into four sizes:

BigInt: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

Int: -2,147,483,648 to 2,147,483,647

SmallInt: -32,768 to 32,767

TinyInt: 0 to 255


As part of the software design you need to be clear about which type to use in which case. It may seem a simple choice to always pick the largest range, but look a little deeper: first, BigInt uses double the storage space of Int (8 bytes versus 4), and second, the application consuming the database should get the range it expects so that there are no overflow errors.
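Just as an illustration (the table and column names below are made up), this is how those choices might look in practice:

-- Hypothetical table: each column uses the smallest integer type
-- that safely covers the range we expect for it.
CREATE TABLE dbo.OrderDemo
(
    OrderID    BIGINT   NOT NULL, -- row count could grow past 2 billion
    CustomerID INT      NOT NULL, -- comfortably inside +/- 2.1 billion
    Quantity   SMALLINT NOT NULL, -- never more than a few thousand
    StatusCode TINYINT  NOT NULL  -- small set of codes, 0 to 255
);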

For numbers with decimal values there are Decimal, Numeric, Money, and SmallMoney. Decimal and Numeric are functionally the same thing: they work, calculate, and behave identically, and the only difference is in the mathematical definition, not in the way SQL Server utilizes them.

Money and SmallMoney are really Decimals with a fixed four decimal places to the right. SmallMoney ranges from -214,748.3648 to 214,748.3647, while Money ranges from -922,337,203,685,477.5808 to 922,337,203,685,477.5807. One of the reasons for using Money rather than a Decimal is the display option of dollar signs and commas after every three digits.
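A quick sketch to show the difference in how you declare them (the values are arbitrary): Money always carries exactly four decimal places, while Decimal lets you pick the precision and scale yourself.

-- Arbitrary sample values, just to show the declarations side by side.
DECLARE @Price MONEY         = 19.99;  -- stored with four decimal places: 19.9900
DECLARE @Rate  DECIMAL(10,4) = 19.99;  -- precision and scale chosen explicitly
SELECT @Price AS PriceAsMoney, @Rate AS RateAsDecimal;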


Strings  


Here we will discuss Char, VarChar, and Text. Most of you are probably aware of the difference between char and varchar: when char(50) and varchar(50) columns are defined, char always takes up 50 bytes irrespective of the size of the content saved in the column, whereas varchar is flexible and takes up only the bytes required to store the actual content.


Char should be used where it is known or understood that the length of the text is fixed. For example, a char(1) column makes much more sense than varchar for a Sex column where only 'M' or 'F' is expected to be saved, whereas something like a name should be varchar.


Varchar has a maximum limit of 8,000, though the introduction of Varchar(max) has made it more flexible, as the size is effectively unlimited in this case. The main benefit is that, unlike a Text column where string operations cannot be performed, with Varchar(max) all the string operations can be handled easily. Also note that Varchar(max) is set to replace Text in future versions of SQL Server, as Text is supported in 2008 only for backward compatibility.
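As a small illustration of that last point (the variable and its value are made up), the usual string functions work directly on a Varchar(max) value:

-- String operations work directly on VARCHAR(MAX), no conversion needed.
DECLARE @Notes VARCHAR(MAX) = 'SQL Server 2008 data types';
SELECT LEN(@Notes)                      AS NotesLength,
       SUBSTRING(@Notes, 1, 10)         AS FirstTenChars,
       REPLACE(@Notes, 'types', 'type') AS ReplacedText;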


I will be adding more to this. I hope you like it; please Digg, Kick, or share this if you do.



Friday, November 27, 2009

Resetting DNN passwords



Yesterday my team and I came across a strange problem. We had a client's DNN deployment on our server for a demo; for a long time the client was not interested or was too busy, and then suddenly he needed to see the site. The problem: the original developer was not available and no one else knew the host or admin password.


BIG PROBLEM.....


Luckily one of my team members was able to find a post with a solution; I am re-posting the procedure.




CREATE PROCEDURE [dbo].[uap_ResetPassword]
    @UserName NVarChar(255),
    @NewPassword NVarChar(255)
AS
BEGIN
    DECLARE @PasswordSalt NVarChar(128)
    DECLARE @ApplicationID NVarChar(255)
    DECLARE @ApplicationName NVarChar(255)

    -- Look up the application and the existing password salt for this user
    SET @ApplicationID = (SELECT [ApplicationID] FROM aspnet_Users WHERE UserName = @UserName)
    SET @ApplicationName = (SELECT [ApplicationName] FROM aspnet_Applications WHERE ApplicationID = @ApplicationID)
    SET @PasswordSalt = (SELECT PasswordSalt FROM aspnet_Membership WHERE UserID IN (SELECT UserID FROM aspnet_Users WHERE UserName = @UserName))

    --select @ApplicationID, @ApplicationName

    -- Call the built-in ASP.NET membership procedure to reset the password
    -- (parameter values are as given in the original post)
    DECLARE @RetVal AS INT
    EXEC @RetVal = dbo.aspnet_Membership_ResetPassword @ApplicationName, @UserName, @NewPassword, 10, 10, @PasswordSalt, -5
    RETURN @RetVal

END
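To use it, you would call the procedure like this (the account name and new password below are just placeholders; substitute your own):

-- Placeholder values: replace 'host' and the password with your own.
DECLARE @Result INT;
EXEC @Result = dbo.uap_ResetPassword @UserName = 'host', @NewPassword = 'NewP@ssw0rd';
SELECT @Result AS ReturnValue;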


Original post: http://www.devprise.com/2006/09/26/how-to-reset-a-dnn-password-at-the-database/


Thanks Tony...






Thursday, October 29, 2009

SQL Server Database analysis scripts







While browsing the net my team came across a couple of very good analytical scripts, and I would like to share them with everyone. Often there is a scenario where you wonder how to find out which script or procedure is consuming the most memory or has the most reads; these scripts help to identify possible bottlenecks in the website.

The query below is really useful

1) The following query returns the top 10 queries ordered by the number of physical reads, along with their execution counts and worker times:




SELECT TOP 10 qt.TEXT AS 'SP Name',
qs.execution_count AS 'Execution Count',
qs.execution_count/DATEDIFF(Second, qs.creation_time, GETDATE()) AS 'Calls/Second',
qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
qs.total_worker_time AS 'TotalWorkerTime',
qs.total_physical_reads AS 'PhysicalReads',
qs.creation_time 'CreationTime'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = (
SELECT dbid
FROM sys.sysdatabases
WHERE name = 'DatabaseName')
ORDER BY qs.total_physical_reads DESC
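
The query above is ordered by physical reads; if you would rather rank the statements by how often they run, a minimal variation (same DMVs, just a trimmed column list and a different ORDER BY) could look like this:

-- Variation of the query above, ranked by execution count instead of reads.
SELECT TOP 10 qt.TEXT AS 'SP Name',
qs.execution_count AS 'Execution Count',
qs.total_worker_time AS 'TotalWorkerTime',
qs.total_physical_reads AS 'PhysicalReads'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = DB_ID('DatabaseName')
ORDER BY qs.execution_count DESC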
The next script is not as useful as the first query, but it can be handy for really large databases where you need to identify how much space each table is consuming; maybe the data in some tables is no longer relevant and can be deleted conditionally, helping to improve performance.

2) Query to check the size of each table:



EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"
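Since sp_MSforeachtable just returns one result set per table, a rough sketch like the one below (the temp table name and the KB parsing are my own additions) captures the output so you can sort the tables by reserved size:

-- Rough sketch: capture sp_spaceused output per table and sort by reserved space.
-- Column list mirrors the sp_spaceused result set; #TableSizes is just a scratch table.
CREATE TABLE #TableSizes
(
    name NVARCHAR(128),
    [rows] VARCHAR(50),
    reserved VARCHAR(50),
    data VARCHAR(50),
    index_size VARCHAR(50),
    unused VARCHAR(50)
);

INSERT INTO #TableSizes
EXEC sp_MSforeachtable @command1 = "EXEC sp_spaceused '?'";

SELECT * FROM #TableSizes
ORDER BY CONVERT(INT, REPLACE(reserved, ' KB', '')) DESC;

DROP TABLE #TableSizes;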
One major point: you need admin/sa access to run these scripts.
I look forward to your comments on the same.




Tuesday, July 21, 2009

Performance optimizing SQL stored procedures

I have noted below some key practices that can be easily implemented and made a habit during the course of development, and that can boost your performance in the future.

NOCOUNT
By default, every time a stored procedure is executed, a message is sent from the server to the client indicating the number of rows that were affected by the stored procedure. Rarely is this information useful to the client. By turning off this default behavior, you can reduce network traffic between the server and the client, helping to boost overall performance of your server and applications.
To turn this feature off at the stored procedure level, include the statement SET NOCOUNT ON at the beginning of every stored procedure you write.
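For example, a skeleton procedure might look like this (dbo.Orders is just a made-up table for illustration):

-- Minimal sketch: SET NOCOUNT ON at the top of the procedure.
-- dbo.Orders is a hypothetical table used only for illustration.
CREATE PROCEDURE dbo.usp_GetOrderCount
AS
BEGIN
    SET NOCOUNT ON -- suppress the "n rows affected" messages sent to the client
    SELECT COUNT(*) AS OrderCount
    FROM dbo.Orders
END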
If you want this feature turned off for your entire server, you can do this by running these statements at your server:
EXEC sp_configure 'user options', 512
RECONFIGURE
However, I don't recommend this, as in certain cases you might need that all-important row count.

Short Transactions
Keep Transact-SQL transactions as short as possible within a stored procedure. This helps to reduce the number of locks, helping to speed up the overall performance of your SQL Server application.

Three ways to help reduce the length of a transaction are to:
1) break up the entire job into smaller steps (or multiple stored procedures or user defined functions) so each step can be committed as soon as possible
2) take advantage of SQL Server statement batches, which acts to reduce the number of round-trips between the client and server.
3) if there is a certain set of statements or calculations that is performed repeatedly across procedures, use user-defined functions to reduce code size and improve performance
Procedure Naming
If you are creating a stored procedure to run in a database other than the Master database, don't use the prefix "sp_" in its name. This special prefix is reserved for system stored procedures. Although using this prefix will not prevent a user defined stored procedure from working, what it can do is to slow down its execution ever so slightly.
The reason for this is that by default, any stored procedure executed by SQL Server that begins with the prefix "sp_", is first attempted to be resolved in the Master database. Since it is not there, time is wasted looking for the stored procedure.
If SQL Server cannot find the stored procedure in the Master database, then it next tries to resolve the stored procedure name as if the owner of the object is "dbo". Assuming the stored procedure is in the current database, it will then execute. To avoid this unnecessary delay, don't name any of your stored procedures with the prefix "sp_"
Before you are done with your stored procedure code, review it for any unused code, parameters, or variables that you may have forgotten to remove while you were making changes, and remove them. Unused code just adds unnecessary bloat to your stored procedures, although it will not necessarily negatively affect performance of the stored procedure.

Executing Strings
When you need to execute a string of Transact-SQL, you should use the sp_executesql stored procedure instead of the EXECUTE statement. Sp_executesql offers two major advantages over EXECUTE. First, it supports parameter substitution, which gives you more options when creating your code. Second, it creates query execution plans that are more likely to be reused by SQL Server, which in turn reduces overhead on the server, boosting performance.
Sp_executesql executes a string of Transact-SQL in its own self-contained batch. When it is run, SQL Server compiles the code in the string into an execution plan that is separate from the batch that contained the sp_executesql and its string.
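A small, hedged example of what that looks like (the table and parameter names are made up for illustration):

-- Parameterized dynamic SQL via sp_executesql.
-- dbo.Orders and CustomerID are hypothetical names used only for illustration.
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID';

EXEC sp_executesql
    @sql,
    N'@CustomerID INT',  -- parameter definition
    @CustomerID = 42;    -- parameter value; the plan can be reused for other values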



Thursday, July 2, 2009

Benefits of Stored Procedures - Explained (Part 1)

Network Latency


One of the biggest advantages of using stored procedures over not using stored procedures is the ability to significantly reduce network traffic. And the more network traffic that can be reduced, the better the overall performance of your SQL Server-based applications.


Case 1.
When we call a stored procedure from the application we just make a small RPC call. Now imagine the stored procedure encapsulates 200 to 500 lines of SQL code: if the same code were moved to the application, all of that SQL would first have to be sent across the network as strings, creating a large number of network packets. And if 100 users are calling the same code at once, you have one very busy network.


Case 2.
When we execute a number of statements based on some business logic, it is best if all the business processing is done at the stored procedure level. For example, say we have a master-detail relationship where master data and detail data are to be updated or inserted in one go. If we were to handle all this on the client or application side, we would first have to create a master insert statement (T-SQL) and send it across the network as explained above, then retrieve the master id just created by that insert, then build an update or insert statement for the details table and send that T-SQL to the database as well. With a stored procedure it is just one call, with all the required parameters, to handle everything described above.
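A hedged sketch of that master/detail case (all table and column names here are invented; the point is that one procedure call replaces the whole client-side conversation):

-- Hypothetical master/detail insert done in a single procedure call.
CREATE PROCEDURE dbo.usp_InsertOrderWithDetail
    @CustomerID INT,
    @ProductID INT,
    @Quantity INT
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @OrderID INT

    -- Insert the master row
    INSERT INTO dbo.OrderMaster (CustomerID, OrderDate)
    VALUES (@CustomerID, GETDATE())

    -- Pick up the master id created above, without another round trip
    SET @OrderID = SCOPE_IDENTITY()

    -- Insert the detail row using that id
    INSERT INTO dbo.OrderDetail (OrderID, ProductID, Quantity)
    VALUES (@OrderID, @ProductID, @Quantity)
END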


It is recommended to move as much of the business logic as possible into stored procedures; that way all the processing is done on the SQL Server, reducing round-trip network traffic.



In case you have any questions or comments, please post them and I will try to respond.







Monday, June 29, 2009

Benefits of Stored Procedures

Before I go on to explain the benefits, let me explain why I am writing this. We use stored procedures as a standard practice, not thinking twice of the very reason behind this.

I realized that we follow standard practice without questioning it, not understanding its perks and pitfalls, so I decided to list out the benefits as I understand them and share them with my team via this blog. The benefits are listed below; I will explain the points one by one in the future.

Stored procedures offer many benefits, including:

  • Reduced network traffic and latency, boosting application performance.

  • Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.

  • Client execution requests are more efficient. For example, if an application needs to INSERT a large binary value into an image data column not using a stored procedure, it must convert the binary value to a character string (which doubles its size), and send it to SQL Server. When SQL Server receives it, it then must convert the character value back to the binary format. This is a lot of wasted overhead. A stored procedure eliminates this issue as parameter values stay in the binary format all the way from the application to SQL Server, reducing overhead and boosting performance.

  • Stored procedures help promote code reuse. While this does not directly boost an application's performance, it can boost the productivity of developers by reducing the amount of code required, along with reducing debugging time.

  • Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients (assuming you keep the parameters the same and don't remove any result sets columns). This saves developer time.

  • Stored procedures provide better security for your data. If you use stored procedures exclusively, you can remove direct SELECT, INSERT, UPDATE, and DELETE rights from the tables and force developers to use stored procedures as the method for data access (a minimal permissions sketch follows this list).
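
A minimal permissions sketch of that last point (the table, procedure, and role names are placeholders):

-- Placeholders: dbo.Orders, dbo.usp_GetOrderCount and AppUserRole are examples only.
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppUserRole;
GRANT EXECUTE ON dbo.usp_GetOrderCount TO AppUserRole;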
Warning: using stored procedures does not guarantee fast speed; there has to be some level of code design to make a procedure fast and reusable.

Please add comments to the blog; I would like to hear what you think of this.

Wednesday, June 24, 2009

Use view to improve query performance


Recently I came across two scenarios where a large dataset (1,000,000+ rows) was being retrieved using a complex set of joins in a stored procedure. We did all the regular stuff:
1. Checked Indexes - made sure they are on the right columns and verified the fill factor
2. Re-indexed the database for better performance
3. Checked and removed the non essential joins
4. Verified join types

We did get some improvement in performance, but nothing like what we got by using views instead of the complex select query; we were able to increase performance by 10 to 40 times in specific cases.

Now the question is why?
I read the following while searching on another topic and it struck me: a view is an optimized query object. SQL Server itself chooses the best execution plan, and when we filter on the view the nested loops are decreased considerably, as only one select is being filtered rather than all the tables in the join. It really helps.

From the Database Management System (DBMS) perspective, a view is a description of the data (a form of metadata). When a typical view is created, the metadata is defined by encapsulating a SELECT statement that defines a result set to be represented as a virtual table. When a view is referenced in the FROM clause of another query, this metadata is retrieved from the system catalog and expanded in place of the view's reference. After view expansion, the SQL Server query optimizer compiles a single execution plan for the executing query. The query optimizer searches though a set of possible execution plans for a query, and chooses the lowest-cost plan it can find, based on estimates of the actual time it will take to execute each query plan.
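
To make that concrete, here is a rough illustration of the pattern we used (all object names are invented; the real procedures were obviously more involved):

-- Hypothetical view wrapping the complex joins.
CREATE VIEW dbo.vw_OrderSummary
AS
SELECT o.OrderID, o.OrderDate, c.CustomerName, d.ProductID, d.Quantity
FROM dbo.OrderMaster AS o
JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID
JOIN dbo.OrderDetail AS d ON d.OrderID = o.OrderID
GO

-- The stored procedure then filters the single view instead of repeating the joins.
SELECT *
FROM dbo.vw_OrderSummary
WHERE OrderDate >= '20090101'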

Friday, June 19, 2009

FTP without an FTP client on Windows XP

I am sure everyone has used FTP and FTP clients at one time or another. But what happens when you want to connect to an FTP server and don't have an FTP client? I just learned how to connect to FTP from the command prompt, so I am sharing the info.

1) Open a command window (Start -> Run -> cmd)
2) Type ftp
3) Type open ftp.ftp.somesite.com
4) Type the username: ftpuser
5) Type the password: ftppassword
6) Type "dir" to see the contents
7) Type "binary" or "ascii" to change the download type
8) Type "get filename" to download

Also note that the filenames are case sensitive here, so be careful.

Thursday, April 30, 2009

UPDATE script with Joins


I know this is basic, but we tend to forget the basic stuff. Today a client asked me to update the date from one table into another table with hundreds of records; for a moment I thought that meant manual data entry, and then it struck me that a simple update statement would do it. I guess this is bound to happen if you work more at the client end and less in development.

Well, here is a sample of what I did. I was amazed that while manual data entry would have taken me hours, the script took me 5 to 6 minutes.

Here is what I did:

update Table1 set Table1.col1 = Table2.col2 from Table2 where Table1.Id = Table2.Id


Of course I could have used an INNER JOIN or any other standard join as and when required; a version with an explicit join is shown below.
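
For reference, the same update written with an explicit INNER JOIN (same made-up table and column names as the sample above):

-- Same update as above, using an explicit INNER JOIN with aliases.
UPDATE t1
SET t1.col1 = t2.col2
FROM Table1 AS t1
INNER JOIN Table2 AS t2 ON t1.Id = t2.Id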