1. Overview of Popular Ajax Frameworks
Existing Ajax frameworks can be graded into the following levels:
· Level 0: base work, i.e., extending the methods of built-in objects plus the Ajax communication layer; fairly lean.
· Level 1: effects work, adding common effect and transition functions such as tween, drag, maskLayer, and fade.
· Level 2: component work, providing dialogs, lists, trees, calendars, and similar widgets.
· Level 3: application work, offering a complete front-end platform that lets users define modules implementing specific functionality.
Some frameworks stop at level 0 (e.g., Prototype.js), some reach level 1 (e.g., jQuery), and some go all the way to level 3 (e.g., ASP.NET AJAX and Ext).
Choosing a suitable Ajax framework can improve program efficiency, reduce development effort, and polish the presentation.
(1) ASP.NET AJAX
Home page: http://www.asp.net/cn/
Design philosophy
ASP.NET AJAX integrates a set of client script libraries with the rich, server-based ASP.NET development platform. Its server-side programming model is simpler than the client-side one and integrates easily with existing ASP.NET applications: complex features can often be implemented just by dragging a few controls onto the page, with no need to understand the deeper workings. In addition, the server-side ASP.NET AJAX Control Toolkit contains a large number of standalone AJAX controls as well as AJAX extensions to the existing ASP.NET server controls.
(2) jQuery
Home page: http://jquery.com/
Design philosophy
Simplicity: almost every operation starts by selecting DOM elements (with a powerful selector engine) and then acting on them (via chaining and similar features). jQuery itself is very easy to pick up; its code is concise and close to native JavaScript, and the library file is very small.
(3) Ext JS
Home page: http://extjs.com/
Design philosophy
100% object-oriented and component-based, with consistent syntax and a global namespace. The documentation is complete, well organized, and convenient. All widgets can be used directly, without any setup.
2. Performance Comparison
The performance of an Ajax application depends on several aspects of the web application:
· Server response time
· Network transfer time
· Client-side JavaScript processing time
You can build sample programs with the different frameworks and measure their performance with Firebug.
In terms of how the script is produced:
ASP.NET AJAX server controls emit their own supporting code, so the generated JS files can be somewhat larger.
The MicrosoftAjax core file itself is fairly small, about 82 KB, and loads quickly.
The jQuery core file is small, about 40 KB, and loads quickly.
The Ext 2.0 core file is over 500 KB and loads somewhat more slowly.
Of course, you can still improve Ajax performance through optimization: trimming unnecessary includes, using caching, and similar techniques.
3. Usability Comparison
(1) Using ASP.NET AJAX
It is integrated with the Visual Studio 2008 IDE, which makes development and debugging very convenient.
Complex features can often be implemented just by dragging a few controls onto the page, with no need to understand the deeper workings. Beyond that, the server-side ASP.NET AJAX Control Toolkit contains many standalone AJAX controls and AJAX extensions to the existing ASP.NET server controls, so implementation is straightforward. Plenty of books, source code, and video tutorials are available online, and the framework is backed by Microsoft's official community.
(2) Using jQuery
Microsoft Visual Studio 2008 SP1 supports IntelliSense for jQuery: just import the corresponding vsdoc script into the code page. The framework ships many examples, its documentation is thorough and covers the various use cases in detail, and there are many mature plug-ins to choose from.
(3) Using Ext JS
Visual Studio 2008 likewise supports IntelliSense for Ext JS: just import the corresponding vswd-ext script into the code page. The framework ships many sample sources, and its documentation is thorough and covers the various use cases in detail.
Tuesday, October 5, 2010
Common Software for .NET (Dot Net) Development
1) VS2008 (Microsoft Visual Studio 2008)
Purpose: developing .NET programs.
2) TestDriven.NET
Purpose: unit testing.
Download: get the Personal Version from http://testdriven.net/download.aspx; do not download the Enterprise Version, as the Personal Version is entirely sufficient.
3) Firebug
Purpose: debugging client-side JS, monitoring page performance and the sending and receiving of data, and checking for client-side errors.
Download: first install the Firefox browser from http://www.mozillaonline.com/
After installing Firefox, open it, go to Tools -> Add-ons -> Get Add-ons, search for Firebug, and click Install when the result appears.
4) IETester
Purpose: emulating IE 5.5/6/7/8 and other IE versions for browser-compatibility testing.
Download: http://www.my-debugbar.com/wiki/IETester/HomePage
5) Notepad++
Purpose: a lightweight editor for simple edits to text (web pages or configuration files), so you don't have to sit through the long startup of the behemoth VS2008 every time. It is not required; you can substitute another editor such as EditPlus or Vim.
Download: http://notepad-plus-plus.org/download
* On refactoring: the direct approach in VS2008 is to right-click in the code editor to bring up the context menu; the most commonly used commands there are Rename, Extract Method, and Extract Interface, as shown in the figure below:
盲婚記 (A Tale of Blind Marriage), Part Three: Life Lies in Motion (3)
However well he was looked after, 二孬 was still a single young man thousands of miles from home. Fortunately, in those days the work unit managed everything, the organization could be trusted, and the girls were all innocent. Union chairman 张智善 ran his legs off and finally arranged a dance with the girls from the textile mill for this group of university graduates who had come from afar and badly needed to widen their social circles.
二孬, without exception, missed every one. As if under a curse, just before each dance began he would run into some mishap, neither large nor small: a cold, diarrhea, a fall, an overfull stomach, a business trip... 二孬 later often told his daughter, I was wholeheartedly waiting for your mother. 墨馨 would twitch the corners of her mouth, sweep him with a sideways glance, and say, you were single-mindedly waiting for my mother.
Whichever mother he was waiting for, 二孬 in any case missed one matchmaking gathering after another. His only comment came on Sundays, when he could not find a roommate to go out with: how come men and women in "luan"-ai (lian'ai, romance) never see enough of each other? (Please disregard 二孬's nonstandard, Henan-accented Mandarin; to this day he has never managed that word 恋.)
With energy this abundant, he needed an outlet to restore his balance. 二孬's solution was to eat and to work. The trouble was that he used to eat surrounded by friends; now, all on his own, the food was not as sweet, so he made do with a casual twenty fried dough sticks and three bowls of soy milk per sitting. Fortunately the canteen and the restaurant by the gate had come to know his bottomless capacity, and at last no one gathered to watch him eat as though he were a gorilla.
Business trips, naturally, were also routine fare for him; no way around it, everyone else had girlfriends to see. So 二孬 spent his days running off to vast forests and sparsely populated little towns, steeped until he was as grimy and unkempt as a wild man. 张智善 shook his head at the sight and promptly locked horns with 二孬's boss, 王老丹. 老丹 had transferred in from the army; how could he fail to understand the way troops are used? Simple-minded, answering every summons, who but 二孬 fit the bill? He flatly refused to move him to indoor work even temporarily. 张智善 may have been fat, but his mind was not coarse. One quick turn of thought and he soon brought 老丹 around to a key point: to keep a man, you must let him take root. Unless he put down roots here, sooner or later he would find a way to transfer away.
So 老丹 changed his attitude and began to cooperate actively with 张智善 in solving this great unsolved problem of 二孬's personal life; no, this small great problem. Remember, 二孬 had sat the college entrance exam early and started work early, and only his emotional development lagged slightly behind: he had not yet registered the bleak reality that everyone around him was already married.
Saving Windows 2008 / Windows 7 Performance Monitor settings
There are actually a few ways you can achieve this. The first is the one I find myself using the most, although the last is really the way I was familiar with in previous OSes. You can copy the properties of a given perfmon session by clicking the Copy Properties icon next to the highlight icon above the graph. Next, open Notepad and paste into it, then save the file somewhere handy. The next time you want to get back to the settings you were using, just open the file, copy all of its text, and click the Paste Properties icon. If you're interested, here's what the data looks like:
Another way you can save your settings is through Data Collector Sets. Once you have one defined and have run it to completion, you'll have Reports you can view, and you can copy and paste counters from one report to another. So if you define a Data Collector Set, add some performance counters to it, and run it, you'll have reports to look at. Highlight the counters you want in any of those reports, Copy, then go back up to Performance Monitor (the live view) and Paste, and you'll have the counters. This is what I do in practice now, since the counters I want to watch already live in user-defined Data Collector Sets (so I can also view a whole day, or an arbitrary period within a day, after the fact).
The third solution is to use the MMC. Run mmc from Start-Run, then go to File – Add or Remove Snap-ins. Add Performance to the selected snap-ins:
Add whatever counters you want, then simply save it. Back when I used to do this all the time I would have a perfmon.msc file on my server’s desktop that had the counters I was most interested in.
Close the MMC and you should be able to open the saved .msc file by simply double-clicking on it. This is exactly the behavior that I was used to on previous versions of Windows Server. Apparently the change is that perfmon no longer launches the performance monitor tool inside an MMC instance, but rather as a standalone process (which doesn’t directly support Save).
Windows/SQL Server Physical Disk I/O Bottlenecks: Check and Resolution
PhysicalDisk Object: Avg. Disk Queue Length. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, you might have an I/O bottleneck.
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. The following list shows ranges of possible values and what the ranges mean:
•Less than 10 ms - very good
•Between 10 - 20 ms - okay
•Between 20 - 50 ms - slow, needs attention
•Greater than 50 ms – Serious I/O bottleneck
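The latency ranges above are easy to fold into a monitoring script. A minimal sketch in Python (the function name and the thresholds-as-code are just my framing of the list above; the counter itself reports seconds):

```python
def classify_read_latency(avg_disk_sec_read):
    """Classify the PhysicalDisk 'Avg. Disk sec/Read' counter value (in seconds)
    according to the ranges listed above."""
    ms = avg_disk_sec_read * 1000.0  # convert seconds to milliseconds
    if ms < 10:
        return "very good"
    elif ms < 20:
        return "okay"
    elif ms < 50:
        return "slow, needs attention"
    else:
        return "serious I/O bottleneck"

print(classify_read_latency(0.005))  # 5 ms read latency
print(classify_read_latency(0.035))  # 35 ms read latency
```

The same ranges are commonly applied to Avg. Disk sec/Write as well.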
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that if this value is greater than 50 percent, there is an I/O bottleneck.
Avg. Disk Reads/Sec is the rate of read operations on the disk. Ensure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
Avg. Disk Writes/Sec is the rate of write operations on the disk. Ensure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
•Raid 0 -- I/Os per disk = (reads + writes) / number of disks
•Raid 1 -- I/Os per disk = [reads + (2 * writes)] / 2
•Raid 5 -- I/Os per disk = [reads + (4 * writes)] / number of disks
•Raid 10 -- I/Os per disk = [reads + (2 * writes)] / number of disks
For example, you might have a RAID-1 system with two physical disks with the following values of the counters.
Disk Reads/sec 80
Disk Writes/sec 70
Avg. Disk Queue Length 5
In that case, you are encountering (80 + (2 * 70))/2 = 110 I/Os per disk and your disk queue length = 5/2 = 2.5, which indicates a borderline I/O bottleneck.
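The RAID formulas and the worked example above can be checked with a small helper; a sketch (the function name is mine, the formulas are the ones listed above):

```python
def ios_per_disk(reads, writes, raid_level, n_disks):
    """Per-disk I/O load, using the RAID formulas listed above."""
    if raid_level == 0:
        return (reads + writes) / n_disks
    elif raid_level == 1:
        return (reads + 2 * writes) / 2
    elif raid_level == 5:
        return (reads + 4 * writes) / n_disks
    elif raid_level == 10:
        return (reads + 2 * writes) / n_disks
    raise ValueError("unsupported RAID level")

# The worked example: RAID-1 with two physical disks,
# 80 Disk Reads/sec and 70 Disk Writes/sec, queue length 5.
load = ios_per_disk(80, 70, raid_level=1, n_disks=2)
queue_per_disk = 5 / 2  # Avg. Disk Queue Length spread across two disks
print(load, queue_per_disk)  # 110.0 2.5
```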
You can also identify I/O bottlenecks by examining the latch waits. These latch waits account for the physical I/O waits when a page is accessed for reading or writing and the page is not available in the buffer pool. When the page is not found in the buffer pool, an asynchronous I/O is posted and then the status of the I/O is checked. If the I/O has already completed, the worker proceeds normally. Otherwise, it waits on PAGEIOLATCH_EX or PAGEIOLATCH_SH, depending upon the type of request. You can use the following DMV query to find I/O latch wait statistics.
Select wait_type,
waiting_tasks_count,
wait_time_ms
from sys.dm_os_wait_stats
where wait_type like 'PAGEIOLATCH%'
order by wait_type
A sample output follows.
wait_type waiting_tasks_count wait_time_ms signal_wait_time_ms
-----------------------------------------------------------------------
PAGEIOLATCH_DT 0 0 0
PAGEIOLATCH_EX 1230 791 11
PAGEIOLATCH_KP 0 0 0
PAGEIOLATCH_NL 0 0 0
PAGEIOLATCH_SH 13756 7241 180
PAGEIOLATCH_UP 80 66 0
When the I/O completes, the worker is placed in the runnable queue. The time from I/O completion until the worker is actually scheduled is accounted for in the signal_wait_time_ms column. You can identify an I/O problem if your waiting_tasks_count and wait_time_ms values deviate significantly from what you see normally. For this, it is important to get a baseline of performance counters and key DMV query outputs while SQL Server is running smoothly. These wait types can indicate whether your I/O subsystem is experiencing a bottleneck, but they do not provide any visibility into the physical disk(s) experiencing the problem.
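One way to operationalize "deviates significantly from baseline" is to snapshot the DMV output periodically and compare it against a saved baseline. A hedged sketch in Python (the counter values and the 2x growth threshold here are invented for illustration, not guidance from the DMV itself):

```python
def io_latch_regressions(baseline, current, factor=2.0):
    """Flag wait types whose wait_time_ms grew by more than `factor` vs. baseline.

    `baseline` and `current` map wait_type -> wait_time_ms, e.g. as captured
    from sys.dm_os_wait_stats for the PAGEIOLATCH% wait types.
    """
    flagged = []
    for wait_type, base_ms in baseline.items():
        cur_ms = current.get(wait_type, 0)
        if base_ms == 0:
            # Any waits on a previously idle wait type are worth a look.
            if cur_ms > 0:
                flagged.append(wait_type)
        elif cur_ms / base_ms > factor:
            flagged.append(wait_type)
    return flagged

# Hypothetical numbers: PAGEIOLATCH_SH wait time has tripled since the baseline.
baseline = {"PAGEIOLATCH_EX": 791, "PAGEIOLATCH_SH": 7241}
current = {"PAGEIOLATCH_EX": 900, "PAGEIOLATCH_SH": 24000}
print(io_latch_regressions(baseline, current))  # ['PAGEIOLATCH_SH']
```

Note that sys.dm_os_wait_stats is cumulative since server start, so in practice you would compare deltas between snapshots taken over equal intervals.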
You can use the following DMV query to find currently pending I/O requests. You can execute this query periodically to check the health of I/O subsystem and to isolate physical disk(s) that are involved in the I/O bottlenecks.
select
database_id,
file_id,
io_stall,
io_pending_ms_ticks,
scheduler_address
from sys.dm_io_virtual_file_stats(NULL, NULL) as t1,
sys.dm_io_pending_io_requests as t2
where t1.file_handle = t2.io_handle
A sample output follows. It shows that on a given database, there are three pending I/Os at this moment. You can use the database_id and file_id columns to find the physical disk the files are mapped to. The io_pending_ms_ticks values represent the total time individual I/Os are waiting in the pending queue.
Database_id File_Id io_stall io_pending_ms_ticks scheduler_address
----------------------------------------------------------------------
6 1 10804 78 0x0227A040
6 1 10804 78 0x0227A040
6 2 101451 31 0x02720040
Resolution
When you see an I/O bottleneck, your first instinct might be to upgrade the I/O subsystem to meet the workload requirements. This will definitely help, but before you go out and invest money in hardware, examine the bottleneck to see whether it is the result of poor configuration and/or poor query plans. We recommend that you follow the steps below in order.
1. Configuration: Check the memory configuration of SQL Server. If SQL Server has been configured with insufficient memory, it will incur more I/O overhead. You can examine the following counters to identify memory pressure:
•Buffer Cache hit ratio
•Page Life Expectancy
•Checkpoint pages/sec
•Lazywrites/sec
For more information about memory pressure, see Memory Bottlenecks earlier in this paper.
2. Query Plans: Examine execution plans to see which ones consume the most I/O. A better plan (for example, one that uses an index) can often reduce I/O. If indexes are missing, you may want to run Database Engine Tuning Advisor to find them.
The following DMV query can be used to find which batches or requests are generating the most I/O. Note that we are not accounting for physical writes. This is okay if you consider how databases work: the DML and DDL statements within a request do not write data pages directly to disk. Instead, physical page writes are triggered when transactions commit, and are usually performed by a checkpoint or by the SQL Server lazy writer. You can use a DMV query like the following to find the five requests that generate the most I/Os. Tuning those queries so that they perform fewer logical reads relieves pressure on the buffer pool, which lets other requests find the data they need in the buffer pool on repeated executions (instead of performing physical I/O). Hence, overall system performance is improved.
Here is an example of a query that joins two tables with a hash join.
create table t1 (c1 int primary key, c2 int, c3 char(8000))
create table t2 (C4 int, c5 char(8000))
go
--load the data
declare @i int
select @i = 0
while (@i < 6000)
begin
insert into t1 values (@i, @i + 1000, 'hello')
insert into t2 values (@i,'there')
set @i = @i + 1
end
--now run the following query
select c1, c5
from t1 INNER HASH JOIN t2 ON t1.c1 = t2.c4
order by c2
Run another query so that there are two queries to look at for I/O stats:
select SUM(c1) from t1
These two queries are run in a single batch. Next, use the following DMV query to examine the queries that generate the most I/Os:
SELECT TOP 5
(total_logical_reads/execution_count) AS avg_logical_reads,
(total_logical_writes/execution_count) AS avg_logical_writes,
(total_physical_reads/execution_count) AS avg_phys_reads,
execution_count,
statement_start_offset as stmt_start_offset,
(SELECT SUBSTRING(text, statement_start_offset/2 + 1,
(CASE WHEN statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(MAX),text)) * 2
ELSE statement_end_offset
END - statement_start_offset)/2)
FROM sys.dm_exec_sql_text(sql_handle)) AS query_text,
(SELECT query_plan from sys.dm_exec_query_plan(plan_handle)) as query_plan
FROM sys.dm_exec_query_stats
ORDER BY (total_logical_reads + total_logical_writes)/execution_count DESC
You can, of course, change this query to get different views on the data. For example, to find the five requests that generate the most I/Os in a single execution, you can order by:
(total_logical_reads + total_logical_writes)/execution_count
Alternatively, you may want to order by physical I/Os and so on. However, logical read/write numbers are very helpful in determining whether or not the plan chosen by the query is optimal. The output of the query is as follows.
avg_logical_reads avg_logical_writes avg_phys_reads
----------------- ------------------ ---------------
16639 10 1098
6023 0 0
execution_count stmt_start_offset
--------------- -----------------
1 0
1 154
Query_text Query_plan
----------------------------------- -----------
select c1, c5 from t1 INNER HASH JOIN …
select SUM(c1) from t1
The output tells you several important things. First, it identifies the queries that are generating the most I/Os. You can also look at the SQL text to see whether the query needs to be re-examined to reduce I/Os. Verify that the query plan is optimal. For example, a new index might be helpful. Second, the second query in the batch does not incur any physical I/Os because all the pages needed for table t1 are already in the buffer pool. Third, the execution count can be used to identify whether it is a one-off query or the one that is executed frequently and therefore needs to be looked into carefully.
3. Data Compression: Starting with SQL Server 2008, you can use the data compression feature to reduce the size of tables and indexes, and thereby of the whole database. The compression achieved depends on the schema and the data distribution; typically you can achieve 50-60% compression, and we have seen up to 90% in some cases. What this means is that if you can compress your active data by 50%, you have effectively cut your I/O requirements in half. Data compression comes at the cost of additional CPU, which must be weighed against your workload. Here are some general strategies.
First, why isn’t compressing the whole database blindly such a good idea? Well, to give you an extreme example, if you have a heavily used table T with 10 pages in a database with millions of pages, there is no benefit in compressing T. Even if SQL Server could compress 10 pages to 1 page, you hardly made a dent in the size of the database, but you did add some CPU overhead instead. In a real-life workload, the choices are not this obvious, but this example shows that you must look before you compress. Our recommendation is this: Before you compress an object (for example, a table index or a partition), look at its size, usage, and estimated compression savings by using the sp_estimate_data_compression_savings stored procedure.
Let us look at each of these in some detail:
• If the size of the object is much smaller than the overall size of the database, it does not buy you much.
• If the object is used heavily for both DML and SELECT operations, you will incur additional CPU overhead that can impact your workload, especially if it makes the workload CPU bound. You can use sys.dm_db_index_operational_stats to find the usage pattern of objects and identify which tables, indexes, and partitions are being hit the most.
• The compression savings are schema-dependent and data-dependent, and in fact, for some objects, the size after compression can be larger than before, or the space savings can be insignificant.
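The first bullet, that compressing a small object barely dents the database, is easy to quantify. A sketch (the function name and numbers are illustrative; real estimates should come from sp_estimate_data_compression_savings):

```python
def db_size_after_compression(db_pages, object_pages, compression_ratio):
    """Whole-database size (in pages) after compressing one object.

    compression_ratio is the compressed size as a fraction of the original,
    e.g. 0.5 for the typical 50% savings mentioned above.
    """
    saved = object_pages * (1 - compression_ratio)
    return db_pages - saved

# The extreme example from the text: a 10-page table in a million-page database,
# even compressed 10:1, leaves the database essentially the same size.
before = 1_000_000
after = db_size_after_compression(before, object_pages=10, compression_ratio=0.1)
print(after / before)  # ~0.999991: essentially no reduction

# Compressing a 400,000-page table at 50% savings is a different story.
after_big = db_size_after_compression(before, object_pages=400_000, compression_ratio=0.5)
print(after_big / before)  # 0.8
```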
If you have a partitioned table where data in some partitions is accessed infrequently, you may want to compress those partitions and associated indexes with page compression. This is a common scenario with partitioned tables where older partitions are referenced infrequently. For example, you might have a table in which sales data is partitioned by quarters across many years. Commonly the queries are run on the current quarter; data from other quarters is not referenced as frequently. So when the current quarter ends, you can change the compression setting for that quarter’s partition.
For more information about data compression, see the SQL Server Storage Engine Blog (http://blogs.msdn.com/sqlserverstorageengine/archive/tags/Data+Compression/default.aspx) and SQL Server 2008 Books Online.
4. Upgrading the I/O Subsystem: If you have confirmed that SQL Server is configured correctly and examined the query plans and you are still experiencing I/O bottlenecks, the last option left is to upgrade your I/O subsystem to increase I/O bandwidth:
• Add more physical drives to the current disk arrays and/or replace your current disks with faster drives. This helps to boost both read and write access times. But don't add more drives to the array than your I/O controller can support.
• Add faster or additional I/O controllers. Consider adding more cache (if possible) to your current controllers.