Thursday, July 28, 2011

How to run two Dropbox accounts on one computer (Windows 7)

There is really no doubt about the usefulness of Dropbox. You can back up your files to the cloud and sync your documents across various computers, operating systems and devices. The one bad thing is that you can't access multiple Dropbox accounts on the same computer simultaneously.
There are plenty of reasons why most of us have different accounts (or need to access different accounts simultaneously). It could be:
1. You are using a shared computer at home and every user wants to have their own Dropbox folder accessible when they are using it.

2. You have set up various Dropbox accounts for different purposes (home, work, leisure etc.) and you want to access your office documents when you are working from home.
3. 2GB is just not enough for you and you are not willing to pay for extra storage space, so you create more Dropbox accounts since opening an account is free (I don't really encourage this. Dropbox is a great service. Do pay for it if you need more storage space).
Whatever the reason, if you have multiple Dropbox accounts, you have to find a way to access them all on the same computer at the same time. In Windows 7 there is a series of steps to get this working, but it is easy if you follow the tutorial here. Here's how:
Note: This tutorial assumes that you have administrator rights on your Windows system and that you have already set up your primary Dropbox account on it.

Create a new user account

1. Go to Control Panel -> Add or remove user accounts
dropbox-add-new-user
2. Click on the Create a new account link
dropbox-new-user-link
3. Give your account a name and create the account.
dropbox-new-user-name
4. Next, click on the new user account and select Create Password.
dropbox-create-password
Once you are done setting the password, close the Control Panel and switch over to the newly created user account. Go to Start -> Switch User.
dropbox-switch-user

Dropbox configuration at the new user account

In the new user account, open Windows Explorer and create a folder on the C: drive to contain your Dropbox files.
Download and install a brand new copy of Dropbox (you will be asked for the admin password during the installation).
When it prompts you to choose a folder, make sure you choose the folder (on the C: drive) that you have just created. This ensures that the folder is accessible from your primary account.
Now log out of the current Windows session and log back into your primary account.

Accessing multiple instances of Dropbox from your primary account

Open Windows Explorer and navigate to
C:\Users\new-user\AppData\Roaming\Dropbox\bin
dropbox-navigate-to-user-account
Right-click the Dropbox.exe file and select Send to -> Desktop (create shortcut).
dropbox-sendto-desktop
Now, go to your desktop and find the newly created shortcut. Right-click the shortcut icon and select Properties.
dropbox-shortcut-properties
Prepend the following to the Target field:
runas /user:username
where username is the name of the new user account that you have just created.
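For example, if the second account were named dropbox2 (a name used here purely for illustration) and Dropbox.exe lives at the path shown above, the complete Target field would look something like this:
runas /user:dropbox2 "C:\Users\dropbox2\AppData\Roaming\Dropbox\bin\Dropbox.exe"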
dropbox-runas-user
Click OK. Now double-click the shortcut to run the second instance of Dropbox. A window will appear prompting you for the password.
dropbox-prompt-password
You should now see the second Dropbox icon in your taskbar.
dropbox-taskbar




If you don't want to type in a password each time, do this:
1) Download PsTools and extract it to the System32 folder (make sure you untick read-only in Properties):
http://download.sysinternals.com/Files/PsTools.zip
2) Set the 'Target' on your shortcut to:
C:\Windows\System32\PsExec.exe -d -u USERNAME -p PASSWORD "C:\DROPBOX FOLDER LOCATION\Dropbox.exe"
3) Test that it works, then add the shortcut to your Windows Startup folder.

That’s it.

Saturday, July 23, 2011

Dell PowerEdge T110 Bios to Enable VT and EVC

To enable 64-bit support for virtual machines in VMware ESX you must enable Virtualization Technology, and to configure the ESX host for Enhanced vMotion Compatibility you must enable Execute Disable. Both settings can be found under the CPU Information section of the Dell BIOS (press F2 during boot to enter the BIOS on the PowerEdge).

Wednesday, July 20, 2011

At thirty one stands firm, at forty one is free of doubts, at fifty one knows the will of Heaven...

"At twenty, the capping age (弱冠); at thirty, one stands firm (而立); at forty, one is free of doubts (不惑); at fifty, one knows the will of Heaven (知天命); at sixty, the sexagenary cycle completes (花甲); seventy has been rare since ancient times (古来稀); eighty is hoary old age (耄耋). On standing firm one sighs over the capping years; on being free of doubts one sighs over the years of standing firm..."

A newborn is called 婴儿 (infant); a child not yet one year old is called 襁褓 (qiǎngbǎo, "swaddling clothes").
Ages 2 to 3 are called 孩提.
A girl of 7 is in her 髫 (tiáo, hair hanging loose) years; a boy of 7 is in his 韶 years.
Children under 10 are called 黄口.
Ages 13 to 15 are the 舞勺 years.
Ages 15 to 20 are the 舞象 years.
A girl of 12 is at the age of the golden hairpin (金钗之年).
A girl of 13 is at the cardamom age (豆蔻年华).
A girl of 15 is at the hairpinning age (及笄之年).
16 is called the jade years (碧玉年华);
20 is called the peach-and-plum years (桃李年华).
24 is called the flower-almanac years (花信年华); a woman of marriageable age is said to be at 摽梅之年.
A man of 20 is at the capping age (弱冠).
30 is the age of standing firm (而立之年).
40 is the age of no doubts (不惑之年).
50 is the age of knowing one's destiny (知命之年).
60 is called 花甲 or the age of the obedient ear (耳顺之年).
70 is the age "rare since ancient times" (古稀之年).
80 is the age of leaning on a staff at court (杖朝之年).
80 to 90 is called 耄耋之年 (mào dié, extreme old age).
100 is called 期颐 (qīyí), the age of being cared for.

Tuesday, July 19, 2011

How to verify an Active Directory installation

http://support.microsoft.com/kb/298143

It is important to verify a successful installation of Active Directory. After you have performed an upgrade, you can verify the promotion of a server to a domain controller by verifying the following items:
  1. Default containers: These are created automatically when the first domain is created. Open Active Directory Users and Computers, and then verify that the following containers are present: Computers, Users, and ForeignSecurityPrincipals.
  2. Default domain controllers organizational unit: This holds the first domain controller, and additionally serves as the default container for new Windows 2000 domain controllers. Open Active Directory Users and Computers, and then verify this organizational unit.
  3. Default-First-Site-Name: During the promotion of a server to domain controller, the Dcpromo.exe program determines the site of which the domain controller can become a member. If the domain controller that is being created is the first in a new forest, a default site named "Default-First-Site-Name" is created and the domain controller becomes a member of this site until the appropriate subnets and sites are configured. You can verify this item by using Active Directory Sites and Services.
  4. Active Directory database: The Active Directory database is your Ntds.dit file. Verify its existence in the %Systemroot%\Ntds folder.
  5. Global catalog server: The first domain controller becomes a global catalog server, by default. To verify this item:
    1. Click Start, point to Programs, click Administrative Tools, and then click Active Directory Sites and Services.
    2. Double-click Sites to expand it, expand Servers, and then select your domain controller.
    3. Double-click the domain controller to expand the server contents.
    4. Below the server, an NTDS Settings object is displayed. Right-click the object, and then click Properties.
    5. On the General tab, you can observe a global catalog check box, which should be selected, by default.
  6. Root domain: The forest root is created when the first domain controller is installed. Verify your computer network identification in My Computer. The Domain Name System (DNS) suffix of your computer should match the domain name that the domain controller belongs to. Also, ensure that your computer registers the proper computer role. To verify this role, use the net accounts command. The computer role should say "primary" or "backup" depending on whether it is the first domain controller in the domain.
  7. Shared system volume: A Windows 2000 domain controller should have a shared system volume located in the %Systemroot%\Sysvol\Sysvol folder. To verify this item, use the net share command. The Active Directory also creates two standard policies during the installation process: The Default Domain policy and the Default Domain Controllers policy (located in the %Systemroot%\Sysvol\Domain\Policies folder). These policies are displayed as the following globally unique identifiers (GUIDs):
    {31B2F340-016D-11D2-945F-00C04FB984F9} representing the Default Domain policy
    {6AC1786C-016F-11D2-945F-00C04fB984F9} representing the Default Domain Controllers policy
  8. SRV resource records: You must have a DNS server installed and configured for Active Directory and the associated client software to function correctly. Microsoft recommends that you use Microsoft DNS server, which is supplied with Windows 2000 Server as your DNS server. However, Microsoft DNS server is not required. The DNS server that you use must support the Service Resource Record (SRV RR) Requests for Comments (RFC) 2052, and the dynamic update protocol (RFC 2136). Use the DNS Manager Microsoft Management Console (MMC) snap-in to verify that the appropriate zones and resource records are created for each DNS zone. Active Directory creates its SRV RRs in the following folders:
    • _Msdcs/Dc/_Sites/Default-first-site-name/_Tcp
    • _Msdcs/Dc/_Tcp
    In these locations, an SRV RR is displayed for the following services:
    • _kerberos
    • _ldap
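    A quick way to spot-check the SRV records from a command prompt is nslookup; substitute your own forest root domain for contoso.com, which is only a placeholder here:
    nslookup -type=SRV _ldap._tcp.dc._msdcs.contoso.com
    nslookup -type=SRV _kerberos._tcp.contoso.com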

Sunday, July 17, 2011

部分重要经济指标说明(转)

世界上绝大多数交易者都会在交易时采用这些宏观指标。 所发表数据的“质
量”或影响力可能在不同时间有所不同。 如果指标数据代表了新信息或者据此数据能
得出其它报告或数据无法得出的结论,那么该指标数据就是十分重要的.另外,如果人
们可以用某个指标来更好地预测未来趋势,那么这个指标就是极具价值的。

CCI - 消费者信心指数(Consumer Confidence Index) [高]
美国会议委员会会在每个月的最后一个星期二,美国东部时间早上10:00公布当月数据
        CCI是从5,000个美国家庭的抽样调查中得出,是反映消费者信心强弱的最准确
指标。消费者信心指数表明,当经济能确保带来更多的工作机会、更高的工资和较低的
利率时,消费者的信心和购买力就会增加。 受访者回答的问题包括其收入状况,对当
前市场的看法以及增加收入的可能性。信心指数是美联储决定利率的重要依据之一。由
于个人消费开支占美国经济总量的三分之二,信心指数对市场的影响很大。
CPI - 消费者物价指数(Consumer Price Index); 核心消费者物价指数(Core-CPI)
[高]
美国劳动统计局会在每月20号左右,美国东部时间早上8:30公布上月数据
        CPI被广泛用于衡量通货膨胀,也是衡量政府政策有效性的重要指标之一。
CPI反映了一定时期内居民购买一篮子商品(和服务)的价格变化(不含税)。 CPI是
最受关注的经济指标之一,也是非常重要的市场驱动力。 CPI上升可能引起通货膨胀。
Core-CPI(CPI中扣除食品、能源和受季节波动影响的费用项目)能更好的反映物价形
式。
就业报告(Employment Report )[高]
美国劳工部会在每月的第一个星期五,美国东部时间早上8:30公布上月数据
        通过对375,000个企业和60,000个家庭的调查来收集数据。报告显示: 经济中
新增或减少的就业岗位,每小时平均工资和每周平均工作时间。 此报告是最重要的经
济报告之一,它不仅披露了最新的实时信息,而且同NFP(非农业就业人数变化报告)一
起反映了完整的国家经济状况。
报告还会分别介绍不同部门(生产、服务、建筑、采矿、公共事业)的情况。
就业情况报告(Employment Situation Report) [高]
美国劳动统计局会在每月的第一个星期五,美国东部时间早上8:30公布上月数据
        就业情况报告是一个月度指标,有两个主要组成部分。 第一部分是失业人数和
新增就业机会,报告会介绍失业率及失业率的变化情况。报告的第二部分介绍有关每周
平均工作时间和每小时平均工资等方面的信息,数据反映了劳动力供求关系而这也是通
货膨胀的决定因素之一。 劳动局的调查涵盖了全美的250余个地区,几乎包括了每个主
要产业。该指标作为金融市场上最受关注的指标之一,总会对市场构成影响。 因内容
时效性强,发布及时,就业报告深受投资者重视。它是反映当前经济健康状况的最佳指
标之一。
FOMC(联邦公开市场委员会)会议: 利率声明(Rate announcement)[高]
由美国联邦银行的代表们参加的会议,一年举行8次。 每次会议中(大约在东部时间14
:15)会公布有关基准利率的决定。
        美联储(美国联邦储备局)负责调控美国的货币政策,监管银行,为政府机构
和公民提供服务并保持国家的金融稳定。
        美国有12个联邦区域(每个区域由几个州组成)在联邦委员会中由地区专员代
表其所在区域。
        一个国家的利率实际上就是持有该货币的投资回报率。 货币利率越高,就会
有越多的人持有、买入该货币从而加强了货币价值。 它是一个非常重要的指标会影响
通货膨胀率和市场驱动力。
        FOMC关于利率的声明是极为重要的,会议的其他内容(通常在2周后发布)对
市场投资者同样重要。
GDP - 国内生产总值(Gross Domestic Product) [高]
BEA(美国商务部经济分析局) 会在每个季度的最后一天,美国东部时间早上8:30公布上
季度数据。
        美国商务部,分3阶段公布GDP: 预估值;初值;终值。
        GDP从总体上衡量市场活动。它反映了在一定时期内一经济体中所生产出的全
部最终产品和服务的货币价值。这包括消费、政府采购、投资和贸易收支。 GDP可能是
表现一个国家经济健康状况的最佳指标。它通常按年衡量,但也会发表季度统计数据。
        美国商务部在每季度的最后一天会发布一份“预估值报告"。该报告发布后的
一个月之内发布“初值报告",之后再过一个月发布“终值报告"。 越新的GDP数据对市
场而言相对越为重要。它反映了一个国家经济增长(或衰退)的速度。
ISM 美国供应管理协会(Institute for Supply Management) 制造业指数(
Manufacturing Index)[高]
美国供应管理协会在每月的第一个工作日,美国东部时间早10:00公布上月数据
        制造业ISM报告基于400余家工业企业采购经理每月对问题的回答编辑而成。它
反映了5个主要经济领域(新用户订单占30%,制造业占25%,就业占20%,供货订单占15
%,库存占10%)的复合平均值。 若此数据高于50,则说明经济活动扩张;若其低于50
,则经济紧缩。
MCSI - 密西根消费者信心指数(Michigan Consumer Confidence Index)[高]
密西根大学在每月的第一天公布上月数据
        由密西根大学调查得出的消费者信心指数, 对投资者来说变得越来越有用。它
提供了消费者是否愿意消费的信息。
NFP - 非农就业人数变化(Changes in non-farm payrolls) [高]
美国劳工部在每月的第一个星期五,美国东部时间早上8:30公布上月数据
        数据旨在反映美国任何行业带薪雇员的总人数变化情况,但不包括以下人员:
        - 一般政府雇员;
        - 私人家庭雇员;
        - 为个人提供帮助的非营利机构雇员;
        - 农业人口。
        在参与创造美国国内生产总值的雇员中,非农雇员占80%,它可帮助政府决策
制定者和经济学家把握当前经济状况和预测未来经济活动水平。由于预测值常常与实际
值有较大的偏差,该数字的公布往往会造成大幅的市场波动。
PMI - 采购经理人指数(Purchasing Managers Index) [高]
美国供应管理协会在每月的第一个工作日,美国东部时间早上10:00公布上月数据
        PMI 是基于五项主要指标的综合指数,包括: 新订单指标、库存指标、生产
指标、供货商交货指标和就业环境指标。每一指标的侧重不同,同时数据会根据季节因
素进行调整。 采购经理人协会将调查来自全国20个不同行业的300余名采购经理人。
若PMI指数超过50,则表明制造业正在扩张;若其低于50,则意味着产业萎缩。 由于
PMI报告是反映工厂生产的最佳指标,它对金融市场也就显得尤为重要。此指标通常被
用于监测通货膨胀压力和生产经营活动。 在监测通货膨胀方面PMI没有CPI敏感,但由
于PMI数据是在每月的第一个工作日公布,它的时效性较强。 若PMI中提到了预期之外
的变化,那么通常随着它的发表,市场很快就会作出回应。报告中特别重要的地方是新
订单的增加,它预测了未来几个月的生产活动。
零售销售数据(Retail Sales Data);扣除汽车外的零售销售(Retail Sales less
Automotives)[高]
美国商务部统计局在每月12号左右,美国东部时间早上8:30公布上月数据
        零售销售是带动美国经济的主要动力之一,它跟踪公司在零售领域内的销售,
衡量了消费者在零售销售中(不含服务费用)的总花费。零售收入占据了美国经济的主
要部分(三分之二)。 普查局的调查涵盖了上百家提供零售业务的不同规模的公司和
企业。 每月发布的数据显示了上月变化的百分比。 负数表明着与前几个月相比销售下
降。这个指标是一股重要的市场驱动力,因为它被作为衡量消费者活动和信心的标准,
较高的销售数据代表着经济活动的增强。 上月的零售销售数据会在本月的前两周内公
布,它的时效性较强。
短观调查报告(Tankan Survey)[高]
BoJ (日本银行);每年4次分别在4月、7月、10月和12月中旬;格林威治时间晚上10:50发

        由日本中央银行发布的日本企业经济状况调查,可作为制定货币政策的参考。
参与调查的上千家企业都具有雄厚的资本,一些影响力很大的公司也被包含在内。 问
题包括对当前行业状况和趋势的看法,以及对下一季度和下一年商业活动的预期。它被
认为是日元货币对的主要市场驱动力。
长期债券交易TIC(美国财政部国际资本流动)数据 [高]
美国财政部在每月的第十二个工作日左右,美国东部时间早上9:00公布隔月数据
        TIC数据提供了有关美国处理当前帐户赤字采取的主要方法: 向外国卖出长期
债券,或者转移债务。重要的是,我们须知道解决赤字还有其他方法:从外国银行借款
或者吸引净FDI(海外直接投资)流入。 但由于FDI是负值,而银行流入的资金又较少,所
以美国主要通过向海外出售长期债券获取所需资金。TIC数据是衡量某一货币在国际投
资市场受信任程度的最佳工具。该数据可以导致大幅的市场波动。
贸易收支(Trade Balance)[高]
美国商务部在每月的第二周,美国东部时间早上8:30公布隔月数据
        贸易收支是一个国家国际收支的最大组成部分。它反映了一个国家出口的产品
和服务的总值与其进口的产品和服务的总值之间的差额。如果一个国家的进口大于出口
,会出现贸易逆差;反之,则会出现贸易顺差。该数据可以导致大幅的市场波动。

Thursday, July 14, 2011

Using RS Scripter to backup/restore SSRS's report/subscription/roles...

If you are using SSRS then chances are high that you will come across a scenario where you want to deploy reports developed on a development machine to the production server. There are various ways to do this and one of them is to use the RS Scripter tool. Here are the steps to do this -

1. Creating Report Deployment Script

  1. Download the Reporting Services Scripter - http://www.sqldbatips.com/showarticle.asp?ID=62
  2. Update the servers.xml; change the reportservice value if your ReportServer name is different.


  3. Run the RSScripter.exe.
  4. Click on Options and select the options shown in the images below (for more information about the scripting options check the readme file - http://www.sqldbatips.com/samples/code/RSScripter/readme.htm#_scripting_options) – clip_image001
    clip_image002
    clip_image003
  5. Change the SQL 2005 RS EXE Location in the Global tab as per the location of SQL server and Apply – clip_image004
  6. Select SQL 2005 from the Report Server dropdown and click on Get Catalog; all the reports, data sources, schedules, roles etc. present in the report server will be displayed –
    rs1
  7. Select all the reports, data sources, schedules and roles you want to transfer.
    rs2
Change the output directory path (it should not be machine- or user-specific, e.g. the Desktop), e.g. C:\AClickReports, and click on Script. The scripts for loading the selected items will be generated in the specified folder.

2. Deploying the Reports

  1. Ensure that IIS and .NET Framework 2.0 are installed on the report server.
  2. While installing SQL Server 2005, select the Reporting Services option.
  3. Follow the report server configuration steps and make sure "Report Server Configuration Manager" shows all green ticks. clip_image008
  4. Follow these steps to publish reports –
    1. Extract the AClickReports folder to the report server, C: Drive.
    2. Open the “RS Scripter Load All Items.cmd” file located in AClickReports folder for editing.
    3. Change the SCRIPTLOCATION script variable value to the location where the AClickReports folder is copied (it should be the same as the output directory path selected earlier)
        SET SCRIPTLOCATION=C:\AClickReports\
      clip_image009
    4. Change the RS Script variable value to the location where Microsoft SQL Server is installed (if it is different from the default C drive).
      SET RS="C:\Program Files\Microsoft SQL Server\90\Tools\Bin\RS.EXE"
    5. Run the RS Scripter Load All Items.cmd batch file; it will publish the reports to the report server.
  5. Follow these steps to verify that reports are deployed correctly
    1. Go to http://localhost/Reports/Pages/Folder.aspx, clip_image011
    2. Open the Data Sources you should see data sources selected in step 1.7 clip_image013
    3. Open the data source and update the data source details as per the configuration of report server(if they are different) and apply. rs3
    4. Go back to Home page and click on the report folder selected in step 1.7 , select one of the reports and click on Edit button, then click on the Data Sources link ; Make sure that the data source is configured properly. clip_image017
    5. Repeat the steps from 2.5.2 to 2.5.4 for other data sources and report folders.
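As a side note, the .rss scripts that RS Scripter generates can also be loaded one at a time with rs.exe instead of the batch file. A minimal example is shown below; the script name and server URL are placeholders, and the exact switches (including any -v variables a script expects) can be copied from inside the generated "RS Scripter Load All Items.cmd" file:
RS.EXE -i "MyReport.rss" -s http://localhost/ReportServer -e Mgmt2005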

SQL script to report the space used by each table in a database

create table #t(name varchar(255), rows bigint, reserved varchar(20), data varchar(20), index_size varchar(20), unused varchar(20))
exec sp_MSforeachtable "insert into #t exec sp_spaceused '?'"
select * from #t
drop table #t
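If you would rather see the biggest tables first, a slightly extended variant of the same script works too; sp_spaceused returns the sizes as strings such as '1234 KB', so the KB suffix has to be stripped before sorting:
create table #t(name varchar(255), rows bigint, reserved varchar(20), data varchar(20), index_size varchar(20), unused varchar(20))
exec sp_MSforeachtable "insert into #t exec sp_spaceused '?'"
select * from #t
order by cast(replace(reserved, ' KB', '') as bigint) desc -- largest tables first
drop table #t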

How to easily identify a scheduled SQL Server Reporting Services report

Problem
Our company has hundreds of SQL Server Reporting Services reports with daily, weekly, monthly, quarterly, yearly and ad hoc schedules, with various rendering formats like web archive, Excel, PDF, CSV, XML and TIFF, sent to multiple users, departments and groups. Sometimes we have to manually run a subscription to resend reports on an ad hoc basis, or re-send the reports when the actual subscriptions failed or when the data had to be corrected.
In this tip I will show you an easy way to identify the scheduled Reporting Services report, so you can run that scheduled job to reproduce the reports and delivery of the reports.
Solution
SQL Server Reporting services has come a long way from the first time it was released for SQL Server 2000 SP4. Unfortunately there are still some rough corners where it could be improved and one such place is subscriptions.
Manually initiating a subscription is a tedious task in SQL Server Reporting Services. Each subscription in Reporting Services is set up as a SQL Server Agent job, but the job names are NOT intuitive; they are named with a unique identifier as shown below.
Here is a screenshot of some of the subscriptions on a Reporting Services instance.
Fortunately, SQL Server Reporting Services comes with a rich set of meta-data that is stored in tables and for today's tip we will take advantage of these tables. There are two different methods to accomplish this task and both of them are outlined below.
Option 1:
This method looks at the dbo.ReportSchedule, dbo.Subscriptions, dbo.Catalog tables of the ReportServer database along with the dbo.sysjobs table of the msdb to figure out the SQL Agent job name. Additional information is also pulled to isolate the correct report when a similarly named report is in multiple folders/paths.
SELECT  b.name AS JobName,
        e.name,
        e.path,
        d.description,
        a.SubscriptionID,
        laststatus,
        eventtype,
        LastRunTime,
        date_created,
        date_modified
FROM ReportServer.dbo.ReportSchedule a
        JOIN msdb.dbo.sysjobs b ON a.ScheduleID = b.name
        JOIN ReportServer.dbo.ReportSchedule c ON b.name = c.ScheduleID
        JOIN ReportServer.dbo.Subscriptions d ON c.SubscriptionID = d.SubscriptionID
        JOIN ReportServer.dbo.Catalog e ON d.report_oid = e.itemid
WHERE e.name = 'Sales_Report'
            
From the above resultset, grab the uniqueidentifier for the JobName column and filter the job in SSMS as shown below. Once the correct job is identified, run the SQL Server Agent job to deliver the subscription.
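Once you know the job name you can also start it straight from T-SQL instead of hunting for it in the SSMS job list; the GUID below is only a placeholder for whatever JobName the query returned:
EXEC msdb.dbo.sp_start_job @job_name = 'C9A2B1F0-1111-2222-3333-444455556666'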

Option 2:
This option is better than the first option and completely eliminates the manual step in Option 1 of having to find and run the SQL Agent job.
In this option, we will generate the T-SQL that is used inside the SQL Agent job directly from the tables and then just run the SQL. It's a pure SQL based solution.
After you run this query, get the column that has the EXEC command for the report you want to re-run and paste the EXEC command into a query window to execute the code to re-run the report.
SELECT  'EXEC ReportServer.dbo.AddEvent @EventType=''TimedSubscription'', @EventData='''
            + CAST(a.SubscriptionID AS VARCHAR(40)) + '''' AS ReportCommand,
        b.name AS JobName,
        a.SubscriptionID,
        e.name,
        e.path,
        d.description,
        laststatus,
        eventtype,
        LastRunTime,
        date_created,
        date_modified
FROM ReportServer.dbo.ReportSchedule a
        JOIN msdb.dbo.sysjobs b ON a.ScheduleID = b.name
        JOIN ReportServer.dbo.ReportSchedule c ON b.name = c.ScheduleID
        JOIN ReportServer.dbo.Subscriptions d ON c.SubscriptionID = d.SubscriptionID
        JOIN ReportServer.dbo.Catalog e ON d.report_oid = e.itemid
WHERE e.name = 'Sales_Report'
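The ReportCommand column will contain something like the statement below, which you can paste into a new query window and run; the GUID shown is purely illustrative, use the value returned for your own subscription:
EXEC ReportServer.dbo.AddEvent @EventType='TimedSubscription', @EventData='C9A2B1F0-1111-2222-3333-444455556666'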
            

SQL Server script to rebuild all indexes for all tables and all databases

Problem

One of the main functions of a DBA is to maintain database indexes.  There have been several tips written about different commands to use for both index rebuilds and index defrags as well as the differences between index maintenance with SQL Server.  In addition, other tips have been written about using maintenance plans to maintain indexes on all databases.  One of the issues with maintenance plans is that they don't always seem to be as reliable as you would hope and you also sometimes get false feedback on whether the task actually was successful or not.  In this tip we look at a simple script that could be used to rebuild all indexes for all databases.

Solution

The one nice thing about maintenance plans is that it works across multiple databases and therefore you can push out one task to handle the same activity across all of your databases.  The problem that I have seen with maintenance plans is that sometimes they do not work as expected, therefore here is another approach.
The script below allows you to rebuild indexes for all databases and all tables within a database.  This could be further tweaked to handle only indexes that need maintenance as well as doing either index defrags or index rebuilds.
The script uses two cursors: one for the databases and another for the tables within each database.  In addition, it uses the INFORMATION_SCHEMA.TABLES view to list all of the tables within a database.
Because we need to change from database to database we also need to create dynamic SQL code for the queries.  For the DBCC DBREINDEX option we can just pass in the parameters, but for the ALTER INDEX statement we need to build the query dynamically.  Here is the script.
DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM MASTER.dbo.sysdatabases
WHERE name NOT IN ('master','msdb','tempdb','model','distribution')
ORDER BY 1

OPEN DatabaseCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN

   SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
  table_name + '']'' as tableName FROM ' + @Database + '.INFORMATION_SCHEMA.TABLES
  WHERE table_type = ''BASE TABLE'''

   -- create table cursor
   EXEC (@cmd)
   OPEN TableCursor

   FETCH NEXT FROM TableCursor INTO @Table
   WHILE @@FETCH_STATUS = 0
   BEGIN

      IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
      BEGIN
         -- SQL 2005 or higher command
         SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
         EXEC (@cmd)
      END
      ELSE
      BEGIN
         -- SQL 2000 command
         DBCC DBREINDEX(@Table,' ',@fillfactor)
      END

      FETCH NEXT FROM TableCursor INTO @Table
   END

   CLOSE TableCursor
   DEALLOCATE TableCursor

   FETCH NEXT FROM DatabaseCursor INTO @Database
END

CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
The script will work for both SQL 2000 and higher versions.  For SQL 2000 it uses DBREINDEX and for SQL Server 2005 and higher it uses ALTER INDEX.  Thanks go out to LittlePanthers for providing the code snippet to check the version of SQL Server.
Also, I have excluded the system databases, so you can include these or also add other databases to exclude from your index maintenance routines.

Next Steps

  • This is a simple base script that could be modified into a stored procedure and also allow you to pass other parameters such as doing an index rebuild or an index defrag.
  • Make the index rebuild statements more robust with other options.
  • You could also modify this to read from a table that you create to identify which databases and which indexes you want to run this against.  You can look at index fragmentation and only rebuild the indexes that need to be rebuilt (see the sketch after this list).
  • This approach rebuilds all indexes, so be careful if you run this on very large indexes.
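As a starting point for that last idea, here is a minimal sketch for SQL Server 2005 and later that uses sys.dm_db_index_physical_stats to rebuild only the indexes in the current database that are more than 30 percent fragmented; the 30 percent threshold and the fill factor of 90 are assumptions you should tune for your environment:
DECLARE @sql nvarchar(max)
SET @sql = N''
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
            + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
            + N' REBUILD WITH (FILLFACTOR = 90);' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
JOIN sys.objects o ON i.object_id = o.object_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.index_id > 0        -- skip heaps
  AND o.is_ms_shipped = 0
EXEC sp_executesql @sql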

Index Fragmentation Report in SQL Server 2005 and 2008

Problem
While indexes can speed up execution of queries several fold as they can make the querying process faster, there is overhead associated with them. They consume additional disk space and require additional time to update themselves whenever data is updated, deleted or appended in a table. Also when you perform any data modification operations (INSERT, UPDATE, or DELETE statements) index fragmentation may occur and the information in the index can get scattered in the database. Fragmented index data can cause SQL Server to perform unnecessary data reads and switching across different pages, so query performance against a heavily fragmented table can be very poor. In this article I am going to write about fragmentation and different queries to determine the level of fragmentation.
Solution
When indexes are first built, little or no fragmentation should exist. Over time, as data is inserted, updated, and deleted, fragmentation levels on the underlying indexes may begin to rise. So let's see how it happens.
When a page of data fills to 100 percent and more data must be added to it, a page split occurs. To make room for the new incoming data, SQL Server moves half of the data from the full page to a new page. The new page that is created is created after all the pages in the database. Therefore, instead of going right from one page to the next when looking for data, SQL Server has to go from one page to another page somewhere else in the database looking for the next page it needs. This is called index fragmentation.
There are basically two types of fragmentation:
  • External fragmentation - External, a.k.a logical,  fragmentation occurs when an index leaf page is not in logical order, in other words it occurs when the logical ordering of the index does not match the physical ordering of the index. This causes SQL Server to perform extra work to return ordered results. For the most part, external fragmentation isn’t too big of a deal for specific searches that return very few records or queries that return result sets that do not need to be ordered.
  • Internal fragmentation - Internal fragmentation occurs when there is too much free space in the index pages. Typically, some free space is desirable, especially when the index is created or rebuilt. You can specify the Fill Factor setting when the index is created or rebuilt to indicate a percentage of how full the index pages are when created. If the index pages are too fragmented, it will cause queries to take longer (because of the extra reads required to find the dataset) and cause your indexes to grow larger than necessary. If no space is available in the index data pages, data changes (primarily inserts) will cause page splits as discussed above, which also require additional system resources to perform.
As we learned, heavily fragmented indexes can degrade query performance significantly and cause the application accessing it to respond slowly. So now the question is how to identify the fragmentation. For that purpose SQL Server 2005 and 2008 provide a dynamic management function (DMF) to determine index fragmentation level. This new DMF (sys.dm_db_index_physical_stats) function accepts parameters such as the database, database table, and index for which you want to find fragmentation. There are several options that allow you to specify the level of detail that you want to see in regards to index fragmentation, we will see some of these options in the examples below.
The sys.dm_db_index_physical_stats function returns tabular data regarding one particular table or index.
Input parameters:
  • database_id - The default is 0 (NULL, 0, and DEFAULT are equivalent values in this context), which specifies that information is returned for all databases in the instance of SQL Server; otherwise specify the database ID from sys.databases if you want information about a specific database. If you specify NULL for database_id, you must also specify NULL for object_id, index_id, and partition_number.
  • object_id - The default is 0 (NULL, 0, and DEFAULT are equivalent values in this context), which specifies that information is returned for all tables and views in the specified database, or else you can specify the object_id of a particular object. If you specify NULL for object_id, you must also specify NULL for index_id and partition_number.
  • index_id - The default is -1 (NULL, -1, and DEFAULT are equivalent values in this context), which specifies that information is returned for all indexes of a base table or view. If you specify NULL for index_id, you must also specify NULL for partition_number.
  • partition_number - The default is 0 (NULL, 0, and DEFAULT are equivalent values in this context), which specifies that information is returned for all partitions of the owning object. partition_number is 1-based. A nonpartitioned index or heap has partition_number set to 1.
  • mode - Specifies the scan level that is used to obtain statistics. Valid inputs are DEFAULT, NULL, LIMITED, SAMPLED, or DETAILED. The default (NULL) is LIMITED.
  • LIMITED - It is the fastest mode and scans the smallest number of pages. For an index, only the parent-level pages of the B-tree (that is, the pages above the leaf level) are scanned. In SQL Server 2008, only the associated PFS and IAM pages of a heap are examined; the data pages of the heap are not scanned. In SQL Server 2005, all pages of a heap are scanned in LIMITED mode.
  • SAMPLED - It returns statistics based on a 1 percent sample of all the pages in the index or heap. If the index or heap has fewer than 10,000 pages, DETAILED mode is used instead of SAMPLED.
  • DETAILED - It scans all pages and returns all statistics.
Note
  • The sys.dm_db_index_physical_stats dynamic management function replaces the DBCC SHOWCONTIG statement. It requires only an Intent-Shared (IS) table lock in comparison to DBCC SHOWCONTIG which required a Shared Lock, also the algorithm for calculating fragmentation is more precise than DBCC SHOWCONTIG and hence it gives a more accurate result.
  • For an index, one row is returned for each level of the B-tree in each partition (this is the reason, if you look at image below, for some indexes there are two or more than two records for a single index; you can refer to the Index_depth column which tells the number of index levels). For a heap, one row is returned for the IN_ROW_DATA allocation unit of each partition. For large object (LOB) data, one row is returned for the LOB_DATA allocation unit of each partition. If row-overflow data exists in the table, one row is returned for the ROW_OVERFLOW_DATA allocation unit in each partition.
Example
Let’s see an example. The first script provided below gives the fragmentation level of a given database, including all tables and views in the database and all indexes on these objects. The second script gives the fragmentation level of a particular object in the given database. The details about the columns returned by sys.dm_db_index_physical_stats and their meanings are given in the table below.
Script : Index Fragmentation Report Script
--To Find out fragmentation level of a given database
--This query will give DETAILED information
--CAUTION : It may take very long time, depending on the number of tables in the DB
USE AdventureWorks
GO
SELECT object_name(IPS.object_id) AS [TableName], 
   SI.name AS [IndexName], 
   IPS.Index_type_desc, 
   IPS.avg_fragmentation_in_percent, 
   IPS.avg_fragment_size_in_pages, 
   IPS.avg_page_space_used_in_percent, 
   IPS.record_count, 
   IPS.ghost_record_count,
   IPS.fragment_count, 
   IPS.avg_fragment_size_in_pages
FROM sys.dm_db_index_physical_stats(db_id(N'AdventureWorks'), NULL, NULL, NULL , 'DETAILED') IPS
   JOIN sys.tables ST WITH (nolock) ON IPS.object_id = ST.object_id
   JOIN sys.indexes SI WITH (nolock) ON IPS.object_id = SI.object_id AND IPS.index_id = SI.index_id
WHERE ST.is_ms_shipped = 0
ORDER BY 1,5
GO
--To Find out fragmentation level of a given database and table
--This query will give DETAILED information
DECLARE @db_id SMALLINT;
DECLARE @object_id INT;
SET @db_id = DB_ID(N'AdventureWorks');
SET @object_id = OBJECT_ID(N'Production.BillOfMaterials');
IF @object_id IS NULL 
BEGIN
   PRINT N'Invalid object';
END
ELSE
BEGIN
   SELECT IPS.Index_type_desc, 
      IPS.avg_fragmentation_in_percent, 
      IPS.avg_fragment_size_in_pages, 
      IPS.avg_page_space_used_in_percent, 
      IPS.record_count, 
      IPS.ghost_record_count,
      IPS.fragment_count, 
      IPS.avg_fragment_size_in_pages
   FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL , 'DETAILED') AS IPS;
END
GO

Returned columns:
  • avg_fragmentation_in_percent - Indicates the amount of external fragmentation you have for the given objects. The lower the number the better; as this number approaches 100%, more and more pages in the given index are not properly ordered. For heaps, this value is actually the percentage of extent fragmentation and not external fragmentation.
  • avg_page_space_used_in_percent - Indicates how dense the pages in your index are, i.e. on average how full each page in the index is (internal fragmentation). The higher the number the better, speaking in terms of fragmentation and read performance. To achieve optimal disk space use, this value should be close to 100% for an index that will not have many random inserts. However, an index that has many random inserts and has very full pages will have an increased number of page splits. This causes more fragmentation. Therefore, in order to reduce page splits, the value should be less than 100 percent.
  • fragment_count - A fragment is made up of physically consecutive leaf pages in the same file for an allocation unit. An index has at least one fragment. The maximum number of fragments an index can have is equal to the number of pages in the leaf level of the index. So the fewer the fragments, the more data is stored consecutively.
  • avg_fragment_size_in_pages - Larger fragments mean that less disk I/O is required to read the same number of pages. Therefore, the larger the avg_fragment_size_in_pages value, the better the range scan performance.
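As a rough rule of thumb (a common guideline, not something the DMF itself enforces), indexes between roughly 5% and 30% fragmentation are usually reorganized and those above 30% rebuilt. A quick LIMITED-mode query that tags each index in AdventureWorks accordingly:
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       si.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'OK' END AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(N'AdventureWorks'), NULL, NULL, NULL, 'LIMITED') ips
   JOIN sys.indexes si ON ips.object_id = si.object_id AND ips.index_id = si.index_id
WHERE ips.index_id > 0
ORDER BY ips.avg_fragmentation_in_percent DESC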

Wednesday, July 13, 2011

VMware vSphere 4.1 Virtualization Study Notes 1: An Introduction to VMware vSphere (repost)

What is VMware vSphere
vSphere is VMware's cloud-oriented, new-generation datacenter virtualization suite. It provides a complete solution covering virtualization infrastructure, high availability, centralized management and monitoring.
Components of the VMware vSphere 4 suite
ESX/ESXi: the virtualization layer on the physical server. It virtualizes the server's processors, memory, storage and network resources (virtual switches) into multiple virtual machines, and is the basic server-virtualization building block of vSphere.
vCenter: used to centrally manage ESX/ESXi hosts and the virtual machines running on them. Through vCenter you can configure an ESX/ESXi cluster and enable high-availability features such as HA, DRS and vMotion; more advanced features become available through vCenter plug-ins. From 4.1 onwards vCenter can only be installed on a 64-bit OS.
vSphere Client: the vSphere management client, used either to connect directly to an ESX/ESXi host for basic management tasks or to connect to vCenter for centralized management of ESX/ESXi hosts and VMs.
VMware vSphere Web Access: a web management interface; vSphere or vCenter can be managed from a browser.
vConverter: the virtual machine migration tool. It handles physical-to-virtual migration (P2V) as well as conversion between virtual machine files or images from different vendors (V2V). It comes in the following editions:
  • vCenter Converter Enterprise: the vCenter-integrated edition, which appears inside the vCenter management interface after installation; it also includes a CD-bootable version for cold migration;
  • vCenter Converter Standalone: the free edition;
VCB: a command-line virtual machine backup tool that can automatically back up online or offline virtual machines.
VMware VMFS: VMware's file system, analogous to NTFS on Windows or ext2/ext3 on Linux.
VMware vMotion: migrates a running VM from one physical ESX host to another.
VMware HA (High Availability): high availability for virtual machines, conceptually much like a conventional active/standby HA pair.
VMware DRS: intelligently balances the resources allocated to each VM within a large resource pool.
VMware FT: several ESX hosts provide resources to keep a single VM running.
Data Recovery: a graphical tool for backing up and restoring virtual machines;
Update Manager: used for upgrading and patching ESX/ESXi hosts and virtual machines; it comes in the following two editions:
  • vCenter Update Manager: the vCenter-integrated edition, optionally installed on the vCenter server and integrated into the vCenter management interface after installation;
  • vSphere Host Update Utility: a free edition that can be selected when installing the vSphere Client, used to upgrade a single host;
vCenter Guided Consolidation: analyses existing application servers online before they are consolidated into virtual machines, in order to establish their resource usage and produce consolidation recommendations; it can be optionally installed on the vCenter server and is integrated into the vCenter management interface;
vCenter Orchestrator: the vSphere workflow engine, used to design virtualization-related workflows and automate the management of the virtual environment; this component is installed automatically with vCenter;

vSphere 4 architecture diagram
VMware vMotion compatibility list for Intel CPUs
VMware vMotion compatibility list for AMD CPUs
VMware vMotion compatibility list for Intel/AMD CPUs
vSphere 4 edition comparison:

References:
Comparison of the VMware vSphere (ESX) editions:
http://www.vmware.com/cn/products/vsphere/buy/editions_comparison.html
Comparison of VMware vSphere Hypervisor (ESXi) with other virtualization software:
http://www.vmware.com/cn/products/vsphere-hypervisor/compare.html
How to download and use the free edition:
Register an account on the VMware website, apply for a free ESXi license, then download and install ESXi or ESX.
Registration and download page:
https://www.vmware.com/cn/tryvmware/index.php?p=free-esxi&lp=1
Connect to the installed ESX/ESXi with the vSphere Client and enter the free license key; after that you can create and run virtual machines on it.
Because the free edition comes without vCenter, you can only manage a single host through the vSphere Client, and the only functional components you can use for free are these two: vCenter Converter Standalone and vSphere Host Update Utility.
By default, if no license key is entered, you can evaluate vSphere 4 Enterprise Plus free of charge for 60 days.
About ESX and ESXi
Many people have the misconception that "ESX is the full-featured paid edition and ESXi is a cut-down free edition". That is wrong; here is a brief explanation:
The only difference between ESX and ESXi is that ESX ships with a Linux-based Service Console, whereas ESXi has no such console.
Apart from that, the two offer the same functionality; exactly which features you may use depends on your license type. For example, an Enterprise Plus license unlocks every vSphere 4 feature, whether on ESX or ESXi.
So can ESX be used for free? Of course! Just enter a free ESXi license on an installed ESX host; the two licenses are interchangeable.

source: http://www.liusuping.com/xunihua/VMware-vSphere-jianjie.html

虽然很黄,但很现实(zt)


1、赵四小姐从十六岁开始跟张学良。跟一年,属奸情;跟三年,算偷情;跟六十年,
便成为千古爱情!
此事的重要启示:很多事情不看做不做,而看你做多久。

2、民国初名妓小凤仙,如果跟了民工,就属于扫黄对象;她跟了蔡锷,则千古留芳了
;倘若她跟了孙中山,那便可能成为国母。
此事的重要启示:不在于你干什么,而看你跟谁干。

3、女浴室起火,里面人乱作一团,赤裸身体往外跑,只见大街上白花花一大群,一老
者大喊“快捂住”,众裸女突然醒悟,但身上要紧部位有三处,手忙脚乱捂不过来,不
知所措。这时老者又大喊:“捂脸就行,下面都一样!”
此事的重要启示:在特殊情况下抓工作不可能面面俱到,要抓住重点。


4、少妇报案:“我把钱放在胸衣内,在拥挤的地铁内被一帅哥偷走了…”警察纳闷:
“这么敏感的地方你就没觉察到?”少妇红着脸答:“谁能想到他是摸钱呢?”
此事的重要启示:“让客户的钱在愉快体验中不知不觉地被摸走,是商业模式的最高境界

5、组织几个人收保护费,那是黑社会。朱元障组织几百万人抢下王位,就是伟大的皇
帝。武则天睡了公公睡儿子,虽属乱伦,但乱的够大,故成为女皇。
此事的重要启示:不在你干没干坏事,而在于干多大!
6、一公司在小便池上贴上条:“往前一小步, 文明一大步”,结果地上仍有许多尿渍
。后来公司认真吸取教训, 重新设计成:“尿不到池里说明你短;尿到池外说明你软”
,结果地上比以前干净许多。
此事的重要启示:给客户的投资建议一定要具体,确切,中要害.

7、某日,女秘书神色凝重地说:王总,我怀孕了。 王继续低头看文件,然后淡淡一笑:我
早结扎了。 女秘书楞了一会媚笑道:我和您开玩笑呢! 王抬起头看了她一眼,喝了口茶
说:我也是。
此事的重要启示:在江湖上混的人,遇事不要慌,先让子弹飞一会。

8、男子去提亲,女方家长:请自我介绍。
A说:我有一千万;
B说:我有一栋豪宅,价值两千万;
家长很满意。就问C,你家有什么?
C答:我什么都没有,只有一个孩子,在你女儿肚子里。
AB无语,走了。
此事的重要启示:核心竞争力不是钱和房子,是在关键的岗位有自已的人。
9、[中~央~党~校教学案例]一男干部怕吃苦不愿援藏,谎称眼睛突然失明。领导闻听
,出面让一美女脱光站在他面前,问“看见了吗”?答:看不见。领导飞起一脚给他臀
部:狗日的,老二都直了还看不见?收拾行李,明天进藏。
此事的重要启示:1.人性化才能真正了解人。2.组织比个人高明。3.本能会出卖你
10, 广东一伙劫匪在抢劫银行时说了一句至理名言: “通通不许动,钱是国家的,命是自
己的!”大家都一声不吭躺倒。劫匪望了一眼躺在桌上四肢朝天的出纳小姐,说:“请
你躺文明些!这是劫财,又不是劫色!”劫匪回去后,其中一个新来的硕士劫匪说,老
大,我们赶快数一下抢了多少,那老劫匪(小学文化)说:“你傻啊?这么多,你要数
到什么时候啊?今天晚上看新闻不就知道了吗。” ----这就叫工作经验,这年头工作
经验比学历更重要! 劫匪走后,行长说,赶紧报案!主任刚要走,行长说急忙说:“
等等!把我们上次私自挪用的那五百万也加上去!”主任说:“要是劫匪每个月都来抢
一回就好了”。

中国人必须知道的76个常识

1.【十二生肖】
子鼠、丑牛、寅虎、卯兔、辰龙、巳蛇、午马、未羊、申猴、酉鸡、戌狗、亥猪
2.【十大名茶】
西湖龙井(浙江杭 州西湖区)、碧螺春(江苏吴县太湖的洞庭山碧螺峰)、信阳毛尖(河南信阳车云山)、君山银针(湖南岳阳君山)、六安瓜片(安徽六安和金寨两县的齐云山)、 黄山毛峰(安徽歙县黄山)、祁门红茶(安徽祁门县)、都匀毛尖(贵州都匀县)、铁观音(福建安溪县)、武夷岩茶(福建崇安县)
3.【四大名绣】
苏绣(苏州)、湘绣(湖南)、蜀绣(四川)、广绣(广东)
4.【四大名扇】
檀香扇(江苏)、火画扇(广东)、竹丝扇(四川)、绫绢扇(浙江)
5.【四大名花】
牡丹(河南洛阳)、水仙(福建漳州)、菊花(浙江杭州)、山茶(云南昆明)
6.【四大发明】
造纸(东汉.蔡伦)、火药(唐朝.古代炼丹家)、印刷术(北宋.毕升)、指南针(北宋.发明者无记载)
7.【古代主要节日】
元日:正月初一,一年开始。
人日:正月初七,主小孩。
上元:正月十五,张灯为戏,又叫“灯节”
社日:春分前后,祭祀祈祷农事。
寒食:清明前两日,禁火三日(吴子胥)
清明:四月初,扫墓、祭祀。
端午:五月初五,吃粽子,划龙(屈原)
七夕:七月初七,妇女乞巧(牛郎织女)
中元:七月十五,祭祀鬼神,又叫“鬼节”
中秋:八月十五,赏月,思乡
重阳:九月初九,登高,插茱萸免灾
冬至:又叫“至日”,节气的起点。
腊日:腊月初八,喝“腊八粥”
除夕:一年的最后一天的晚上,初旧迎新
8.【四书】
《论语》、《中庸》、《大学》、《孟子》
9.【五经】
《诗经》、《尚书》、《礼记》、《易经》、《春秋》
10.【八股文】
破题、承题、起讲、入手、起股、中股、后股、束股
11.【六子全书】
《老子》、《庄子》、《列子》、《荀子》、《扬子法言》、《文中子中说》
12.【汉字六书】
象形、指事、形声、会意、转注、假借
13.【书法九势】
落笔、转笔、藏峰、藏头、护尾、疾势、掠笔、涩势、横鳞竖勒
14.【竹林七贤】
嵇康、刘伶、阮籍、山涛、阮咸、向秀、王戎
15.【饮中八仙】
李白、贺知章、李适之、李琎、崔宗之、苏晋、张旭、焦遂
16.【蜀之八仙】
容成公、李耳、董促舒、张道陵、严君平、李八百、范长生、尔朱先生
17.【扬州八怪】
郑板桥、汪士慎、李鱓、黄慎、金农、高翔、李方鹰、罗聘
18.【北宋四大家】
黄庭坚、欧阳修、苏轼、王安石
19.【唐宋古文八大家】
韩愈、柳宗元、欧阳修、苏洵、苏轼、苏辙、王安石、曾巩
20.【十三经】
《易经》、《诗经》、《尚书》、《礼记》、《仪礼》、《公羊传》、《榖梁传》、《左传》、《孝经》、《论语》、《尔雅》、《孟子》
21.【四大民间传说】
《牛郎织女》、《孟姜女》、《梁山伯与祝英台》、《白蛇与许仙》
22.【四大文化遗产】
《明清档案》、《殷墟甲骨》、《居延汉简》、《敦煌经卷》
23.【元代四大戏剧】
关汉卿《窦娥冤》、王实甫《西厢记》、汤显祖《牡丹亭》、洪升《长生殿》
24.【晚清四大谴责小说】
李宝嘉《官场现形记》、吴沃尧《二十年目睹之怪现状》、刘鹗《老残游记》、曾朴《孽海花》
25.【五彩】
青、黄、赤、白、黑
26.【五音】
宫、商、角、址、羽
27.【七宝】
金、银、琉璃、珊瑚、砗磲、珍珠、玛瑙
28.【九宫】
正宫、中吕宫、南吕宫、仙吕宫、黄钟宫、大面调、双调、商调、越调
29.【七大艺术】
绘画、音乐、雕塑、戏剧、文学、建筑、电影
30.【四大名瓷窑】
河北的瓷州窑、浙江的龙泉窑、江西的景德镇窑、福建的德化窑
31.【四大名旦】
梅兰芳、程砚秋、尚小云、荀慧生
32.【六礼】
冠、婚、丧、祭、乡饮酒、相见
33.【六艺】
礼、乐、射、御、书、数
34.【六义】
风、赋、比、兴、雅、颂
35.【八旗】
镶黄、正黄、镶白、正白、镶红、正红、镶蓝、正蓝
36.【十恶】
谋反、谋大逆、谋叛、谋恶逆、不道、大不敬、不孝、不睦、不义、内乱
37.【九流】
儒家、道家、阴阳家、法家、名家、墨家、纵横家、杂家、农家
38.【三山】
安徽黄山、江西庐山、浙江雁荡山
39.【五岭】
越城岭、都庞岭、萌诸岭、骑田岭、大庾岭
40.【五岳】
(中岳)河南嵩山、(东岳)山东泰山、(西岳)陕西华山、(南岳)湖南衡山、(北岳)山西恒山
41.【五湖】
鄱阳湖(江西)、洞庭湖(湖南)、太湖(江苏)、洪泽湖(江苏)、巢湖(安徽)
42.【四海】
渤海、黄海、东海、南海
43.【四大名桥】
广济桥、赵州桥、洛阳桥、卢沟桥
44.【四大名园】
颐和园(北京)、避暑山庄(河北承德)、拙政园(江苏苏州)、留园(江苏苏州)
45.【四大名刹】
灵岩寺(山东长清)、国清寺(浙江天台)玉泉寺(湖北江陵)、栖霞寺(江苏南京)
46.【四大名楼】
岳阳楼(湖南岳阳)、黄鹤楼(湖北武汉)、滕王阁(江西南昌)、大观楼(云南昆明)
47.【四大名亭】
醉翁亭(安徽滁县)、陶然亭(北京先农坛)、爱晚亭(湖南长沙)、湖心亭(杭州西湖)
48.【四大古镇】
景德镇(江西)、佛山镇(广东)、汉口镇(湖北)、朱仙镇(河南)
49.【四大碑林】
西安碑林(陕西西安)、孔庙碑林(山东曲阜)、地震碑林(四川西昌)、南门碑林(台湾高雄)
50.【四大名塔】
嵩岳寺塔(河南登封嵩岳寺)、飞虹塔(山西洪洞广胜寺)、释迦塔(山西应县佛宫寺)、千寻塔(云南大理崇圣寺)
51.【四大石窟】
莫高窟(甘肃敦煌)、云岗石窟(山西大同)、龙门石窟(河南洛阳)、麦积山石窟(甘肃天水)
52.【四大书院】
白鹿洞书院(江西庐山)、岳麓书院(湖南长沙)、嵩阳书院(河南嵩山)、应天书院(河南商丘)
53.【四大佛教名山】
浙江普陀山(观音菩萨)、山西五台山(文殊菩萨)、四川峨眉山(普贤菩萨)、安徽九华山(地藏王菩萨)
54.【四大道教名山】
湖北武当山、江西龙虎山、安徽齐云山、四川青城山
55.【五行】
金、木、水、火、土
56.【八卦】
乾(天)、坤(地)、震(雷)、巽(风)、坎(水)、离(火)、艮(山)、兑(沼)
57.【三皇】
伏羲、女娲、神农
58.【五帝】
太皞、炎帝、黄帝、少皞、颛顼
59.【三教】
儒教、道教、佛教
60.【三清】
元始天尊(清微天玉清境)、灵宝天尊(禹余天上清境)、道德天尊(大赤天太清境)
61.【四御】
昊天金阙无上至尊玉皇大帝、中天紫微北极大帝、勾陈上宫天后皇大帝、承天效法土皇地祗
62.【八仙】
铁拐李、钟离权、张果老、吕洞宾、何仙姑、蓝采和、韩湘子、曹国舅
63.【十八罗汉】
布袋罗汉、长眉罗汉、芭蕉罗汉、沉思罗汉、伏虎罗汉、过江罗汉、欢喜罗汉、降龙罗汉、静坐罗汉、举钵罗汉、开心罗汉、看门罗汉、骑象罗汉、探手罗汉、托塔罗汉、挖耳罗汉、笑狮罗汉、坐鹿罗汉
64.【十八层地狱】
[第 一层]泥犁地狱、[第二层]刀山地狱、[第三层]沸沙地狱、[第四层]沸屎地狱、[第五层]黑身地狱、[第六层]火车地狱、[第七层]镬汤地狱、[第八 层]铁床地狱、[第九层]盖山地狱、[第十层]寒冰地狱、[第十一层]剥皮地狱、[第十二层]畜生地狱、[第十三层]刀兵地狱、[第十四层]铁磨地狱、 [第十五层]寒冰地狱、[第十六层]铁册地狱、[第十七层]蛆虫地狱、[第十 八 层]烊铜地狱
65.【五脏】
心、肝、脾、肺、肾
66.【六腑】
胃、胆、三焦、膀胱、大肠、小肠
67.【七情】
喜、怒、哀、乐、爱、恶、欲
68.【五常】
仁、义、礼、智、信
69.【五伦】
君臣、父子、兄弟、夫妇、朋友
70.【三姑】
尼姑、道姑、卦姑
71.【六婆】
牙婆、媒婆、师婆、虔婆、药婆、稳婆
72.【九属】
玄孙、曾孙、孙、子、身、父、祖父、曾祖父、高祖父
73.【五谷】
稻、黍、稷、麦、豆
74.【中国八大菜系】
四川菜、湖南菜、山东菜、江苏菜、浙江菜、广东菜、福建菜、安徽菜
75.【五毒】
石胆、丹砂、雄黄、矾石、慈石
76.【配药七方】
大方、小方、缓方、急方、奇方、偶方、复方

Friday, July 8, 2011

How to insert and update a table in SSIS (Upsert or Merge)?

Source:http://pragmaticworks.com/forum/yaf_postsm46_How-to-insert-and-update-table-in-SSIS--Upsert-or-Merge.aspx#post46

Many times you need to insert new records or update existing records in a target table using SSIS. This can be done in two ways.

1. Combination of OLEDB Destination and OLEDB Command. This approach can be very slow if you have lots of updates. For large amounts of records please check the next approach.
http://beingoyen.blogspo...e-instead-of-insert.html



2. Using TaskFactory Upsert Destination.
http://pragmaticworks.co...p_Upsert_Destination.htm

Upsert Destination Tutorial

http://pragmaticworks.co...rt_Destination_Video.htm

Upsert Destination (Batch Update Or Insert)

 The SSIS Upsert Destination helps synchronize two systems no matter the source or destination. Many companies struggle with developing loading procedures that determine whether a row is new or needs to be updated. The Upsert Destination reads the rows coming into the data flow and detects whether each row should be inserted or updated. This saves the developer a long development cycle and could save up to 90% on each execution of a package.

Key Features:
  • Conditionally inserts or updates data into a table. Inserts are performed using bulk insert which is significantly faster compared to row-by-row approach. There are four methods in which data can be updated giving the user a great amount of control on whether data should be updated or not.
    1. Bulk update - Updates will be performed in the fastest amount of time because no data compare is performed. This is by far the fastest way to perform updates
    2. Column Compare - Updates will only be performed based on what columns the user selects to compare. If the data in the selected columns does not match, the update will be performed. Otherwise it will be ignored.
    3. DateTime Compare - Updates will only be performed based on a datetime column being compared from the source to a column in the destination.
    4. Timestamp Compare - Updates will only be performed based on a timestamp column being compared from the source to a column in the destination.
  • Currently supports SQL 2005 and SQL 2008 as destination
  • Easy to use User Interface
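If both the source and the target tables live in SQL Server, the same insert-or-update logic can also be written in plain T-SQL with a MERGE statement (SQL Server 2008 and later). A minimal sketch with made-up table and column names:
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Name, Amount) VALUES (s.ID, s.Name, s.Amount);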

SSIS Upsert Test

SSIS auto generate update scripts

Specify the table name in the @TableName variable to create the UPDATE script to be used in SSIS.

DECLARE @counter int;
DECLARE @QueryString nvarchar(2400) = ''
DECLARE @ColumnName nvarchar(120);
DECLARE @TableName nvarchar(120) = 'MyTable'
CREATE TABLE #ColumnList
(
ID int IDENTITY(1,1)
,ColumnName varchar(1200)
)
INSERT #ColumnList
select sys.columns.name from sys.columns
inner join sys.tables on sys.tables.object_id = sys.columns.object_id
where sys.tables.name = @TableName
and sys.columns.name <> 'ID'
SET @counter = 1
SET @QueryString = 'UPDATE dbo.' + @TableName + '
SET '
SET @ColumnName = (SELECT ColumnName FROM #ColumnList WHERE ID=@counter)
SET @QueryString = @QueryString + '
' + @ColumnName+ ' = ? '
SET @counter = @counter + 1;
WHILE @counter <= (SELECT COUNT(*) FROM #ColumnList)
BEGIN
SET @ColumnName = (SELECT ColumnName FROM #ColumnList WHERE ID=@counter)
SET @QueryString = @QueryString + '
,' + @ColumnName+ ' = ? '
SET @counter = @counter + 1;
END
SET @QueryString = @QueryString + '
WHERE ID = ?;'
PRINT @QueryString
DROP TABLE #ColumnList
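For a hypothetical table MyTable with columns ID, Name and Amount, the script prints an UPDATE statement roughly like the one below, ready to paste into an OLE DB Command transformation (each ? is a parameter placeholder):
UPDATE dbo.MyTable
SET
Name = ?
,Amount = ?
WHERE ID = ?;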


source: http://beingoyen.blogspot.com/2010/03/ssis-auto-generate-update-scripts.html

Under what conditions does an index need to be rebuilt (SQL Server)

Q: Under what conditions does an index need to be rebuilt?
A: Rebuilding an index is indicated by the following two conditions.
1. After analyzing (analyze) the index, check the value of the height column in index_stats. If this value is >= 4, it is best to rebuild the index. This rule is not always correct, but if the value never changes, the index does not need to be rebuilt.
2. After analyzing (analyze) the index, check the del_lf_rows and lf_rows values in index_stats. If (del_lf_rows/lf_rows)*100 >= 20, the index should also be rebuilt.
For example:
SQL > analyze index IND_PK validate structure;
SQL > select name,height,del_lf_rows,lf_rows,
(del_lf_rows/lf_rows) *100 from index_stats;
NAME    HEIGHT DEL_LF_ROWS  LF_ROWS (DEL_LF_ROWS/LF_ROWS)*100
------------------------------
INDX_PK  4   277353   990206   28.0096263
SQL> alter index IND_PK rebuild;

Working with SSIS variables in SQL Server 2005

1. An introduction to SSIS variables
SSIS (SQL Server Integration Services) variables come in two kinds: system variables and user-defined variables. System variables carry very useful information about the package, container, task or event handler. For example, at run time the MachineName system variable contains the name of the computer running the package, and the StartTime variable contains the time the package started running. System variables are read-only. User variables are variables that the developer declares as needed. They can be used in scripts; in the expressions used by precedence constraints, For Loop containers, the Derived Column transformation and the Conditional Split transformation; and in property expressions that update property values.
As in most programming languages, SSIS variables have a scope. By scope they fall into package variables and component variables. A package variable can be used by any component in the package, while a component variable is only valid inside the component in which it was declared. The Variables window shows each variable's scope (see Figure 1): BeginDate is a package variable of the TestPackage package, while i belongs to the "data conversion" data flow component.
[Figure 1]
2. Declaring, assigning and using SSIS user variables
1) Declaring a variable
Declaring a variable is very simple. To declare a package variable, click the Control Flow tab, right-click an empty area of the package design surface and choose the Variables command (you will see the window shown in Figure 1). Click the Add Variable button to create a variable, enter a name, choose a data type and give it an initial value (Figure 2).
The red box marks the Add Variable button and the blue box the Delete Variable button. To declare a variable used by a particular data flow task, double-click that data flow task, right-click an empty area of the design surface, choose Variables and create the variable in the Variables window; its scope will then be the selected data flow task.
[Figure 2]
2) Assigning values
Besides assigning a value when the variable is declared, the author has found two further ways to assign a value in practice: returning a value from an Execute SQL Task, and assigning the value from a Script component.
With the Execute SQL Task approach, first set up the task's database connection, then enter the SQL statement that fetches the value and set the task's result set to "single row". On the Result Set page click New, enter the name of the column returned by your SQL statement in the Result Name column, and choose the variable to be assigned in the Variable Name column (see Figures 3 and 4).
[Figure 3]
[Figure 4]
The SQL statement in the red box of Figure 3 is trivial: it returns a single row whose value is 1. In Figure 4, that single row of the returned result column is assigned to the user variable i.
Assigning a variable from a Script component is simpler. Set the component's ReadOnlyVariables or ReadWriteVariables property to the variable names (separate multiple variables with commas); the difference is that the former can only be read inside the script while the latter can be read and written. Then, inside the script, assign the value with
Dts.Variables("i").Value = 1
3) Using variables
Variables can be used in many places in SSIS; two typical uses are described here.
(1) As a parameter of an Execute SQL Task
Suppose a date variable StartDate has been declared and you need to select from some table the rows dated before StartDate; StartDate then has to be passed to the Execute SQL Task as a parameter. In the SQL statement of the Execute SQL Task enter a command such as:
SELECT * FROM TABLE_a WHERE 日期字段 < ?
(here 日期字段 is the date column). Then, on the Parameter Mapping page, add a new mapping: choose the user variable StartDate in the Variable Name column, set the type to DATE, and give the parameter a name in the Parameter Name column. StartDate is now passed into the SQL statement of the Execute SQL Task.
(2) Assigning inside a Script component
You can assign a value to a variable inside a Script component with Dts.Variables("i").Value = 1, and you can read a variable the same way. For example, Dts.Variables("other_variable").Value = Dts.Variables("i").Value + 1 can be executed inside a Script component; it adds 1 to the variable i and assigns the result to another variable.

Understanding "null values" in .NET and SQL Server correctly

People new to database programming are often puzzled by "empty" values. For example, every value in a freshly created table shows up as <NULL>, yet after typing some text into a cell and deleting it again the cell becomes blank; a string column into which nothing was ever entered somehow is not equal to ""; and when reading values from the database with ADO.NET, an error occurs whenever a <NULL> value is encountered... Making sense of this requires a correct understanding of the several different kinds of "empty" values in .NET and SQL Server.
1: A real null value
This is the equivalent of "no value was entered", and it can appear in columns of most data types (as long as no other constraint forbids it). SQL Server represents it as null and displays it as <NULL>; to enter it by hand in SQL Server Enterprise Manager, press Ctrl+0. In .NET it corresponds to System.DBNull.Value. In T-SQL, to test whether a value is null you must use "is null" rather than "= null"; the ISNULL function can be used to replace null with a specified value. A null value read from the database with ADO.NET is not automatically converted to an empty string or Nothing; you have to check for it yourself: if you get System.DBNull.Value, assign Nothing or some other meaningful value of your own choosing to the data object.
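A quick T-SQL illustration of the difference between "= null" and "IS NULL", and of the ISNULL function (the literal values are arbitrary):
-- NULL never compares equal to anything, not even to another NULL
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not comparable' END    -- 'not comparable'
SELECT CASE WHEN NULL IS NULL THEN 'is null' ELSE 'has a value' END    -- 'is null'
-- ISNULL substitutes a replacement value for NULL
SELECT ISNULL(CAST(NULL AS nvarchar(10)), N'(none)')                   -- returns '(none)'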
2: An empty string (a zero-length string). This appears only in string-type columns (such as nvarchar). SQL Server represents it as '' and displays it as a blank cell; to enter it by hand in SQL Server Enterprise Manager simply clear the cell. In .NET it corresponds to System.String.Empty, i.e. the familiar "". In T-SQL, handling an empty string is no different from handling any other string, and an empty string read from the database with ADO.NET behaves like any ordinary string.
A related concept is Nothing in VB.NET and the corresponding null in C#.NET (note that this is the C#.NET null, not the SQL Server null). In .NET these are the value of a null reference that does not refer to any object. When passed to SQL Server, depending on the context, it may be stored as a real null value (for example when updating a string column), it may trigger a default value defined in SQL Server (for example when passed to a stored procedure parameter that has a default), or it may raise a .NET exception because the type conversion fails. So be very careful with Nothing when saving data to SQL Server with ADO.NET.