
 ShadowImage: replicates data volumes within a single box; up to nine replica volumes can be created.
 TrueCopy: a box-to-box solution that replicates data volumes between two boxes at a physically remote site (or within the same building) over Fibre Channel or a network.
 Universal Replicator: provides simple disaster-recovery (DR) replication across storage systems built from two or more heterogeneous devices.

 

References: http://www.his21.co.kr/advantage/advantage72/docu08.htm

           http://www.his21.co.kr/advantage/advantage76/docu05.htm

 

Full solution list: http://www.his21.co.kr/his/solution/infor01

 

When replicating with this software on a large system, watch carefully for performance problems.

 

 

 

Posted by 보미아빠

If you accumulate years of performance data in a database, it takes a very long time to open and analyze that database with the perfmon program. To analyze quickly, pull out only the servers and counters you need, as follows.


Build a new database containing only the data for the period you need, sliced into the same three tables (CounterData, CounterDetails, DisplayToID), keeping the original table names:

SELECT * INTO <same table name> FROM (SELECT * FROM <source table> WHERE <condition>) a

 

 

Be sure to select SQL Server here (as the data source). ***


RESTORE FILELISTONLY FROM DISK ='D:\CAFE\PERF2\PERF2.BAK'

GO

 

RESTORE DATABASE PERF0518BBS2 FROM DISK ='D:\CAFE\PERF2\PERF2.BAK'

WITH MOVE 'PERF2' TO 'D:\PERF0518BBS2.MDF'

, MOVE 'PERF2_LOG' TO 'D:\PERF0518BBS2.LDF'

 

USE master

GO

 

 

-- Change database properties

ALTER DATABASE PERF0518BBS2 SET RECOVERY SIMPLE

GO

ALTER DATABASE [perf0518bbs2] MODIFY FILE ( NAME = N'perf2', FILEGROWTH = 512000KB )

GO

ALTER DATABASE [perf0518bbs2] MODIFY FILE ( NAME = N'perf2_log', SIZE = 512000KB , FILEGROWTH = 512000KB )

GO

 

USE PERF0518BBS2

GO

 

-- Fix up the data (reset record indexes and DisplayToID metadata)

declare @recordindex_reset int

, @min_counterdatetime varchar(100)

, @max_counterdatetime varchar(100)

, @min_counterid int

, @number_of_record int

 

select @recordindex_reset = MIN(RecordIndex) - 1

     , @min_counterdatetime= min(counterdatetime)

     , @max_counterdatetime = MAX(counterdatetime) 

     , @min_counterid = MIN(counterid)

  from dbo.CounterData

 

select @number_of_record = COUNT(*) from CounterData where CounterID = @min_counterid

 

if (@recordindex_reset > 1) begin

update dbo.CounterData set recordindex = RecordIndex - @recordindex_reset

update dbo.DisplayToID set

   logstarttime = @min_counterdatetime

 , logstoptime = @max_counterdatetime

 , numberofrecords = @number_of_record

end

 

-- Create the index

CREATE CLUSTERED INDEX [CL_COUNTERDATA] ON [DBO].[COUNTERDATA]

(

[GUID] ASC,

[COUNTERID] ASC,

[RECORDINDEX] ASC

)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]

GO

 

-- Check the data

select top 100 * from dbo.CounterData

select * from dbo.CounterDetails

select * from dbo.DisplayToID


Posted by 보미아빠

For an ascending-key index, SQL Server can query the current maximum value directly, compute statistics from it, recognize how many rows the predicate will select, and then build the plan.

 

1) You can see when statistics were updated, going back three updates.

2) Once statistics have been updated three times, SQL Server decides (brands) whether the index is ascending or not.

3) FORCESCAN is available from SQL Server 2008 R2 SP1.

 

Work through the test with the article below.

The writing is easy to follow, so I won't translate it.

 

 


Fabiano Amorim

 

Statistics on Ascending Columns
01 September 2011

It comes as rather a shock to find out that one of the commonest circumstances in an OLTP database, an ascending primary key with most querying on the latest records, can throw the judgement of the Query Optimiser to the extent that performance nose-dives. Fabiano once again puts on snorkel and goggles to explore the murky depths of execution plans to find out why.

First of all, a warning:

Much of what you will see here is ‘undocumented’. That means we can discuss it, but you can’t trust that this will not change in a service pack or a new product release. Also you will not be able to call Microsoft support for help or advice on the topic, or if you get an error when using the following techniques. Use with precaution and with extra attention.

Introduction

A very common problem relating to distribution statistics is associated with what we call "ascending value columns" in a table. This generally happens when a large table has ascending values and the most recent rows are the ones most commonly being accessed. When data values in a column ascend, most new insertions fall beyond the range covered by the distribution statistics. This can lead to poorly performing plans, since the optimizer inaccurately predicts from the statistics that filters selecting recent data will return almost no rows.

As we already know, a column which belongs to a statistic has to get to a threshold quantity of modifications after the previous update in order to trigger the auto-updated statistics, and for certain cases, this threshold is too high. In other words, a column table requires too many modifications before the distribution statistics are rebuilt.
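As a back-of-the-envelope illustration of that threshold, the classic rule of thumb can be sketched in a few lines. This is an assumption for illustration (the pre-SQL Server 2016 default of 500 modifications plus 20% of the row count for tables larger than 500 rows); the exact rule is version-dependent:

```python
def stats_update_threshold(row_count: int) -> int:
    """Approximate number of column modifications needed to trigger an
    automatic statistics update (classic rule, assumed for illustration:
    small tables need 500 changes, larger tables 500 + 20% of rows)."""
    if row_count <= 500:
        return 500
    return 500 + row_count // 5

# A ~50,001-row table needs roughly 10,500 modifications
print(stats_update_threshold(50_001))  # 10500
```

By this rule, the 50,001-row table built below would need roughly 10,500 modifications before statistics auto-update, which is why 5,001 inserts later in the article are not enough to trigger one.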

This delay in auto-updating a statistic can be a problem for queries that are querying the newest data in the table. That is very likely to happen when we are using date or identity columns. As you know, the use of an outdated statistic can be very bad for performance because SQL Server will not be able to estimate precisely the number of rows a table will return, and is likely to use a poor execution plan.

To simulate this problem I’ll start by creating a table called Pedidos (which means Orders in Portuguese) and I’ll make the identity column a primary key.

USE Tempdb

GO

SET NOCOUNT ON;

GO

IF OBJECT_ID('Pedidos') IS NOT NULL

DROP TABLE [Pedidos]

GO

CREATE TABLE [dbo].[Pedidos]

(

[ID_Pedido] [int] IDENTITY(1, 1) NOT NULL,

[ID_Cliente] [int] NOT NULL,

[Data_Pedido] Date NOT NULL,

[Valor] [numeric](18, 2) NOT NULL,

CONSTRAINT [xpk_Pedidos] PRIMARY KEY CLUSTERED(ID_Pedido)

)

GO

CREATE INDEX ix_Data_Pedido ON Pedidos(Data_Pedido)

GO

INSERT INTO Pedidos (ID_Cliente, Data_Pedido, Valor)

SELECT ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

'18000101',

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5)))

GO

-- Inserting 50000 rows in the table

INSERT INTO Pedidos WITH(TABLOCK) (ID_Cliente, Data_Pedido, Valor)

SELECT ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

(SELECT DATEADD(d, 1, MAX(Data_Pedido)) FROM Pedidos),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5)))

GO 50000

SELECT * FROM Pedidos

GO

Here is what the data looks like:

Data showing sequential orders

As you can see the orders are added sequentially and that’s what usually happens with this sort of table. The date_order (column data_pedido) is the date of the order.

Now let’s suppose a query that is searching for the orders of the day will look like the following:

SET STATISTICS IO ON

SELECT * FROM Pedidos

WHERE Data_Pedido >= Convert(date, GetDate())

OPTION (RECOMPILE)

SET STATISTICS IO OFF

At the beginning of the day SQL Server estimates that 0 rows will be returned (even though the plan shows 1, it is estimating 0). But what will happen at the end of the day, when thousands of orders have been inserted?

It’s likely that an execution plan using the Key Lookup will not be a good option since the lookup operation will demand a lot of pages to be read.

Let's simulate the problem by inserting 5001 new orders into the table. Because this is below the auto-update threshold, it will not trigger an automatic statistics update.

INSERT INTO Pedidos WITH(TABLOCK) (ID_Cliente, Data_Pedido, Valor)

SELECT ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

GetDate(),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5)))

GO

INSERT INTO Pedidos (ID_Cliente, Data_Pedido, Valor)

VALUES (ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

(SELECT DateAdd(d, 1, MAX(Data_Pedido)) FROM Pedidos),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5))))

GO 5000

Now the table has lots of new orders, and each order was inserted in ascending order. That means the column Data_Pedido is increasing.

Let’s run the same query and check how many pages SQL Server has to read to execute this plan.

SET STATISTICS IO ON

SELECT * FROM Pedidos

WHERE Data_Pedido >= Convert(date, GetDate())

OPTION (RECOMPILE)

SET STATISTICS IO OFF

GO

Execution Plan

Note: Notice that I’m using the hint RECOMPILE to avoid query plan reuse. Also the auto create and auto update statistics are enabled on the tempdb database.

We can see here that the estimation of how many rows will be returned is very wrong. The 5001 insertions were not enough to trigger the auto-updated statistics, and the query optimizer still thinks that the table has no orders greater than GetDate().

What if we force a scan on the base table?

SET STATISTICS IO ON

SELECT * FROM Pedidos WITH(FORCESCAN, INDEX(0))

WHERE Data_Pedido >= Convert(date, GetDate())

OPTION (RECOMPILE)

SET STATISTICS IO OFF

Execution Plan

Execution Plan

Note: The FORCESCAN hint is new in SQL Server 2008 R2 SP1. You can read more about it at http://technet.microsoft.com/en-us/library/ms187373.aspx.

As we can see, a scan on the clustered index requires far fewer page reads: 200 pages for the clustered index scan versus 10,017 for the non-clustered index plus the lookup on the clustered index.

The estimate is still wrong because the statistics are outdated and therefore don’t represent the reality of what is in the table.

Branding

SQL Server can detect when the leading column of a statistics object is ascending and can mark or “brand” it as ascending. A statistics object that belongs to an ascending column is branded as “ascending” after three updates on the statistics. It’s necessary to update it with ascending column values so that when the third update occurs, SQL Server brands the statistics object as ascending.
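The branding behavior described above can be pictured as a tiny state machine. This is purely an illustrative model, not the real (undocumented) internal logic:

```python
class StatsBrand:
    """Toy model of statistics 'branding': after three consecutive
    statistics updates that each see only new, higher key values, the
    leading column is branded 'Ascending'. Illustrative simplification;
    the real internals are undocumented."""
    def __init__(self):
        self.max_key = None
        self.ascending_updates = 0
        self.brand = "Unknown"

    def update_statistics(self, new_max_key):
        if self.max_key is None or new_max_key > self.max_key:
            self.ascending_updates += 1
        else:
            # new data did not extend the key range
            self.ascending_updates = 0
            self.brand = "Stationary"
            return self.brand
        self.max_key = new_max_key
        if self.ascending_updates >= 3:
            self.brand = "Ascending"
        return self.brand

s = StatsBrand()
print(s.update_statistics(10))  # Unknown  (1st update)
print(s.update_statistics(20))  # Unknown  (2nd update)
print(s.update_statistics(30))  # Ascending (3rd update)
```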

It's possible to check the statistics' brand using trace flag 2388. When it is turned on, it changes the output of DBCC SHOW_STATISTICS so that we can see a column called "Leading column type" containing the brand of the column.

For instance:

-- Enable trace flag 2388

DBCC TRACEON(2388)

GO

-- Look at the branding

DBCC SHOW_STATISTICS (Pedidos, [ix_Data_Pedido])

GO

-- Disable trace flag 2388

DBCC TRACEOFF(2388)

GO

Column type unknown

As we can see, the column now is “Unknown”. Let’s insert 10 rows with ascending orders and update the statistics to see what will happen.

-- Insert 10 rows

INSERT INTO Pedidos (ID_Cliente, Data_Pedido, Valor)

VALUES (ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

(SELECT DateAdd(d, 1, MAX(Data_Pedido)) FROM Pedidos),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5))))

GO 10

-- Update statistics

UPDATE STATISTICS Pedidos [ix_Data_Pedido] WITH FULLSCAN

GO

DBCC TRACEON(2388)

DBCC SHOW_STATISTICS (Pedidos, [ix_Data_Pedido])

DBCC TRACEOFF(2388)

Showing Null column type

As I said before, the statistics have to be updated three times to be branded as ascending, so let’s do it.

-- Insert 10 rows

INSERT INTO Pedidos (ID_Cliente, Data_Pedido, Valor)

VALUES (ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

(SELECT DateAdd(d, 1, MAX(Data_Pedido)) FROM Pedidos),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5))))

GO 10

-- Update statistics

UPDATE STATISTICS Pedidos [ix_Data_Pedido] WITH FULLSCAN

GO

-- Insert 10 rows

INSERT INTO Pedidos (ID_Cliente, Data_Pedido, Valor)

VALUES (ABS(CONVERT(Int, (CheckSUM(NEWID()) / 10000000))),

(SELECT DateAdd(d, 1, MAX(Data_Pedido)) FROM Pedidos),

ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) / 1000000.5))))

GO 10

-- Update statistics

UPDATE STATISTICS Pedidos [ix_Data_Pedido] WITH FULLSCAN

GO

DBCC TRACEON(2388)

DBCC SHOW_STATISTICS (Pedidos, [ix_Data_Pedido])

DBCC TRACEOFF(2388)

Ascending Column Type

Trace Flags 2389 and 2390

By default, the query optimizer keeps the information about the branding of statistics, but doesn’t make use of it. The optimizer won’t choose a different plan based on whether the column is ascending or not. To change this, you need to use the trace flags 2389 or 2390.

When trace flag 2389 is enabled, the statistics are branded as ascending, and there is a covering index on the statistic's ascending leading key, the query optimizer will query the table to compute the highest value of the column. This value is then used when estimating how many rows the predicate will return.

The trace flag 2390 works similarly to flag 2389, the main difference being that, with this flag set, the query optimizer doesn’t care if the column was branded as ascending or not. In other words, even if the column is marked as “Unknown” it will query the table to find the highest value.
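Conceptually, the effect of probing the table for the real maximum can be sketched as follows. This is a toy model, not the real cardinality estimator; all names and numbers are invented for illustration:

```python
def estimate_rows(histogram_max, predicate_value, rows_per_day,
                  probe_actual_max=None):
    """Toy cardinality estimate for 'col >= predicate_value'.
    Without probing (stale statistics), anything beyond the histogram's
    highest key is estimated as ~0 rows. With trace-flag-style probing,
    the actual maximum is read from the table, so the range beyond the
    histogram is assumed to contain data. Entirely illustrative."""
    upper = probe_actual_max if probe_actual_max is not None else histogram_max
    if predicate_value > upper:
        return 0  # beyond all known data: estimate ~0 rows
    days_covered = upper - predicate_value + 1
    return days_covered * rows_per_day

# Stale stats: histogram ends at day 100, query asks for day 101 onward.
print(estimate_rows(100, 101, 50))                        # 0
# Probing the table (as with 2389/2390) finds data up to day 105.
print(estimate_rows(100, 101, 50, probe_actual_max=105))  # 250
```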

To see this in practice we’ll use an undocumented query hint called QUERYTRACEON. With this query hint we can enable a trace flag in a statement scope.

Here we have a command that is using the trace flags 2389 and 2390:

SET STATISTICS IO ON

SELECT * FROM Pedidos

WHERE Data_Pedido >= Convert(date, GetDate())

OPTION(QUERYTRACEON 2389, QUERYTRACEON 2390, RECOMPILE)

SET STATISTICS IO OFF

GO

Execution Plan

As we can see, SQL Server now has a new estimate, and it was enough to avoid the bad plan that used the key lookup.

Note: Internally QUERYTRACEON will run DBCC TRACEON command and you’ll need a sysadmin privilege to run this hint. A good alternative is to use a stored procedure and run it as an administrator user.

Note: There is another brand for a statistics object called “Stationary”. The query optimizer will not trigger the query on the table to compute the value if the brand is stationary. In other words, the data in the leading column(s) is not ascending.

Conclusion

Remember that this is just an alternative way to fix a problem seen in large tables with ascending columns. The best option is to update your statistics periodically. In that case, it's just a matter of creating a job that updates statistics on the tables with ascending columns more frequently, up to several times a day. There is no fixed number of updates per day; you'll need to work out the appropriate frequency for your scenario. You also have query hints, plan guides, and other alternatives for fixing this problem.

Here are some questions I asked myself when I first read about these trace flags.

Are trace flags always good?

Probably not. You have to find the right scenario in which to test them, and then test, test, test.

Will SQL Server always query the table to find the highest value in the column?

Not always. It depends on the predicate and how you are querying the table.

Can I use the trace flag 2389 alone?

Yes, but it will work only for columns branded as ascending.

Can I use the trace flag 2390 alone?

You can, but it doesn't make sense to do so, because it will stop working once the column is branded as ascending.

As with everything, you need to test it before you use it and it always depends on the scenario. It’s also worth mentioning that this method is not supported; so don’t call Microsoft if you have any doubts about this, or me either, come to think of it.

That’s all folks, see you.




 

Posted by 보미아빠

If anything comes up while you work through the exercises, feel free to ask at any time.

Thank you.

 

 

병렬처리_김민석_speaker_sqlunpluuged_2012_final.pptx

 

unplugged2012.zip

 

Posted by 보미아빠

 

 

Next time I'll prepare more and give the talk at a relaxed pace.

The time was so short that I felt rushed...

 

 

 

 

Posted by 보미아빠

 

 

Note: database names are case-sensitive.

The wizard can also script the contents of tables.

There are many options available, so check the help before running.

 

sqlpubwizinstaller.exe

This is version 1.2; as far as I know it goes up to 1.4, but I don't know a way to download and install that without installing Visual Studio.

If you find it, please let me know!

 

Posted by 보미아빠

-- by minsouk kim
-- 2012-03-21

select cpu_id, b.session_id, b.exec_context_id, e.blocking_session_id
  , preemptive_switches_count, d.context_switches_count
  , /*f.last_wait_type,*/ g.wait_type, g.wait_duration_ms, g.blocking_exec_context_id, resource_description, task_state
  , sql_handle, plan_handle
--  , a.*, b.*, c.*, d.*, e.*, f.*
  from sys.dm_exec_sessions a -- session (connection) information
  join sys.dm_os_tasks b -- container that links sessions to workers
    on a.session_id = b.session_id
  join sys.dm_os_threads c -- maps to the Windows thread
    on b.worker_address = c.worker_address
  join sys.dm_os_schedulers d -- CPU (scheduler) information
    on b.scheduler_id = d.scheduler_id
  join sys.dm_os_workers f -- the actual worker
    on b.task_address = f.task_address
  left join sys.dm_exec_requests e -- every session with an active request (requester information)
    on b.task_address = e.task_address
  left join sys.dm_os_waiting_tasks g
    on b.task_address = g.waiting_task_address
 where a.session_id > 50
 order by b.session_id, b.exec_context_id
Posted by 보미아빠

processor groups

Uncategorized / 2012. 3. 13. 17:32

Processor Groups


The 64-bit versions of Windows 7 and Windows Server 2008 R2 and later versions of Windows support more than 64 logical processors on a single computer. This functionality is not available on 32-bit versions of Windows.

Systems with more than one physical processor or systems with physical processors that have multiple cores provide the operating system with multiple logical processors. A logical processor is one logical computing engine from the perspective of the operating system, application or driver. A core is one processor unit, which can consist of one or more logical processors. A physical processor can consist of one or more cores. A physical processor is the same as a processor package, a socket, or a CPU.

Support for systems that have more than 64 logical processors is based on the concept of a processor group, which is a static set of up to 64 logical processors that is treated as a single scheduling entity. Processor groups are numbered starting with 0. Systems with fewer than 64 logical processors always have a single group, Group 0.

Windows Server 2008, Windows Vista, Windows Server 2003, and Windows XP: Processor groups are not supported.
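The group-sizing rule above (at most 64 logical processors per group, with the number of groups minimized) can be illustrated with a short sketch. `GROUP_SIZE` is the documented 64-processor limit; everything else is illustrative:

```python
import math

GROUP_SIZE = 64  # documented maximum logical processors per processor group

def processor_group_count(logical_processors: int) -> int:
    """Minimum number of processor groups needed for a given number of
    logical processors (illustration of the rule described above)."""
    return max(1, math.ceil(logical_processors / GROUP_SIZE))

print(processor_group_count(48))   # 1  (fewer than 64: single Group 0)
print(processor_group_count(128))  # 2  (two groups of 64, not four of 32)
```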

When the system starts, the operating system creates processor groups and assigns logical processors to the groups. If the system is capable of hot-adding processors, the operating system allows space in groups for processors that might arrive while the system is running. The operating system minimizes the number of groups in a system. For example, a system with 128 logical processors would have two processor groups with 64 processors in each group, not four groups with 32 logical processors in each group.

For better performance, the operating system takes physical locality into account when assigning logical processors to groups. All of the logical processors in a core, and all of the cores in a physical processor, are assigned to the same group, if possible. Physical processors that are physically close to one another are assigned to the same group. A NUMA node is assigned to a single group unless the capacity of the node exceeds the maximum group size. For more information, see NUMA Support.

On systems with 64 or fewer processors, existing applications will operate correctly without modification. Applications that do not call any functions that use processor affinity masks or processor numbers will operate correctly on all systems, regardless of the number of processors. To operate correctly on systems with more than 64 logical processors, the following kinds of applications might require modification:

  • Applications that manage, maintain, or display per-processor information for the entire system must be modified to support more than 64 logical processors. An example of such an application is Windows Task Manager, which displays the workload of each processor in the system.
  • Applications for which performance is critical and that can scale efficiently beyond 64 logical processors must be modified to run on such systems. For example, database applications might benefit from modifications.
  • If an application uses a DLL that has per-processor data structures, and the DLL has not been modified to support more than 64 logical processors, all threads in the application that call functions exported by the DLL must be assigned to the same group.

By default, an application is constrained to a single group, which should provide ample processing capability for the typical application. The operating system initially assigns each process to a single group in a round-robin manner across the groups in the system. A process begins its execution assigned to one group. The first thread of a process initially runs in the group to which the process is assigned. Each newly created thread is assigned to the same group as the thread that created it.

An application that requires the use of multiple groups so that it can run on more than 64 processors must explicitly determine where to run its threads and is responsible for setting the threads' processor affinities to the desired groups. The INHERIT_PARENT_AFFINITY flag can be used to specify a parent process (which can be different than the current process) from which to generate the affinity for a new process. If the process is running in a single group, it can read and modify its affinity using GetProcessAffinityMask and SetProcessAffinityMask while remaining in the same group; if the process affinity is modified, the new affinity is applied to its threads.

A thread's affinity can be specified at creation using the PROC_THREAD_ATTRIBUTE_GROUP_AFFINITY extended attribute with the CreateRemoteThreadEx function. After the thread is created, its affinity can be changed by calling SetThreadAffinityMask or SetThreadGroupAffinity. If a thread is assigned to a different group than the process, the process's affinity is updated to include the thread's affinity and the process becomes a multi-group process. Further affinity changes must be made for individual threads; a multi-group process's affinity cannot be modified using SetProcessAffinityMask. The GetProcessGroupAffinity function retrieves the set of groups to which a process and its threads are assigned.

To specify affinity for all processes associated with a job object, use the SetInformationJobObject function with the JobObjectGroupInformation or JobObjectGroupInformationEx information class.

A logical processor is identified by its group number and its group-relative processor number. This is represented by a PROCESSOR_NUMBER structure. Numeric processor numbers used by legacy functions are group-relative.

For a discussion of operating system architecture changes to support more than 64 processors, see the white paper Supporting Systems That Have More Than 64 Processors.

For a list of new functions and structures that support processor groups, see What's New in Processes and Threads.

Related topics

Multiple Processors
NUMA Support
Posted by 보미아빠

tuning point

Uncategorized / 2012. 3. 7. 16:22

 

Posted by 보미아빠
