
These are the test scripts for the Sort Warning session presented at SQLTAG.ORG, together with demo scripts that explain how to resolve the issue.

powered by http://sqlworkshops.com/

If you get stuck or run into something unfamiliar while testing, ask anytime. This is an example that even experienced senior SQL engineers often do not know.




Posted by 보미아빠


At the moment the most convenient way to look at the memory situation is dbcc memorystatus. Here is a handy way to use it to see at a glance which memory area might have a problem.

DMVs can show per-session memory grants, and there are queries that return many other useful results. But DMVs do not expose everything; only dbcc memorystatus provides the complete picture.

 select granted_memory_kb, used_memory_kb, max_used_memory_kb
   from sys.dm_exec_query_memory_grants
  where session_id = 55
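For a broader DMV-based view, one alternative worth knowing is sys.dm_os_memory_clerks, which summarizes memory by clerk type. A minimal sketch (note the assumption: on SQL Server 2005/2008 the columns are single_pages_kb and multi_pages_kb; on 2012 and later they were merged into pages_kb):

```sql
-- Sketch: top memory clerks by single-page allocations (SQL Server 2005/2008).
-- On SQL Server 2012+ replace single_pages_kb / multi_pages_kb with pages_kb.
select top 10
       type,
       sum(single_pages_kb) as single_pages_kb,
       sum(multi_pages_kb)  as multi_pages_kb
  from sys.dm_os_memory_clerks
 group by type
 order by sum(single_pages_kb) desc
```

This still shows less detail than dbcc memorystatus, which is why the script below parses the full dbcc output instead.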


Older server versions offer no convenient way to see the totals. So let's run set nocount on together with dbcc memorystatus, write the output to a file such as c:\dbcc.txt, and parse it with SQL for easier analysis. Scanning the raw output by eye was genuinely tiring, and dumping it to a file was the only way I found to keep the header information next to the values. On 2005 and later you can query this conveniently, but the target might be SQL Server 2000, so the script is written to stay compatible with older versions.


How it works

xp_cmdshell must be enabled.
You could force-enable it and restore the previous setting afterwards, but doing that for every run is tedious.
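That enable-and-restore dance looks roughly like this. A sketch only: it assumes SQL Server 2005+ (sys.configurations) and ALTER SETTINGS permission; on SQL Server 2000 you would read the old value from sp_configure's result set instead.

```sql
-- Sketch: temporarily enable xp_cmdshell, then restore the previous value.
declare @old int
select @old = cast(value_in_use as int)
  from sys.configurations
 where name = 'xp_cmdshell'

exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'xp_cmdshell', 1
reconfigure

-- ... run the monitoring procedure here ...

exec sp_configure 'xp_cmdshell', @old
reconfigure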




How to monitor

use master
go
if object_id ('dbcc_memorystatus') is not null
drop proc dbcc_memorystatus
go

create proc dbcc_memorystatus
as
set nocount on
set transaction isolation level read uncommitted
-- script by minsouk kim
-- sqlsql.tistory.com
-- sqltag.org
-- 2011.08.30
-- requires xp_cmdshell to be enabled
declare @srvname varchar(1000)
   , @sql varchar(8000)
   , @ins_filename varchar(1000)
  
select @srvname = srvname from sysservers where srvid = 0
select @ins_filename = replace(@srvname,'\','_')
set @sql = 'sqlcmd -S '+@srvname+' -E -q "set nocount on; dbcc memorystatus;" -o c:\'+@ins_filename+'_dbcc_memorystatus.txt'
EXEC master..xp_cmdshell @sql ,no_output
if object_id ('tempdb..##dbcc_raw') is not null
drop table ##dbcc_raw
if object_id ('tempdb..##dbcc_memorystatus') is not null
drop table ##dbcc_memorystatus
if object_id ('tempdb..##dbcc_memorystatus_header') is not null
drop table ##dbcc_memorystatus_header
if object_id ('tempdb..##memorystatus') is not null
drop table ##memorystatus
create table ##dbcc_raw
(value varchar(1000))
set @sql = '
BULK INSERT ##dbcc_raw
FROM ''\\127.0.0.1\c$\'+@ins_filename+'_dbcc_memorystatus.txt''
WITH
(
TABLOCK
)'
exec (@sql)
create table ##dbcc_memorystatus
(idx int identity(1,1), value varchar(1000))
insert into ##dbcc_memorystatus
select * from ##dbcc_raw
create table ##dbcc_memorystatus_header
(idx int identity(1,1), header_idx int)
insert ##dbcc_memorystatus_header
select idx
  from ##dbcc_memorystatus
 where case when value like '%-%' then 1 else 0 end = 1
select * into ##memorystatus
from (
select h.description type, v.description, v.value, h.scale
  from (select b.header_idx - 1 header_idx, b.header_idx+1 st_idx, a.header_idx-3 ed_idx
    from ##dbcc_memorystatus_header a
    join ##dbcc_memorystatus_header b
   on a.idx = b.idx + 1) d
  join (select idx
    , substring(value,1, len(value) - charindex(' ',reverse(rtrim(value)))) description
    , substring(value,len(value) - charindex(' ',reverse(rtrim(value)))+1,100) scale
    from ##dbcc_memorystatus ) h
    on d.header_idx = h.idx
  join (select idx
    , substring(value,1, len(value) - charindex(' ',reverse(rtrim(value)))) description
    , substring(value,len(value) - charindex(' ',reverse(rtrim(value)))+1,100) value
    from ##dbcc_memorystatus ) v
    on v.idx between d.st_idx and ed_idx
 -- where h.description+v.description like '%sqlcp%'   
 --order by cast(v.value as bigint) desc
 ) a
PRINT 'select * from ##memorystatus where type+description like ''%sqlcp%'''
PRINT 'select * from ##memorystatus where type+description like ''%GLOBAL%'' ORDER BY CAST(VALUE AS BIGINT) DESC'
PRINT 'select * from ##memorystatus order by cast(value as bigint) desc '
-- select * from ##memorystatus
go

exec dbcc_memorystatus 



Example output


Technical information on DBCC MEMORYSTATUS in SQL Server 2000:
http://support.microsoft.com/?id=271624
Technical information on DBCC MEMORYSTATUS in SQL Server 2005:
http://support.microsoft.com/?id=907877



 



Rather than calling this a procedure and memory grant issue, I would call it "a case where splitting the query into several pieces pays off."
In practice, many procedures like this are built to call different sub-procedures depending on the case, because there are situations where even a statement-level recompile cannot be tolerated.

The article below recommends recompiling, but in cases like this we split the procedure instead.
The reasons are well explained there, but the gist is: because the memory grant is fixed when the query plan is built, a Sort Warning can occur, so recompile in order to get an appropriate grant.

In real operations, however, more often than this issue we saw cases where the query plan had to change by date range and different indexes had to be used. To accommodate that, a statement-level recompile, or separate sub-procedures with hints baked in, was the more common way to resolve the issue.

The following site is a good place to study this:
http://www.sqlworkshops.com/plancachingandquerymemory.htm

For reference, most of what this author presents appeared earlier on a blog about the optimizer. But as a practical answer to how to actually resolve these problems, this site seems to be the best.

It would be fun to teach from this material, so I should prepare a lecture in Korean that covers it completely... At SQLTAG, too, I want to go over these issues one by one with many people through TeamViewer plus voice-chat sessions. Let's make sure there is no language barrier in SQL.

I heard 김정선 lectured on a similar topic; I wish I had attended. Unfortunately it fell on the same day as our study group, so we were all in the same boat and nobody could go. -_- I will take responsibility and give a deeper lecture on the same topic, adding operational experience plus research into the many related KB articles (you know?).

Happy SQL time, everyone~

This one is easy enough that I won't translate it.




Let’s create a stored procedure that sorts customers by name within a certain date range.

To observe Sort Warnings, enable 'Sort Warnings' in SQL Profiler under Events 'Errors and Warnings'.

--Example provided by www.sqlworkshops.com

create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as

begin

      declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime

      select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c

            where c.CreationDate between @CreationDateFrom and @CreationDateTo

            order by c.CustomerName

      option (maxdop 1)

      end

go

Let’s execute the stored procedure initially with 1 month date range.

set statistics time on

go

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-01-31'

go

The stored procedure took 48 ms to complete.

 

The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.

 

The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimate should be fine.

 

There were no Sort Warnings in SQL Profiler.

 

Now let’s execute the stored procedure with 6 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

go

The stored procedure took 679 ms to complete.

 

The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.

 

The estimated number of rows, 43199.9, differs greatly from the actual number of rows, 259200, because the estimate is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads the sort to spill over to tempdb, resulting in poor performance.

 

There were Sort Warnings in SQL Profiler.

 

To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast at www.sqlworkshops.com/webcasts.
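A minimal before/after sketch of that measurement. It is the delta, not the raw counters, that matters; database_id 2 is tempdb, and passing NULL for file_id returns one row per tempdb file, which the sums aggregate:

```sql
-- Sketch: measure tempdb I/O caused by one execution of the procedure.
declare @w1 bigint, @r1 bigint, @w2 bigint, @r2 bigint

select @w1 = sum(num_of_bytes_written), @r1 = sum(num_of_bytes_read)
  from sys.dm_io_virtual_file_stats(2, NULL)

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

select @w2 = sum(num_of_bytes_written), @r2 = sum(num_of_bytes_read)
  from sys.dm_io_virtual_file_stats(2, NULL)

select @w2 - @w1 as tempdb_bytes_written, @r2 - @r1 as tempdb_bytes_read
```

A spilled sort shows up as a large write-and-read delta; a fully in-memory sort shows close to zero.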

 

Let’s recompile the stored procedure and then execute it first with the 6 month date range.

In a production instance it is not advisable to use sp_recompile; instead, one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile. Refer to our webcasts for further details.
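A sketch of that targeted eviction: look up the procedure's plan_handle in the plan cache and free just that one plan. Note the assumption that DBCC FREEPROCCACHE with a plan_handle argument requires SQL Server 2008 or later; the procedure name is the one from this demo.

```sql
-- Sketch: evict only this procedure's cached plan instead of sp_recompile.
-- DBCC FREEPROCCACHE(plan_handle) is available from SQL Server 2008 onwards.
declare @plan_handle varbinary(64)

select @plan_handle = cp.plan_handle
  from sys.dm_exec_cached_plans cp
 cross apply sys.dm_exec_sql_text(cp.plan_handle) st
 where st.objectid = object_id('dbo.CustomersByCreationDate')
   and st.dbid = db_id()

if @plan_handle is not null
    dbcc freeproccache (@plan_handle)
```

Unlike sp_recompile, this takes no schema-stability lock on the procedure, so it is safe to run against a busy object.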

exec sp_recompile CustomersByCreationDate

go

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

go

Now the stored procedure took only 294 ms instead of 679 ms.

 

The stored procedure was granted 26832 KB of memory.

 

The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to a better memory estimate and the sort no longer spilling over to tempdb.

 

There were no Sort Warnings in SQL Profiler.

 

Now let’s execute the stored procedure with 1 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-01-31'

go

The stored procedure took 49 ms to complete, similar to our very first stored procedure execution.

 

This stored procedure was granted more memory (26832 KB) than necessary (6656 KB) because the estimate was based on 6 months of data (259200 rows) instead of 1 month of data (43199.9 rows). This is because the estimate is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. The overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, so overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

 

Let’s recompile the stored procedure and then execute it with a 2 day date range.

exec sp_recompile CustomersByCreationDate

go

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-01-02'

go

The stored procedure took 1 ms.

 

The stored procedure was granted 1024 KB based on 1440 rows being estimated.

 

There were no Sort Warnings in SQL Profiler.

 

Now let’s execute the stored procedure with 6 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

go

The stored procedure took 955 ms to complete, far higher than the 679 ms or 294 ms we observed before.

 

The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we observed earlier that this stored procedure with the 6 month date range needed 26832 KB of memory to execute optimally without spilling over to tempdb. This is a clear underestimation of memory and the reason for the very poor performance.

 

There were Sort Warnings in SQL Profiler. Unlike before, this was a multiple-pass sort instead of a single-pass sort, which occurs when the granted memory is far too low.

 

Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

Let’s recreate the stored procedure with recompile hint.

--Example provided by www.sqlworkshops.com

drop proc CustomersByCreationDate

go

create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as

begin

      declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime

      select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c

            where c.CreationDate between @CreationDateFrom and @CreationDateTo

            order by c.CustomerName

      option (maxdop 1, recompile)

      end

go

Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-01-30'

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

go

The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times.

 

The stored procedure with 1 month date range has good estimation like before.

 

The stored procedure with the 6 month date range also has a good estimate and memory grant like before, because the query was recompiled with the current set of parameter values.

 

The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

 

Let’s recreate the stored procedure with optimize for hint of 6 month date range.

--Example provided by www.sqlworkshops.com

drop proc CustomersByCreationDate

go

create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as

begin

      declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime

      select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c

            where c.CreationDate between @CreationDateFrom and @CreationDateTo

            order by c.CustomerName

      option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))

      end

go

Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-01-30'

exec CustomersByCreationDate '2001-01-01', '2001-06-30'

go

The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times.

 

The stored procedure with the 1 month date range overestimates rows and memory, because we told the optimizer to plan for 6 months of data.

 

The stored procedure with the 6 month date range has a good estimate and memory grant because we hinted the optimizer toward 6 months of data.

 

Let’s execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

--Example provided by www.sqlworkshops.com

exec CustomersByCreationDate '2001-01-01', '2001-12-31'

go

The stored procedure took 1138 ms to complete.

 

259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range.

 

The stored procedure was granted only enough memory to sort the 6 month date range, not the 12 month range, so the sort will spill over to tempdb.

 

There were Sort Warnings in SQL Profiler.

 

As we see above, the optimize for hint cannot guarantee enough memory and optimal performance the way the recompile hint can.

This article covers underestimation and overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation and overestimation for Hash Match operations. It is important to note that underestimating memory for Sort and Hash Match operations leads to spills over tempdb and therefore hurts performance, while overestimating memory eats into the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated from the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure varies widely. One can mitigate this with the recompile hint, at the cost of compilation overhead; in most cases, however, paying for compilation is cheaper than spilling a sort over tempdb. The other possibility is the optimize for hint, but if one sorts more data than the hint anticipates, the sort will still spill; conversely, overestimation can cause unnecessary memory pressure for other concurrently executing queries, and in the case of Hash Match operations may itself lead to poor performance. When the values used in the optimize for hint have been archived out of the database, the estimate will be wrong and performance will be at its worst, so exercise caution before using the optimize for hint; the recompile hint is better in that case.

