
I'm going to work through the items I once marked as "REMARK needed", one per day, and add my own opinion.

 

When someone who seems knowledgeable explains something, everyone tends to believe it, just like in the experiments....

 

But people can be wrong and nobody is perfect, so it can be worth reinterpreting things based on an understanding of the architecture and the data. Below, the black text is the original note-taker's transcript and the blue text is what Kim Min-seok added, which may be wrong. If anyone spots an error, please let me know~

 

This isn't about who is right or wrong; it's my own attempt to verify and interpret the lecture content. This lecture, too, has a few things explained incorrectly, but it's still good material.


1. Why is it a Scan and not an Index Seek when the query is TOP(1)?

 [remark needed]

 

2. MS kb942982
   If the remote login lacks permissions, performance problems can occur.
   When statistics are needed, optimization can fail... permission to run DBCC SHOW_STATISTICS() is required..
   Fixed in SQL2012 SP1. On earlier versions, verify that the remote login has sufficient permissions.
   It is not always invoked.
  
   http://msdn.microsoft.com/ko-kr/library/ms174384.aspx
   To view the statistics object, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role.
   SQL Server 2012 SP1 modified the permission restriction so that users with SELECT permission can use this command; per the documented requirements, SELECT permission is enough to run it.
  
   So on linked servers before SQL2012 SP1, the remote user has to be granted db_ddladmin (or similar). A sketch follows.
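
   A hedged sketch of both halves, with hypothetical names (dbo.Orders, index IX_Orders_Date, and login remote_user on the remote server):

   -- what the optimizer effectively needs to run on the remote side when it wants statistics:
   DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_Date')

   -- pre-2012 SP1, grant the remote login enough rights in the remote database, e.g.:
   EXEC sp_addrolemember 'db_ddladmin', 'remote_user'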
  
3. Aggregating over a UNION query result
 It is better to compute the aggregates individually, UNION those results, and then aggregate over that.
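
 A minimal sketch of the idea (dbo.Sales2012 / dbo.Sales2013 are hypothetical):

 -- instead of aggregating over the UNIONed detail rows...
 SELECT region, SUM(amt) AS amt
   FROM (SELECT region, amt FROM dbo.Sales2012
         UNION ALL
         SELECT region, amt FROM dbo.Sales2013) u
  GROUP BY region

 -- ...aggregate each input first, then aggregate the much smaller union:
 SELECT region, SUM(amt) AS amt
   FROM (SELECT region, SUM(amt) AS amt FROM dbo.Sales2012 GROUP BY region
         UNION ALL
         SELECT region, SUM(amt) AS amt FROM dbo.Sales2013 GROUP BY region) u
  GROUP BY region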
 
4. Statistics-estimate errors from temp table caching
 sql2005+
 When temp table contents feed plan optimization, cached (reused) statistics can produce wrong estimates.
 Workarounds?
 
 1. OPTION (RECOMPILE): on its own it does not recompile against the temp table's statistics.
 2. Make a recompile happen against the temp table:
   - ALTER the object's DDL to force a recompile (used together with OPTION (RECOMPILE)).
     Hard to use in practice... name collisions...
   - Create an index (DDL) on the temp table to trigger the recompile (sketch below).
   A procedure created WITH RECOMPILE <-- is never cached at all... it's effectively just an ad-hoc query..
 [remark needed]
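
 A minimal sketch of the index-DDL workaround, all names hypothetical:

 CREATE PROC usp_temp_demo @n int
 AS
 CREATE TABLE #t (idx int)
 INSERT #t SELECT TOP (@n) object_id FROM sys.objects

 CREATE INDEX ix_t ON #t (idx) -- DDL against #t invalidates the cached temp object

 SELECT COUNT(*) FROM #t WHERE idx > 0
 OPTION (RECOMPILE) -- per-statement recompile so the plan sees the current row count
 GO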

   
5. Is parallel TOP performance bad?
 Different results depending on MAXDOP. Not consistently reproducible... (Paul White)
 [remark needed]

 
6. What if STATISTICS IO can't be trusted under parallelism?
 Logical read counts are reported inaccurately.
 
 Scan count: equals the number of threads.
 
 Trace flag: QUERYTRACEON 8649 forces a parallel plan...
 For a parallel TOP the scan count comes out wrong.... and so does the I/O... (sketch below)

 [remark needed]
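
 A sketch of how to observe this (QUERYTRACEON 8649 is undocumented; test servers only; dbo.BigTable is hypothetical):

 SET STATISTICS IO ON
 GO
 SELECT TOP (100) *
   FROM dbo.BigTable
  ORDER BY idx
 OPTION (QUERYTRACEON 8649) -- force a parallel plan; compare scan count / logical reads against the serial run
 GO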

 

7. PEO doesn't kick in when the SELECT result is assigned to a variable - Paul White
 PEO - SQL2008+ parameter embedding optimization
 the OPTION (RECOMPILE) hint
 
 One of the workarounds for parameter-sniffing problems..
 
 But,,,, it does not work when the result is assigned to a variable.
 
 The same goes for TOP 1... (so says Kyo-seong).. haha
 Bottom line: don't assign to a variable....
 select
  @value = customerid
 from dbo.SomeTable -- placeholder name; the original note just said "table"
 where ...
 option (recompile) -- the variable assignment disables PEO even with this hint

 [remark needed]


 
8. Is a Shared lock necessary... Paul White...
 Is an S lock blocked on a row holding an X lock?? (Lock optimization)
 Something I chatted about with In-ok a while back...
 
9. Tracking statistics on auto-incrementing columns

 "Branding"
  automatic detection of ascending columns
  With trace flag 2388, DBCC SHOW_STATISTICS() output comes out a bit differently.. oho.......... you can see the 20%-change algorithm at work.. oho.. ^^

 [remark needed]

 

  http://sqlscope.wordpress.com/2012/02/
  Very useful for understanding statistics.
  
  - When statistics are updated automatically:
  after the first 500 changes, then on every 20% change after that.
  
 Looking up the table's max value and reflecting it
  Trace flags 2389 (asc) and 2390
  - For an ascending column, e.g. the latest date (values the statistics don't cover yet):
   SELECT *
   FROM inc
   WHERE data > '날짜'
   OPTION (QUERYTRACEON 2389, RECOMPILE)
    If a covering index exists whose leading column carries the statistics,
    the cost is not large...
  - If the column is not branded ascending, use 2390:
   SELECT *
   FROM inc
   WHERE data > '날짜'
   OPTION (QUERYTRACEON 2390, RECOMPILE)
   
 On SQL2012 SP1+ use the new function instead.
  http://judydba.tistory.com/391
  sys.dm_db_stats_properties now includes the number of modified rows.
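
  A quick sketch of reading that counter on SQL2012 SP1+, using the dbo.inc table from the example above:

  SELECT s.name, p.last_updated, p.rows, p.rows_sampled, p.modification_counter
    FROM sys.stats s
   CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) p
   WHERE s.object_id = OBJECT_ID('dbo.inc')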

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
##. Consulting cases


10. An enormous number of counters
 
 An enormous number of counters + small file sizes -_-  collection intervals of 3 or 4 seconds??
 Then analyzing the mountain of files with charts~
 
 - around 80-90 performance counters
 - for spot-type issues~ use a short collection interval
  [remark needed]
 
11. Checking the query optimizer's optimization-phase info (cumulative metadata)

 http://technet.microsoft.com/ko-kr/library/ms175002.aspx
 See also) http://sqlintegrity.blogspot.kr/2012/07/sql-server-optimization-using-dmv.html
 See also) http://beyondrelational.com/modules/1/justlearned/0/tips/8666/sql-server-find-how-many-times-hints-has-been-used-in-your-queries.aspx
 Oho... there's even this... https://www.simple-talk.com/sql/t-sql-programming/dmvs-for-query-plan-metadata/
 
 final cost
  - the total number of optimizations.
  - the average estimated cost of an optimized plan (in internal cost units).

 select *
 from sys.dm_exec_query_optimizer_info
 
12. Whoa.... Lock Requests/sec at 11,946,739
 
 When Lock Requests/sec climbs....
 - large volumes of data are being accessed
 - the lock isolation level is high
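
 The same counter can be read from inside SQL Server; a quick sketch:

 SELECT object_name, counter_name, instance_name, cntr_value
   FROM sys.dm_os_performance_counters
  WHERE counter_name = 'Lock Requests/sec' -- cntr_value is cumulative; sample twice and diff for a per-second rate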
 
 
13. On clustered servers, use sp_server_diagnostics to check system health
 Runs every 20 seconds and spends about 1 second of CPU.
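
 It can also be run by hand (SQL2012+); a minimal sketch:

 EXEC sp_server_diagnostics @repeat_interval = 0 -- 0 = return one snapshot and exit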
 

14. So indexes have a "disabled" state -_-?

 Disabled index... where can you even see this?
 Disabling Indexes
 http://msdn.microsoft.com/en-us/library/ms177406(v=sql.105).aspx
 http://msdn.microsoft.com/en-us/library/ms173760.aspx
 
 select
  is_disabled, *
 from sys.indexes
 
15. MEMOBJ_INDEXSTATSMGR memory object
 missing index statistics
 
16. Trace flag 2371 issues
 TF 2371 scales the statistics-update threshold with data volume. sql2008 r2 sp2+
 http://blogs.msdn.com/b/saponsqlserver/archive/2011/09/07/changes-to-automatic-update-statistics-in-sql-server-traceflag-2371.aspx
 http://www.sqlservercentral.com/blogs/steve_jones/2012/10/22/trace-flag-2371-and-statistics/
 
 Overnight window
  An issue came up when updating statistics through the Maintenance Plan Wizard:
  the default sampling rate is 100% <-- seriously..? Yet another reason not to use the UI.

 During service hours (sampling at 0.39% caused row-estimate problems...)
  skewed statistics
  query recompiles
  expensive execution plans generated
 
 Sizing the statistics-update window matters.
 If you run with a custom sampling rate, you should also weigh TF 2371 (sketch below).
[remark needed]
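
 A sketch of taking explicit control instead of the wizard's default (table name hypothetical):

 UPDATE STATISTICS dbo.BigTable WITH SAMPLE 10 PERCENT

 DBCC TRACESTATUS (2371, -1) -- is TF 2371 active instance-wide?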


17. Aggregate function performance differences
 MIN takes the first value (reading from the front of an ordered index) and returns NULL on an empty set.
 MAX takes the last value.
 
 
18. getdate() -> switch to assigning it to a variable.. (sketch below)
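
 A minimal sketch of the substitution (table/column names hypothetical):

 DECLARE @now datetime = GETDATE()

 SELECT COUNT(*)
   FROM dbo.Orders
  WHERE order_date >= DATEADD(DAY, -1, @now)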

19. Getting the distinct count and the overall aggregate in one pass..
 
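 One scan for both aggregates; a sketch with hypothetical names:

 SELECT COUNT(DISTINCT customerid) AS distinct_customers
      , COUNT(*) AS total_rows
   FROM dbo.Orders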

Posted by 보미아빠

CREATE LOGIN SQLTAG_GOD WITH PASSWORD = 'P@ssw0rd' -- create the login
GO
USE MASTER
GO
CREATE USER SQLTAG_GOD FOR LOGIN SQLTAG_GOD  -- create the user
GO
ALTER SERVER ROLE SYSADMIN ADD MEMBER SQLTAG_GOD
GO

USE [master]
GO
DENY CONNECT SQL TO SQLTAG_GOD
GO
ALTER LOGIN SQLTAG_GOD DISABLE
GO

USE [DBA]
GO
CREATE USER SQLTAG_GOD FOR LOGIN SQLTAG_GOD  -- create the user
GO
-- ALTER ROLE DB_OWNER ADD MEMBER SQLTAG_GOD
GO

use dba
go
if object_id ('tblx') is not null
drop table tblx
go

create table tblx
(idx int)
go

insert into tblx values (1)
go

if object_id('usp_dynamic_querytraceon') is null
exec ('create proc usp_dynamic_querytraceon as select 1 ')
go

use dba
go


alter proc usp_dynamic_querytraceon
--with execute as owner -- for static SQL, EXECUTE AS OWNER alone is enough.
as
execute as login ='sqltag_god'
exec ('SELECT count(*) cnt FROM dbo.tblx  OPTION (QUERYTRACEON 8649)')
-- for ad-hoc statements, a plan guide does the trick.
-- for dynamic SQL, a user holding IMPERSONATE permission can run it via EXECUTE AS LOGIN.
go


CREATE LOGIN sida WITH PASSWORD = '1234' -- create the login
GO
USE MASTER
GO
CREATE USER sida FOR LOGIN sida  -- create the user
GO
USE dba
GO
CREATE USER sida FOR LOGIN sida  -- create the user
GO
GRANT EXECUTE TO sida
go

-- give sida the power to morph into [SQLTAG_GOD].........
use [master]
GO
GRANT IMPERSONATE ON LOGIN::[SQLTAG_GOD] TO sida -- WITH GRANT OPTION
GO
-- end

-- log in as sida from another session and run the script below.
use dba
go
exec usp_dynamic_querytraceon
go

 

-- with IMPERSONATE permission, a query like the following is also possible.

EXECUTE ('SELECT * FROM dbo.tblx  OPTION (QUERYTRACEON 8649)') AS LOGIN='sqltag_god'

 

If the sa login hasn't been renamed, sa works too.
 

Posted by 보미아빠

A bit disappointing.....

It still can't control things like buffer pool size limits or I/O performance.

That looks possible in 2014.

 

Still,

in 2012 the number of pools grows from 20 to 62,

and a hard CAP on CPU, alongside the contention-based max CPU usage, does look good.

Also, every memory area except the BP can be controlled.

 

For noisy neighbors, the answer still seems to be...... moving out (to another instance).........

RG's timeout setting looks like an interesting one.

 

Resource Governor in SQL Server 2012.docx

 


USE [MASTER]
GO

DROP DATABASE MB500
DROP DATABASE MB1000
DROP DATABASE MB200
GO


/****** OBJECT:  DATABASE [READAHEAD]    SCRIPT DATE: 2013-07-07 3:23:18 PM ******/
CREATE DATABASE MB500
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'MB500', FILENAME = N'F:\MSSQL\MB500.MDF' , SIZE = 1GB , MAXSIZE = UNLIMITED, FILEGROWTH = 1048576KB )
 LOG ON
( NAME = N'MB500_LOG', FILENAME = N'F:\MSSQL\MB500.LDF' , SIZE = 1GB , MAXSIZE = 2048GB , FILEGROWTH = 1048576KB )
GO
CREATE DATABASE MB1000
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'MB1000', FILENAME = N'F:\MSSQL\MB1000.MDF' , SIZE = 1GB , MAXSIZE = UNLIMITED, FILEGROWTH = 1048576KB )
 LOG ON
( NAME = N'MB1000_LOG', FILENAME = N'F:\MSSQL\MB1000.LDF' , SIZE = 1GB , MAXSIZE = 2048GB , FILEGROWTH = 1048576KB )
GO
CREATE DATABASE MB200
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'MB200', FILENAME = N'F:\MSSQL\MB200.MDF' , SIZE = 1GB , MAXSIZE = UNLIMITED, FILEGROWTH = 1048576KB )
 LOG ON
( NAME = N'MB200_LOG', FILENAME = N'F:\MSSQL\MB200.LDF' , SIZE = 1GB , MAXSIZE = 2048GB , FILEGROWTH = 1048576KB )
GO

USE MB500
GO

SELECT TOP 125000 ROW_NUMBER() OVER ( ORDER BY (SELECT NULL)) IDXINT
  , CAST('A' AS CHAR(3000)) CHAR3000
  , CAST(ABS(CHECKSUM(NEWID())) % 1000000 AS INT) CINT3
  INTO MB500
  FROM SYSOBJECTS A1
     , SYSOBJECTS A2
     , SYSOBJECTS A3
     , SYSOBJECTS A4
     , SYSOBJECTS A5
GO
USE MB1000
GO
SELECT TOP (125000*2) ROW_NUMBER() OVER ( ORDER BY (SELECT NULL)) IDXINT
  , CAST('A' AS CHAR(3000)) CHAR3000
  , CAST(ABS(CHECKSUM(NEWID())) % 1000000 AS INT) CINT3
  INTO MB1000
  FROM SYSOBJECTS A1
     , SYSOBJECTS A2
     , SYSOBJECTS A3
     , SYSOBJECTS A4
     , SYSOBJECTS A5
GO
USE MB200
GO
SELECT TOP (125000/5*2) ROW_NUMBER() OVER ( ORDER BY (SELECT NULL)) IDXINT
  , CAST('A' AS CHAR(3000)) CHAR3000
  , CAST(ABS(CHECKSUM(NEWID())) % 1000000 AS INT) CINT3
  INTO MB200
  FROM SYSOBJECTS A1
     , SYSOBJECTS A2
     , SYSOBJECTS A3
     , SYSOBJECTS A4
     , SYSOBJECTS A5
GO

-- test queries
SELECT COUNT(*) FROM MB200.DBO.MB200
SELECT COUNT(*) FROM MB500.DBO.MB500
SELECT COUNT(*) FROM MB1000.DBO.MB1000
GO

-- check buffer pool usage by database
SELECT TYPE, SUM(PAGES_KB) PAGES_KB
  FROM  SYS.DM_OS_MEMORY_CLERKS
 GROUP BY TYPE
 ORDER BY PAGES_KB DESC
GO

SELECT LEFT(CASE DATABASE_ID WHEN 32767 THEN 'RESOURCEDB' ELSE DB_NAME(DATABASE_ID) END, 20) AS DATABASE_NAME
  , COUNT(*)AS BUFFERED_PAGE_COUNT
  , COUNT(*) * 8192 / (1024 * 1024) AS BUFFER_POOL_MB
  FROM SYS.DM_OS_BUFFER_DESCRIPTORS
 GROUP BY DB_NAME(DATABASE_ID) ,DATABASE_ID
HAVING LEFT(CASE DATABASE_ID WHEN 32767 THEN 'RESOURCEDB' ELSE DB_NAME(DATABASE_ID) END, 20) LIKE 'MB%'
 ORDER BY 1
GO

USE MB200
GO

CREATE CLUSTERED INDEX CL_MB200 ON MB200 (IDXINT)
GO

USE MB500
GO

CREATE CLUSTERED INDEX CL_MB500 ON MB500 (IDXINT)
GO

USE MB1000
GO

CREATE CLUSTERED INDEX CL_MB1000 ON MB1000 (IDXINT)
GO

CREATE LOGIN MB1000 WITH PASSWORD = 'MB1000', CHECK_POLICY = OFF
CREATE LOGIN MB500 WITH PASSWORD = 'MB500', CHECK_POLICY = OFF
CREATE LOGIN MB200 WITH PASSWORD = 'MB200', CHECK_POLICY = OFF

USE MB1000
GO
CREATE USER MB1000 FOR LOGIN MB1000;
CREATE USER MB500 FOR LOGIN MB500;
CREATE USER MB200 FOR LOGIN MB200;
GO

USE MB500
GO
CREATE USER MB1000 FOR LOGIN MB1000;
CREATE USER MB500 FOR LOGIN MB500;
CREATE USER MB200 FOR LOGIN MB200;
GO

USE MB200
GO
CREATE USER MB1000 FOR LOGIN MB1000;
CREATE USER MB500 FOR LOGIN MB500;
CREATE USER MB200 FOR LOGIN MB200;
GO

CREATE RESOURCE POOL MB1000POOL
CREATE RESOURCE POOL MB500POOL
CREATE RESOURCE POOL MB200POOL


CREATE WORKLOAD GROUP MB1000GROUP USING MB1000POOL
CREATE WORKLOAD GROUP MB500GROUP USING MB500POOL
CREATE WORKLOAD GROUP MB200GROUP USING MB200POOL
GO

ALTER RESOURCE GOVERNOR RECONFIGURE
GO

USE MASTER
GO

CREATE FUNCTION CLASSIFIER_V1()
RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
 DECLARE @val varchar(32)
 SET @val = 'default';
 IF 'MB1000' = SUSER_SNAME()
  SET @val = 'MB1000GROUP';
 ELSE IF 'MB500' = SUSER_SNAME()
  SET @val = 'MB500GROUP';
 ELSE IF 'MB200' = SUSER_SNAME()
  SET @val = 'MB200GROUP';
 RETURN @val;
END
GO

ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION = dbo.CLASSIFIER_V1)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

SELECT * FROM sys.dm_resource_governor_configuration
SELECT * FROM sys.dm_resource_governor_resource_pools
SELECT * FROM sys.dm_resource_governor_workload_groups
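
-- The pools above are created with all defaults. A minimal sketch of the 2012 hard cap
-- mentioned earlier (the percentages are arbitrary):
ALTER RESOURCE POOL MB200POOL WITH (MAX_CPU_PERCENT = 50, CAP_CPU_PERCENT = 20)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO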


Posted by 보미아빠

If you read the article below, you can see which parts got faster in 2014 and which workloads benefit from those performance gains.

 

1)

A game's item table (short rows, updated thousands of times) should see a big performance gain.

It will be faster because there is no page-latch bottleneck, natively compiled procedures (compiled ahead of time to C DLLs) are used, context switching is minimized, and transaction log writes are batched. On the other hand, the error-handling logic and the amount you must control in code have grown considerably.

Still, for a game without a cache farm, it may well be the optimal solution.

 

2)

Pulling a game's friends list can also be optimized this way.

 

3) It feels like one box could handle the entire workload of a hit game........ but a lot of code conversion looks necessary.

 

 

https://www.simple-talk.com/sql/database-administration/exploring-in-memory-oltp-engine-(hekaton)-in-sql-server-2014-ctp1/

 

25 June 2013

The continuing drop in the price of memory has made fast in-memory OLTP  increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to  memory-optimised 'Hekaton' technology, but how you  balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.

Microsoft announced SQL Server 2014 at TechEd in early June, and the biggest news for SQL Server OLTP in years is the in-memory OLTP engine. This article is an introduction to the features and limitations of this new technology.

There is much to cover here. I will assume that you understand the current versions of SQL Server and so I will only discuss the differences.

Three important points about this new feature that should be stated up front:

  1. SQL Server 2014 will be a 1.0 version of this feature, and the development focus was on mainstream OLTP processing. Think light-weight transactional records processed with stored procedures. Many of the outlier OLTP features are not yet supported. Some of these gaps may be addressed by RTM, but some are not even planned until a later version.
  2. Hekaton can be leveraged inside of a normal SQL Server database. It does not require special hardware, nor does it require a separate database. The expectation is that you will migrate only a portion of your tables and stored procedures to Hekaton. That means your database will have a mix of standard and memory-optimized objects in the same database.
  3. There are customers who have been already running early releases in production with very good results.

CTP1 of SQL Server 2014 is planned for release at TechEd Europe in late June. Release of SQL Server 2014 is expected “in the second half of calendar year 2013.”

What is SQL Server In-memory OLTP Engine (Code-name: Hekaton)?

Before the announcement of SQL Server 2014 and the sessions at TechEd 2013 in June, very little information was available about project Hekaton. We knew it was a new technology that 1) allowed tables to be resident in memory, 2) allowed for machine-compiled stored procedures, and 3) was claimed to be very fast.

Now that we have more details, here is a full picture of the two basic features of what is now officially called SQL Server In-memory OLTP Engine.

Memory-optimized Tables

“Memory-optimized” tables (as opposed to standard tables, now called “disk-based” tables) are completely in memory.

Memory-optimized Storage

Memory-optimized tables do not use pages for storage. The record location mechanism employed is pointers stored in hash tables.

Optimistic Concurrency Control

There are new algorithms that process records without locking and blocking. First, since there are no pages, there are no page latches. Also, writes to memory-optimized tables use versioning similar to RCSI (read committed snapshot isolation) except that tempdb is not used for row versions. Every write has an associated ever-increasing transaction number that allows for straightforward handling of reads. Uncommitted records are stored in-memory but are not visible until the transaction commits, so there are no “dirty reads” of uncommitted data.

Native Stored Procedures

We have talked for years about Standard SQL Server stored procedures being “compiled”, but the end-product of the compilation is code that is then interpreted at run-time. Hekaton has the ability to take high-level code and translate it to C and then build a DLL. When the stored procedure is called at run-time, the compilation steps and associated CPU overhead have already been completed. The goal here is to do the same work with many fewer CPU instructions. Based on reported results, the savings are dramatic. Microsoft is talking about 10X to 25X performance gains overall.

Native stored procedures are “atomic” in the sense that they run in a single CPU until they complete. This reduces context-switching, which is very costly in terms of CPU cycles.

An Early Caveat

Be aware that the only tables that can be involved in a native stored procedure are memory-optimized tables. Disk-based tables are not allowed. Stored procedures that access both memory-optimized and disk-based tables are called “interop” procedures.

Other Features

Block Transaction Log Writes

Whereas disk-based tables write individual records to the transaction log, memory-optimized tables write what we’ll call consolidated records. If you wrote a hundred records to a disk-based table that had a single non-clustered index (and were updating a column in that index), you would write two hundred records to the transaction log. With a memory-optimized table and a best-case scenario, you would write exactly one record to the transaction log with all the required detail. (Each log block is 24KB, so depending on the size of the records being written and the number of records, it could write multiple records.)

In-memory Indexes

Disk-based tables store non-clustered indexes on disk. For many OLTP tables, the size of the indexes is larger than the size of the base table. Creation and updates to indexes are written to the transaction log which takes CPU cycles and often very large amounts of disk space.

With memory-optimized tables, non-clustered indexes are not persisted to disk, so they take up no disk space, and updates do not have to be written to the transaction log. When a SQL Server 2014 database is started up, indexes for memory-optimized tables are built at the time that a table is loaded into memory.

Streaming Data Files

Hekaton leverages filestream capabilities to stream data records to disk. Records are written as a continuous stream; there are no UPDATE records, only INSERT and DELETE records. INSERTs are stored in “data files” and DELETEs are stored in “delta files”.

Non-durable Tables

Memory-optimized tables can have durable schema and non-durable data meaning that on start-up, the table is recreated, but no data is loaded.

How could this be used? Besides the ETL scenario described below, the obvious use is ASP session state. This also could be used anywhere you use a #temp or @temp table and you know that only one process is using the table at a given time. The usage of all the other features in Hekaton is rather straightforward. I expect that the provision of non-durable tables is the feature that will be used the most creatively of all. People will use this in ways that the SQL Server team never imagined. (Initially, non-durable tables are only supported in native stored procedures. By RTM, the plan is to support non-durable tables in interop stored procedures as well.)
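
For illustration, a minimal schema-only table might look like this (syntax as of the 2014 CTPs; object names hypothetical):

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY); -- schema survives restart, data does not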

No Buffer Pool

In Hekaton, there is no buffer pool. All the data for memory-optimized tables is in memory. Note that there are buffer pool-related changes in SQL Server 2014 that allow you to use SSDs to augment the size of your buffer pool.

What Problems Does Hekaton Address?

Scenario 1: Write-intensive Audit Log

Assume you have a centralized audit logging table for an OLTP system. You’ve designed it with an integer or bigint clustered primary key which means all new records are written to the end of the table. This design allows every page to be filled completely without any page fragmentation.

With a very high number of cores (past 12 to 16) and a high-performance OLTP workload, page latches can cause write delays because each writer has to acquire a page latch before it can write a new record.

With memory-optimized tables, there are no pages, so there are no page latches. Also, the writes are so fast that it is much less likely that multiple writers would try to access the same record at once. In disk-based tables, the waits are often because writers are trying to access the same page, rather than the same record. Without pages, this is no longer an issue.

Partitioning a Large Table

An application’s audit log table is likely to be partitioned by time, say a single partition per week. Since Hekaton does not (initially) support partitioning, we need to find a creative solution. The recommendation here is to have the active partition in-memory and all other partitions in a partitioned disk-based table. A view can be created to abstract away the complexity of the split tables.

Say you have two memory-optimized tables: CurrentAuditA and CurrentAuditB. Initially, the application is writing to A until the time boundary is hit. Then it switches and writes to B. (The view must be updated at this time to include table B instead of table A.) After the switch, a simple INSERT/SELECT statement writes the records from memory-optimized table A to disk-based table Audit2013W26 representing the 26th week. When that table has been fully processed (PK, indexes), it can be switched in to the partitioned audit log table.
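
A sketch of such a view, with hypothetical object and column names:

CREATE VIEW dbo.AuditLog
AS
SELECT AuditId, AuditTime, Detail FROM dbo.CurrentAuditA   -- active, memory-optimized
UNION ALL
SELECT AuditId, AuditTime, Detail FROM dbo.AuditHistory;   -- partitioned, disk-based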

Scenario 2: Read-intensive Table

Consider a large reference table; say a list of drugs with details about quantities, prices, etc. In an on-line consumer order application, access to this table would be almost all reads with occasional writes. And reads on popular drugs could be very high. The pages that contain popular drugs would likely be cached in memory even with a disk-based table.

Because memory-optimized tables use hash tables and pointers directly to memory-resident records, simple reads are much faster. For a read on a disk-based table, even if the pages are already loaded into memory, the system has to traverse multiple layers in a B-tree index. If the index is non-clustered and not a covering index, then the actual record is accessed. This results in more operations to return the same data.

Scenario 3: Mixed Read-write Customer Table

In that on-line drug order system, the table that holds orders would receive a large amount of writes along with some updates, deletes, and also many select statements. If one process is trying to update the status of an order to in-process and another process is trying to update the quantity, these two processes could end up locking or blocking each other.

Scenario 4: ETL Staging Table

Traditionally, staging tables for ETL processing have used heaps (tables without clustered indexes) and minimal or bulk logging. This was the way to get the best throughput because it minimized the amount of processing that was required with each write.

Memory-optimized tables allow the option of schema-persistence without data-persistence. Writes to these tables don’t cause any transaction logs writes because data written here is transient. If the server crashed during processing, the data can be simply reloaded from the source.

Caveats and Recommendations

Caveats

Write-write Conflicts

If two processes try to write to the same record at the same time, the first will succeed and the second will fail because there is no locking. This requires changes to stored procedures to handle the failure. Whether that “failure” should be followed by a retry or an error should be evaluated on a case-by-case basis. Think through the implications of two writers updating the same record. Retry basically mimics last-writer-wins logic. That is often the simplest choice but not the best choice.
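
A retry sketch under those assumptions (41302 is the write-conflict error number; table and variable names hypothetical):

DECLARE @OrderId int = 1, @Retries int = 3;
WHILE @Retries > 0
BEGIN
    BEGIN TRY
        UPDATE dbo.Orders SET Status = 'InProcess' WHERE OrderId = @OrderId;
        BREAK;  -- success
    END TRY
    BEGIN CATCH
        SET @Retries -= 1;
        IF ERROR_NUMBER() <> 41302 OR @Retries = 0
        BEGIN
            THROW;  -- not a write-write conflict, or retries exhausted
        END
    END CATCH
END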

Statistics Updates

Native stored procedures do not recompile automatically based on statistics updates. This will have to be handled through scripting or manual processing.

Running Out of Memory

As you might expect, running out of memory in a memory-optimized database is a problem. If your tables grow to the point where you run out of memory, write activity stops, but read activity can continue. Even many SELECTs would fail, so your database is fundamentally inoperable. The important thing to remember is that durable in-memory tables are fully ACID-compliant, so you will not have data loss in committed records. The SQL Server team will continue to improve memory management before RTM.

Size Limitations

Record size is limited to 8,060 bytes. There are a number of creative solutions that should allow you to store your LOB data and long string or binary data in disk-based tables, but the important thing here is that this will require changes to your application.

Durable memory-optimized tables are limited to 512 GB. (Non-durable tables have no size limit.)

Foreign Keys and Check Constraints

My biggest disappointment with the initial version is that it does not support foreign key constraints or check constraints. For a database designer/developer, “let the application handle data quality” is like fingernails on a chalkboard.

IDENTITY, SEQUENCE

IDENTITY and SEQUENCE are not supported. Do a quick search on the web, and you’ll quickly find solutions like this one.

MERGE

Even interop-mode procedures do not support MERGE. The explanation is that the team concentrated on “mainstream” OLTP workloads, but this will cause grief for many of us that have widespread use of MERGE in our current code.

Other Limitations

More on the not-supported list:

  1. DML triggers
  2. Data types
    1. XML
    2. CLR
    3. LOB
  3. Schema changes are not supported in the same way as for disk-based tables.
  4. There are no heaps in memory-optimized tables. (This is not a bad thing, just interesting. It makes sense though. A heap is just a bag of records in a particular spot. Records in memory have to have pointers for look-up.)
  5. Compression is not supported.
  6. Range indexes are expected to be available in CTP2.

Recommendations

You likely will not want to put everything in memory-optimized tables. In a normal OLTP database, the 80-20 rule is more like 95-5. If you put effort into moving the dozens of tables that make up just 5% of your database’s processing load, your gain is minimal. Target the tables that are used most heavily. Partition large tables into hot and cold. Keep the hot data in memory-optimized tables and everything else in disk-based tables.

There are reasons to move even the cold part of a large table into memory-optimized tables. Memory-optimized tables do not persist indexes, so depending on the number of indexes on your big tables, you could save significant storage by migrating. Weigh the cost of memory against the fully-realized cost of disk storage. It may tip toward disk-based today, but as memory prices continue to drop, reevaluate.

Transaction Log Performance

Evaluate the implications of pumping maybe ten times as many transactions through your system and consider upgrading the storage for the transaction log.

Instrumentation

Consider how your instrumentation will need to change if transactions complete in microseconds instead of seconds or milliseconds.

References

SQL Server 2014: A Closer Look at SQL Server Blog

Edgenet Gain Real-Time Access to Retail Product Data with In-Memory Technology at SQL Server Blog

Microsoft Announces SQL Server 2014 at Visual Studio Magazine

SQL Server 2014: What’s vNew for vNext by Thomas LaRock

SQL Server 2014! by James Serra

 

Posted by 보미아빠

http://www.datamanipulation.net/sqlquerystress/sqlquerystressdownload.asp

http://sqlblog.com/blogs/adam_machanic/default.aspx

Posted by 보미아빠

SQL is just a sequential-processing application that looks like set processing.... Don't seduce second- and third-year developers with "sets, sets, sets" and talk them into trouble..

Below is a fun bit of string trickery. Take a look when you're bored.


IF OBJECT_ID ('USP_SPLITSTRING') IS NULL
EXEC('CREATE PROC USP_SPLITSTRING AS SELECT 1 ')
GO

ALTER PROC USP_SPLITSTRING
(
@INPUTV VARCHAR(MAX) ='1,2,3,0,'
, @STRINGCNT INT = 4 -- pass it in, or compute it internally with LEN.......
)
AS
SET NOCOUNT ON

DECLARE @I1 VARCHAR (10), @I2 VARCHAR (10), @I3 VARCHAR (10), @I4 VARCHAR (10), @I5 VARCHAR (10)
    ,@I6 VARCHAR (10), @I7 VARCHAR (10), @I8 VARCHAR (10), @I9 VARCHAR (10), @I10 VARCHAR (10)

IF @STRINGCNT <= 10
BEGIN
 SELECT
  @I1 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I2 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I3 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I4 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I5 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I6 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I7 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I8 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,   @I9 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)
 ,  @I10 = SUBSTRING(@INPUTV, 0, CHARINDEX(',', @INPUTV)), @INPUTV = SUBSTRING(@INPUTV, CHARINDEX(',', @INPUTV)+1, 90000000000000)

 SELECT *
   FROM (VALUES (@I1), (@I2), (@I3), (@I4), (@I5), (@I6), (@I7), (@I8), (@I9), (@I10)) V (INPUTV)
  WHERE INPUTV <> '' 
END ELSE BEGIN
 PRINT 'Keep looping with WHILE.... until it is done '
END
GO

EXEC USP_SPLITSTRING '1,2,2,0,1,32,4,2,12222222,11,', 10 -- turning these into rows takes a few hundred microseconds.
GO
EXEC USP_SPLITSTRING '1,2,2,0,1,32,4,2,12222222,11,32123,2,32,3,3,211123,', 16 -- keep looping until it's done~
GO

--1. call it with 10 parameters
--2. handle anything beyond that separately
--3. etc...

Posted by 보미아빠

SELECT P.NAME PACKAGE_NAME, O.NAME OBJECT_NAME, OBJECT_TYPE
  , OC.NAME OBJECT_COLUMNS_NAME, CASE COLUMN_TYPE WHEN 'DATA' THEN '=' when 'customizable' then '=(1)' ELSE '' END PREDICATE_SRC
  , OC.TYPE_NAME OBJECT_COLUMN_TYPE_NAME, OC.COLUMN_TYPE OBJECT_COLUMN_TYPE
  , MV.MAP_KEY MAP_VALUES_KEY, MV.MAP_VALUE MAP_VALUES_MAP_VALUE
  FROM SYS.DM_XE_PACKAGES P
  JOIN SYS.DM_XE_OBJECTS O
    ON P.GUID = O.PACKAGE_GUID
  LEFT JOIN SYS.DM_XE_OBJECT_COLUMNS OC
    ON O.NAME = OC.OBJECT_NAME
   AND O.PACKAGE_GUID = OC.OBJECT_PACKAGE_GUID
  LEFT JOIN SYS.DM_XE_MAP_VALUES MV
    ON OC.OBJECT_PACKAGE_GUID = MV.OBJECT_PACKAGE_GUID
   AND OC.TYPE_NAME = MV.NAME
--   AND OC.TYPE_NAME ='WAIT_TYPES'
 --  WHERE MV.NAME = '%RECOMPILE_CAUSE%'
 WHERE O.NAME LIKE '%file_read%'
 ORDER BY O.NAME
  , CASE OC.COLUMN_TYPE WHEN 'READONLY' THEN 1 WHEN 'CUSTOMIZABLE' THEN 2 WHEN 'DATA' THEN 3 ELSE NULL END 
  , OC.NAME
  , MV.MAP_KEY
OPTION (HASH JOIN)

SELECT DISTINCT OBJECT_TYPE FROM SYS.DM_XE_OBJECTS
SELECT * FROM SYS.DM_XE_OBJECTS WHERE NAME LIKE '%COMPILE%'
SELECT * FROM SYS.DM_XE_OBJECT_COLUMNS WHERE NAME LIKE '%COMPILE%'
SELECT * FROM SYS.DM_XE_OBJECTS WHERE OBJECT_TYPE ='ACTION'
SELECT * FROM SYS.DM_XE_OBJECTS WHERE OBJECT_TYPE ='PRED_SOURCE' ORDER BY NAME

declare @keyword varchar(100) = 'file'
select * from sys.dm_xe_objects where name like '%'+@keyword+'%' or description like '%'+@keyword+'%'
select * from sys.dm_xe_object_columns where name like '%'+@keyword+'%' or type_name like '%'+@keyword+'%'
select * from sys.dm_xe_map_values where name like '%'+@keyword+'%' or map_value like '%'+@keyword+'%'
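
-- Once an event is found this way, wiring it into a session is straightforward.
-- A sketch using the file_read event explored above (session name hypothetical):
CREATE EVENT SESSION file_read_demo ON SERVER
ADD EVENT sqlserver.file_read
 (ACTION (sqlserver.database_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer
GO
ALTER EVENT SESSION file_read_demo ON SERVER STATE = START
GO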

 

Posted by 보미아빠

cmemthread wait


When there are many CPUs and heavy CMEMTHREAD waits, enabling trace flag 8048 can reduce memory-allocation contention.

It changes the memory management policy from memory partitions per NUMA node (or SUMA, when NUMA is disabled in the BIOS or via trace flag) to partitions per CPU.

 

When allocating heap memory, SQL Server guards the critical section with a mutex / spin lock (a spin lock that gives up spinning incurs a back-off). So if SQL Server's allocation strategy is per NUMA node and there are 4 physical processor sockets, only 4 threads can allocate memory at the same time. Change the partitioning strategy to per CPU, and many threads (up to the core count) can allocate heap memory concurrently, so allocation contention can drop.

 

Even if you split tempdb into as many files as there are physical cores, splitting doesn't help when temp tables or worktables are rarely created concurrently. Likewise, this trace flag won't help if there isn't much concurrent memory allocation. Think about it: with 80 cores, would you split tempdb into 80 files? No; split only as far as actual concurrency requires. Splitting files helps when latch contention occurs, but when physical I/O is the problem, the proportional-fill algorithm can just add random I/O. Splitting tempdb reduces latch contention; it does not reduce random I/O on the physical disks.

 

The classic case for CMEMTHREAD contention like this is probably a heavy ad-hoc query workload. SQL Server needs memory allocations while building query plans, and when queries aren't parameterized, every ad-hoc query has to be compiled separately; that's when this type (CMEMTHREAD) of contention can appear. If those queries come from an ISV solution, you won't be able to rewrite them as parameterized queries, and in that case TF 8048 can be a powerful weapon.
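
Before flipping it, confirm the flag state and whether the dominant memory objects are actually node-partitioned (see the query below). TF 8048 itself is a startup flag (-T8048); DBCC TRACESTATUS just reports it:

DBCC TRACESTATUS (8048, -1)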

 

SELECT type, pages_in_bytes,
   CASE
    WHEN (0x20 = creation_options & 0x20) THEN 'Global PMO. Cannot be partitioned by CPU/NUMA Node. TF 8048 not applicable.'
    WHEN (0x40 = creation_options & 0x40) THEN 'Partitioned by CPU. TF 8048 not applicable.'
    WHEN (0x80 = creation_options & 0x80) THEN 'Partitioned by Node. Use TF 8048 to further partition by CPU' -- only this case benefits from the flag
    ELSE 'UNKNOWN'
   END
  FROM sys.dm_os_memory_objects
 ORDER BY pages_in_bytes DESC

 

 

Trace flag 8048 http://blogs.msdn.com/b/psssql/archive/2011/09/01/sql-server-2008-2008-r2-on-newer-machines-with-more-than-8-cpus-presented-per-numa-node-may-need-trace-flag-8048.aspx

Trace flag 8015 http://blogs.msdn.com/b/psssql/archive/2010/04/02/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers-and-memory-nodes.aspx

http://blogs.msdn.com/b/psssql/archive/2012/12/20/how-it-works-cmemthread-and-debugging-them.aspx

http://dba.stackexchange.com/questions/28756/sql-server-anyone-use-suma-trace-flag-8048-or-trace-flag-8015

Posted by 보미아빠

http://superuser.com/questions/152720/windows-7-how-to-boot-up-in-normal-mode-after-improper-shut-down

 

This has been quite the difficult question. There doesn't seem to be much of anything out there. The only viable solution out there seems to be the following.

WARNING: This can damage your system, use at own risk.

First run a command line window as administrator.

Start Menu > All Programs > Accessories > (right-click & "Run as administrator") Command prompt

next run the following commands

bcdedit /export C:\BCDbak

This will make an export of your Boot Configuration Data Store. You can import it with

bcdedit /import C:\BCDbak   
bcdedit /import C:\BCDbak /clean  

You can now try to disable recoverymode with the following command.

bcdedit /set {default} recoveryenabled No

This answer may work for you, but I certainly recommend that you invest in a UPS for this system.


EDIT: You can also display your configuration by running

bcdedit /enum

and

bcdedit /enum /v
Posted by 보미아빠

 

If you attend the study session on May 31, I'll walk you through it.

Try implementing it yourself first.

Posted by 보미아빠
