When you move the mdf/ldf files of a database that was restored WITH NORECOVERY over to a new server and then try to recover it:

Msg 4333, Level 16, State 1, Line 32
The database cannot be recovered because the log was not restored.
Msg 3013, Level 16, State 1, Line 32
RESTORE DATABASE is terminating abnormally.


When you move the mdf/ldf files of a database restored WITH NORECOVERY to a new server and then try to keep restoring transaction logs on top of it:

Msg 3446, Level 16, State 2, Line 18
The primary log file is not available for database 'agtest' (6:0). The log cannot be backed up.
Msg 3013, Level 16, State 1, Line 18
RESTORE LOG is terminating abnormally.


These errors can also occur simply because the service account has no permissions on the mdf/ldf files.

The messages are rather misleading.....


When restoring this way (this is the scenario you end up with for an SI backup), the file id order returned by

select * from sysaltfiles where dbid = 5 

and by

restore filelistonly from disk = '' 

must be the same.
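
A minimal way to put the two lists side by side; the backup path below is a placeholder, not from the original post:

-- File id order currently registered for the database (dbid = 5 in the post)
SELECT fileid, name, filename
FROM master.dbo.sysaltfiles
WHERE dbid = 5
ORDER BY fileid;

-- File id order inside the backup set; the FileId column must line up with fileid above
RESTORE FILELISTONLY FROM DISK = N'c:\temp\agtest_full.bak';   -- placeholder path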


CREATE DATABASE [agtest2] ON  PRIMARY 

( NAME = N'agtest2', FILENAME = N'c:\temp\agtest2.mdf' , SIZE = 10000KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )

 LOG ON 

( NAME = N'agtest2_log', FILENAME = N'c:\temp\agtest2_log.ldf' , SIZE = 10000KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)

GO

USE [master]

GO


-- to make the file id order match the source

ALTER DATABASE [agtest2] ADD FILE ( NAME = N'agtest2_1', FILENAME = N'C:\Temp\agtest2_1.ndf' , SIZE = 3072KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO


Create a fake database like this on SQL Server 2008, recover it on the 2014 instance,

then shut the server down and swap the files in.
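
A minimal sketch of the swap, assuming the [agtest2] fake database and the c:\temp paths above (the post stops the whole service; taking only the database offline is shown here as one alternative, not the author's exact procedure):

-- Sketch only
USE [master];
GO
ALTER DATABASE [agtest2] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
-- At the OS level, overwrite c:\temp\agtest2.mdf, agtest2_log.ldf and agtest2_1.ndf
-- with the files brought over from the source server (file id order must match).
ALTER DATABASE [agtest2] SET ONLINE;
GO
-- If the copied files came from a NORECOVERY restore, continue with RESTORE LOG from here.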


Posted by 보미아빠

resource link

Uncategorized / 2016.06.12 10:50


Posted by 보미아빠

IF OBJECT_ID('TBLX') IS NOT NULL

DROP TABLE TBLX

GO


CREATE TABLE TBLX (

    C1 VARCHAR(100), 

    CJSON NVARCHAR(MAX) CHECK ( ISJSON ( CJSON ) = 1 ) 

)

GO
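
As a quick sanity check, the ISJSON CHECK constraint above rejects anything that is not valid JSON; a minimal sketch (the literal values are only illustrations):

INSERT INTO TBLX (C1, CJSON) VALUES ('ok',  N'{"SPID":51}')     -- passes ISJSON
INSERT INTO TBLX (C1, CJSON) VALUES ('bad', N'not valid json')  -- violates the CHECK constraint
GO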


;WITH CTE_P AS 

(

SELECT DISTINCT LASTWAITTYPE 

FROM MASTER.DBO.SYSPROCESSES 

)

INSERT INTO TBLX (C1, CJSON) 

SELECT 

LASTWAITTYPE 

, SYSPROCESSESJSON =

(

SELECT SPID, KPID, WAITTYPE

FROM MASTER.DBO.SYSPROCESSES C

WHERE P.LASTWAITTYPE = C.LASTWAITTYPE

FOR JSON AUTO

)

FROM CTE_P P

GO


UPDATE TBLX 

SET CJSON = '[{"SPID":6,"KPID":10348,"WAITTYPE":"AKW="},{"SPID":15,"WAITTYPE":"AKW="}]'

WHERE C1 = 'LAZYWRITER_SLEEP'


SELECT * 

FROM TBLX A

CROSS APPLY OPENJSON (CJSON)

WITH (

   SPID BIGINT

   , KPID BIGINT

   , WAITTYPE VARCHAR(100)

) AS J







https://sqlwithmanoj.com/2015/06/01/exportconvert-table-or-sql-query-data-to-json-string-format-sql-server-2016-part-1/


Starting from the link above and reading through the series helps a lot.



When the JSON column holds an array of rows, you cannot create an index on the JSON column itself.

"Indexing JSON" simply means creating a regular index on a computed column, as the DBO.CUSTOMER example below shows.



escape character 


static string EscapeForJson(string s) {
  string quoted = System.Web.Helpers.Json.Encode(s);
  return quoted.Substring(1, quoted.Length - 2);
}


 public static string cleanForJSON(string s)
    {
        if (s == null || s.Length == 0) {
            return "";
        }

        char         c = '\0';
        int          i;
        int          len = s.Length;
        StringBuilder sb = new StringBuilder(len + 4);
        String       t;

        for (i = 0; i < len; i += 1) {
            c = s[i];
            switch (c) {
                case '\\':
                case '"':
                    sb.Append('\\');
                    sb.Append(c);
                    break;
                case '/':
                    sb.Append('\\');
                    sb.Append(c);
                    break;
                case '\b':
                    sb.Append("\\b");
                    break;
                case '\t':
                    sb.Append("\\t");
                    break;
                case '\n':
                    sb.Append("\\n");
                    break;
                case '\f':
                    sb.Append("\\f");
                    break;
                case '\r':
                    sb.Append("\\r");
                    break;
                default:
                    if (c < ' ') {
                        t = "000" + String.Format("X", c);
                        sb.Append("\\u" + t.Substring(t.Length - 4));
                    } else {
                        sb.Append(c);
                    }
                    break;
            }
        }
        return sb.ToString();
    }
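
If the escaping can be done on the SQL Server side rather than in client code, SQL Server 2016's STRING_ESCAPE is another option; a minimal sketch (the literal is just an illustration):

SELECT STRING_ESCAPE(N'line1' + NCHAR(10) + N'say "hi" c:\temp', 'json') AS escaped
-- returns: line1\nsay \"hi\" c:\\temp
GO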





-- Case-sensitive: JSON property names in the path (e.g. '$.PHONE') must match the case used in the data


IF OBJECT_ID ('CUSTOMER') IS NOT NULL

DROP TABLE CUSTOMER

GO



CREATE TABLE DBO.CUSTOMER( 

    ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,

    NAME NVARCHAR(50), DETAIL NVARCHAR(MAX))

GO

INSERT INTO DBO.CUSTOMER ( NAME, DETAIL )

SELECT TOP 200000 NEWID(),

 REPLACE('{"ADDRESS":{"STATE":"KA","COUNTRY":"INDIA"},

 "PHONE":"@PHONE"}',

 '@PHONE', 100000000-ROW_NUMBER() OVER (ORDER BY SC1.OBJECT_ID))

FROM SYS.ALL_COLUMNS SC1

        CROSS JOIN SYS.ALL_COLUMNS SC2

GO 


SET STATISTICS IO ON 

GO


SELECT *

FROM DBO.CUSTOMER

WHERE JSON_VALUE(DETAIL,'$.PHONE') = '99890000'



ALTER TABLE DBO.CUSTOMER

ADD PHONENUMBER AS JSON_VALUE(DETAIL,'$.PHONE')


SELECT TOP 10 * FROM DBO.CUSTOMER 


CREATE INDEX IX_CUSTOMER_PHONENUMBER

 ON DBO.CUSTOMER(PHONENUMBER)

GO

 

SELECT *

FROM DBO.CUSTOMER

WHERE JSON_VALUE(DETAIL,'$.PHONE') = '99890000'

GO


--(1 row(s) affected)

--Table 'CUSTOMER'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


SELECT * FROM CUSTOMER WHERE PHONENUMBER = '99890000'

GO


--(1 row(s) affected)

--Table 'CUSTOMER'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.



select switchoffset (ctime, '+09:00') ctime, ceventname, j.wait_type, j.duration, j.batch_text, j.statement, j.current_state, cjsondata

from txelog a

cross apply openjson (cjsondata)

with (

   duration bigint

   , batch_text varchar(max) 

   , statement varchar(max)

   , wait_type varchar(100)

   , current_state varchar(100)

) as j

order by cTime desc 

Posted by 보미아빠

Sysinternals procmon can do this.


(The original post showed a screenshot of procmon saving its log to a PML file and analyzing it.)

Task Manager's Resource Monitor would also work, but it has no way to save a log, so procmon seems the better choice.

It lets you monitor exactly when and how much was used.

Filtering is flexible too....





Posted by 보미아빠


공분산.xlsx





install.packages("data.table")

library("data.table")


# alternative ways to load the CSV (use one of them)
DF <- fread("C:\\R\\2\\example_studentlist.csv", data.table=F)   # data.table::fread, returned as a data.frame

DF <- read.csv("C:\\R\\2\\example_studentlist.csv")              # base R (read.csv has no data.table argument)


# like sp_help: show the structure of the data frame

str(DF)

# list of columns

ls(DF)


# view the whole data set

DF


# detach: remove the data frame from the search path

detach(DF)


# attach: put the data frame on the search path

attach (DF)


# what is currently on the search path

search()


# mean

mean(height, na.rm=T)


# median

median(height, na.rm=T)


# range (minimum and maximum)

range(height, na.rm=T)


# quartiles

quantile(height,na.rm=T)


# interquartile range -- the span between the 25% and 75% quantiles

# i.e. the quartiles computed around the median: InterQuartile Range

IQR(height, na.rm=T)



# mean, median, Q1 and Q3 at a glance

summary(height, na.rm=T)


#box plot

boxplot(height)


# correlation coefficient

cor(height, weight)


# Pearson product-moment correlation test

cor.test(height, weight)


# correlation matrix for just three of the columns

cor(DF[,c(3,7,8)])

DF


# Multivariate functions like cor (and var) need missing values (NA) handled explicitly,
# so na.rm= does not work here (var does seem to accept it; the book's explanation on page 213 is confusing).
# The following call fails:

cor(weight, height, na.rm=T)


# This works instead, and has the same meaning as na.rm=T:

cor(weight, height, use="complete.obs")


# make a second copy of the table to test missing values

DF2<-DF


# set two of the values to NA

DF2

DF2[2,7]<- NA

DF2[4,8]<- NA

DF2


detach(DF)

attach (DF2)

search()


# if any of the values is NA, the result just comes back NA...

cor(height, weight)

cor(weight, height, use="complete.obs") #전체 행 제

cor(weight, height, use="pairwise.complete.obs") #피어슨이 사용한 방법 해당 백터만 제거 

cor(weight, height, use="everything") # na.rm=F 와 같은 값 NA값도 포함해 구함 그러니 값이 안나오겠지 


# variance and covariance

var(height, na.rm=T)

var(height, weight, na.rm=T)

cov(height, weight, use="complete.obs")


var(DF2[,c(3,7,8)], na.rm=T)

DF2


# standard deviation (takes a single variable)

sd(height, na.rm=T)


# standardization (z-scores)

scale(height)

# do it by hand and compare:

# standardizing means shifting the mean to 0 and then dividing by the standard deviation

(height - mean(height, na.rm=T))/sd(height, na.rm=T)

mean(height)


# coefficient of variation

# used to compare spread between variables whose means and units differ

# a larger coefficient of variation means the values are more widely dispersed

sd(height, na.rm=T)/mean(height, na.rm=T)

sd(weight, na.rm=T)/mean(weight, na.rm=T)

#> sd(height, na.rm=T)/mean(height, na.rm=T)

#[1] 0.0476805

#> sd(weight, na.rm=T)/mean(weight, na.rm=T)

#[1] 0.1736455

# ah, so weight is spread more widely than height.............


# clear the console: Ctrl-L

# remove all variables



###############################################################################

# EX 06

###############################################################################

rm(list=ls())

search()

detach (DF)

detach(DF2)


DF<-read.csv("C:\\R\\2\\example_salary.csv", stringsAsFactors=T, na="-")

DF<-read.csv("C:\\R\\2\\example_salary.csv", na="-")

DF<-read.csv("C:\\R\\2\\example_salary.csv")


search()

rm(DF)

detach(DF)

str(DF)


# check the column names

colnames(DF)

# rename the columns

colnames(DF) <- c("age", "salary", "specialSalary", "workingTime", "numberOfWorker", "career", "sex")


# put the data frame on the search path

attach(DF)

# mean

Mean <- mean(salary, na.rm=T)

Mean

# median

Mid <- median(salary, na.rm=T)

Mid

# range

Range <- range(salary, na.rm=T)

Range

# look up the matching observation

w <- which(DF$salary==4064286)

DF[w,]

# quartiles

Qnt <- quantile(salary, na.rm=T)

Qnt



###############################################################################

# EX 07

###############################################################################


rm(list=ls())

search()

detach (DF)

detach(DF2)


DF<-read.csv("C:\\R\\2\\example_salary.csv", stringsAsFactors=F, na="-")

DF<-read.csv("C:\\R\\2\\example_salary.csv", na="-")

DF<-read.csv("C:\\R\\2\\example_salary.csv")


# peek at the first rows (like TOP)

head(DF, 5)

str(DF)


# check the column names

colnames(DF)

# rename the columns

colnames(DF) <- c("age", "salary", "specialSalary", "workingTime", "numberOfWorker", "career", "sex")

# compute the mean salary by sex


# libraries needed below

install.packages("ggplot2")

library("ggplot2")

install.packages("reshape2")

library("reshape2")


str(DF)


# tapply works like GROUP BY: mean salary by sex

temp <- tapply(DF$salary,DF$sex, mean, na.rm=T)

temp

melt<-melt(temp)

ggplot(melt, aes(x=Var1, y=value, fill=Var1))+geom_bar(stat="identity")





########################### basic plotting

pie(temp)

pie(temp,col=rainbow(10)) 

pct <- round(temp/sum(temp)*100,1)

names(pct)

lab <- paste(names(temp),"\n",pct,"%")

pie(temp,main="Average annual salary by sex",col=rainbow(10),cex=0.8, labels=lab )

pie(pct)


pie(temp,main="Average annual salary by sex",col=rainbow(10),cex=0.8, labels=lab )

par(new=T)

pie(temp,radius=0.6, col="white", labels=NA, border=NA)


barplot(temp,main="Average annual salary by sex",col=rainbow(10), space=0.1,ylim=c(0,3000000), cex.name=0.7 )

barplot(temp,main="Average annual salary by sex",col=rainbow(10), space=0.1,xlim=c(0,3000000),  cex.name=0.7, horiz=T)

########################### basic plotting


# this throws an error; needs looking into

savePlot("a.png", type="png")


# standard deviation by sex: the gap is big here too

sd <- tapply(DF$salary,DF$sex, sd, na.rm=T)

sd


# range by sex:

rg <- tapply(DF$salary,DF$sex, range, na.rm=T)

rg


# mean monthly salary by career length

temp <- tapply(DF$salary,DF$career, mean, na.rm=T)

temp


melt <- melt(temp)

ggplot(melt, aes(x=Var1, y=value, group=1)) +geom_line(colour="skyblue2", size=2)  +coord_polar() + ylim(0,max(melt$value))


# standard deviation

temp <- tapply(DF$salary,DF$career, sd, na.rm=T)

temp


# range by career length

temp <- tapply(DF$salary,DF$career, range, na.rm=T)

temp


# groups receiving the lowest salaries

a1 <-DF[which(DF$salary==1172399),]

a2 <-DF[which(DF$salary==1685204),]

a3 <-DF[which(DF$salary==1117605),]

a4 <-DF[which(DF$salary==1245540),]

a5 <-DF[which(DF$salary==1548036),]


list <-list(a1,a2,a3,a4,a5)

list



###############################################################################

# EX 08

###############################################################################

rm(list=ls())

detach (DF)

search()


install.packages("ggthemes")

DF<-read.csv("C:\\R\\2\\example_cancer.csv", stringsAsFactors=F, na="-")

str(DF)



attach(DF)

mean(age)

summary(age)


boxplot(age)

boxplot(age,range=1.5)

grid


boxplot(age,range=3)

distIQR<-IQR(age,na.rm=T)

distIQR


posIQR <- quantile(age,probs = c(0.25,0.75), na.rm=T)

posIQR

posIQR <- quantile(age,probs = c(0.1,0.9), na.rm=T)

posIQR


DownWhisker <- posIQR[[1]] -distIQR*1.5

UpWhisker <- posIQR[[2]] +distIQR*1.5


DownWhisker; UpWhisker

Outlier <-subset(DF, subset = (DF$age < DownWhisker | DF$age > UpWhisker))

Outlier



###############################################################################

# EX 09

###############################################################################

rm(list=ls())

detach (DF)

search()


DF<-read.csv("C:\\R\\2\\example_salary.csv", stringsAsFactors=F, na="-")

head(DF,5)

head(DF)


# check the column names

colnames(DF)

# rename the columns

colnames(DF) <- c("age", "salary", "specialSalary", "workingTime", "numberOfWorker", "career", "sex")

# compute the mean salary by sex


head(DF)

# standardize

Scale <- scale(DF$salary)

Scale

head(Scale,10)

# add the standardized values to the data frame

DF <-cbind(DF,scale = Scale)

str(DF)

g1<-ggplot(DF,aes(x=scale,y=age))

g2<-geom_segment(aes(yend=age),xend=0)

g3<-g1+g2+geom_point(size=7,aes(colour=sex, shape=career))+theme_minimal()

g3



DF

install.packages("combinat")

library("combinat")

ncol(combn(45,6))   # number of 6-number combinations from 45 (combn returns one combination per column, so count columns, not rows)

720/48


install.packages("gtools")

library("gtools")

1/nrow(combinations(45,6))

nrow(combinations(6,4))



# binomial distribution: probability of exactly 9 successes in 10,000 trials

dbinom(9, 10000, 0.0009)



Posted by 보미아빠

use tempdb 

go


if object_id('tblx') is not null

drop table tblx 

go

 

create table tblx 

(idx int identity(1,1)

,c1 char(200))

go


insert into tblx default values 

go 1000


declare @in1 int = 1

declare @in2 int = 3

declare @in3 int = 4

declare @in4 int = 2


select b.*

from (values (@in1), (@in2), (@in3), (@in4)) as a(invalue)

inner loop join tblx b 

on b.idx = a.invalue


select * 

from tblx 

where idx in (@in1, @in2, @in3, @in4) 

go





For the IN query, the optimizer goes through internal processing such as a merge interval and collapses duplicate values down to a single value.

With the INNER LOOP JOIN against a VALUES list, however, you have to guarantee uniqueness of the values yourself to get the same result, so be careful.

Making the order explicit, as in the query below, looks like the best approach.



select b.*

from (values (@in1, 1), (@in2, 2), (@in3, 3), (@in4, 4)) as a(invalue, inOrder)

join tblx b 

on b.idx = a.invalue

order by inOrder 




Posted by 보미아빠

perfmon

Uncategorized / 2016.06.08 14:00

Apply the settings below on the remote computer.


https://automationspecialist.wordpress.com/2011/10/10/troubleshoot-perfmon-exe-shows-error-message-unable-to-connect-to-machine/


The account the program connects with (the corresponding account on the remote computer) must be a member of

Computer Management / Groups / Performance Log Users and Performance Monitor Users.

Being a member of Administrators also works.


Enable the Remote Registry service.


Register the remote computer in the hosts file under

C:\Windows\System32\drivers\etc


Open the firewall:

NetLogon Service (NP-In)              port no : 445

Port 139 can stay closed.


Run a test program like the one below.

The sample program is as follows.



using System;

using System.Collections.Generic;

using System.Linq;

using System.Text;

using System.Diagnostics;

using System.Security.Principal;

using System.Threading;

using System.Runtime.InteropServices;


namespace CPUCounterTest

{

    class Program

    {

        public const int LOGON32_LOGON_INTERACTIVE = 2;

        public const int LOGON32_LOGON_NEW_CREDENTIALS = 9;

        public const int LOGON32_LOGON_SERVICE = 3;

        public const int LOGON32_PROVIDER_DEFAULT = 0;


        [DllImport("advapi32.dll", CharSet = CharSet.Auto)]

        public static extern bool LogonUser(

            String lpszUserName,

            String lpszDomain,

            String lpszPassword,

            int dwLogonType,

            int dwLogonProvider,

            ref IntPtr phToken);


        [DllImport("kernel32.dll", CharSet = CharSet.Auto)]

        public extern static bool CloseHandle(IntPtr handle);


        static void Main(string[] args)

        {

            //AD01047147  al01023274

            IntPtr userHandle = new IntPtr(0);



            string username = "Naver";

            string password = "";

            string hostname = "al01023274";

            // Even in a workgroup environment, the user name and password must match on both machines.



            LogonUser(username, hostname, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref userHandle);

            

            WindowsIdentity identity = new WindowsIdentity(userHandle);

            WindowsImpersonationContext context = identity.Impersonate();

            PerformanceCounterCategory cat = new PerformanceCounterCategory("Processor", hostname);

            List<PerformanceCounter> counters = new List<PerformanceCounter>();

            foreach (string instance in cat.GetInstanceNames())

                counters.Add(new PerformanceCounter("Processor", "% Processor Time", instance, hostname));

            for (int i = 0; i < 10000; i++)

            {

                foreach (PerformanceCounter counter in counters)

                    Console.Write(counter.NextValue() +" ");

                Console.WriteLine();

                Thread.Sleep(800);

            }


            context.Undo();

        }

    }

}





Posted by 보미아빠

http://www.sqlskills.com/blogs/paul/most-common-latch-classes-and-what-they-mean/

 

 

 

 

[Edit 2016: Check out my new resource – a comprehensive library of all wait types and latch classes – see here.]

Back in May I kicked off a survey about prevalent latches on SQL Server instances across the world (see here). It’s taken me quite a while to get around to collating and blogging about the results, but here they are at last!

I got information back for almost 600 servers, and if you remember, I gave you some code to run that would output the top non-page latches that are being waited on during LATCH_XX waits. Non-page latches are those that are neither PAGELATCH_XX (waiting for access to an in-memory copy of a data-file page) nor PAGEIOLATCH_XX (waiting for a data file page to be read into memory from disk).

Every non-page data structure in SQL Server that must be thread-safe (i.e. can be accessed by multiple threads) must have a synchronization mechanism associated with it – either a latch or a spinlock. I blogged about spinlocks last year, and they're used for some data structures that are accessed so frequently, and for such a short time, that even the expense of acquiring a latch is too high. Access to all other data structures is controlled through a latch.

Using wait statistics analysis (see here), if you're seeing LATCH_EX or LATCH_SH wait types as one of the top 3-4 prevalent wait types, contention for a latch might be contributing to performance issues. If that is the case, you'll need to use sys.dm_os_latch_stats to figure out what the most prevalent latch is – using code similar to that below (with some example output):

/*============================================================================
  File:     LatchStats.sql
 
  Summary:  Snapshot of Latch stats
 
  SQL Server Versions: 2005 onwards
------------------------------------------------------------------------------
  Written by Paul S. Randal, SQLskills.com
 
  (c) 2015, SQLskills.com. All rights reserved.
 
  For more scripts and sample code, check out
 
  You may alter this code for your own *non-commercial* purposes. You may
  republish altered code as long as you include this copyright and give due
  credit, but you must obtain prior permission before blogging this code.
   
  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
============================================================================*/
 
WITH [Latches] AS
    (SELECT
        [latch_class],
        [wait_time_ms] / 1000.0 AS [WaitS],
        [waiting_requests_count] AS [WaitCount],
        100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
        ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
    FROM sys.dm_os_latch_stats
    WHERE [latch_class] NOT IN (
        N'BUFFER')
    AND [wait_time_ms] > 0
)
SELECT
    MAX ([W1].[latch_class]) AS [LatchClass],
    CAST (MAX ([W1].[WaitS]) AS DECIMAL(14, 2)) AS [Wait_S],
    MAX ([W1].[WaitCount]) AS [WaitCount],
    CAST (MAX ([W1].[Percentage]) AS DECIMAL(14, 2)) AS [Percentage],
    CAST ((MAX ([W1].[WaitS]) / MAX ([W1].[WaitCount])) AS DECIMAL (14, 4)) AS [AvgWait_S]
FROM [Latches] AS [W1]
INNER JOIN [Latches] AS [W2]
    ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum]
HAVING SUM ([W2].[Percentage]) - MAX ([W1].[Percentage]) < 95; -- percentage threshold
GO
LatchClass                        Wait_S  WaitCount  Percentage  AvgWait_S
--------------------------------  ------  ---------  ----------  ---------
LOG_MANAGER                       221.43  4659       45.81       0.0475
ACCESS_METHODS_HOBT_VIRTUAL_ROOT  199.56  7017       41.28       0.0284
FGCB_ADD_REMOVE                   35.17   1047       7.27        0.0336
DBCC_OBJECT_METADATA              26.85   256490     5.55        0.0001

You can also see the latch class of the latches being waited for in the resource_description column of sys.dm_os_waiting_tasks DMV when the wait_type column is LATCH_XX.

The survey results are based on running this code, for latch classes that are the top latch class on 10 or more servers.

Just like the wait statistics survey results from 2010, I’m not surprised by these results as we see these over and over on client systems.

For the remainder of this post I’m going to list the 10 latch classes represented in the data above in descending order and explain what they mean. Some of this information is from my knowledge of the internals and some of it is from catching the point that latches are acquired using Extended Events and callstack dumping.

Note that when I’m describing a latch class, there may be many instances of that particular latch class. For instance, an FGCB_ADD_REMOVE latch exists for each filegroup in each database attached to the instance.

  • 164: ACCESS_METHODS_DATASET_PARENT
  • 83: ACCESS_METHODS_SCAN_RANGE_GENERATOR
    • These two latches are used during parallel scans to give each thread a range of page IDs to scan. The LATCH_XX waits for these latches will typically appear with CXPACKET waits and PAGEIOLATCH_XX waits (if the data being scanned is not memory-resident). Use normal parallelism troubleshooting methods to investigate further (e.g. is the parallelism warranted? maybe increase ‘cost threshold for parallelism’, lower MAXDOP, use a MAXDOP hint, use Resource Governor to limit DOP using a workload group with a MAX_DOP limit. Did a plan change from index seeks to parallel table scans because a tipping point was reached or a plan recompiled with an atypical SP parameter or poor statistics? Do NOT knee-jerk and set server MAXDOP to 1 – that’s some of the worst advice I see on the Internet.)
  • 80: ACCESS_METHODS_HOBT_COUNT
    • This latch is used to flush out page and row count deltas for a HoBt (Heap-or-B-tree) to the Storage Engine metadata tables. Contention would indicate *lots* of small, concurrent DML operations on a single table.
  • 72: LOG_MANAGER
    • If you see this latch it is almost certainly because a transaction log is growing because it could not clear/truncate for some reason. Find the database where the log is growing and then figure out what’s preventing log clearing using: SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'yourdbname'
  • 34: TRACE_CONTROLLER
    • This latch is used by SQL Trace for myriad different things, including just generating trace events. Contention on this latch would imply that there are multiple traces on the server tracing lots of stuff – i.e. you’re over-tracing.
  • 33: DBCC_MULTIOBJECT_SCANNER
    • This latch appears on Enterprise Edition when DBCC CHECK* commands are allowed to run in parallel. It is used by threads to request the next data file page to process. Late last year this was identified as a major contention point inside DBCC CHECK* and there was work done to reduce the contention and make DBCC CHECK* run faster. See KB article 2634571 and Bob Ward’s write-up for more details.
  • 16: ACCESS_METHODS_HOBT_VIRTUAL_ROOT
    • This latch is used to access the metadata for an index that contains the page ID of the index’s root page. Contention on this latch can occur when a B-tree root page split occurs (requiring the latch in EX mode) and threads wanting to navigate down the B-tree (requiring the latch in SH mode) have to wait. This could be from very fast population of a small index using many concurrent connections, with or without page splits from random key values causing cascading page splits (from leaf to root).
  • 16: FGCB_ADD_REMOVE
    • FGCB stands for File Group Control Block. This latch is required whenever a file is added or dropped from the filegroup, whenever a file is grown (manually or automatically), when recalculating proportional-fill weightings, and when cycling through the files in the filegroup as part of round-robin allocation. If you’re seeing this, the most common cause is that there’s a lot of file auto-growth happening. It could also be from a filegroup with lots of file (e.g. the primary filegroup in tempdb) where there are thousands of concurrent connections doing allocations. The proportional-fill weightings are recalculated every 8192 allocations, so there’s the possibility of a slowdown with frequent recalculations over many files.
  • 15: DATABASE_MIRRORING_CONNECTION
    • This latch is involved in controlling the message flow for database mirroring sessions on a server. If this latch is prevalent, I would suspect there are too many busy database mirroring sessions on the server.
  • 10: NESTING_TRANSACTION_FULL
    • This latch, along with NESTING_TRANSACTION_READONLY, is used to control access to transaction description structures (called an XDES) for parallel nested transactions. The _FULL is for a transaction that’s ‘active’, i.e. it’s changed the database (usually for an index build/rebuild), and that makes the _READONLY description obvious. A query that involves a parallel operator must start a sub-transaction for each parallel thread that is used – these transactions are sub-transactions of the parallel nested transaction. For contention on these, I’d investigate unwanted parallelism but I don’t have a definite “it’s usually this problem”. Also check out the comments for some info about these also sometimes being a problem when RCSI is used.

I hope you found this interesting and it helps you out with your performance troubleshooting!

Posted by 보미아빠

sql 2016 ssms

Uncategorized / 2016.06.03 13:59

https://msdn.microsoft.com/en-us/library/mt238290.aspx


Switching the URL to ko-kr downloads a different version (the left screenshot in the original post), so instead find Korean at the bottom of the English page (the right screenshot) and download that.




Posted by 보미아빠

Chi-square test

Uncategorized / 2016.05.30 19:14

 

Statistical analysis for clinical research

http://www.slideshare.net/mothersafe/ss-16208377

 

Chi-square distribution critical values

http://ezstat.snu.ac.kr/textbook_sources/chapter_21.pdf

 

Chi-square test

http://dermabae.tistory.com/173

 

Chi-square distribution tables

http://ai-times.tistory.com/396

http://www.6025.co.kr/statistics/ka.asp

 

Doing it in R

http://www.slideshare.net/yoonani/09-30176401

 

A large p-value means that, assuming the null hypothesis is true, an outcome like the one observed is quite likely to occur.

So if p-value > α (the significance level), you accept (fail to reject) the null hypothesis; for example, p = 0.32 against α = 0.05 gives no grounds to reject it.
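
For reference, the statistic behind the linked pages is the ordinary Pearson chi-square; a minimal summary of the formula and decision rule in standard notation (not taken from the post):

\chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i},
\qquad \text{reject } H_0 \text{ at level } \alpha \iff \chi^2 > \chi^2_{1-\alpha,\,df} \iff p < \alpha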

 

 





Covariance and correlation coefficient formulas

http://terms.naver.com/entry.nhn?docId=1109629&cid=40942&categoryId=32204


 

 

Posted by 보미아빠
