https://www.sql.kiwi/2013/01/optimizing-t-sql-queries-that-change-data/

Optimizing T-SQL queries that change data

Most tuning efforts for data-changing operations concentrate on the SELECT side of the query plan. Sometimes people will also look at storage engine considerations (like locking or transaction log throughput) that can have dramatic effects. A number of common practices have emerged, such as avoiding large numbers of row locks and lock escalation, splitting large changes into smaller batches of a few thousand rows, and combining a number of small changes into a single transaction in order to optimize log flushes.

This is all good, but what about the data-changing side of the query plan—the INSERT, UPDATE, DELETE, or MERGE operation itself—are there any query processor considerations we should take into account? The short answer is yes.

The query optimizer considers different plan options for the write-side of an execution plan, though there isn’t a huge amount of T-SQL language support that allows us to affect these choices directly. Nevertheless, there are things to be aware of, and things we can look to change.

Note: In this post I am going to use the term update (lower case) to apply to any operation that changes the state of the database (INSERT, UPDATE, DELETE, and MERGE in T-SQL). This is a common practice in the literature, and is used inside SQL Server too as we will see.

The Three Basic Update Forms

Query plans execute using a demand-driven iterator model, and updates are no exception. Parent operators drive child operators (to the right of the parent) to do work by asking them for a row at a time.

Take the following AdventureWorks INSERT for example:

DECLARE @T table (ProductID integer NOT NULL);

INSERT @T (ProductID) 
SELECT P.ProductID 
FROM Production.Product AS P;

Plan execution starts at the far left, where you can think of the green T-SQL INSERT icon as representing rows returned to the client. This root node asks its immediate child operator (the Table Insert) for a row. The Table Insert requests a row from the Index Scan, which provides one. This row is inserted into the heap, and an empty row is returned to the root node (if the INSERT query contained an OUTPUT clause, the returned row would contain data).

This row-by-row process continues until all source rows have been processed. Notice that the XML showplan output shows the Heap Table Insert is performed by an Update operator internally.
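
One way to see this for yourself (a sketch) is to capture the estimated plan as XML before running the statement:

SET SHOWPLAN_XML ON;
GO

DECLARE @T table (ProductID integer NOT NULL);

INSERT @T (ProductID) 
SELECT P.ProductID 
FROM Production.Product AS P;
GO

SET SHOWPLAN_XML OFF;
GO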

1. Wide, per-index updates

Wide (aka per-index) update plans have a separate update operator for each clustered and nonclustered index.

An example per-index update plan is shown below:
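
A DELETE of the following general shape can produce this plan. This is a sketch only: it uses the row-goal trick described at the end of this post, together with undocumented trace flag 8790 (also listed at the end of this post) to force the wide form on a test system:

DECLARE @n bigint = 9223372036854775807;

DELETE TOP (@n) 
FROM Production.TransactionHistory
OPTION (OPTIMIZE FOR (@n = 50), QUERYTRACEON 8790);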

This plan updates the base table using a Clustered Index Delete operator. This operator may also read and output extra column values necessary to find and delete rows from nonclustered indexes.

The iterative clustered index delete activity is driven by the Eager Table Spool on the top branch. The spool is eager because it stores all the rows from its input into a worktable before returning the first row to its parent operator (the Index Delete on the top branch). The effect is that all qualifying base table rows are deleted before any nonclustered index changes occur.

The spool in this plan is a common subexpression spool. It is populated once, then acts as a row source for multiple consumers. In this case, the contents of the spool are consumed first by the top-branch Index Delete, which removes index entries from one of the nonclustered indexes.

When the Sequence operator switches to asking for rows from the lower branch, the spool is rewound and replays its contents to delete rows from the second nonclustered index. Note that the spool contains the union of all columns required for all the nonclustered index changes.

2. Narrow, per-row updates

Narrow (aka per-row) updates have a single update operator that maintains the base table (heap or clustered index), and one or more nonclustered indexes. Each row arriving at the update operator updates all indexes associated with the operator before processing the next row. An example:

DECLARE @T table 
(
    pk integer PRIMARY KEY,
    col1 integer NOT NULL UNIQUE, 
    col2 integer NOT NULL UNIQUE
);

DECLARE @S table 
(
    pk integer PRIMARY KEY, 
    col1 integer NOT NULL,
    col2 integer NOT NULL);

INSERT @T (pk, col1)
SELECT
    S.pk,
    S.col1 
FROM @S AS S;

The execution plan viewed using Plan Explorer shows the nonclustered index maintenance explicitly. Hover over the Clustered Index Insert for a tooltip showing the names of the nonclustered indexes involved:

In SQL Server Management Studio, there is no information about the nonclustered index maintenance in the graphical plan or tooltips. We need to click on the update operator (Clustered Index Insert) and then check the Properties window:

The Object subtree shows the clustered index and two nonclustered indexes being maintained.

Cost-based choice

The query optimizer uses cost-based reasoning to decide whether to update each nonclustered index separately (per-index) or as part of the base table changes (per-row).

An example that updates one nonclustered index per-row and another per-index is shown below:

CREATE TABLE #T 
(
    pk integer IDENTITY PRIMARY KEY,
    col1 integer NOT NULL,
    col2 varchar(8000) NOT NULL DEFAULT ''
);

CREATE INDEX nc1 ON #T (col1);
CREATE INDEX nc2 ON #T (col1) INCLUDE (col2);

INSERT #T
    (col1)
SELECT
    N.n
FROM dbo.Numbers AS N
WHERE
    N.n BETWEEN 1 AND 251;

The combination strategy can be seen with Plan Explorer:

The details are also displayed in the SSMS Properties window when the Clustered Index Insert operator is selected.

3. Single-operator updates

The third form of update is an optimization of the per-row update plan for very simple operations. The cost-based optimizer still emits a per-row update plan, but a rewrite is subsequently applied to collapse the reading and writing operations into a single operator:

DECLARE @T AS TABLE 
(
    pk integer PRIMARY KEY, 
    col1 integer NOT NULL
);

-- Simple operations
INSERT @T (pk, col1) VALUES (1, 1000);
UPDATE @T SET col1 = 1234 WHERE pk = 1;
DELETE @T WHERE pk = 1;

The complete execution plans (and extracts from the XML plans) are shown below:

The UPDATE and DELETE plans have been collapsed to a Simple Update operation, while the INSERT is simplified to a Scalar Insert.

We can see the optimizer output tree with trace flag 8607:

DECLARE @T table
(
    pk integer PRIMARY KEY, 
    col1 integer NOT NULL
);

DBCC TRACEON (3604, 8607);

INSERT @T (pk, col1) VALUES (1, 1000) OPTION (RECOMPILE);
UPDATE @T SET col1 = 1234 WHERE pk = 1 OPTION (RECOMPILE);
DELETE @T WHERE pk = 1 OPTION (RECOMPILE);

DBCC TRACEOFF (3604, 8607);

The message window output for the INSERT statement shown above is:

*** Output Tree: (trivial plan) ***
PhyOp_StreamUpdate
    (INS TBL: @T, iid 0x1 as TBLInsLocator(QCOL: .pk ) REPORT-COUNT), {
        - QCOL: .pk= COL: Expr1002 
        - QCOL: .col1= COL: Expr1003 } 
    PhyOp_ConstTableScan (1) COL: Expr1002  COL: Expr1003 
        ScaOp_Const TI(int,ML=4) XVAR(int,Not Owned,Value=1)
        ScaOp_Const TI(int,ML=4) XVAR(int,Not Owned,Value=1000)

Notice the two physical operators: A Constant Scan (in-memory table) containing the literal values specified in the query; and a Stream Update that performs the database changes per row.

All three statements produce a similar optimizer tree output (and all use a stream update). We can prevent the single-operator optimization from being applied with undocumented trace flag 8758:

DECLARE @T table
(
    pk integer PRIMARY KEY, 
    col1 integer NOT NULL
);

DBCC TRACEON (8758);

INSERT @T (pk, col1) VALUES (1, 1000) OPTION (RECOMPILE);
UPDATE @T SET col1 = 1234 WHERE pk = 1 OPTION (RECOMPILE);
DELETE @T WHERE pk = 1 OPTION (RECOMPILE);

DBCC TRACEOFF (8758);

This exposes the expanded per-row update plans produced by the optimizer before the single-operator rewrite:

Single operator plans can be significantly more efficient than the multiple-operator form in the right circumstances, for example if the plan is executed very frequently.

Update plan optimizations

From this point forward, I’m going to use AdventureWorks DELETE examples, but the general points apply equally to INSERT, UPDATE, and MERGE.

The examples will not have a complicated SELECT component, because I want to concentrate on the update side of the plan rather than the reading side.

Per-row updates

It’s worth emphasising that narrow update plans are an optimization that is not available for every update query (except for Hekaton where they are the default).

The optimizer favours a per-row plan if the expected number of rows at the update operator is low. In the example below, the optimizer expects to delete 25 rows:
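
The query used here follows the general TOP form shown at the end of this post, with the row goal hinted to 25:

DECLARE @n bigint = 9223372036854775807;

DELETE TOP (@n) 
FROM Production.TransactionHistory
OPTION (OPTIMIZE FOR (@n = 25));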

This plan updates the base table (a clustered index in this case) and all nonclustered indexes in sequence per row. The row source here is an ordered scan of a nonclustered index, which means rows will not generally be presented in clustered index key order to the update operator.

An unordered stream like this tends to result in random I/O (assuming physical I/O is required). If the plan is expected to process only a small number of rows, the optimizer decides it is not worth adding extra operations to encourage a sequential I/O access pattern.

Unordered Prefetch

If the expected number of rows is a bit larger, the optimizer may decide to build a plan that applies prefetching to one or more update operators.

The basic idea is the same as ordinary prefetching (a.k.a. read-ahead) for scans and range seeks. The engine looks ahead at the keys of the incoming stream and issues asynchronous I/O for pages that the update operator will need soon.

Prefetching can help reduce the impact of the expected random I/O. An example of prefetching on the update operator is shown below for the same DELETE query with an expected cardinality of 50 rows:

The prefetching is unordered from the perspective of the clustered index. Neither SSMS nor Plan Explorer shows the prefetch information prominently.

In Plan Explorer, we need to have the With Unordered Prefetch optional column selected. To do this, switch to the Plan Tree tab, open the Column Chooser, and drag the column to the grid. The unordered prefetch property is ticked for the Clustered Index Delete operator in the screenshot below:

In SSMS, click on the Clustered Index Delete icon in the graphical plan, and then look in the Properties window:

If the query plan were reading from the clustered index (instead of a nonclustered index), the read operation would issue regular read-ahead, so there would be no point prefetching from the Clustered Index Delete as well.

The example below shows the expected cardinality increased to 100, where the optimizer switches from scanning the nonclustered index with unordered prefetching to scanning the clustered index with regular read-ahead. No prefetching occurs on the update operator in this plan:

Where the plan property With Unordered Prefetch is not set, it is simply omitted—it does not appear set to False as you might expect.

Ordered Prefetch

Where rows are expected to arrive at an update operator in non-key order, the optimizer might consider adding an explicit Sort operator. The idea is that presenting rows in key order will encourage sequential I/O for the index pages.

The optimizer weighs the cost of sorting against the expected benefits of avoiding random I/O. The execution plan below shows an example of a sort being added for this reason:

The Sort is on Transaction ID, the clustering key for this table. With the incoming stream now sorted, the update operator can use ordered prefetching.

The plan is still scanning the nonclustered index on the read side, so read-ahead issued by that operator does not bring in the clustered index pages needed by the update operator.

Ordered prefetching tends to result in sequential I/O, compared with the mostly random I/O of unordered prefetching. The With Ordered Prefetch column is also not shown by default by Plan Explorer, so it needs to be added as we did before:

Not every update operator with ordered prefetching requires a Sort. If the optimizer finds a plan that naturally presents keys in nonclustered index order, an update operator may still use ordered prefetching to pull pages that are about to be modified into memory before they are needed.

DML Request Sort

When the optimizer decides to explore a plan alternative that requires ordered input to an update operator, it will set the DML Request Sort property for that operator.

Plan Explorer shows this setting in the update operator tooltip:

SSMS shows With Ordered Prefetch and DML Request Sort in the Properties window:

Nonclustered index updates

Sorting and ordered prefetching may also be considered for the nonclustered indexes in a wide update plan:

This plan shows all three update operators with DML Request Sort set.

The Clustered Index Delete does not require an explicit sort because rows are being read (with internal ordering guarantees) from the Clustered Index Scan.

The two nonclustered indexes do require explicit sorting. Both nonclustered index update operators also use ordered prefetching; the clustered index update does not, because the prior scan automatically invokes read-ahead for that index.

The next example shows that each update operator is considered separately for the various optimizations:

That plan features unordered prefetching on the Clustered Index Delete (because the row source is a scan of a nonclustered index), an explicit Sort with ordered prefetching on the top branch Index Delete, and unordered prefetching on the bottom branch Index Delete.

The per-index and per-row update plan

Is the following a per-index or per-row update plan?

It appears to be a per-index plan because there are two separate update operators, but there is no blocking operator (Eager Spool or Sort) between the two.

This plan is a pipeline. Each row returned from the seek will be processed first by the Clustered Index Delete and then by the Index Delete. In that sense, both updates (clustered and nonclustered) are per-row.

Anyway, the important thing is not the terminology, it is being able to interpret the plan. Now that we know what the various optimizations are for, we can see why the optimizer chose the plan above over the seemingly equivalent, more efficient-looking narrow plan:

The narrow plan has no prefetching. The Seek provides read-ahead for the clustered index pages, but nothing prefetches the nonclustered index pages the plan might need from disk.

By contrast, the wide update plan has two update operators. The Clustered Index Delete has no prefetching for the same reason as in the narrow plan, but the Index Delete has unordered prefetching enabled.

So, although the smaller narrow plan looks like it ought to be more efficient, it might perform less well if nonclustered index pages are required from disk and unordered prefetching proves effective.

On the other hand, if all required nonclustered index pages are in memory at the time the query is executed, the wide plan might perform less well since it features an extra operator and has to perform work associated with prefetching (even though ultimately no physical reads are issued).

Performance Impacts

You may have noticed a lack of performance numbers (I/O statistics, elapsed times and so on) in this post. There is a good reason for this. The optimizations presented here are quite dependent on SQL Server configuration and hardware. I don’t want you drawing general conclusions based on the performance of the rather ordinary single spinning hard drive in my laptop, the 256MB of RAM I have allowed for my SQL Server 2012 instance, and some rather trivial AdventureWorks examples.

Which Optimizations are Good?

If the data structures you are changing are very likely to be in memory, and/or if you have a very fast random I/O system, the optimizer’s goal of minimizing random I/O and enhancing sequential I/O may not make much sense for you. The strategies the optimizer employs may well end up costing more than they save.

On the other hand, if your system has a much smaller buffer pool than database working set size, and your I/O system works very much faster with large sequential I/O than with smaller random I/O requests, you should generally find the optimizer makes reasonable choices for you, subject to the usual caveats about useful up-to-date statistics and accurate cardinality estimations.

If your system fits the optimizer’s model, at least to a first approximation, you will usually find that narrow update plans are best for low row-count update operations, unordered prefetching helps a bit when more rows are involved, ordered prefetching helps more, and explicit sorts before nonclustered index updates help most of all for the largest sets. That is the theory, anyway.

What are the Problems?

The problem I see most often is with the optimization that is supposed to help most: The explicit Sort before a nonclustered index update.

The idea is that sorting the input to the index update in key order promotes sequential I/O, and often it does. The problem occurs if the workspace memory allocated to the Sort proves to be too small to perform it entirely in memory. The memory grant is fixed based on cardinality estimation and row size estimates, and may be reduced if the server is under memory pressure at the time.

A Sort that runs out of memory spills whole sort runs to physical tempdb disk, often repeatedly. This is often not good for performance, and can result in queries that complete very much slower than if the Sort had not been introduced at all.

Note: SQL Server reuses the general Sort operator here — it does not have a Sort specially tuned for updates that could make a best effort to sort within the memory allocated, but never spill to disk.
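
One way to check whether such sorts are spilling (a sketch; the sort_warning Extended Event exists in SQL Server 2012 and later) is a lightweight event session:

-- Sketch: capture tempdb sort spills instance-wide (session name is arbitrary).
CREATE EVENT SESSION SortSpills ON SERVER
ADD EVENT sqlserver.sort_warning
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION SortSpills ON SERVER STATE = START;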

The narrow plan optimization can cause problems where it is selected due to a low cardinality estimate, where several nonclustered indexes need to be maintained, the necessary pages are not in memory, and the I/O system is slow at random reads.

The ordered and unordered prefetch options cause problems much more rarely. For a system with all data in memory, there is a small but possibly measurable impact due to the prefetch overhead (finding pages to consider read-ahead for, checking to see if they are already in the buffer pool, and so on). Even if no asynchronous I/O is ever issued by the worker, it still spends time on the task that it could spend doing real work.

What options are there?

The usual answer to deal with poor execution plan choices due to a system’s configuration or performance characteristics not matching the optimizer’s model is to try different T-SQL syntax, and/or to try query hints.

There are times when it is possible to rewrite the query to get a plan that performs better, and other times where rethinking the problem a bit can pay dividends. Various creative uses of partitioning, minimal logging, and bulk loading are possible in some cases, for example.

There are very few hints that can help with the update side of a plan that does not respond to the usual tricks, however. The two I use most often affect the optimizer’s cardinality estimation: OPTION (FAST n) and a trick involving TOP and the OPTIMIZE FOR query hint.

OPTION (FAST n) affects cardinality estimates in a plan by setting a final row goal, but its effects can be difficult to predict, and may not have much of an effect if the plan contains blocking operators. Anyway, the general idea is to vary the value of ‘n’ in the hint until the optimizer chooses the desired plan shape or optimization options as described in this post. Best of luck with that.

The idea with TOP is similar, but often tends to work better in my experience. The trick is to declare a bigint variable with the number of rows the query should return (or a very large value such as the maximum value of a bigint if all rows are required), use that as the parameter to a TOP, and then use the OPTIMIZE FOR hint to set a value for ‘n’ that the optimizer should use when considering alternative plans. This option is particularly useful for encouraging a narrow plan.

Most of the examples in this post used the TOP trick in the following general form:

DECLARE @n bigint = 9223372036854775807;

DELETE TOP (@n) 
FROM Production.TransactionHistory
OPTION (OPTIMIZE FOR (@n = 50));

Magic trace flags

There are trace flags that affect the optimizations discussed in this post. They are undocumented and unsupported, and may only work on some versions, but they can be handy to validate a performance analysis, or to generate a plan guide for a particularly crucial query. They can also be fun to play with on a personal system to explore their effects. The main ones that affect the optimizations described here are:

2332 : Force DML Request Sort
8633 : Enable prefetch only
8744 : Disable prefetch only
8758 : Disable rewrite to a single operator plan
8790 : Force a wide update plan
8795 : Disable DML Request Sort
9115 : Disable optimized NLJ and prefetch

These trace flags can all be manipulated using the usual DBCC commands or enabled temporarily for a particular query using the OPTION (QUERYTRACEON xxxx) hint.
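
For example, a minimal sketch (undocumented flags, so test systems only; the WHERE clause is illustrative):

-- Session-wide:
DBCC TRACEON (8790);
-- ...run the update...
DBCC TRACEOFF (8790);

-- Or per-query:
DELETE Production.TransactionHistory
WHERE TransactionID BETWEEN 100000 AND 100049
OPTION (QUERYTRACEON 8790);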

Final Thoughts

The optimizer has a wide range of choices when building the writing side of an update plan. It may choose to update one or more indexes separately, or as part of the base table update. It may choose to explicitly sort rows as part of a per-index update strategy, and may elect to perform unordered or ordered prefetching as well.

As usual, these decisions are made based on costs computed using the optimizer’s model. This model may not always produce plans that are optimal for your hardware (and as usual the optimizer’s choices are only as good as the information you give it).

If a particular update query is performance critical for you, make sure cardinality estimates are reasonably accurate. Test with alternate syntax and/or trace flags to see if an alternative plan would perform significantly better in practice. It is usually possible to use documented techniques like TOP clauses and OPTIMIZE FOR hints to produce an execution plan that performs well. Where that is not possible, more advanced tricks and techniques like plan guides may be called for.

Thanks for reading. I hope this post was interesting and provided some new insights into update query optimization.

Posted by 보미아빠

https://sqlgeekspro.com/server-principal-owns-one-or-more-server-roles/

Error 15141: The Server Principal Owns One or More Server Roles

Today we are going to discuss how to resolve “Error 15141: The Server Principal Owns One or More Server Roles and cannot be dropped”.

Introduction

If you are a SQL Server DBA, you may encounter Error 15141 when trying to delete a login. First of all, let me show you what the error message looks like in SSMS when you try to drop a login.

In the example screenshot above, we are trying to delete the login ‘Admin’. I have observed that some DBAs do not read the error message carefully and start beating around the bush.

There are very similar error messages when dropping logins; you can refer to the following links:

Error 15141: The Server Principal Owns One or More Availability Groups

Error 15141: The Server Principal Owns One or More Endpoints

Error 15434: Could not drop login as the user is currently logged in

To emphasize: if you read it carefully, the error message above clearly states that the principal (or login) owns one or more server roles, which is what prevents you from dropping it.

Cause Of the Error 15141: The Server Principal Owns One or More Server Roles

When a server principal or login owns a server role, you cannot drop the login unless you first transfer ownership of the server role to another login. In fact, SQL Server never allows you to drop a login that owns any object, so it throws this error to prevent you from dropping the server principal or login.

Resolution

To resolve the error, first identify the server roles owned by the login or server principal. Then transfer ownership of each server role to ‘sa’ or another login, per your organization’s standards.

1. Query to Identify the Server Roles the Login owns

SELECT sp1.name AS ServerRoleName,
       sp2.name AS RoleOwnerName
FROM sys.server_principals AS sp1
JOIN sys.server_principals AS sp2
    ON sp1.owning_principal_id = sp2.principal_id
WHERE sp2.name = 'Admin' --Change the login name

Sample Output:

The example above shows that the login ‘Admin’ owns two server roles. (By contrast, if the login owned one or more database roles, SQL Server would allow the login to be dropped, but not the database user.) Now we’ll change the ownership.

2. Query to Change the Server Role Owner:

USE [master]
GO
ALTER AUTHORIZATION ON SERVER ROLE :: [ServerRole-Test] TO [sa] --Change The ServerRole Name and login Name
GO
ALTER AUTHORIZATION ON SERVER ROLE :: [AnotherServerRole-Test] TO [sa] --Change The ServerRole Name and login Name
GO
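
As a sketch, you can also generate the ALTER AUTHORIZATION statements dynamically for every server role the login owns (the login name 'Admin' and the new owner [sa] are placeholders):

SELECT 'ALTER AUTHORIZATION ON SERVER ROLE :: '
       + QUOTENAME(sp1.name) + ' TO [sa];'
FROM sys.server_principals AS sp1
JOIN sys.server_principals AS sp2
    ON sp1.owning_principal_id = sp2.principal_id
WHERE sp2.name = 'Admin' --Change the login name
  AND sp1.type = 'R'     --'R' = server role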

3. Drop the Login:

USE [master]
GO
DROP Login [Admin] --change the login Name

Query Result:

As can be seen, the drop command now completes successfully.

Conclusion:

To summarize, the error “Error 15141: The Server Principal Owns One or More Server Roles and cannot be dropped” occurs when the login or server principal you are trying to drop owns one or more server roles. Don’t take any knee-jerk action; you are trying to resolve the issue. First, list the server roles the login owns. Then change their ownership to [sa] or another login, per your organization’s standards. Finally, drop the login. Note that this method will still allow you to delete the login even if it owns database roles, so, to repeat, be very careful while working in production systems.

Posted by 보미아빠

Splitting a subnet

Uncategorized / 2025. 4. 24. 16:44

* Configuring subnets in the nCloud.com Classic Path service

Since 10.36.192.0/19 is assigned, we can configure a VPC that can allocate 2^(32-19) = 8192 IPs.

The subnet must be divided for a multi-zone configuration, and since nCloud has two zones, KR-1 and KR-2, splitting the assigned range in two
gives two subnets that can each hold 4096 IPs.
Since 2^(32-20) = 4096, each is a /20 CIDR (Classless Inter-Domain Routing) block.

The first subnet starts at 10.36.192.0/20,
and the second subnet starts at 10.36.208.0/20.

The starting address of the second subnet can be calculated as follows:
4096 (IPs in the subnet) / 256 (one 8-bit octet) = an increment of 16 in the third octet.
Adding 16 to the third-octet value 192 gives 208, so the second subnet is 10.36.208.0/20.

Configure the two subnets with these values.
The calculator at https://www.site24x7.com/ko/tools/ipv4-subnetcalculator.html makes subnet calculation easier.

Posted by 보미아빠

Backup tuning

Uncategorized / 2025. 3. 12. 11:57

Find the optimal values through step-by-step performance testing.

Items
---------------------------------------------------------
compression
maxtransfersize (can grow in 64 KB units up to 4 MB; 1 MB is the default)
buffercount (the default is a calculated value)
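
As a minimal sketch (the database name and backup path are placeholders, and the values shown are starting points, not recommendations), the three options combine on a single BACKUP statement like this:

BACKUP DATABASE MyDB
TO DISK = N'D:\Backup\MyDB.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304, -- 4 MB, the maximum (the default is 1 MB)
     BUFFERCOUNT = 64,          -- the default is a value calculated by SQL Server
     STATS = 5;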

 

Checking the option values after a backup
---------------------------------------------------------
dbcc traceon (3605, -1)
dbcc traceon (3213, -1)

Note: the trade-off is higher CPU usage, and disk I/O can also increase, so tune carefully so that production queries are not affected. A backup that used to take about 7 hours now finishes in 2 hours (tens of TB).

Tuning results
---------------------------------------------------------

Posted by 보미아빠

intellij shortcut

Uncategorized / 2025. 2. 25. 19:00

Find file or class: Ctrl+N
Search everywhere: double Shift
Build: Ctrl+F9
Run: Shift+F10
Debug: Shift+F9
Debug (pick configuration): Shift+Alt+F9
Stop debugging: Ctrl+F2
Debug step over: F8
Debug step out: Shift+F8
Debug step into: F7
Program input arguments: Run > Edit Configurations
Go to declaration (caller): Ctrl+B
Go to implementation: Ctrl+Alt+B
Font size: Alt+Shift+-, =, 0
Collapse / expand: Ctrl+Shift+-, +
Maximize editor: Ctrl+Shift+F12
Comment: Ctrl+/
Auto-indent (reformat code): Ctrl+Alt+L
Extend selection: Ctrl+W; shrink: Ctrl+Shift+W
Add bookmark: F11 for a plain bookmark, Ctrl+F11 for a numbered bookmark
Go to bookmark: Shift+F11 and pick from the list; for numbered bookmarks, Ctrl+number
Delete all bookmarks: Shift+F11, then Ctrl+A and delete, or delete them one by one

The API PostMapping endpoints are gathered in the controller, and the actual implementations are in the service.

Posted by 보미아빠

-- Examples --

exec xp_readerrorlog 0, 1, N'failed', N'Naver', '2025-01-08 00:00:00.000', '2025-01-08 01:00:00.000'

exec xp_readerrorlog 0, 1, null, null, '2025-01-08 00:00:00.000', '2025-01-08 01:00:00.000'

exec xp_readerrorlog 0, 1, N'failed', N'Naver', '2025-01-08 00:00:00.000'

exec xp_readerrorlog 0, 1, N'failed'

exec xp_readerrorlog 0, 1

exec xp_readerrorlog

First argument: log file number. 0 is the current log, 1 is one cycle back, 2 is two cycles back, 3 is three cycles back.

Second argument: 1 for SQL Server process errors, 2 for SQL Server Agent process errors.

Third argument: Unicode filter keyword, e.g. N'failed'.

Fourth argument: a second Unicode filter keyword (filters the rows already matched by the third argument), e.g. N'Naver' (failures that also contain the keyword Naver).

Fifth argument: log search start date.

Sixth argument: log search end date.

Posted by 보미아빠


Msg 4326, Level 16, State 1, Line 237
The log in this backup set terminates at LSN 363000000085600001, which is too early to apply to the database. A more recent log backup that includes LSN 363000000090400001 can be restored.

Msg 3013, Level 16, State 1, Line 237
RESTORE LOG is terminating abnormally.

 

Restore a log backup taken *after* this one.

If this is the 2 o'clock backup, try restoring the 3 o'clock backup.

For log backups to restore successfully, the LSNs must form a continuous chain, in order.

Posted by 보미아빠

# Connection settings (replace with values for your environment)
$dbServerIpOrDomain = 'domain'
$dbServerPort = '1433'
$dbName = 'master'
$dbServerAccountId = 'id'
$dbServerAccountPass = 'pass'

$query = 
@"
select @@servername Servername , @@version Version
; 
"@

# Runs a query over System.Data.SqlClient; pass -isRead for SELECTs (results are printed), omit it for DML.
function SqlQuery 
{
    param (
        [parameter(Mandatory=$true)]  [string] $server, 
        [parameter(Mandatory=$true)]  [string] $port,
        [parameter(Mandatory=$true)]  [string] $database, 
        [parameter(Mandatory=$true)]  [string] $id, 
        [parameter(Mandatory=$true)]  [string] $pass, 
        [parameter(Mandatory=$true)]  [string] $query,
        [parameter(Mandatory=$false)] [switch] $isRead = $false , 
        [parameter(Mandatory=$false)] [int]    $queryTimeout = 0 ,
        [parameter(Mandatory=$false)] [int]    $connTimeout = 5 ,
        [parameter(Mandatory=$false)] [string] $appName = "PowerShell"  
    ) 
    try
    {
        $substringLen = 100
        if ($query.Length -lt $substringLen)
        {
            $substringLen = $query.Length
        }
        $querySubstring = $query.substring(0, $substringLen)
        
        $conn = New-Object System.Data.SQLClient.SQLConnection
        $conn.ConnectionString = "server=$($server),$($port);database=$($database);User Id=$($id);Password=$($pass);Connect Timeout=$($connTimeout);Application Name=$($appName)"
        $conn.Open()
        $cmd = New-Object System.Data.SQLClient.SQLCommand
        $cmd.Connection = $conn
        $cmd.CommandText = $query
        $cmd.CommandTimeout=$queryTimeout
        if($isRead)
        {
            $ds=New-Object system.Data.DataSet
            $da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
            $da.fill($ds) | Out-Null
            Write-Host ($ds.Tables | Format-Table | Out-String)
        }
        else
        {
            $cmd.ExecuteNonQuery() | Out-Null
        }
        $conn.Close()

        Write-Host "SUCCESS, SERVER : $($server), QUERY : $($querySubstring) ..."
    }
    catch 
    {
        Write-Host "FAILED, SERVER : $($server), QUERY : $($querySubstring) ..."
        throw "FAILED SERVER : $($server), ERROR MESSAGE : $($_)"
    }
}

# Diagnostics first, then run the test query.
Clear-Host
ipconfig /all
Resolve-DnsName $dbServerIpOrDomain
Test-NetConnection $dbServerIpOrDomain -Port $dbServerPort
SqlQuery -server $dbServerIpOrDomain -port $dbServerPort -database $dbName -id $dbServerAccountId -pass $dbServerAccountPass -query $query -isRead:$true
Posted by 보미아빠

multi-subnet ag

Uncategorized / 2024. 11. 14. 00:34

# Querying DNS returns two IPs with a TTL of 1200.
Resolve-DnsName lissql1

#PS C:\Users\dba> Resolve-DnsName lissql1
#Name                                           Type   TTL   Section    IPAddress                                
#----                                           ----   ---   -------    ---------                                
#lissql1.foo.company.local                A      1200  Answer     100.107.61.251                            
#lissql1.foo.company.local                A      1200  Answer     100.107.62.51                             



# Import the module.
Import-Module FailoverClusters  



# Find the Network Name resource.
Get-ClusterResource -Cluster avconsole-clu1



# Apply the settings below to the agsql1_lissql1 Network Name.
Get-ClusterResource -Cluster avconsole-clu1 -Name "agsql1_lissql1" | Get-ClusterParameter
Get-ClusterResource -Cluster avconsole-clu1 -Name "agsql1_lissql1" | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0
Get-ClusterResource -Cluster avconsole-clu1 -Name "agsql1_lissql1" | Set-ClusterParameter -Name HostRecordTTL -Value 30



# Restart the resource and start the cluster group for the settings to take effect.
Stop-ClusterResource -Cluster avconsole-clu1 -Name "agsql1_lissql1"
Start-ClusterResource -Cluster avconsole-clu1 -Name "agsql1_lissql1"
Start-Clustergroup -Cluster avconsole-clu1 -Name "agsql1"



# Final check: as below, there should now be one IP with a TTL of 30 seconds.
Resolve-DnsName lissql1

#PS C:\Users\dba> Resolve-DnsName lissql1
#
#Name                                           Type   TTL   Section    IPAddress                                
#----                                           ----   ---   -------    ---------                                
#lissql1.foo.company.local                A      30    Answer     100.107.61.251      



# Fail over the AG. (Wait 30 seconds and check the Resolve-DnsName lissql1 response.)

Set the following value on the AD DNS server. The default is 180 seconds.

*************************************************************** 

Set-DnsServerDsSetting -PollingInterval 30

Get-DnsServerDsSetting

Posted by 보미아빠
