XSLT statement use...
Hello,
Can anyone please explain the meaning of the following statement?
- <xsl:template match="GMTLogTransactionLine[TransactionType='0']">
<xsl:apply-templates select="following-sibling::GMTLogTransactionLine[1][not(TransactionType='I')][not(TransactionType='N')][not(TransactionType='NA')]" /> </xsl:template>
1) How does the XSLT processor know which template it should call when we use apply-templates?
2) What is the use of the following-sibling axis?
I searched everywhere but have not found a clear answer.
regards
Kulwinder
<xsl:template match="GMTLogTransactionLine[TransactionType='0']">
http://www.w3schools.com/xsl/el_template.asp
<xsl:apply-templates select="following-sibling::GMTLogTransactionLine[1][not(TransactionType='I')][not(TransactionType='N')][not(TransactionType='NA')]" />
http://www.w3schools.com/xsl/el_apply-templates.asp
Do check the links given above for the exact meaning of xsl:template and xsl:apply-templates.
The links also have examples (with output) so that you get a clear view of things.
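For question 1: apply-templates does not call a template by name; the processor takes each node chosen by select and picks the template rule whose match pattern best fits that node (falling back to the built-in rules). For question 2: following-sibling::GMTLogTransactionLine[1] selects the nearest later sibling with that element name, and the extra [not(...)] predicates then filter that single node; so if the nearest sibling has type I, N or NA, nothing is processed at all. A rough Python sketch of the node selection (illustrative only, not a real XSLT engine; element names taken from the question):

```python
import xml.etree.ElementTree as ET

XML = """
<Log>
  <GMTLogTransactionLine><TransactionType>0</TransactionType></GMTLogTransactionLine>
  <GMTLogTransactionLine><TransactionType>A</TransactionType></GMTLogTransactionLine>
  <GMTLogTransactionLine><TransactionType>I</TransactionType></GMTLogTransactionLine>
</Log>
"""

def first_following_sibling(parent, node, tag):
    """Mimic following-sibling::tag[1]: the nearest later sibling with that tag."""
    children = list(parent)
    for sib in children[children.index(node) + 1:]:
        if sib.tag == tag:
            return sib
    return None

root = ET.fromstring(XML)
lines = list(root)
# The template match="GMTLogTransactionLine[TransactionType='0']" fires on lines[0].
current = lines[0]
sib = first_following_sibling(root, current, "GMTLogTransactionLine")
# The extra predicates [not(TransactionType='I')] etc. filter that single node:
if sib is not None and sib.findtext("TransactionType") not in ("I", "N", "NA"):
    print("apply-templates processes:", sib.findtext("TransactionType"))
else:
    print("nothing selected")
```

Here the nearest sibling has type A, so it passes the filters and is handed to apply-templates; had it been type I, the select would have come back empty.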
Regards,
Abhishek.
Similar Messages
-
Can I buy Final Cut Pro X using my Mac and then install it in another state using another Mac?
Yes, as long as both Macs meet the system requirements for it.
(112783) -
Write an UPdate statement using the logic used in PL/SQL block (oracle 10g)
Hi All,
I have written the following PL/SQL block. I want to write an UPDATE statement using the logic in this PL/SQL block. Can anyone please help me out in this regard?
DECLARE
v_hoov_flg gor_gold_post.hoov_flg%TYPE;
v_b49n gor_gold_post.b49n%TYPE;
CURSOR c
IS
SELECT bs_id, loyalty_date, loyalty_period, contract_date
FROM gor_gold_post
WHERE tariff_code IN (169, 135, 136);
BEGIN
FOR rec IN c
LOOP
IF (TRUNC (ADD_MONTHS (rec.loyalty_date, rec.loyalty_period)
- SYSDATE) < 304
OR ( TRUNC ( ADD_MONTHS (rec.loyalty_date, rec.loyalty_period)
- SYSDATE
) IS NULL
AND (SYSDATE - TO_DATE (rec.contract_date, 'YYYYMMDD')) > 91.2))
THEN
v_hoov_flg := 1;
ELSE
v_hoov_flg := 99;
END IF;
IF (TRUNC (ADD_MONTHS (rec.loyalty_date, rec.loyalty_period)
- SYSDATE) < 121.6
OR ( TRUNC ( ADD_MONTHS (rec.loyalty_date, rec.loyalty_period)
- SYSDATE
) IS NULL
AND (SYSDATE - TO_DATE (rec.contract_date, 'YYYYMMDD')) > 91.2))
THEN
v_b49n := 1;
ELSE
v_b49n := 99;
END IF;
UPDATE gor_gold_post
SET hoov_flg = v_hoov_flg,
b49n = v_b49n
WHERE bs_id = rec.bs_id AND tariff_code IN (169, 135, 136);
COMMIT;
END LOOP;
END;
Thank you,
Using a CASE statement:
UPDATE gor_gold_post
SET hoov_flg = CASE WHEN TRUNC (ADD_MONTHS (loyalty_date, loyalty_period) - SYSDATE) < 304
OR
(TRUNC (ADD_MONTHS (loyalty_date, loyalty_period) - SYSDATE) IS NULL
AND (SYSDATE - TO_DATE (contract_date, 'YYYYMMDD')) > 91.2)
THEN 1
ELSE 99
END,
b49n = CASE WHEN TRUNC (ADD_MONTHS (loyalty_date, loyalty_period) - SYSDATE) < 121.6
OR
(TRUNC (ADD_MONTHS (loyalty_date, loyalty_period) - SYSDATE) IS NULL
AND (SYSDATE - TO_DATE (contract_date, 'YYYYMMDD')) > 91.2)
THEN 1
ELSE 99
END
WHERE tariff_code IN (169, 135, 136);
Note: Code not tested. -
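The branching that both the PL/SQL block and the CASE expression implement can be sketched in plain Python, which makes the rule easier to read: flag = 1 when the loyalty window ends within the threshold number of days, or when the window is unknown and the contract is older than about three months. This is a sketch only; the add_months helper below is an assumption that only approximates Oracle's ADD_MONTHS (Oracle has special end-of-month behaviour).

```python
from datetime import date

def add_months(d, months):
    """Rough stand-in for Oracle ADD_MONTHS; returns None for a NULL input."""
    if d is None or months is None:
        return None
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    # clamp the day so e.g. Jan 31 + 1 month stays a valid date
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(y, m, min(d.day, days[m - 1]))

def hoov_flg(loyalty_date, loyalty_period, contract_date, today, threshold):
    """1 if the loyalty window ends within `threshold` days (304 or 121.6),
    or is unknown and the contract is more than ~91 days old; else 99."""
    end = add_months(loyalty_date, loyalty_period)
    days_left = None if end is None else (end - today).days
    if (days_left is not None and days_left < threshold) or \
       (days_left is None and (today - contract_date).days > 91.2):
        return 1
    return 99
```

For example, hoov_flg(date(2024, 1, 1), 6, date(2023, 1, 1), date(2024, 5, 1), 304) returns 1 (61 days left), while a row with an unknown loyalty date and a one-month-old contract returns 99.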
Reg - XSLT Mapping using stylesheet exception during test
Hi experts,
I am new to SAP PI. Currently I am working on PI's XSLT mapping using Stylus Studio.
I got error *Transformer Configuration Exception occurred when loading XSLT <name>.xsl; details: Could not compile stylesheet*.
I also tried testing by selecting the SAP XML TOOLKIT; then I am facing
*com.sap.engine.lib.xml.parser.NestedSAXParseException: Fatal Error: com.sap.engine.lib.xml.parser.ParserException: XMLParser: Prefix 'a' is not mapped to a namespace (:main:, row:4, col:15)(:main:, row=4, col=15) -> com.sap.engine.lib.xml.parser.ParserException: XMLParser: Prefix 'a' is not mapped to a namespace (:main:, row:4, col:15)*.
Kindly help me to overcome this issue.
Hi,
Simply put, your XSLT file is not well-formed XML. Check your syntax. You could also use the XML Tools plugin for Notepad++ to help you determine precisely where the syntax error is.
Moreover, this error "Prefix 'a' is not mapped to a namespace" might mean that you are using a tag: <a:something>, but "a" is not properly declared as a namespace, for instance like: xmlns:a="something.sap.com".
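The same class of error can be reproduced with any namespace-aware parser, which makes it easy to see what the SAP toolkit is complaining about. A minimal sketch using Python's standard library (unrelated to SAP's parser, and the example.com URI is just a placeholder): the undeclared prefix is rejected, while adding the xmlns:a declaration fixes it.

```python
import xml.etree.ElementTree as ET

bad = '<root><a:thing/></root>'                               # prefix "a" never declared
good = '<root xmlns:a="http://example.com/a"><a:thing/></root>'

try:
    ET.fromstring(bad)
    bad_parsed = True
except ET.ParseError as e:
    bad_parsed = False
    print("parse failed:", e)   # the parser reports the unbound prefix

elem = ET.fromstring(good)
# namespace-aware parsing expands a:thing using the declared URI
print(elem[0].tag)
```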
Hope this helps,
Greg -
Import statement using DATA BUFFER
Hi All,
I am calling an RFC-enabled function module using STARTING NEW TASK. We cannot import data from the FM back to the program when we use this statement, so I am exporting the data into a DATA BUFFER in the FM and trying to import it in the main program. Can you please tell me how I can import the data from shared memory? Below is my code.
call function 'YPMLR_SITEBAL_DETAILS'
starting new task 'ID'
exporting
s_cyl = s_cyl-low
s_lifnr = s_lifnr-low
s_lstyp = s_lstyp-low
tables
s_zlocn = lt_zlocn
gt_zmlr_mld = gt_zmlr_mld
gt_zmlr_lp = gt_zmlr_lp
gt_zmlr_mlp = gt_zmlr_mlp
gt_zmc_loc = gt_zmc_loc.
"IMPORT e_rand_no TO e_rand_no from MEMORY ID 'RAND'.
Here is the export statement used in FM,
EXPORT e_rand_no FROM e_rand_no TO DATA BUFFER XSTR.
Hi,
Check this link for Export to database instead of memory..
http://help.sap.com/abapdocu/en/ABAPEXPORT_DATA_CLUSTER_MEDIUM.htm
Import from database instead of memory
http://help.sap.com/abapdocu/en/ABAPIMPORT_MEDIUM.htm -
Converting a delete statement using VPD policies and context
Hello,
I'm trying to convert a delete statement into an update statement using VPD policies and context.
/* Suppose the user 'user1' already exists. This is an application user */
conn user1/pwd
create table user1.test_a (
id number(4),
description varchar2(100),
deleted number(1)
);
alter table user1.test_a add constraint test_a_pk primary key (id);
insert into user1.test_a values (1, 'abc', 0);
insert into user1.test_a values (2, 'def', 0);
commit;
I'd like to convert each physical deletion into a logical deletion: statements like "delete from user1.test_a where id = 1" must be converted into "update user1.test_a set deleted = 1 where id = 1".
I've found the following way: I will create a policy to avoid physical deletion. Additionally, the policy function should update the deletion flag too.
conn user1/pwd
/* Create context package */
create or replace package user1.pkg_security_context is
procedure p_set_ctx(
i_test_a_id in user1.test_a.id%type
);
end;
/
create or replace package body user1.pkg_security_context is
procedure p_set_ctx (
i_test_a_id in user1.test_a.id%type
) is
begin
dbms_session.set_context( 'user1_ctx', 'test_a_id', i_test_a_id );
end;
end;
/
show errors
/* Create trigger to set the context before deletion */
create or replace trigger user1.test_a_bef_trg
before delete on user1.test_a
for each row
declare
pragma autonomous_transaction;
begin
-- only commits the preceding update, not the delete that fired the trigger.
commit;
user1.pkg_security_context.p_set_ctx( :old.id );
end;
/
show errors
create context user1_ctx using user1.pkg_security_context;
/* Policy function */
create or replace function user1.f_policy_chk_dels (
object_schema in varchar2,
object_name in varchar2
) return varchar2
is
out_string varchar2(400) default '1=2 ';
/*
* out_string is the return value.
* - 'WHERE 1=2' means 'nothing to access'
*/
begin
if ( loc_logged_usr_authorized > 0 ) then
/*
* Set the flag deleted to 1
*/
update user1.test_a set deleted = 1 where id = sys_context( 'user1_ctx', 'test_a_id' );
out_string := out_string || 'or 1=1 ';
end if;
return out_string;
end;
/
show errors
/*
* Create policy
*/
begin
dbms_rls.add_policy(
object_schema => 'user1',
object_name => 'test_a',
policy_name => 'policy_chk_dels',
function_schema => 'user1', -- function schema
policy_function => 'f_policy_chk_dels', -- policy function
statement_types => 'DELETE'
);
end;
/
When I try to delete a record of the table test_a:
conn user1/pwd
SQL> delete from ilogdia.oplsimulaciones sim where sim.id = 9999;
0 rows deleted
No rows have been deleted, but the update statement does not work either; that is, the "deleted" flag has not been updated.
Any ideas?
Thank you in advance.
Marco A. Serrano
Edited by: albrotar on Oct 15, 2012 8:42 AM
The policy function is applied once per statement execution. The policy function executes first, and the UPDATE statement inside it presumably updates no rows because the context is not yet populated. The row-level trigger populates the context (I'm assuming that your session can even see context values populated by an autonomous transaction -- I would guess it could, but I'd have to test that) after the UPDATE statement is already complete. The COMMIT in the row-level trigger is also pointless: it only applies to changes made by the current autonomous transaction, of which there are none; it cannot apply to changes made in other transactions. Declaring the row-level trigger to use an autonomous transaction doesn't seem to accomplish anything other than to open the question of whether the values set in the context by the autonomous transaction are visible in the caller's transaction.
Even if this, somehow, did work, using autonomous transactions would be a very bad idea since Oracle is free to roll-back a partially executed statement (and the work done by its triggers) and re-execute it. Oracle does that with some regularity to maintain write consistency.
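The ordering argument above can be made concrete with a small simulation (pure Python, a toy model only, nothing Oracle-specific): the predicate is fixed before any per-row work happens, so a context populated by a row-level trigger can never influence the very statement that fired it.

```python
# Toy model of the ordering problem: the VPD predicate is built once,
# BEFORE any row-level trigger has had a chance to populate the context.
context = {}

def policy_predicate():
    # evaluated at statement start; the context is still empty
    target = context.get("test_a_id")
    return lambda row: row["id"] == target   # matches nothing while target is None

def delete_stmt(rows):
    pred = policy_predicate()                # 1) policy function runs first
    visible = [r for r in rows if pred(r)]   # 2) predicate filters the rows
    for r in visible:                        # 3) row triggers fire only for visible rows
        context["test_a_id"] = r["id"]       #    ...so this runs too late
        r["deleted"] = 1
    return len(visible)

rows = [{"id": 1, "deleted": 0}]
n = delete_stmt(rows)
print(n, rows[0]["deleted"])   # 0 rows "deleted", flag still 0
```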
Justin -
Need to execute long INSERT (6,000 characters) statement using dbms_sql.parse
I built an insert statement of more than 6000 (six thousand) characters, stored in two varchar2 variables. I need to execute the statement using dbms_sql.parse.
I tried using a dbms_sql.varchar2s variable, with no success.
Any ideas or examples?
Thanks
JK
6000 chars shouldn't be a problem.
What do you mean by "no success"? Did you get any error?
"Any ideas or examples?" See this thread:
Re: execute immediate with a string longer than 32767 -
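The dbms_sql.varchar2s type is a PL/SQL table of VARCHAR2(256), and the matching parse overload takes the table plus lower/upper bounds; the usual stumbling block is splitting the statement into chunks correctly. The splitting itself can be sketched in Python (a sketch only; the statement text here is made up for illustration):

```python
def split_sql(stmt, chunk=256):
    """Split a long SQL statement into a list of <= chunk-sized pieces,
    as a dbms_sql.varchar2s PL/SQL table would hold them."""
    return [stmt[i:i + chunk] for i in range(0, len(stmt), chunk)]

# a ~6000-character insert, like the one in the question
stmt = "INSERT INTO t (c) VALUES ('" + "x" * 6000 + "')"
pieces = split_sql(stmt)
# joining the pieces must reproduce the original statement exactly,
# with no characters dropped or duplicated at the chunk boundaries
assert "".join(pieces) == stmt
print(len(pieces), "chunks")
```

If the rejoined chunks don't equal the original text, dbms_sql.parse is handed a corrupted statement, which is the most common cause of "no success" with varchar2s.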
Code Sample: Easy RFC Lookup From XSLT Mappings Using a Java Helper Class
Hi everyone,
This is just a shameless plug for my article: <a href="https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/05a3d62e-0a01-0010-14bc-adc8efd4ee14">Easy RFC Lookup From XSLT Mappings Using a Java Helper Class</a>. I hope you're interested in reading it, and I welcome your comments in this thread.
Kind regards,
Thorsten
Hi Stefan. Thanks for your post. I have already done that. It still does not work. As a base for my Java helper class I have used Thorsten's code.
The problem is quite confusing. I will try to outline both issues here.
First of all, when I try to test from within the Operation Mapping, I always get a Java error saying it cannot find the communication channel (it is there and working, because I have tested it with the RFCLookup in a graphical mapping). I have found a way to work around this: uncheck the "Use SAP XMLToolkit" checkbox --> switch to the Test tab, enter my ReceiverService in the parameter tab (header parameter) --> switch back to the Definition tab, check the "Use SAP XMLToolkit" checkbox --> switch to the Test tab and run the test. Then the XSLT and the call to the Java helper class will work. Of course this is not really something you want to do all the time. Maybe there is a bug.
Secondly, it never works when I try to do it "live". I am using a file adapter to pick up a file, convert it, and another file adapter to drop the converted file. I get the following error in SXMB_MONI.
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!-- Request Message Mapping -->
<SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
<SAP:Category>Application</SAP:Category>
<SAP:Code area="MAPPING">TRANSFORMER_CONF_EX</SAP:Code>
<SAP:P1>ATJ_Accounting2XML_Accounting.xsl</SAP:P1>
<SAP:P2>http://rd.accounting.logica.com</SAP:P2>
<SAP:P3>fd552c30-bad9-11dd-9761-c21dac1b818c</SAP:P3>
<SAP:P4>-1</SAP:P4>
<SAP:AdditionalText />
<SAP:Stack>TransformerConfigurationException triggered while loading XSLT mapping ATJ_Accounting2XML_Accounting.xsl; http://rd.accounting.logica.comfd552c30-bad9-11dd-9761-c21dac1b818c-1</SAP:Stack>
<SAP:Retry>M</SAP:Retry>
</SAP:Error>
Using an XSLT without a call to a java helper class, works just fine.
I am totally at a loss here. Any more input would be much appreciated.
/Patrik -
'Missing select' error for update statement using WITH clause
Hi,
I am getting the below error for update statement using WITH clause
SQL Error: ORA-00928: missing SELECT keyword
UPDATE A
set A.col1 = 'val1'
where
A.col2 IN (
WITH D AS
SELECT col2 FROM
(SELECT col2, MIN(datecol) col3 FROM DS
WHERE <conditions>
GROUP BY PATIENT) D2
WHERE
<conditions on A.col4 and D2.col3>
Hi,
The format of a query using WITH is:
WITH d AS
(
SELECT ... -- sub-query
)
SELECT ... -- main query
You don't have a main query. The main SELECT has to come immediately after the right ')' that ends the last WITH clause sub-query.
That explains the problem based on what you posted. I can't tell if the real problem is in the conditions that you didn't post.
I hope this answers your question.
If not, post a complete test script that people can run to re-create the problem and test their ideas. Include a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables involved, and the results you want from that data.
In the case of a DML operation (such as UPDATE) the sample data should show what the tables are like before the DML, and the results will be the contents of the changed table(s) after the DML.
Explain, using specific examples, how you get those results from that data.
Always say what version of Oracle you're using (e.g. 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002 -
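The corrected shape, a parenthesized WITH sub-query followed by a main SELECT, can be tried out with SQLite via Python's sqlite3 module (the tables a and ds here are hypothetical stand-ins for the poster's; whether Oracle accepts WITH in this exact position depends on the version, but the structural point is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a  (col1 TEXT, col2 INTEGER);
CREATE TABLE ds (col2 INTEGER, datecol TEXT);
INSERT INTO a  VALUES ('old', 1), ('old', 2);
INSERT INTO ds VALUES (1, '2024-01-01'), (2, '2024-06-01');
""")

# WITH names a parenthesized sub-query; the main SELECT follows its closing ')'
con.execute("""
UPDATE a SET col1 = 'val1'
WHERE col2 IN (
    WITH d AS (
        SELECT col2 FROM ds WHERE datecol < '2024-03-01'
    )
    SELECT col2 FROM d
)
""")
rows = sorted(con.execute("SELECT col2, col1 FROM a").fetchall())
print(rows)
```

Only the row whose col2 comes back from the CTE is updated; leaving out either the parentheses or the trailing main SELECT reproduces a syntax error much like the ORA-00928 in the question.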
Need to Improve pefromance for select statement using MSEG table
Hi all,
We are using a SELECT statement on the MSEG table
which takes a very long time when the program runs as a scheduled background job.
Please see the history below:
1) Previously this program was using SELECT-ENDSELECT statement inside the loop i.e.
LOOP AT I_MCHB.
To get Material Doc. Details
SELECT MBLNR
MJAHR
ZEILE INTO (MSEG-MBLNR,MSEG-MJAHR,MSEG-ZEILE)
UP TO 1 ROWS
FROM MSEG
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
ENDSELECT.
Endloop.
The program was taking 1 hr for 20k records.
2) The above statement was replaced by FOR ALL ENTRIES to remove the SELECT-ENDSELECT from the loop.
***GET MATERIAL DOC NUMBER AND FINANCIAL YEAR DETAILS FROM MSEG TABLE
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
3) After getting further technical analysis from the BASIS team, and with the suggestion to optimize the program by changing to an INDEX RANGE SCAN via index MSEG~M:
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
At present the program is taking 3 to 4 hrs in background.
The table access is a complete scan using index MSEG~M.
Please suggest how to improve the performance of this.
Many many thanks,
Deepak
The benchmark should be the join, and I cannot see how any of your solutions can be faster than the join:
SELECT .....
INTO TABLE ....
UP TO 1 ROWS
FROM mchb as a
INNER JOIN mseg as b
ON a~matnr EQ b~matnr
AND a~werks EQ b~werks
AND a~lgort EQ b~lgort
AND a~charg EQ b~charg
WHERE a~ ....
The WHERE condition must come from the select on MCHB, and the field list from the total results you want.
If you want to compare, you must compare your solutions plus the select that fills I_MCHB.
Siegfried
Edited by: Siegfried Boes on Dec 20, 2007 2:28 PM -
Performance tunning for select statements using likp lips and vbrp
Dear all,
I have a report where I first use a SELECT on LIKP, then FOR ALL ENTRIES of LIKP I take data from LIPS, and then FOR ALL ENTRIES of LIPS I take data from VBRP by matching VGBEL and VGPOS. Now the problem is that fetching the data from VBRP takes a lot of time, around 13 minutes. How can I overcome the problem?
regards
Amit
Hi,
there is also no secondary index for the preceding document in the VBFA table.
You will also have to create one here.
Regards,
Przemysław -
What are the database resources when collecting stats using dbms_stats
Hello,
We have tables that contain stale stats and would want to collect stats using dbms_stats with estiamte of 30%. What are the database resources that would be consummed when dbms_stats is used on a table? Also, would the table be locked during dbms_stats? Thank you.1) I'm not sure what resources you're talking about. Obviously, gathering statistics requires I/O since you've got to read lots of data from the table. It requires CPU particularly if you are gathering histograms. It requires RAM to the extent that you'll be doing sorts in PGA and to the extent that you'll be putting blocks in the buffer cache (and thus causing other blocks to age out), etc. Depending on whether you immediately invalidate query plans, you may force other sessions to start doing a lot more hard parsing as well.
2) You cannot do DDL on a table while you are gathering statistics, but you can do DML. You would generally not want to gather statistics while an application is active.
Justin -
How to write SELECT statement using tables ekko,ekpo and eket?
Hi,
I have a problem with performance tuning using the tables below.
How do I write a SELECT statement using tables EKKO, EKPO and EKET, using in the condition (WHERE clause) only the fields
ekko~ebeln IN ebeln
ekko~loekz EQ ' '
ekko~lifnr IN lifnr
ekko~ekorg IN ekorg
ekko~ekgrp IN ekgrp
ekpo~werks IN werks
ekpo~pstyp EQ '3'
ekpo~loekz EQ space
ekpo~elikz EQ space
ekpo~menge NE 0
eket~rsnum NE space.
Thanks in Advance.
bye.
Hi,
ekko~ebeln IN ebeln
ekko~loekz EQ ' '
ekko~lifnr IN lifnr
ekko~ekorg IN ekorg
ekko~ekgrp IN ekgrp
ekpo~werks IN werks
ekpo~pstyp EQ '3'
ekpo~loekz EQ space
ekpo~elikz EQ space
ekpo~menge NE 0 " Remove this from where clause
eket~rsnum NE space. " Remove this from where clause
Instead, delete the entries after fetching them into the internal table:
DELETE it_itab WHERE menge EQ '0' AND rsnum EQ ' '.
Regards
Bala Krishna -
Any way to limit memory which XSLT processor uses?
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
We use the xmltype.transform method to apply XSLT transformations. We have oodles of memory (25GB assigned to Oracle) but we do use a lot of it...
Is there a way to limit the amount of memory that the XSLT engine uses, so as to avoid out-of-memory errors?
Errors in file /ora/oracle/diag/rdbms/prod01/PROD01/trace/PROD01_j001_16149.trc:
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
The XML file was just under 20GB in size. I regularly see 10GB XSLT transformations using around 2GB of RAM (via the top command on Linux). I have no visibility into what the memory consumption was at the time of the failure.
Probably most of the heavy consumption is PGA related.
http://docs.oracle.com/cd/B28359_01/server.111/b28274/memory.htm#i49320
Have a look at the AWR history views concerning memory consumption. ADDM, ASH, AWR etc. should give you more insight. See also the awr/addm/ash/-rpt.sql scripts in $ORACLE_HOME/rdbms/admin on the database server. You would (officially) need Diagnostics and Tuning pack licenses though, so be warned (even for "touching" those views). Probably the heavy memory consumption is either (wrong) storage related or inefficient code... -
Is it possible that my update stats used only correct tables?
Whenever there is a schedule maintenance run I receive a error:
Executing the query "UPDATE STATISTICS [Perf].[PerfHourly_F65954CD35A54..." failed with the following error: "Table 'PerfHourly_F65954CD35A549E886A48E53F148F277' does not exist.". Possible failure reasons: Problems with the query, "ResultSet"
property not set correctly, parameters not set correctly, or connection not established correctly.
Is it possible that my update stats used only correct tables?
Thanks
Use the script below (change if required):
USE [dbname]
go
DECLARE @mytable_id INT
DECLARE @mytable VARCHAR(100)
DECLARE @owner VARCHAR(128)
DECLARE @SQL VARCHAR(256)
SELECT @mytable_id = MIN(object_id)
FROM sys.tables WITH(NOLOCK)
WHERE is_ms_shipped = 0
WHILE @mytable_id IS NOT NULL
BEGIN
SELECT @owner = SCHEMA_NAME(schema_id), @mytable = name
FROM sys.tables
WHERE object_id = @mytable_id
SELECT @SQL = 'UPDATE STATISTICS '+ QUOTENAME(@owner) +'.' + QUOTENAME(@mytable) +' WITH ALL, FULLSCAN;'
Print @SQL
EXEC (@SQL)
SELECT @mytable_id = MIN(object_id)
FROM sys.tables WITH(NOLOCK)
WHERE object_id > @mytable_id
AND is_ms_shipped = 0
END
Or use the query below for the required tables only; it will not execute anything, only generate the script. Make changes as per your requirements:
SELECT X.*,
ISNULL(CASE
WHEN X.[Total Rows]<=1000
THEN
CASE
WHEN [Percent Modified] >=20.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --20% Small Table Rule'
END
WHEN [Percent Modified] = 100.00
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --100% No real Stats Rule'
--WHEN X.[Rows Modified] > 1000
--THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --1000 Rows Modified Rule'
ELSE
CASE
WHEN X.[Total Rows] > 1000000000 --billion rows
THEN CASE
WHEN [Percent Modified] > 0.1
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 1B Big Table Rule'
END
WHEN X.[Total Rows] > 100000000 --hundred million rows
THEN CASE
WHEN [Percent Modified] > 1.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 100M Big Table Rule'
END
WHEN X.[Total Rows] > 10000000 --ten million rows
THEN CASE
WHEN [Percent Modified] > 2.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 10M Big Table Rule'
END
WHEN X.[Total Rows] > 1000000 --million rows
THEN CASE
WHEN [Percent Modified] > 5.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 1M Big Table Rule'
END
WHEN X.[Total Rows] > 100000 --hundred thousand rows
THEN CASE
WHEN [Percent Modified] > 10.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 100K Big Table Rule'
END
WHEN X.[Total Rows] > 10000 --ten thousand rows
THEN CASE
WHEN [Percent Modified] > 20.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 10K Big Table Rule'
END
END
END,'') AS [Statistics SQL]
FROM (
SELECT DISTINCT
DB_NAME() AS [Database],
S.name AS [Schema Name],
T.name AS [Table Name],
I.rowmodctr AS [Rows Modified],
P.rows AS [Total Rows],
CASE
WHEN I.rowmodctr > P.rows
THEN 100
ELSE CONVERT(decimal(8,2),((I.rowmodctr * 1.0) / P.rows * 1.) * 100.0)
END AS [Percent Modified]
FROM
sys.partitions P
INNER JOIN sys.tables T ON P.object_Id = T.object_id
INNER JOIN sys.schemas S ON T.schema_id = S.schema_id
INNER JOIN sysindexes I ON P.object_id = I.id
WHERE P.index_id in (0,1)
AND I.rowmodctr > 0
) X
WHERE [Rows Modified] > 1000
ORDER BY [Rows Modified] DESC
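The tiered rule the script encodes, where larger tables need a smaller modified percentage to justify an update, can be condensed into one small function (thresholds copied from the CASE ladder above; a Python sketch, not T-SQL semantics):

```python
# (total-rows floor, percent-modified threshold), largest first,
# mirroring the CASE ladder in the script above
TIERS = [
    (1_000_000_000, 0.1),
    (100_000_000, 1.0),
    (10_000_000, 2.0),
    (1_000_000, 5.0),
    (100_000, 10.0),
    (10_000, 20.0),
]

def needs_stats_update(total_rows, percent_modified):
    """Return True when the table crosses the script's update threshold."""
    if total_rows <= 1000:                 # small-table rule
        return percent_modified >= 20.0
    if percent_modified == 100.0:          # "no real stats" rule
        return True
    for floor, threshold in TIERS:
        if total_rows > floor:
            return percent_modified > threshold
    return False                           # 1,001-10,000 rows: no rule fires
```

For example, a 2-billion-row table qualifies at just over 0.1% modified, while a 500-row table needs 20%, which matches the script's big-table and small-table rules.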