Custom color scheme XML not getting used
I have a custom color scheme, which I created using the advanced settings.
Some of the color scheme settings are getting picked up, but I don't think any of the advanced ones are.
The advanced settings ARE getting written to the color scheme's XML document (XcelsiuscustomThemes/mySchemeName.xml), so that's good. Here's an example:
<ListView>
    <headerStyleName.color>0xE1EFF7</headerStyleName.color>
    <textSelectedColor>0x333333</textSelectedColor>
    <headerStyleName.textSelectedColor>0x333333</headerStyleName.textSelectedColor>
</ListView>
Is there someplace I need to point to XcelsiuscustomThemes/mySchemeName.xml so that those styles get used?
Incidentally, if I edit the programfiles(x86)/.../themes/DefaultMapping.xml document, then any changes are reflected, so it seems like I just need to be able to tell Xcelsius to use XcelsiuscustomThemes/mySchemeName.xml instead of DefaultMapping.xml.
Hi Hema
They should be located here:
C:\Program Files\Business Objects\Xcelsius\assets\themes\custom
OR:
C:\Documents and Settings\<User Name>\Application Data\XcelsiuscustomThemes
Regards
Charles
Edited by: Charles Davies on Oct 5, 2009 5:11 PM
Similar Messages
-
BizTalk Server 2013 R2 EDI EDIFACT custom Target Namespace is not being used
I have a client whose partner doesn't use the out-of-the-box BizTalk EDIFACT 97A DELFOR schema. I'm using BizTalk Server 2013 R2 and need to use a custom target namespace, so I have created a custom schema for EFACT_D97A_DELFOR with a different namespace, http://schemas.microsoft.com/BizTalk/EDI/EDIFACT/2006_<partnername>.
In the party agreement, under Transaction Set Settings / Local Host Settings, I have the following in a separate row under the "Default" row: UNH2.1=DELFOR, UNH2.2=D, UNH2.3=97A and TargetNamespace=http://schemas.microsoft.com/BizTalk/EDI/EDIFACT/2006_<partnername>.
Judging by the error below, BizTalk wants to use the out-of-the-box schema, but I need it to use the custom schema. If I change the input file to include a UNH2.5 value and change the root node to match it (EFACT_D97A_DELFOR_<UNH2.5>), it works, but the partner will not be sending this, so that's not an option.
Error details: An output message of the component "Unknown " in receive pipeline "Microsoft.BizTalk.Edi.DefaultPipelines.EdiReceive, Microsoft.BizTalk.Edi.EdiPipelines, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
is suspended due to the following error:
Error encountered during parsing. The Edifact transaction set with id '' contained in interchange (without group) with id '000010080', with sender id '<senderid>', receiver id '<receiverid>' is being suspended with following
errors:
Error: 1 (Miscellaneous error)
70: Finding the document specification by message type "http://schemas.microsoft.com/BizTalk/EDI/EDIFACT/2006#EFACT_D97A_DELFOR" failed. Verify the schema deployed properly.
Error: 2 (Miscellaneous error)
71: Transaction Set or Group Control Number Mismatch
Error: 3 (Miscellaneous error)
29: Invalid count specified at interchange, group or message level
Any thoughts for this issue?
Thanks,
Sean
All,
Because the trading partner is not sending the UNH2.5, UNG2.1 and UNG2.2 values in the incoming message, the EDIFACT custom target namespace schema will not be used. I created a custom decode pipeline component in which I inject the UNH2.5 element, changed the root node of the trading-partner-specific schema to EFACT_D97A_DELFOR_XYZ (where "XYZ" is the value in UNH2.5), and specified the UNH2.1, UNH2.2 and UNH2.5 values in the default row under the party agreement's Transaction Set Settings -> Local Host Settings. It then uses the custom schema (http://schemas.microsoft.com/BizTalk/EDI/EDIFACT/2006#EFACT_D97A_DELFOR_XYZ). I could have injected the UNG segment and used a custom target namespace without changing the root node of the custom schema, but I didn't want to add all that extra segment information.
The example below assumes that the UNH segment only has values up to UNH2.4, and appends the UNH2.5. It uses the UNA segment to dynamically determine the componentDataElementSeparator.
//UNA segment e.g. UNA:+.? '
//this example above has ' as the segmentTerminator, : as the componentDataElementSeparator, + as the dataElementSeparator
string unaSegment = messageBody.Substring(0, messageBody.ToUpper().IndexOf("UNA") + 9);
string componentDataElementSeparator = unaSegment.Substring(3, 1);
string dataElementSeparator = unaSegment.Substring(4, 1);
string segmentTerminator = unaSegment.Substring(8, 1);
string UNH_Segment = messageBody.Substring(messageBody.ToUpper().IndexOf("UNH"));
UNH_Segment = UNH_Segment.Substring(0, UNH_Segment.IndexOf(segmentTerminator));
//inject UNH2_5 in the existing UNH_Segment
messageBody = messageBody.Replace(UNH_Segment,
UNH_Segment + componentDataElementSeparator + UNH2_5);
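For reference, here is a rough standalone sketch of the same separator discovery and UNH2.5 injection, in Java rather than the pipeline component's C#. The sample message in main and the UnhInjector class name are made up for illustration; this is not Sean's actual component:

```java
public class UnhInjector {

    // Discover the separators from the UNA service string advice and append
    // a UNH2.5 value to the UNH segment, mirroring the snippet above.
    public static String injectUnh25(String messageBody, String unh25) {
        // UNA is followed by exactly six service characters, e.g. UNA:+.? '
        int unaPos = messageBody.toUpperCase().indexOf("UNA");
        String una = messageBody.substring(unaPos, unaPos + 9);
        char componentSeparator = una.charAt(3); // ':' in the default advice
        char segmentTerminator = una.charAt(8);  // '\'' in the default advice

        // Isolate the UNH segment up to (not including) its terminator
        int unhPos = messageBody.toUpperCase().indexOf("UNH", unaPos + 9);
        String unhSegment = messageBody.substring(unhPos);
        unhSegment = unhSegment.substring(0, unhSegment.indexOf(segmentTerminator));

        // Re-insert the segment with the extra component appended
        return messageBody.replace(unhSegment, unhSegment + componentSeparator + unh25);
    }

    public static void main(String[] args) {
        String body = "UNA:+.? 'UNH+1+DELFOR:D:97A:UN'BGM+241'";
        System.out.println(injectUnh25(body, "XYZ"));
    }
}
```

Like the original, this assumes a UNA segment is present and that the UNH segment does not already carry a UNH2.5 component.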
Thanks All for all your help!!
Sean Boman -
Custom Color Scheme in Xcelsius
Hi all,
I want to create a custom color scheme in Xcelsius. I have referred to the following blog:
/people/john.kurgan/blog/2009/07/20/obtaining-the-perfect-custom-color-scheme-in-xcelsius-part-2
That blog has XML files. I downloaded the XML and put it in the following folder:
C:\Program Files\Business Objects\Xcelsius\assets\themes\custom
But that color scheme is not displaying in Xcelsius.
If I put the XML file in the following folder (C:\Program Files\Business Objects\Xcelsius\assets\themes\built-in), I can use that color scheme.
What may be the problem?
Please help me in this regard.
Thanks & Regards
Hemalatha J
Hi Hema
They should be located here:
C:\Program Files\Business Objects\Xcelsius\assets\themes\custom
OR:
C:\Documents and Settings\<User Name>\Application Data\XcelsiuscustomThemes
Regards
Charles
Edited by: Charles Davies on Oct 5, 2009 5:11 PM -
PDF Portfolio -- custom color scheme
Acrobat 9: Is it possible to save a custom color scheme to use with multiple portfolios?
-
Index not getting used in the query(Query performance improvement)
Hi,
I am using Oracle 10g and have this query:
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from Action_type at,
action_history ah,
sensitivity_audit sa,
logical_entity le,
feed_static fs,
feed_book_status fbs,
feed_instance fi,
marsnode bk
where at.description = 'Regress Positions'
and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and fi.most_recent = 'Y'
and bk.close_date is null
and ah.time_draft = 'after'
and sa.close_action_id is null
and le.close_action_id is null
and at.action_type_id = ah.action_type_id
and ah.action_id = sa.create_action_id
and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id
and sa.book_id = bk.node_id
and sa.feed_instance_id = fi.feed_instance_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 103
and bk.close_date is null
and ah.time_draft = 'after'
-- and ah.action_id = fbs.action_id
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 101
and bk.close_date is null
and ah.time_draft = 'after'
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id;
This is the execution plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 231 | 43267 | 104K (85)|
| 1 | SORT UNIQUE | | 231 | 43267 | 104K (85)|
| 2 | UNION-ALL | | | | |
| 3 | NESTED LOOPS | | 1 | 257 | 19540 (17)|
| 4 | NESTED LOOPS | | 1 | 230 | 19539 (17)|
| 5 | NESTED LOOPS | | 1 | 193 | 19537 (17)|
| 6 | NESTED LOOPS | | 1 | 152 | 19534 (17)|
|* 7 | HASH JOIN | | 213 | 26625 | 19530 (17)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | 264 | 2 (0)|
|* 9 | HASH JOIN | | 4267 | 429K| 19527 (17)|
|* 10 | HASH JOIN | | 3602 | 90050 | 1268 (28)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 46826 | 22 (5)|
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| 3927K| 1217 (27)|
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| 19M| 18236 (17)|
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | 27 | 1 (0)|
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | 0 (0)|
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | 2 (0)|
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | 37 | 2 (0)|
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | 1 (0)|
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | 27 | 1 (0)|
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 23 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 24 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 25 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 26 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 27 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 29 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 39 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 40 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 41 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 42 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 43 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 45 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
------------------------------------------------------------------------------------------------------
and the predicate info:
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after')
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after')
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- 'PLAN_TABLE' is old version
In this query, mainly the ACTION_HISTORY and FEED_BOOK_STATUS_HISTORY tables are being accessed fully, even though there are indexes created on them:
ACTION_HISTORY: unique index on the ACTION_ID column
FEED_BOOK_STATUS_HISTORY: composite index on (FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION)
I tried all the best combinations; however, the indexes are not getting used anywhere.
Could you please suggest some way to make the query perform better?
Thanks,
Aashish
Hi Mohammed,
This is what I got after using your method of getting the execution plan:
SQL_ID 4vmc8rzgaqgka, child number 0
select distinct bk.name "Book Name" , fs.feed_description "Feed Name" , fbs.cob_date
"Cob" , at.description "Data Type" , ah.user_name " User" , ah.comments "Comments"
, ah.time_draft from Action_type at, action_history ah, sensitivity_audit sa, logical_entity
le, feed_static fs, feed_book_status fbs, feed_instance fi, marsnode bk where at.description =
'Regress Positions' and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011' and
fi.most_recent = 'Y' and bk.close_date is null and ah.time_draft='after' and
sa.close_action_id is null and le.close_action_id is null and at.action_type_id =
ah.action_type_id and ah.action_id=sa.create_action_id and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id and sa.book_id = bk.node_id and sa.feed_instance_id =
fi.feed_instance_id and fbs.feed_instance_id = fi.feed_instance_id and fi.feed_id = fs.feed_id
union select distinct bk.name "Book Name" , fs.
Plan hash value: 1006571916
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT UNIQUE | | 231 | 6144 | 6144 | 6144 (0)|
| 2 | UNION-ALL | | | | | |
| 3 | NESTED LOOPS | | 1 | | | |
| 4 | NESTED LOOPS | | 1 | | | |
| 5 | NESTED LOOPS | | 1 | | | |
| 6 | NESTED LOOPS | | 1 | | | |
|* 7 | HASH JOIN | | 213 | 1236K| 1236K| 1201K (0)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | | | |
|* 9 | HASH JOIN | | 4267 | 1023K| 1023K| 1274K (0)|
|* 10 | HASH JOIN | | 3602 | 1095K| 1095K| 1296K (0)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| | | |
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| | | |
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | | | |
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | | |
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | | | |
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | | | |
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 23 | NESTED LOOPS | | 115 | | | |
|* 24 | HASH JOIN | | 114 | 809K| 809K| 817K (0)|
| 25 | NESTED LOOPS | | 114 | | | |
|* 26 | HASH JOIN | | 114 | 868K| 868K| 1234K (0)|
|* 27 | HASH JOIN | | 957 | 933K| 933K| 1232K (0)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 29 | NESTED LOOPS | | 9570K| | | |
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 39 | NESTED LOOPS | | 115 | | | |
|* 40 | HASH JOIN | | 114 | 743K| 743K| 149K (0)|
| 41 | NESTED LOOPS | | 114 | | | |
|* 42 | HASH JOIN | | 114 | 766K| 766K| 208K (0)|
|* 43 | HASH JOIN | | 957 | 842K| 842K| 204K (0)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 45 | NESTED LOOPS | | 9570K| | | |
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter(("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after'))
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter(("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after'))
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
122 rows selected.
Elapsed: 00:00:02.18
The action_type_id column is of NUMBER type. -
Indexes not getting used in schema having chinese data
I have a database with Chinese data in it. When I execute a few queries in that schema, it does not use the available indexes on the table. The query takes a long time to execute and the temp tablespace gets full. But when I execute the same query in another schema having English data, the query executes quickly and uses all the indexes.
I tried gathering database statistics and rebuilding the indexes, but that did not work either.
Can anybody tell me whether index creation differs for foreign languages? Do I need to create the indexes differently than we normally do?
Why are the indexes not being used in the schema having Chinese data?
Edited by: user621442 on Dec 17, 2009 10:03 AM
Hi,
I do not think an index would behave differently for different languages, though sorting may behave in a different way.
Can you post the explain plan from both databases?
And also the nls_sort parameter from both databases.
I believe you have an ORDER BY clause which is not able to sort using the index.
Regards
Anurag -
Dummy XML not getting generated from empty file by J2EE adapter module
Hi All,
I know that when XI gets an empty input text file, it does not generate a send message for it in the sender communication channel.
In my scenario, if I get a file with data, I have to generate an XML message for it using file content conversion; this I have done.
But if I get an empty text file, then I have to generate a dummy XML send message for my BPM.
So I made a J2EE adapter module to generate a dummy XML for the empty file. When I give a file with data in it, my adapter module is called, but when I give an empty file, my adapter module is not called.
Can anybody suggest why the module processor is not invoking my custom adapter module when an empty file is given, but is invoking it when a file with data is given?
Thanks,
Rajeev Gupta
Hi Amit,
Below is the code of the process method which I used:
<i>public ModuleData process(ModuleContext moduleContext,
ModuleData inputModuleData)
throws ModuleException
Object obj;
Message msg_audit;
AuditMessageKey amk;
try
File f = new File("/components/XITEMP/sample/PWC/check.txt");
PrintStream ps;
if (f.canWrite())
FileOutputStream fos =new FileOutputStream(f);
ps = new PrintStream(fos);
ps.println("Testing");
ps.close();
fos.close();
else
f = new File("/components/XITEMP/sample/PWC/check4.txt");
if (f.exists() ==false)
f.createNewFile();
obj = inputModuleData.getPrincipalData();
if (obj!=null)
msg_audit = (Message)obj;
amk = new AuditMessageKey(msg_audit.getMessageId(),AuditDirection.OUTBOUND);
Audit.addAuditLogEntry(amk,AuditLogStatus.SUCCESS,"FileCheck: Module called");
else
String str = new String();
String str1 = new String();
str1="<?xml version=\"1.0\" encoding=\"utf-8\" ?>";
str1+="<ns:MT_PWC_RECORD xmlns:ns=\"urn://PWC_SR3_01/PWC/Customer\">";
str1+="<RECORD_SET>";
str1+="<RECORD>";
str1+="<RECORD_DATA>BLANK_FILE</RECORD_DATA>";
str1+="</RECORD>";
str1+="</RECORD_SET>";
str1+="</ns:MT_PWC_RECORD>";
str=str1;
inputModuleData.setPrincipalData(str);
catch(Exception e)
try
File f = new File("/components/XITEMP/sample/PWC/check.txt");
PrintStream ps;
if (f.canWrite())
FileOutputStream fos =new FileOutputStream(f);
ps = new PrintStream(fos);
ps.println(e.toString());
ps.close();
fos.close();
catch(Exception ex)
return inputModuleData;
}</i>
In the above method, I used the file operations at the start just to see whether the module is getting invoked. When I give a data file, the file operations are performed and messages are written to the audit log; but when I give an empty file, the file operations are not performed, meaning the module is not getting invoked.
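Stripped of the adapter-framework types, the fallback in the module above boils down to: if no payload arrived, substitute a hard-coded dummy document. A minimal standalone sketch of just that decision (the message type and namespace are copied from the module code above; the DummyXmlFallback class and withFallback method names are made up):

```java
public class DummyXmlFallback {

    // Dummy payload the BPM can recognize as "the input file was empty"
    static final String DUMMY_XML =
        "<?xml version=\"1.0\" encoding=\"utf-8\" ?>"
        + "<ns:MT_PWC_RECORD xmlns:ns=\"urn://PWC_SR3_01/PWC/Customer\">"
        + "<RECORD_SET><RECORD><RECORD_DATA>BLANK_FILE</RECORD_DATA></RECORD></RECORD_SET>"
        + "</ns:MT_PWC_RECORD>";

    // Return the payload unchanged when present, or the dummy document when missing
    public static String withFallback(String payload) {
        if (payload == null || payload.trim().isEmpty()) {
            return DUMMY_XML;
        }
        return payload;
    }
}
```

The catch, as the thread itself notes, is that for a truly empty file the sender channel never creates a message in the first place, so code like this inside the module chain never gets the chance to run.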
Thanks,
Rajeev Gupta -
Hi
I have made the simulation using Captivate 4 and published the project for Flash Player 8. I have loaded this swf file in Flash, and I have made my own custom progress bar for it in Flash 8. When I drag rapidly, sometimes I am not getting the current slide number from the variable containerMC.rdinfoCurrentSlide.
Do you have any idea why this is happening?
Hi Ravi,
to change the title dynamically, you need to redefine the method
IF_BSP_WD_HISTORY_STATE_DESCR~GET_STATE_DESCRIPTION in your main component overview.
Go through this code and modify it as per your requirement.
DATA lv_bp_number TYPE string.
DATA: lr_cucobupa TYPE REF TO cl_crmcmp_b_cucobupa_impl.
TRY .
lr_cucobupa ?= get_custom_controller( if_iccmp_global_controller_con=>cucobp ).
CATCH cx_sy_move_cast_error.
RETURN.
ENDTRY.
description = cl_wd_utilities=>get_otr_text_by_alias( 'CRM_IC_APPL/IDENTIFY_CUSTOMER' ).
IF lr_cucobupa->is_bp_search_done( ) EQ abap_true.
lv_bp_number = lr_cucobupa->typed_context->customers->get_s_struct( attribute_path = 'STRUCT.BP_NUMBER' component = 'BP_NUMBER' ).
IF lv_bp_number IS NOT INITIAL.
CONCATENATE lv_bp_number ')' INTO lv_bp_number.
CONCATENATE description '(ID:' lv_bp_number INTO description SEPARATED BY space.
ENDIF.
ENDIF.
Thanks & Regards,
Srinivas -
PMD Custom ruleset file is not getting imported in Eclipse Luna
I am using Eclipse Luna 4.4.0 and trying to import a custom PMD ruleset in Eclipse, but the OK button is disabled.
I read a post which says this may be because the ruleset was written for an old version of PMD.
However, I modified it to match PMD 5.0, but I still see the same issue, i.e. the OK button does not get enabled.
I am attaching the PMD ruleset file.
Can someone suggest what needs to be done to fix the issue?
Saurabh
Hi Kali
Before doing translations, have you set these 2 profile options at site level?
Fnd Xliff Export Root Path / FND_XLIFF_EXPORT_ROOT_PATH
You set this root path to generate the full path where the XLIFF files are exported when you extract your translated personalizations using the Extract Translation Files page in OA Personalization Framework.
Fnd Xliff Import Root Path / FND_XLIFF_IMPORT_ROOT_PATH
Use this profile option to set the root path used to derive the full path from where the XLIFF files are uploaded when you use the Upload Translations page in OA Personalization Framework to upload translated personalizations.
Please go through this page, if it helps:
http://apps2fusion.com/at/46-an/240-translating-personalizations-stored-in-mds
Thanks
AJ -
Index not getting used in spite of hints
It's Oracle 10g Release 10.2.0.4.0.
Hi All,
I have this query in which there are indexes on the Instrument table like this:
Instrument:
idx 1: (INSTRUMENT_ID, END_COB_DATE, CLOSE_ACTION_ID, PRODUCT_SUB_TYPE_ID, BEGIN_COB_DATE)
idx 2: (INSTRUMENT_ID, INSTRUMENT_VN, END_COB_DATE, CLOSE_ACTION_ID)
idx 3: (CLOSE_ACTION_ID, END_COB_DATE)
I tried all the possible ways, but none of the indexes are getting used, causing full table scans of this table. I need some guidance on how I can avoid this FTS so the query can run fast and use the index on the Instrument table:
query:
select distinct i.instrument_id,
i.name,
case
when (mn2.display_name != 'DEBT PRIORITY CLASS' and
mn2.display_name is not null) then
mn2.display_name
else
mn1.display_name
end "DEBT_PRIORITY_CLASS"
from instrument i, inst_debt id
left join marsnode mn1 on (id.debt_priority_class_id = mn1.node_id and
mn1.close_date is null and
mn1.type_id = 58412926883279)
left join marsnodelink mnl1 on (mn1.node_id = mnl1.node_id and
mnl1.close_date is null and
mnl1.begin_cob_date <=
TO_DATE('27-Oct-2010', 'DD-Mon-YYYY') and
mnl1.end_cob_date >
TO_DATE('27-Oct-2010', 'DD-Mon-YYYY'))
left join marsnode mn2 on (mnl1.parent_id = mn2.node_id and
mn2.close_date is null and
mn2.type_id = 58412926883279)
where i.instrument_id = id.instrument_id
and i.instrument_vn = id.instrument_vn
AND i.end_cob_date > TO_DATE('27-Oct-2010', 'DD-Mon-YYYY')
AND i.close_action_id is null
AND i.product_sub_type_id = 3
AND i.begin_cob_date <= TO_DATE('27-Oct-2010', 'DD-Mon-YYYY')
This is the execution plan:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2026K| 407M| | 509K (20)|
| 1 | HASH UNIQUE | | 2026K| 407M| 879M| 509K (20)|
|* 2 | HASH JOIN RIGHT OUTER | | 2026K| 407M| | 426K (23)|
|* 3 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 501 | 23046 | | 239 (3)|
|* 4 | INDEX RANGE SCAN | FKI_38576_TYPE_ID | 10159 | | | 34 (6)|
|* 5 | HASH JOIN RIGHT OUTER | | 2026K| 318M| | 425K (23)|
|* 6 | TABLE ACCESS FULL | MARSNODELINK | 330 | 15510 | | 6560 (16)|
|* 7 | HASH JOIN RIGHT OUTER | | 2026K| 228M| | 419K (23)|
|* 8 | TABLE ACCESS BY INDEX ROWID| MARSNODE | 501 | 23046 | | 239 (3)|
|* 9 | INDEX RANGE SCAN | FKI_38576_TYPE_ID | 10159 | | | 34 (6)|
|* 10 | HASH JOIN | | 2026K| 139M| 34M| 418K (23)|
| 11 | TABLE ACCESS FULL | INST_DEBT | 1031K| 22M| | 1665 (30)|
|* 12 | TABLE ACCESS FULL | INSTRUMENT | 2062K| 96M| | 413K (23)|
--------------------------------------------------------------------------------------------------
predicate info:
2 - access("MNL1"."PARENT_ID"="MN2"."NODE_ID"(+))
3 - filter("MN2"."CLOSE_DATE"(+) IS NULL)
4 - access("MN2"."TYPE_ID"(+)=58412926883279)
5 - access("MN1"."NODE_ID"="MNL1"."NODE_ID"(+))
6 - filter("MNL1"."CLOSE_DATE"(+) IS NULL AND "MNL1"."END_COB_DATE"(+)>TO_DATE('
2010-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "MNL1"."BEGIN_COB_DATE"(+)<=TO_DATE('
2010-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("ID"."DEBT_PRIORITY_CLASS_ID"="MN1"."NODE_ID"(+))
8 - filter("MN1"."CLOSE_DATE"(+) IS NULL)
9 - access("MN1"."TYPE_ID"(+)=58412926883279)
10 - access("I"."INSTRUMENT_ID"="ID"."INSTRUMENT_ID" AND
"I"."INSTRUMENT_VN"="ID"."INSTRUMENT_VN")
12 - filter("I"."PRODUCT_SUB_TYPE_ID"=3 AND "I"."CLOSE_ACTION_ID" IS NULL AND
"I"."END_COB_DATE">TO_DATE(' 2010-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."BEGIN_COB_DATE"<=TO_DATE(' 2010-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))Regards,
AashishAashish S. wrote:
I tried all the possible ways but none of the indexes are getting used causing full table scans of this table. I need some guidance on how can I avoid this FTS so the query can run fast and use the index on Instrument table:I assume the last part of the above statement is what you actually need to achieve (i.e. improve execution time of the query) and the query not using index is what you think the "cause" for the actual "problem". I will try to answer the actual "problem". Based on what you have posted, some observations/suggestions
1) Your plan shows the query is expected to retrieve 2026K rows. Are you sure you need to retrieve that many records? You may want to revisit the "requirement" here.
2) Continuing the above point, you may want to post details of how long the query currently takes to execute and how long you expect it to take. Another very important detail is how you are measuring the query execution time. With that huge number of records, it is quite possible that more time is being spent just transferring the query results to the "client" than the server actually takes to execute the query.
3) If what you have posted is the order of columns in the indexes on INSTRUMENT table, then which index do you think will help the query execution and how? The order of columns suggest that none of the indexes will be good enough and that seems to be the right choice.
4) Your predicate section states that the filter predicate on the INSTRUMENT table generates 2062K rows. How many records exist in the INSTRUMENT table? You would need many times more records in the table (besides other factors like ordering of table data etc.) to justify indexed access for such a huge number of rows.
5) Finally, you may want to verify whether the statistics on tables and indexes used by the query are up-to-date.
Hope this helps. -
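As a sketch of point 5, statistics can be refreshed with the DBMS_STATS package. The schema name below is a hypothetical placeholder (it does not appear in the thread); the table names are taken from the plan:

```sql
-- Refresh optimizer statistics for the INSTRUMENT table.
-- 'APP_OWNER' is an illustrative schema name -- substitute your own.
-- cascade => TRUE also gathers statistics on the table's indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',
    tabname          => 'INSTRUMENT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE
  );
END;
/

-- Check when statistics were last gathered:
SELECT table_name, num_rows, last_analyzed
FROM   user_tables
WHERE  table_name IN ('INSTRUMENT', 'INST_DEBT');
```

If LAST_ANALYZED is old or NUM_ROWS is far from the real row count, the optimizer's cardinality estimates (and hence its plan choices) can be badly off.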
Hi,
Oracle version 10g
I have created a partitioned table and it has the below columns:
The table is partitioned on the created_date.
How data gets inserted into this table:
we have a staging table which gets populated first, and then the partitioned table is populated using exchange partition.
The statement which populates rel_table:
' ALTER TABLE ' || table1 ||
' EXCHANGE PARTITION ' ||partitionname ||
' WITH TABLE ' || table2 ||
' WITH VALIDATION UPDATE GLOBAL INDEXES';
after the exchange partition, the indexes get rebuilt
The table would have millions of records.
Now if I issue a query:
plz help...

But the indexed column is used in the join, so the index should get used, right?
Not necessarily. How many rows are in the table? How many rows have that column > 0?
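A quick way to sanity-check the selectivity the responder is asking about is to count matching rows. The table and column names below are hypothetical placeholders:

```sql
-- If rows_gt_zero is a large fraction of total_rows, a full table
-- scan is usually cheaper than indexed access, and the optimizer
-- is right to ignore the index.
SELECT COUNT(*)                             AS total_rows,
       COUNT(CASE WHEN col > 0 THEN 1 END)  AS rows_gt_zero
FROM   my_partitioned_table;
```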
Please read these:
When your query takes too long
When your query takes too long ...
How to Post a SQL statement tuning request
HOW TO: Post a SQL statement tuning request - template posting -
Color Scheme Updates Not Working
Color schemes are hugely helpful for me. I recently migrated from a Win7 machine to a Mac and have been trying to change the color scheme. I uploaded a dark scheme I found, and that worked okay, but when I go in to make some additional tweaks, they simply don't save. I'll make the change > click OK > go back to the Preferences menu, review the change I made, but it won't show. As if the change didn't happen at all.
Quitting the app after making the change doesn't seem to help either.
Anyone experience anything like this?
Thanks!

I may be a little confused, but if you are working with a gray or white layer the color burn will not work, as there is no color to work on. Does that make sense?
-
Custom page layouts are not getting displayed in ribbon
Hi,
I have created a custom page layout under welcome category. This is getting displayed in pages library while creating a page and also in page layout option from site settings which is fine. But when a page is in edit mode if we click page->page layouts
from ribbon, all the default page layouts are getting displayed except my custom page layout.
Please advise.
Regards,
Chaitanya.

• you have a subsite in a site collection
• the publishing feature is activated in the subsite, but not in the site collection
• you want to edit/add page layouts at the site collection level, to be used within your subsite
• when you open the site collection in SharePoint Designer, you can't see Page Layouts in the Site Objects panel
You now have 2 options: you can either activate the publishing feature in the site collection (and the Page Layouts link comes back), or you can use the All Files link and browse to the master pages and page layouts library (_catalogs > masterpage). Either option will do, unless you really don't want the publishing feature in the site collection.
To fix the issue go to Site Actions -> Site Settings in the upper left corner.
Under "Site Collection Administration" click on "Site Collection Features".
Look for "SharePoint Server Publishing Infrastructure" and activate it. It might take a moment to load.
Next return to "Site Settings" and click on "Manage Site Features"
Look for "SharePoint Server Publishing" and activate it. It might take a moment to load.
If this helped you resolve your issue, please mark it Answered -
Hi, I'm using an hp 1514n for printing photos. It's attached to my Win7 64bit computer. But the colors are not suitable. Printing is in high quality but the colors are wrong (i.e. instead of a blue sky I get turquoise).
All patches installed and also the latest drivers. What else can I do?
The files are ok, because I get the right colors when printing the files at another hp color laser jet with Win7 (32 bit).
Where can I get a suitable icc file?

I'd use the GutenPrint drivers. They're usually better than the drivers that HP ships, anyway, and the latest version of GutenPrint will work with Leopard, while the HP drivers for a printer that old will not. The drivers at <http://www.linuxfoundation.org/en/OpenPrinting/MacOSX/hpijs> also explicitly support the DJ960C. Those drivers are usually written by HP employees, are often superior to the drivers HP uses, and, like the GutenPrint drivers, are free.
-
Sap customizing request pop-up not getting generated
Hello Experts,
I am facing a problem while generating a customizing transport request: the transport request creation pop-up is not appearing.
Checked in scc4 - customizing changes allowed + automatic recording is set and client role is customizing, but still I am unable to see any pop-up for the transport request creation. Please help!
Regards
Shubhangi Pandey

Hi,
You might have more success raising this question in the relevant functional forum, transaction OOEG does not seem to use generated table maintenance but here on the ABAP development forum you're pretty likely to keep getting the same answer.
Regards,
Nick