Query on Performance (Which Select is better)
Hi All,
Database Version: 11.2.0.3.0
SELECT CSU.LOCATION ship_to_number,
(SELECT location
FROM seacds.hz_cust_site_uses_all_nv
WHERE SITE_USE_ID = CSU.BILL_TO_SITE_USE_ID
AND SITE_USE_CODE = 'BILL_TO'
AND CSU.STATUS = 'A') bill_to_num,
flv.lookup_code Corp_Code,
(SELECT cust_type
FROM seagrs.seagrs_customer_profile
WHERE ship_to_num = csu.location) customer_type,
p.party_name customer_name,
l.address1 address,
l.city city,
l.state state,
l.country country,
csu.location ship_to_num
FROM seacds.hz_cust_site_uses_all_nv csu,
seacds.hz_party_sites_nv ps,
seacds.hz_parties_nv p,
seacds.hz_cust_acct_sites_all_nv cas,
seacds.hz_locations_nv l,
seacds.fnd_lookup_values_nv flv
WHERE csu.site_use_code = 'SHIP_TO'
AND csu.status = 'A'
AND ps.party_site_id = cas.party_site_id
AND cas.cust_acct_site_id = csu.cust_acct_site_id
AND flv.description = p.party_name
AND csu.location = '44408001'
AND PS.STATUS = 'A'
AND ps.location_id = l.location_id
and ps.party_id = p.party_id
AND csu.org_id in (SELECT org.operating_unit
from seacds.mtl_parameters_nv mpn,
seacds.org_organization_definition_jv org
WHERE mpn.organization_id = org.organization_id
AND mpn.organization_code in
(SELECT erp_org_code
FROM seagrs.seagrs_rtn_location
where rtn_loc_type = 'DC'));
SELECT p.party_number, lvn.lookup_code, lcsu.location
FROM seacds.hz_cust_site_uses_all_nv csu,
seacds.hz_cust_site_uses_all_nv lcsu,
seacds.hz_party_sites_nv ps,
seacds.hz_parties_nv p,
seacds.hz_cust_acct_sites_all_nv cas,
seacds.hz_locations_nv l,
seacds.fnd_lookup_values_nv lvn
WHERE csu.site_use_code = 'SHIP_TO'
AND csu.status = 'A'
AND ps.party_site_id = cas.party_site_id
AND cas.cust_acct_site_id = csu.cust_acct_site_id
AND csu.location = '44408001' --p_ship_to_num_in
AND ps.status = 'A'
AND ps.location_id = l.location_id
AND ps.party_id = p.party_id
AND lvn.lookup_type = 'SEAOE_CORP_CODES'
AND lvn.description = p.party_name
AND lcsu.site_use_id = csu.bill_to_site_use_id
AND lcsu.site_use_code = 'BILL_TO'
AND csu.org_id = (SELECT org.operating_unit
from seacds.mtl_parameters_nv mpn,
seacds.org_organization_definition_jv org
WHERE mpn.organization_id = org.organization_id
AND mpn.organization_code in
(SELECT erp_org_code
From Seagrs.Seagrs_Rtn_Location
WHERE rtn_loc_type = 'DC'));
In the above two queries, the first one takes 1.2 seconds and the second one takes 2 seconds. I believed the second one was written in an optimized way, so why does it take more time?
Thanks and Regards
Srinivas
Hi Karthik,
Agreed, I will perform the SQL trace, but the database is currently down for maintenance; I will get back once it's up. As you mentioned, the difference is negligible, but I wish to know which of the two is the more appropriate way of selecting the records. I will perform the SQL trace and get back. Thanks for your response.
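Besides a 10046 trace with tkprof, a hedged way to compare the two statements on 11.2 is to run each one with the GATHER_PLAN_STATISTICS hint and then display the actual row-source statistics of the last cursor (the query below is an illustrative fragment against one of the tables from the post, not the full statement):

```sql
-- Run the statement under test with the hint...
SELECT /*+ GATHER_PLAN_STATISTICS */ csu.location
FROM   seacds.hz_cust_site_uses_all_nv csu
WHERE  csu.location = '44408001';

-- ...then show the plan with actual rows, time, and buffer gets per step:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing A-Rows versus E-Rows and the Buffers column for each of the two plans usually explains why the scalar-subquery version and the join version differ in elapsed time.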
Thanks and Regards
Srinivas
Similar Messages
-
Selecting the better query from a performance point of view.
Hi friends,
I have a situation which the following example represents exactly.
we have a table marks.
Table Marks ( Student_No number,exam_code number, Mark_subject1 number, Mark_subject2 number, Mark_subject3 number).
What I want is to prepare a Results table in the following way: I need to insert one record per student per subject, containing the total marks for that subject.
Like
Marks:
Student_No   Exam_Code   Mark_Subject1   Mark_Subject2   Mark_Subject3
    1            1            10              15              12
    1            2            15              15              10
    2            1            10              10              10
    2            2            17              17              10
Then I want to populate the Results table with the following data:
Student   Subject    TotalMarks
   1      Subject1       25
   1      Subject2       30
   1      Subject3       22
   2      Subject1       27
   2      Subject2       27
   2      Subject3       20
This needs to be done within one procedure.
I can do it by two ways.
1)
insert into Results select student_no, 'Subject1',sum(Mark_Subject1) from marks group by student_no;
insert into Results select student_no,'Subject2',sum(Mark_Subject2) from Marks group by student_no;
insert into Results select student_no,'Subject3',sum(Mark_Subject3) from Marks group by student_no;
2)
For i in (select student_no, sum(mark_subject1) sub1, sum(mark_subject2) sub2, sum(mark_subject3) sub3 from marks group by student_no)
loop
insert into Results values(i.student_no,'Subject1',i.sub1);
insert into Results values(i.student_no,'Subject2',i.sub2);
insert into Results values(i.student_no,'Subject3',i.sub3);
end loop;
If we use the first way, the marks table is accessed three times and sorted for the GROUP BY each time, and all of the resulting rows are inserted.
If we use the second way, the marks table is accessed and grouped only once, but three single-row inserts are done for each record of the result set.
I am confused about which would be the better way, given that the marks table holds around 100,000 records.
Please help me decide the better way.
Regards,
Dipali.
I would avoid cursor FOR loops if at all possible.
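As a sketch of avoiding both the triple scan and the row-by-row loop, a single multitable INSERT ALL reads marks once and writes the three rows per student in one statement (the Results column names below are assumed from the post):

```sql
-- One pass over marks, three target rows per grouped source row.
INSERT ALL
  INTO results (student, subject, totalmarks) VALUES (student_no, 'Subject1', s1)
  INTO results (student, subject, totalmarks) VALUES (student_no, 'Subject2', s2)
  INTO results (student, subject, totalmarks) VALUES (student_no, 'Subject3', s3)
SELECT student_no,
       SUM(mark_subject1) s1,
       SUM(mark_subject2) s2,
       SUM(mark_subject3) s3
FROM   marks
GROUP  BY student_no;
```

This keeps the single GROUP BY pass of option 2 while staying fully set-based like option 1, so no PL/SQL loop is needed at all.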
-
How to find which SELECT statement performs better
hi gurus
Can anyone suggest: if we have two SELECT statements, how do we find which one performs better?
Thanks and regards,
kals.
Hi, check this:
1. A SELECT statement that uses the primary and secondary keys will give good performance.
2. SELECT ... UP TO 1 ROWS can perform better than SELECT SINGLE.
Go to ST05 and check the performance.
regards,
venkat -
If I have two options to select between XI or BW, which one is better and why
If I have two options to select between XI or BW, which one is better and why, both in terms of money and in terms of my career growth?
Sheetika,
XI if you are good in Java. The rest is the same for both XI and BW.
K.Kiran. -
Which method is better for performance?
Using SQL from the front end or using stored procedures at the back end: which method is better for performance?
jetq wrote:
In my view, there may be other differences. For example, with a stored procedure you don't need to recompile the statement every time it is executed, and the existing execution plan can be reused.
What if the procedure is called for the first time after a DB restart?
PL/SQL does not have EXPLAIN PLAN; only SQL does.
But using a SQL statement from the application layer may be different.
Different than what, exactly?
SQL is SQL and can only be executed by the SQL engine inside the DB.
A SQL statement does not know or care how it got to the DB.
The DB does not know or care where the SQL originated.
Same algorithm in a function and a procedure: which one will be better?
Why is a PL/SQL function better for computing a value than a procedure?
If I apply the same algorithm in a function and a procedure, which one will perform better?
It's not a matter of performance; it is more a matter of how it is going to be used.
A function can be used as an expression in an assignment or in a query.
my_var := my_func(my_param);
select my_func(my_col) from my_table;
But it can only return a single value (which can be a complex value like a nested table, object, or ref cursor, but still a single value).
A procedure is more often used to perform an action that does not return anything.
execute_invoicing(my_invoice_id);
Or procedures can be used if you need multiple return values.
my_proc(my_input, my_output_1, my_output_2, my_output_3);
But a procedure cannot be used in an assignment expression or a SELECT query.
Performance-wise, procedures and functions are completely identical. It is only a matter of what action they perform and how you are going to use them.
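A minimal sketch of the distinction, using hypothetical names (total_price, get_price, order_lines are illustrative, not from the post): the function can be called inline from a SQL query, while the procedure returns its result only through an OUT parameter.

```sql
-- Function: returns a single value, so it is usable inside SQL.
CREATE OR REPLACE FUNCTION total_price(p_qty NUMBER, p_unit NUMBER)
  RETURN NUMBER IS
BEGIN
  RETURN p_qty * p_unit;
END;
/
SELECT total_price(qty, unit_price) FROM order_lines;

-- Procedure: same algorithm, but callable only as a statement,
-- with the result delivered via an OUT parameter.
CREATE OR REPLACE PROCEDURE get_price(p_qty   IN  NUMBER,
                                      p_unit  IN  NUMBER,
                                      p_total OUT NUMBER) IS
BEGIN
  p_total := p_qty * p_unit;
END;
/
```

Both bodies compile to the same kind of PL/SQL code, which is why the performance is identical; only the calling contexts differ.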
Lower performance for select in our setup
We are currently doing performance testing of TimesTen with 2000 users. The application is a Java project deployed on a WebLogic server. We are seeing very poor performance with TimesTen: the response time for the same code is 0.116 seconds with the Oracle DB, but about 9 seconds with TimesTen.
We have tried both the client-server connection and the direct connection.
The SQL query is just a SELECT statement that gets the count of records from the database. Our requirement is read-only, and we are not writing anything in TimesTen. We cache data from the Oracle DB in TimesTen tables and run our query on it.
The details of the environment and the TimesTen database are as follows.
1.) TimesTen is installed on a RHEL 5 64-bit machine. The output of ttversion is:
TimesTen Release 11.2.1.9.0 (64 bit Linux/x86_64) (TTEAG:23388) 2012-03-19T21:35:54Z
Instance admin: tteag
Instance home directory: /timestendb/TimesTen/TTEAG
Group owner: ttadmin
Daemon home directory: /timestendb/TimesTen/TTEAG/info
PL/SQL enabled.
2.) This machine currently has 10 CPUs. The CPU details are:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
stepping : 2
cpu MHz : 3066.886
cache size : 12288 KB
physical id : 1
siblings : 5
core id : 0
cpu cores : 5
apicid : 32
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 6133.77
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
3.) The memory details for the machine are:
MemTotal: 148449320 kB
MemFree: 45912888 kB
Buffers: 941548 kB
Cached: 94945804 kB
SwapCached: 48 kB
Active: 93980700 kB
Inactive: 5289636 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148449320 kB
LowFree: 45912888 kB
SwapTotal: 147455984 kB
SwapFree: 147455732 kB
Dirty: 616 kB
Writeback: 412 kB
AnonPages: 3383108 kB
Mapped: 298540 kB
Slab: 2848180 kB
PageTables: 19772 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 221549572 kB
Committed_AS: 102509964 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 340988 kB
VmallocChunk: 34359395635 kB
HugePages_Total: 128
HugePages_Free: 96
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
4.) We have both permanent and temporary databases.
4a.) The DSN entry for the permanent TimesTen DB is:
[TTEAG]
Driver=/timestendb/TimesTen/TTEAG/lib/libtten.so
DataStore=/timesten03/TimesTen/database/TTEAG/TT_1121
LogDir=/timesten04/TimesTen/logs/TTEAG
PermSize=50000
TempSize=2000
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=EAG
Connections=2047
#MemoryLock=4
PLSQL=1
Output of monitor command for it
Command> monitor;
TIME_OF_1ST_CONNECT: Wed Sep 12 14:39:40 2012
DS_CONNECTS: 1574
DS_DISCONNECTS: 799
DS_CHECKPOINTS: 746
DS_CHECKPOINTS_FUZZY: 746
DS_COMPACTS: 0
PERM_ALLOCATED_SIZE: 51200000
PERM_IN_USE_SIZE: 70188
PERM_IN_USE_HIGH_WATER: 70188
TEMP_ALLOCATED_SIZE: 2048000
TEMP_IN_USE_SIZE: 26561
TEMP_IN_USE_HIGH_WATER: 869386
SYS18: 0
TPL_FETCHES: 0
TPL_EXECS: 0
CACHE_HITS: 0
PASSTHROUGH_COUNT: 0
XACT_BEGINS: 738998
XACT_COMMITS: 739114
XACT_D_COMMITS: 0
XACT_ROLLBACKS: 0
LOG_FORCES: 746
DEADLOCKS: 0
LOCK_TIMEOUTS: 0
LOCK_GRANTS_IMMED: 765835
LOCK_GRANTS_WAIT: 0
SYS19: 0
CMD_PREPARES: 13
CMD_REPREPARES: 0
CMD_TEMP_INDEXES: 0
LAST_LOG_FILE: 1
REPHOLD_LOG_FILE: -1
REPHOLD_LOG_OFF: -1
REP_XACT_COUNT: 0
REP_CONFLICT_COUNT: 0
REP_PEER_CONNECTIONS: 0
REP_PEER_RETRIES: 0
FIRST_LOG_FILE: 1
LOG_BYTES_TO_LOG_BUFFER: 305952
LOG_FS_READS: 0
LOG_FS_WRITES: 747
LOG_BUFFER_WAITS: 0
CHECKPOINT_BYTES_WRITTEN: 955832
CURSOR_OPENS: 727934
CURSOR_CLOSES: 728366
SYS3: 0
SYS4: 0
SYS5: 0
SYS6: 0
CHECKPOINT_BLOCKS_WRITTEN: 6739
CHECKPOINT_WRITES: 9711
REQUIRED_RECOVERY: 0
SYS11: 0
SYS12: 1
TYPE_MODE: 0
SYS13: 0
SYS14: 0
SYS15: 0
SYS16: 0
SYS17: 0
SYS9:
4b.) The DSN of the temporary DB is:
[TTEAGTMP]
Driver=/timestendb/TimesTen/TTEAG/lib/libtten.so
DataStore=/timesten03/TimesTen/database/TTEAGTMP/TT_1121
LogDir=/timesten04/TimesTen/logs/TTEAGTMP
Temporary=1
AutoCreate=1
PermSize=20000
TempSize=20000
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=EAG
Connections=2047
#MemoryLock=4
PLSQL=2
PLSQL_MEMORY_ADDRESS=20000000
The output of monitor command is :-
Command> monitor;
TIME_OF_1ST_CONNECT: Tue Sep 11 14:00:34 2012
DS_CONNECTS: 4609
DS_DISCONNECTS: 4249
DS_CHECKPOINTS: 894
DS_CHECKPOINTS_FUZZY: 893
DS_COMPACTS: 0
PERM_ALLOCATED_SIZE: 20480000
PERM_IN_USE_SIZE: 70198
PERM_IN_USE_HIGH_WATER: 70560
TEMP_ALLOCATED_SIZE: 20480000
TEMP_IN_USE_SIZE: 15856
TEMP_IN_USE_HIGH_WATER: 326869
SYS18: 0
TPL_FETCHES: 0
TPL_EXECS: 0
CACHE_HITS: 0
PASSTHROUGH_COUNT: 0
XACT_BEGINS: 1005281
XACT_COMMITS: 1005661
XACT_D_COMMITS: 0
XACT_ROLLBACKS: 6
LOG_FORCES: 8
DEADLOCKS: 0
LOCK_TIMEOUTS: 8
LOCK_GRANTS_IMMED: 2031645
LOCK_GRANTS_WAIT: 2
SYS19: 0
CMD_PREPARES: 149
CMD_REPREPARES: 0
CMD_TEMP_INDEXES: 0
LAST_LOG_FILE: 0
REPHOLD_LOG_FILE: -1
REPHOLD_LOG_OFF: -1
REP_XACT_COUNT: 0
REP_CONFLICT_COUNT: 0
REP_PEER_CONNECTIONS: 0
REP_PEER_RETRIES: 0
FIRST_LOG_FILE: 0
LOG_BYTES_TO_LOG_BUFFER: 12515480
LOG_FS_READS: 0
LOG_FS_WRITES: 36
LOG_BUFFER_WAITS: 0
CHECKPOINT_BYTES_WRITTEN: 0
CURSOR_OPENS: 928766
CURSOR_CLOSES: 929343
SYS3: 0
SYS4: 0
SYS5: 0
SYS6: 0
CHECKPOINT_BLOCKS_WRITTEN: 0
CHECKPOINT_WRITES: 0
REQUIRED_RECOVERY: 0
SYS11: 0
SYS12: 1
TYPE_MODE: 0
SYS13: 0
SYS14: 0
SYS15: 0
SYS16: 0
SYS17: 0
SYS9:
5.) The installed WebLogic is version 10.3.4. There are three managed servers in it, and they are in a cluster. The TimesTen data source is created in WebLogic. The application uses the JNDI name to connect to the data source via the JDBC driver.
6.) Cache groups
We have 7 cache groups created for the 7 tables. Indexes have also been created on the TimesTen tables, matching the source Oracle DB from which the data is cached.
The cache group details are:
Cache Group CACHEADM.PROCESS_INSTANCE_B_T_UG:
Cache Group Type: User Managed
Autorefresh: Yes
Autorefresh Mode: Incremental
Autorefresh State: On
Autorefresh Interval: 1 Minute
Autorefresh Status: ok
Aging: No aging defined
Root Table: EAGBE00.PROCESS_INSTANCE_B_T
Table Type: Propagate
7.) For the client-server connection: on the client machine, WebLogic 10.3.4 is installed with two managed servers.
The ttversion for the TimesTen client is:
TimesTen Release 11.2.1.9.0 (64 bit HPUX/IPF) (TTEAG) 2012-03-20T00:45:11Z
Instance home directory: /globalapp/app/TimesTen/TTEAG
Group owner: oinstall
8.) The oracle database is 10.2.0.4.0
Please let us know if you see anything wrong with this setup or if we are doing something wrong.
Please find the details as follows, split into two posts:
1. Full cache group definition, including any additional indexes you have created (e.g. ones from the Oracle DB)
a.) Cache group details – total 7
create USERMANAGED cache group TASK_INSTANCE_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.TASK_INSTANCE_T
TKIID varbinary(16) not null primary key,
NAME VARCHAR2(220) not null,
NAMESPACE VARCHAR2(254) not null,
TKTID varbinary(16),
TOP_TKIID varbinary(16) not null,
FOLLOW_ON_TKIID varbinary(16),
APPLICATION_NAME VARCHAR2(220),
APPLICATION_DEFAULTS_ID varbinary(16),
CONTAINMENT_CONTEXT_ID varbinary(16),
PARENT_CONTEXT_ID varbinary(16),
STATE NUMBER(10) not null,
KIND NUMBER(10) not null,
AUTO_DELETE_MODE NUMBER(10) not null,
HIERARCHY_POSITION NUMBER(10) not null,
TYPE VARCHAR2(254),
SVTID varbinary(16),
SUPPORTS_CLAIM_SUSPENDED NUMBER(5) not null,
SUPPORTS_AUTO_CLAIM NUMBER(5) not null,
SUPPORTS_FOLLOW_ON_TASK NUMBER(5) not null,
IS_AD_HOC NUMBER(5) not null,
IS_ESCALATED NUMBER(5) not null,
IS_INLINE NUMBER(5) not null,
IS_SUSPENDED NUMBER(5) not null,
IS_WAITING_FOR_SUBTASK NUMBER(5) not null,
SUPPORTS_DELEGATION NUMBER(5) not null,
SUPPORTS_SUB_TASK NUMBER(5) not null,
IS_CHILD NUMBER(5) not null,
HAS_ESCALATIONS NUMBER(5),
START_TIME TIMESTAMP(6),
ACTIVATION_TIME TIMESTAMP(6),
LAST_MODIFICATION_TIME TIMESTAMP(6),
LAST_STATE_CHANGE_TIME TIMESTAMP(6),
COMPLETION_TIME TIMESTAMP(6),
DUE_TIME TIMESTAMP(6),
EXPIRATION_TIME TIMESTAMP(6),
FIRST_ACTIVATION_TIME TIMESTAMP(6),
DEFAULT_LOCALE VARCHAR2(32),
DURATION_UNTIL_DELETED VARCHAR2(254),
DURATION_UNTIL_DUE VARCHAR2(254),
DURATION_UNTIL_EXPIRES VARCHAR2(254),
CALENDAR_NAME VARCHAR2(254),
JNDI_NAME_CALENDAR VARCHAR2(254),
JNDI_NAME_STAFF_PROVIDER VARCHAR2(254),
CONTEXT_AUTHORIZATION NUMBER(10) not null,
ORIGINATOR VARCHAR2(128),
STARTER VARCHAR2(128),
OWNER VARCHAR2(128),
ADMIN_QTID varbinary(16),
EDITOR_QTID varbinary(16),
POTENTIAL_OWNER_QTID varbinary(16),
POTENTIAL_STARTER_QTID varbinary(16),
READER_QTID varbinary(16),
PRIORITY NUMBER(10),
SCHEDULER_ID VARCHAR2(254),
SERVICE_TICKET VARCHAR2(254),
EVENT_HANDLER_NAME VARCHAR2(64),
BUSINESS_RELEVANCE NUMBER(5) not null,
RESUMES TIMESTAMP(6),
SUBSTITUTION_POLICY NUMBER(10) not null,
DELETION_TIME TIMESTAMP(6),
VERSION_ID NUMBER(5) not null,
PROPAGATE
create USERMANAGED cache group WORK_ITEM_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.WORK_ITEM_T
WIID varbinary(16) not null primary key,
PARENT_WIID varbinary(16),
OWNER_ID VARCHAR2(128),
GROUP_NAME VARCHAR2(128),
EVERYBODY NUMBER(5) not null,
EXCLUDE NUMBER(5) not null,
QIID varbinary(16),
OBJECT_TYPE NUMBER(10) not null,
OBJECT_ID varbinary(16) not null,
ASSOCIATED_OBJECT_TYPE NUMBER(10) not null,
ASSOCIATED_OID varbinary(16),
REASON NUMBER(10) not null,
CREATION_TIME TIMESTAMP(6) not null,
KIND NUMBER(10) not null,
AUTH_INFO NUMBER(10) not null,
VERSION_ID NUMBER(5) not null,
PROPAGATE
create USERMANAGED cache group RETRIEVED_USER_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.RETRIEVED_USER_T
QIID varbinary(16) not null,
OWNER_ID VARCHAR2(128) not null,
REASON NUMBER(10) not null,
ASSOCIATED_OID varbinary(16),
VERSION_ID NUMBER(5) not null,
primary key (QIID, OWNER_ID),
PROPAGATE
create USERMANAGED cache group PROCESS_INSTANCE_B_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.PROCESS_INSTANCE_B_T
PIID varbinary(16) not null primary key,
PTID varbinary(16) not null,
STATE NUMBER(10) not null,
PENDING_REQUEST NUMBER(10) not null,
CREATED TIMESTAMP(6),
STARTED TIMESTAMP(6),
COMPLETED TIMESTAMP(6),
LAST_STATE_CHANGE TIMESTAMP(6),
LAST_MODIFIED TIMESTAMP(6),
NAME VARCHAR2(220) not null,
PARENT_NAME VARCHAR2(220),
TOP_LEVEL_NAME VARCHAR2(220) not null,
COMPENSATION_SPHERE_NAME VARCHAR2(100),
STARTER VARCHAR2(128),
DESCRIPTION VARCHAR2(254),
INPUT_SNID varbinary(16),
INPUT_ATID varbinary(16),
INPUT_VTID varbinary(16),
OUTPUT_SNID varbinary(16),
OUTPUT_ATID varbinary(16),
OUTPUT_VTID varbinary(16),
FAULT_NAME VARCHAR2(254),
TOP_LEVEL_PIID varbinary(16) not null,
PARENT_PIID varbinary(16),
PARENT_AIID varbinary(16),
TKIID varbinary(16),
TERMIN_ON_REC NUMBER(5) not null,
AWAITED_SUB_PROC NUMBER(5) not null,
IS_CREATING NUMBER(5) not null,
PREVIOUS_STATE NUMBER(10),
EXECUTING_ISOLATED_SCOPE NUMBER(5) not null,
SCHEDULER_TASK_ID VARCHAR2(254),
RESUMES TIMESTAMP(6),
PENDING_SKIP_REQUEST NUMBER(5) not null,
UNHANDLED_EXCEPTION VARBINARY(16),
VERSION_ID NUMBER(5) not null,
PROPAGATE
create USERMANAGED cache group PROCESS_TEMPLATE_B_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.PROCESS_TEMPLATE_B_T
PTID varbinary(16) not null primary key,
NAME VARCHAR2(220) not null,
DEFINITION_NAME VARCHAR2(220),
DISPLAY_NAME VARCHAR2(64),
APPLICATION_NAME VARCHAR2(220),
DISPLAY_ID NUMBER(10) not null,
DESCRIPTION VARCHAR2(254),
DOCUMENTATION varchar2(4),
EXECUTION_MODE NUMBER(10) not null,
IS_SHARED NUMBER(5) not null,
IS_AD_HOC NUMBER(5) not null,
STATE NUMBER(10) not null,
VALID_FROM TIMESTAMP(6) not null,
TARGET_NAMESPACE VARCHAR2(250),
CREATED TIMESTAMP(6) not null,
AUTO_DELETE NUMBER(5) not null,
EXTENDED_AUTO_DELETE NUMBER(10) not null,
VERSION VARCHAR2(32),
SCHEMA_VERSION NUMBER(10) not null,
ABSTRACT_BASE_NAME VARCHAR2(254),
S_BEAN_LOOKUP_NAME VARCHAR2(254),
S_BEAN60_LOOKUP_NAME VARCHAR2(254),
E_BEAN_LOOKUP_NAME VARCHAR2(254),
PROCESS_BASE_NAME VARCHAR2(254),
S_BEAN_HOME_NAME VARCHAR2(254),
E_BEAN_HOME_NAME VARCHAR2(254),
BPEWS_UTID varbinary(16),
WPC_UTID varbinary(16),
BUSINESS_RELEVANCE NUMBER(5) not null,
ADMINISTRATOR_QTID varbinary(16),
READER_QTID varbinary(16),
A_TKTID varbinary(16),
A_TKTIDFOR_ACTS varbinary(16),
COMPENSATION_SPHERE NUMBER(10) not null,
AUTONOMY NUMBER(10) not null,
CAN_CALL NUMBER(5) not null,
CAN_INITIATE NUMBER(5) not null,
CONTINUE_ON_ERROR NUMBER(5) not null,
IGNORE_MISSING_DATA NUMBER(10) not null,
PROPAGATE
create USERMANAGED cache group TASK_TEMPL_LDESC_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.TASK_TEMPL_LDESC_T
TKTID varbinary(16) not null,
LOCALE VARCHAR2(32) not null,
CONTAINMENT_CONTEXT_ID varbinary(16) not null,
DISPLAY_NAME VARCHAR2(64),
DESCRIPTION VARCHAR2(254),
DOCUMENTATION varchar2(4) ,
primary key (TKTID, LOCALE),
PROPAGATE
create USERMANAGED cache group QUERY_VAR_INSTANCE_T_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON
from eagbe00.QUERYABLE_VARIABLE_INSTANCE_T
PKID varbinary(16) not null primary key,
CTID varbinary(16) not null,
PIID varbinary(16) not null,
PAID varbinary(16) not null,
VARIABLE_NAME VARCHAR2(254) not null,
PROPERTY_NAME VARCHAR2(255) not null,
PROPERTY_NAMESPACE VARCHAR2(254) not null,
TYPE NUMBER(10) not null,
GENERIC_VALUE VARCHAR2(512),
STRING_VALUE VARCHAR2(512),
NUMBER_VALUE NUMBER(20),
DECIMAL_VALUE NUMBER,
TIMESTAMP_VALUE TIMESTAMP(6),
VERSION_ID NUMBER(5) not null,
PROPAGATE
select count(1) from eagbe00.task, eagbe00.work_item, eagbe00.process_instance pi,eagbe00.TASK_TEMPL_DESC
where WORK_ITEM.OBJECT_ID = TASK.TKIID AND TASK.TKTID = TASK_TEMPL_DESC.TKTID
AND pi.piid = TASK.containment_ctx_id AND TASK.KIND = 105 AND TASK.STATE = 2
and WORK_ITEM.REASON IN (1,4) and TASK.IS_ESCALATED = 0 AND TASK.SUSPENDED = 0
and TASK.IS_INLINE = 1
AND WORK_ITEM.OWNER_ID = '169403'
SELECT count(1 )
FROM (SELECT DISTINCT TA.TKIID , TA.ACTIVATED , TA.COMPLETED , TTD.DISPLAY_NAME , TA.ORIGINATOR ,
TA.STATE ,QP1.NAME ,QP1.STRING_VALUE ,QP2.NAME AS NAME1,QP2.STRING_VALUE AS STRING_VALUE1,
QP3.NAME AS NAME2,QP3.STRING_VALUE AS STRING_VALUE2,QP4.NAME AS NAME3,QP4.STRING_VALUE AS STRING_VALUE3,
QP5.NAME AS NAME4,QP5.STRING_VALUE AS STRING_VALUE4
FROM EAGBE00.TASK TA
LEFT JOIN EAGBE00.QUERY_PROPERTY QP3 ON (TA.CONTAINMENT_CTX_ID = QP3.PIID)
LEFT JOIN EAGBE00.QUERY_PROPERTY QP5 ON (TA.CONTAINMENT_CTX_ID = QP5.PIID)
LEFT JOIN EAGBE00.QUERY_PROPERTY QP4 ON (TA.CONTAINMENT_CTX_ID = QP4.PIID)
LEFT JOIN EAGBE00.QUERY_PROPERTY QP2 ON (TA.CONTAINMENT_CTX_ID = QP2.PIID)
LEFT JOIN EAGBE00.QUERY_PROPERTY QP1 ON (TA.CONTAINMENT_CTX_ID = QP1.PIID),
EAGBE00.WORK_ITEM WI,
EAGBE00.TASK_TEMPL_DESC TTD
WHERE (WI.OBJECT_ID = TA.TKIID AND TA.TKTID = TTD.TKTID) AND (TA.KIND IN (105 ) and TA.STATE IN (2 ,8 )
and TA.IS_ESCALATED =0 and TA.SUSPENDED =0 and TA.IS_INLINE =1 and WI.REASON IN (1 ,4 )
AND WI.EVERYBODY =0 and QP1.NAME ='starter'
and QP2.NAME ='applicationName' and upper(QP2.STRING_VALUE) like '%GESS%'
and QP3.NAME ='subType' and QP4.NAME ='description' and QP5.NAME ='additionalInfo' and WI.OWNER_ID ='169403' )
ORDER BY TA.ACTIVATED DESC)
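A hedged way to see how TimesTen executes statements like the two above is ttIsql's built-in explain command; the fragment below is a simplified version of the first query (table and column names as in the post, untested here):

```sql
-- In ttIsql, prefixing a statement with EXPLAIN prints the optimizer's
-- chosen plan, showing which indexes (if any) each step uses:
EXPLAIN SELECT COUNT(1)
FROM   eagbe00.work_item wi, eagbe00.task ta
WHERE  wi.object_id = ta.tkiid
AND    wi.owner_id  = '169403';
```

Since the WORK_ITEM and TASK views union and project several branches of the underlying _T tables, a full scan in any one branch (for lack of a usable index on the join or filter columns) may account for the 9-second response time.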
CREATE USERMANAGED CACHE GROUP WORK_ITEM_TIMESTEN_UG
AUTOREFRESH MODE INCREMENTAL INTERVAL 1 MINUTES STATE ON FROM
EAGBE00.WORK_ITEM_TIMESTEN
(WIID VARBINARY(16) ,
PARENT_WIID VARBINARY(16),
OWNER_ID VARCHAR2(128),
GROUP_NAME VARCHAR2(128),
EVERYBODY NUMBER(5) ,
EXCLUDE NUMBER(5) ,
QIID VARBINARY(16),
OBJECT_TYPE NUMBER(10) ,
OBJECT_ID VARBINARY(16) ,
ASSOCIATED_OBJECT_TYPE NUMBER(10) ,
ASSOCIATED_OID VARBINARY(16),
REASON NUMBER(10) ,
CREATION_TIME TIMESTAMP(6) ,
KIND NUMBER(10) ,
AUTH_INFO NUMBER(10) ,
VERSION_ID NUMBER(5),
SEQ_NO_PK NUMBER(6),
primary key (SEQ_NO_PK),
PROPAGATE
b.) Indexes
create index EAGBE00.TI_ACOID on EAGBE00.TASK_INSTANCE_T (APPLICATION_DEFAULTS_ID);
create index EAGBE00.TI_CCID on EAGBE00.TASK_INSTANCE_T (CONTAINMENT_CONTEXT_ID);
create index EAGBE00.TI_NAME on EAGBE00.TASK_INSTANCE_T (NAME);
create index EAGBE00.TI_PARENT on EAGBE00.TASK_INSTANCE_T (PARENT_CONTEXT_ID);
create index EAGBE00.TI_SERVICET on EAGBE00.TASK_INSTANCE_T (SERVICE_TICKET);
create index EAGBE00.TI_STATE on EAGBE00.TASK_INSTANCE_T (STATE);
create index EAGBE00.TI_ST_KND_TI_NAME on EAGBE00.TASK_INSTANCE_T (STATE, KIND, TKIID, NAME);
create index EAGBE00.TI_TI_KND_ST on EAGBE00.TASK_INSTANCE_T (TKIID, KIND, STATE);
create index EAGBE00.TI_TK_TOPTK on EAGBE00.TASK_INSTANCE_T (TKTID, TKIID, TOP_TKIID);
create index EAGBE00.TI_TOPTKIID on EAGBE00.TASK_INSTANCE_T (TOP_TKIID);
create index EAGBE00.TI_TT_KND on EAGBE00.TASK_INSTANCE_T (TKTID, KIND);
create index EAGBE00.WI_ASSOBJ_REASON on EAGBE00.WORK_ITEM_T (ASSOCIATED_OID, ASSOCIATED_OBJECT_TYPE, REASON, PARENT_WIID);
create index EAGBE00.WI_AUTH_E on EAGBE00.WORK_ITEM_T (AUTH_INFO, EVERYBODY);
create index EAGBE00.WI_AUTH_G on EAGBE00.WORK_ITEM_T (AUTH_INFO, GROUP_NAME);
create index EAGBE00.WI_AUTH_GR_O_E on EAGBE00.WORK_ITEM_T (AUTH_INFO, GROUP_NAME, OWNER_ID, EVERYBODY);
create index EAGBE00.WI_AUTH_L on EAGBE00.WORK_ITEM_T (EVERYBODY, GROUP_NAME, OWNER_ID, QIID);
create index EAGBE00.WI_AUTH_O on EAGBE00.WORK_ITEM_T (AUTH_INFO, OWNER_ID DESC);
create index EAGBE00.WI_AUTH_R on EAGBE00.WORK_ITEM_T (AUTH_INFO, REASON DESC);
create index EAGBE00.WI_AUTH_U on EAGBE00.WORK_ITEM_T (AUTH_INFO, QIID);
create index EAGBE00.WI_GROUP_NAME on EAGBE00.WORK_ITEM_T (GROUP_NAME);
create index EAGBE00.WI_OBJID_TYPE_QIID on EAGBE00.WORK_ITEM_T (OBJECT_ID, OBJECT_TYPE, QIID);
create index EAGBE00.WI_OBJID_TYPE_REAS on EAGBE00.WORK_ITEM_T (OBJECT_ID, OBJECT_TYPE, REASON);
create index EAGBE00.WI_OT_OID_RS on EAGBE00.WORK_ITEM_T (OBJECT_TYPE, OBJECT_ID, REASON);
create index EAGBE00.WI_OWNER on EAGBE00.WORK_ITEM_T (OWNER_ID, OBJECT_ID, REASON, OBJECT_TYPE);
create index EAGBE00.WI_PARENT_WIID on EAGBE00.WORK_ITEM_T (PARENT_WIID);
create index EAGBE00.WI_QIID on EAGBE00.WORK_ITEM_T (QIID);
create index EAGBE00.WI_QI_OID_OWN on EAGBE00.WORK_ITEM_T (QIID, OBJECT_ID, OWNER_ID);
create index EAGBE00.WI_QI_OID_RS_OWN on EAGBE00.WORK_ITEM_T (QIID, OBJECT_ID, REASON, OWNER_ID);
create index EAGBE00.WI_QRY on EAGBE00.WORK_ITEM_T (OBJECT_ID, REASON, EVERYBODY, OWNER_ID);
create index EAGBE00.WI_REASON on EAGBE00.WORK_ITEM_T (REASON);
create index EAGBE00.WI_WI_QI on EAGBE00.WORK_ITEM_T (WIID, QIID);
create index EAGBE00.RUT_ASSOC on EAGBE00.RETRIEVED_USER_T (ASSOCIATED_OID);
create index EAGBE00.RUT_OWN_QIDESC on EAGBE00.RETRIEVED_USER_T (OWNER_ID, QIID DESC);
create index EAGBE00.RUT_OWN_QIID on EAGBE00.RETRIEVED_USER_T (OWNER_ID, QIID);
create index EAGBE00.RUT_QIID on EAGBE00.RETRIEVED_USER_T (QIID);
create unique index EAGBE00.PIB_NAME on EAGBE00.PROCESS_INSTANCE_B_T (NAME);
create index EAGBE00.PIB_PAP on EAGBE00.PROCESS_INSTANCE_B_T (PARENT_PIID);
create index EAGBE00.PIB_PAR on EAGBE00.PROCESS_INSTANCE_B_T (PARENT_AIID);
create index EAGBE00.PIB_PIID_PTID_STAT on EAGBE00.PROCESS_INSTANCE_B_T (PIID, PTID, STATE, STARTER, STARTED);
create index EAGBE00.PIB_PIID_STATE on EAGBE00.PROCESS_INSTANCE_B_T (PIID, STATE);
create index EAGBE00.PIB_PTID on EAGBE00.PROCESS_INSTANCE_B_T (PTID);
create index EAGBE00.PIB_STATE on EAGBE00.PROCESS_INSTANCE_B_T (STATE);
create index EAGBE00.PIB_TOP on EAGBE00.PROCESS_INSTANCE_B_T (TOP_LEVEL_PIID);
create index EAGBE00.PTB_NAME on EAGBE00.PROCESS_TEMPLATE_B_T (PTID, NAME);
create unique index EAGBE00.PTB_NAME_VALID on EAGBE00.PROCESS_TEMPLATE_B_T (NAME, VALID_FROM);
create index EAGBE00.PTB_NAME_VF_STATE on EAGBE00.PROCESS_TEMPLATE_B_T (NAME, VALID_FROM, STATE, PTID);
create index EAGBE00.PTB_STATE_PTID on EAGBE00.PROCESS_TEMPLATE_B_T (STATE, PTID);
create index EAGBE00.PTB_TOP_APP on EAGBE00.PROCESS_TEMPLATE_B_T (APPLICATION_NAME);
create index EAGBE00.TTLD_CCID on EAGBE00.TASK_TEMPL_LDESC_T (CONTAINMENT_CONTEXT_ID);
create index EAGBE00.TTLD_TKTID on EAGBE00.TASK_TEMPL_LDESC_T (TKTID);
create index EAGBE00.TTLD_TT_LOC on EAGBE00.TASK_TEMPL_LDESC_T (TKTID, LOCALE DESC);
create index EAGBE00.QVI_PI_CT_PA on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, CTID, PAID);
create index EAGBE00.QVI_PI_DEC on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, DECIMAL_VALUE);
create index EAGBE00.QVI_PI_GEN_VALUE on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, GENERIC_VALUE);
create index EAGBE00.QVI_PI_NAMESPACE on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, PROPERTY_NAMESPACE);
create index EAGBE00.QVI_PI_NUM on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, NUMBER_VALUE);
create index EAGBE00.QVI_PI_PROPNAME on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, PROPERTY_NAME);
create index EAGBE00.QVI_PI_STR_VALUE on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, STRING_VALUE);
create index EAGBE00.QVI_PI_TIME on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, TIMESTAMP_VALUE);
create index EAGBE00.QVI_PI_VARNAME on EAGBE00.QUERYABLE_VARIABLE_INSTANCE_T (PIID, VARIABLE_NAME);
c.)Views - 5 views
CREATE VIEW EAGBE00.TASK
(tkiid, activated, applic_defaults_id, applic_name, business_relevance, completed, containment_ctx_id,
ctx_authorization, due, expires, first_activated, follow_on_tkiid, is_ad_hoc, is_escalated, is_inline,
is_wait_for_sub_tk, kind, last_modified, last_state_change, name, name_space, originator, owner, parent_context_id,
priority, started, starter, state, support_autoclaim, support_claim_susp, support_delegation, support_sub_task,
support_follow_on, hierarchy_position, is_child, suspended, tktid, top_tkiid, type, resumes)
AS
SELECT TKIID, ACTIVATION_TIME,
APPLICATION_DEFAULTS_ID, APPLICATION_NAME,
BUSINESS_RELEVANCE, COMPLETION_TIME,
CONTAINMENT_CONTEXT_ID, CONTEXT_AUTHORIZATION,
DUE_TIME, EXPIRATION_TIME,
FIRST_ACTIVATION_TIME, FOLLOW_ON_TKIID,
IS_AD_HOC, IS_ESCALATED,
IS_INLINE, IS_WAITING_FOR_SUBTASK,
KIND, LAST_MODIFICATION_TIME,
LAST_STATE_CHANGE_TIME, NAME,
NAMESPACE, ORIGINATOR, OWNER,
PARENT_CONTEXT_ID, PRIORITY, START_TIME,
STARTER, STATE, SUPPORTS_AUTO_CLAIM,
SUPPORTS_CLAIM_SUSPENDED, SUPPORTS_DELEGATION,
SUPPORTS_SUB_TASK, SUPPORTS_FOLLOW_ON_TASK,
HIERARCHY_POSITION, IS_CHILD, IS_SUSPENDED,
TKTID, TOP_TKIID, TYPE,
RESUMES
FROM EAGBE00.TASK_INSTANCE_T
CREATE VIEW EAGBE00.WORK_ITEM
(wiid, owner_id, group_name, everybody, object_type, object_id, assoc_object_type, assoc_oid, reason, creation_time,
qiid, kind)
AS
SELECT WORK_ITEM_T.WIID, WORK_ITEM_T.OWNER_ID, WORK_ITEM_T.GROUP_NAME,
WORK_ITEM_T.EVERYBODY, WORK_ITEM_T.OBJECT_TYPE, WORK_ITEM_T.OBJECT_ID,
WORK_ITEM_T.ASSOCIATED_OBJECT_TYPE, WORK_ITEM_T.ASSOCIATED_OID, WORK_ITEM_T.REASON,
WORK_ITEM_T.CREATION_TIME, WORK_ITEM_T.QIID, WORK_ITEM_T.KIND
FROM EAGBE00.WORK_ITEM_T
WHERE WORK_ITEM_T.AUTH_INFO = 1
UNION ALL SELECT WORK_ITEM_T.WIID, WORK_ITEM_T.OWNER_ID, WORK_ITEM_T.GROUP_NAME,
WORK_ITEM_T.EVERYBODY, WORK_ITEM_T.OBJECT_TYPE, WORK_ITEM_T.OBJECT_ID,
WORK_ITEM_T.ASSOCIATED_OBJECT_TYPE, WORK_ITEM_T.ASSOCIATED_OID, WORK_ITEM_T.REASON,
WORK_ITEM_T.CREATION_TIME, WORK_ITEM_T.QIID, WORK_ITEM_T.KIND
FROM EAGBE00.WORK_ITEM_T
WHERE WORK_ITEM_T.AUTH_INFO = 2
UNION ALL SELECT WORK_ITEM_T.WIID, WORK_ITEM_T.OWNER_ID, WORK_ITEM_T.GROUP_NAME,
WORK_ITEM_T.EVERYBODY, WORK_ITEM_T.OBJECT_TYPE, WORK_ITEM_T.OBJECT_ID,
WORK_ITEM_T.ASSOCIATED_OBJECT_TYPE, WORK_ITEM_T.ASSOCIATED_OID, WORK_ITEM_T.REASON,
WORK_ITEM_T.CREATION_TIME, WORK_ITEM_T.QIID, WORK_ITEM_T.KIND
FROM EAGBE00.WORK_ITEM_T
WHERE WORK_ITEM_T.AUTH_INFO = 3
UNION ALL SELECT WORK_ITEM_T.WIID, RETRIEVED_USER_T.OWNER_ID, WORK_ITEM_T.GROUP_NAME,
WORK_ITEM_T.EVERYBODY, WORK_ITEM_T.OBJECT_TYPE, WORK_ITEM_T.OBJECT_ID,
WORK_ITEM_T.ASSOCIATED_OBJECT_TYPE, WORK_ITEM_T.ASSOCIATED_OID, WORK_ITEM_T.REASON,
WORK_ITEM_T.CREATION_TIME, WORK_ITEM_T.QIID, WORK_ITEM_T.KIND
FROM EAGBE00.WORK_ITEM_T, EAGBE00.RETRIEVED_USER_T
WHERE WORK_ITEM_T.AUTH_INFO = 0 AND WORK_ITEM_T.QIID = RETRIEVED_USER_T.QIID;
CREATE VIEW "EAGBE00"."PROCESS_INSTANCE" ("PTID",
"PIID","NAME","STATE","CREATED","STARTED","COMPLETED",
"PARENT_NAME","TOP_LEVEL_NAME","PARENT_PIID","TOP_LEVEL_PIID",
"STARTER","DESCRIPTION","TEMPLATE_NAME","TEMPLATE_DESCR",
"RESUMES","CONTINUE_ON_ERROR") AS
SELECT EAGBE00.PROCESS_INSTANCE_B_T.PTID,
EAGBE00.PROCESS_INSTANCE_B_T.PIID,
EAGBE00.PROCESS_INSTANCE_B_T.NAME,
EAGBE00.PROCESS_INSTANCE_B_T.STATE,
EAGBE00.PROCESS_INSTANCE_B_T.CREATED,
EAGBE00.PROCESS_INSTANCE_B_T.STARTED,
EAGBE00.PROCESS_INSTANCE_B_T.COMPLETED,
EAGBE00.PROCESS_INSTANCE_B_T.PARENT_NAME,
EAGBE00.PROCESS_INSTANCE_B_T.TOP_LEVEL_NAME,
EAGBE00.PROCESS_INSTANCE_B_T.PARENT_PIID,
EAGBE00.PROCESS_INSTANCE_B_T.TOP_LEVEL_PIID,
EAGBE00.PROCESS_INSTANCE_B_T.STARTER,
EAGBE00.PROCESS_INSTANCE_B_T.DESCRIPTION,
EAGBE00.PROCESS_TEMPLATE_B_T.NAME,
EAGBE00.PROCESS_TEMPLATE_B_T.DESCRIPTION,
EAGBE00.PROCESS_INSTANCE_B_T.RESUMES,
EAGBE00.PROCESS_TEMPLATE_B_T.CONTINUE_ON_ERROR
FROM EAGBE00.PROCESS_INSTANCE_B_T,
EAGBE00.PROCESS_TEMPLATE_B_T
WHERE EAGBE00.PROCESS_INSTANCE_B_T.PTID =
EAGBE00.PROCESS_TEMPLATE_B_T.PTID;
CREATE VIEW EAGBE00.TASK_TEMPL_DESC AS
SELECT TASK_TEMPL_LDESC_T.TKTID, TASK_TEMPL_LDESC_T.LOCALE, TASK_TEMPL_LDESC_T.DESCRIPTION,
TASK_TEMPL_LDESC_T.DISPLAY_NAME
FROM eagbe00.TASK_TEMPL_LDESC_T;
CREATE VIEW eagbe00.QUERY_PROPERTY
(piid, variable_name, name, namespace, generic_value, string_value, number_value, decimal_value, timestamp_value)
AS
SELECT QUERYABLE_VARIABLE_INSTANCE_T.PIID, QUERYABLE_VARIABLE_INSTANCE_T.VARIABLE_NAME,
QUERYABLE_VARIABLE_INSTANCE_T.PROPERTY_NAME, QUERYABLE_VARIABLE_INSTANCE_T.PROPERTY_NAMESPACE,
QUERYABLE_VARIABLE_INSTANCE_T.GENERIC_VALUE, QUERYABLE_VARIABLE_INSTANCE_T.STRING_VALUE,
QUERYABLE_VARIABLE_INSTANCE_T.NUMBER_VALUE, QUERYABLE_VARIABLE_INSTANCE_T.DECIMAL_VALUE,
QUERYABLE_VARIABLE_INSTANCE_T.TIMESTAMP_VALUE
FROM eagbe00.QUERYABLE_VARIABLE_INSTANCE_T
;
Serious performance problem - SELECT DISTINCT x.JDOCLASSX FROM x
I am noticing a huge performance problem when trying to access a member that
is lazily loaded:
MonitorStatus previousStatus = m.getStatus();
This causes the following query to be executed:
SELECT DISTINCT MONITORSTATUSX.JDOCLASSX FROM MONITORSTATUSX
This table has 3 million records and this SQL statement takes 3 minutes to
execute! Even worse, my app heavily uses threads, so this statement is
executed in each of the 32 threads. As a result the application stops.
Is there any way that I can optimize this? And more importantly, can Kodo
handle a multithreaded app like this with a huge database? I've been having
a lot of performance problems since I've started doing stress & load
testing, and I'm thinking Kodo isn't ready for this type of application.
Thanks,
Michael

You can prevent this from happening by explicitly enumerating the valid
persistent types in a property. See
http://docs.solarmetric.com/manual.html#com.solarmetric.kodo.PersistentTypes
for details.
Inconveniently, this nugget of performance info is not listed in the
optimization guide. I'll add an entry for it.

This setting did in fact prevent the query from running, which fixed the
problem. It definitely belongs in the optimization guide.
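For anyone hitting the same issue, the setting in question goes in the Kodo properties file. A minimal sketch (the class names below are hypothetical placeholders for your own persistent classes):

```properties
# kodo.properties: explicitly enumerate the persistent types so Kodo
# does not probe candidate tables with SELECT DISTINCT x.JDOCLASSX
com.solarmetric.kodo.PersistentTypes: com.example.Monitor, com.example.MonitorStatus
```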
And more importantly, can Kodo handle a multithreaded app like this with a huge database? I've been having a lot of performance problems since I've started doing stress & load testing, and I'm thinking Kodo isn't ready for this type of application.

I'd like to find out more details about your issues. We
do a decent amount of stress / load testing internally, but there are
always use cases that we don't test. Please send me an email (I'm assuming
that [email protected] is not really your address) and let's
figure out some way to do an analysis of what you're seeing.

This email is just for posting to usenet, to avoid spam. I'm now running my
app through stress/load testing so I hope to discover any remaining issues
before going into production. As of this morning the system seems to be
performing quite well. Now the biggest performance problem for me is the
lack of what I think is called "outer join". I know you'll have this in 3.0
but I'm surprised you don't have this already, because not having it really
affects performance. I already had to code one query by hand with JDBC due
to this. It was taking 15+ minutes with Kodo and with my JDBC version it
only takes a few seconds. There are lots of anti-JDO people and performance
issues like this really give them ammunition. Overall I just have the
impression that Kodo hasn't been used on many really large scale projects
with databases that have millions of records.
Thanks for the configuration fix,
Michael
Which solution gives better performance?
I'm writing a Java application based on XML. This application has to store a very large XML file in the DB (the XML file is about 1000 MB). My solution is to divide it into smaller (100 MB) parts (because of memory constraints) and store them in the DB. I have one XMLType table based on object-relational storage (so at the end there will be 10 rows, each about 100 MB in size).
XML file looks like:
<students>
<student id="1">
<!-- 5 nested elements or collections of elements -->
</student>
<student id="2">
</student>
<student id="3">
</student>
</students>
Now I need to get a Java object Student which corresponds to a <student> element. My solution is to select the whole <student> element and use JAXB to convert it to a Java object. While solving a problem with selecting the <student> element (just in case you know a solution for my other problem: how do I select a specific element from an XML document using JDBC?), another question arose in my mind.
Which solution has better performance when I don't need to select relational data but am interested in an XML fragment?
1) Use object-relational storage
2) Use CLOB storage
As far as I can tell, object-relational storage is better for selecting relational data like a student's name. But is it also better when I need the whole XML fragment?
Isn't my solution completely wrong? I'm quite a newbie in XML DB, so it's possible that I missed a better solution.
Thanks for any advice.

I don't know which 11g version you have, but in this case I would probably go for a table with an XMLType Binary XML column. To make XPath and other statements perform, use structured or unstructured XMLIndexes to support your queries. The following has worked for us; the XML documents were far smaller, but there were millions of them, for a total storage size of well over 1 TB:
CREATE TABLE XMLTEST_DATA
( "ID" NUMBER(15,0),
"DOC" "SYS"."XMLTYPE"
) SEGMENT CREATION IMMEDIATE
NOCOMPRESS NOLOGGING
TABLESPACE "XML_DATA"
XMLTYPE COLUMN "DOC" STORE AS SECUREFILE BINARY XML
(TABLESPACE "XML_DATA"
NOCOMPRESS KEEP_DUPLICATES)
-- XMLSCHEMA "http://www.XMLTEST.com/Schema1.0.xsd"
-- ELEMENT "RECORD"
DISALLOW NONSCHEMA
PARTITION BY RANGE(id)
(PARTITION XMLTEST_DATA_PART_01 VALUES LESS THAN (100000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_02 VALUES LESS THAN (200000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_03 VALUES LESS THAN (300000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_04 VALUES LESS THAN (400000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_05 VALUES LESS THAN (500000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_06 VALUES LESS THAN (600000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_07 VALUES LESS THAN (700000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_08 VALUES LESS THAN (800000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_09 VALUES LESS THAN (900000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION XMLTEST_DATA_PART_MAX VALUES LESS THAN (MAXVALUE) TABLESPACE "XML_DATA" NOCOMPRESS
);
CREATE INDEX test_xmlindex on XMLTEST_data (doc) indextype is xdb.xmlindex
LOCAL
parameters ('GROUP PARENTINFO_GROUP
XMLTable test_cnt_tab_ParentInfo
''/RECORD/ParentInfo''
COLUMNS
GroupingID01 VARCHAR2(4000) PATH ''Parent/GroupingID'',
GroupingID02 VARCHAR2(4000) PATH ''Parent/Parent/GroupingID''
');
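On the side question of pulling one <student> fragment out of a stored document: with Binary XML storage an XMLQuery call can return the fragment directly, and the statement can be issued over JDBC like any other SQL. A sketch against an assumed table students_xml(doc XMLTYPE) (the table, column, and bind value are hypothetical):

```sql
-- Returns the <student> fragment whose id attribute matches $id;
-- an XMLIndex covering the id attribute keeps this from scanning
-- every stored document.
SELECT XMLQuery('/students/student[@id = $id]'
                PASSING t.doc, 1 AS "id"
                RETURNING CONTENT) AS student_fragment
FROM   students_xml t;
```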
Urgent query regarding performance
Hi,
I have one question regarding performance. I am using Interactive Reporting and Workspace.
I have the license server, Shared Services, BI services, and UI services, plus Oracle 9i (which holds the metadata), all installed on one system (one server). The database which stores the relational data (DB2) is on another system (i.e., 2 systems in total).
In order to increase performance I made some adjustments:
I installed the Hyperion BI server services, UI services, license server, and Shared Services such that all web applications (those that use WebSphere 5.1), such as Shared Services and UI services, run on server1 (computer1); the remaining license and BI server services run on computer2; and I installed the database (DB2) on computer3 (i.e., 3 systems in total).
My question: Oracle 9i, which holds the metadata - should it be installed on computer1 or computer2?
I want to get the best performance.
For any queries please reply by mail:
[email protected]
9930120470

You should know that executing a query is always slower the first time. Oracle can then optimise your query and cache it temporarily for further executions. But going from 3 minutes to 3 seconds? Maybe your original query is really, really slow. Most of the time I only gain a few milliseconds this way. If Oracle is able to optimize it down to 3 seconds, you should clearly rewrite your query.
Things you should know to improve your execution time: try to reduce the number of nested loops; nested loops give you an exponential execution time, which is really slow:
for rec1 in (select a from b) loop
  for rec2 in (select c from d) loop
    ...
  end loop;
end loop;

Anything like that is bad.
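Where the inner loop depends on the outer row only through a join key, the two cursors can usually be collapsed into one joined cursor, so the database does the matching once. A sketch (tables b and d and the key column are hypothetical):

```sql
for rec in (select b.a, d.c
            from b
            join d on d.key = b.key) loop
  -- one pass over the joined rows instead of re-running
  -- the inner query for every outer row
  ...
end loop;
```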
Try to avoid Cartesian products by writing the best where clause possible.
select a.a,
       b.b
from   a,
       b
where  b.b > 1;

This is bad and slow.
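For contrast, the Cartesian product above disappears as soon as a join condition relates the two tables. A hedged rewrite, assuming (hypothetically) that a and b share an id column:

```sql
select a.a,
       b.b
from   a
join   b on b.id = a.id  -- join condition eliminates the Cartesian product
where  b.b > 1;
```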
How to improve query & loading performance.
Hi All,
How to improve query & loading performance.
Thanks in advance.
Rgrds
shoba

Hi Shoba,
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS note 557870: Frequently asked questions on query performance.
Also refer to these weblogs:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
These are the OSS notes from the FAQ on query performance:
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Export Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of Cells transferred to the OLAP (use "All data" to get value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if a user exit Usage is involved in the OLAP runtime?
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exists
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if frontend PC are within the recommendation (RAM, CPU MHz)
3. Check if the bandwidth for WAN connection is sufficient
And some threads:
how can i increse query performance other than creating aggregates
How to improve query performance ?
Query performance - bench marking
may be helpful
Regards
C.S.Ramesh
[email protected]
Asset query execution performance after upgrade from 4.6C to ECC 6.0+EHP4
Hi,guys
I have encountered a weird problem with asset query execution performance after upgrading to ECC 6.0.
Our client migrated their SAP system from 4.6C to ECC 6.0. We tested all transaction codes and the related standard reports and queries.
Everything works normally except this asset depreciation query report. It is created based on the ANLP, ANLZ, ANLA, ANLB, and ANLC tables; there is also some ABAP code for additional fields.
This report took about 6 minutes to execute in the 4.6C system; however, it takes 25 minutes in ECC 6.0 with the same selection parameters.
At first I tried to find some difference in table indexes or structure between 4.6C and ECC 6.0, but there is none.
I am wondering why the other query reports run normally while only this report runs with such a long execution time (and dump messages), even though we did not make any changes to it.
Your reply is very much appreciated.
Regards
Brian

Thanks for your replies.
I checked these notes; unfortunately, they describe a situation different from ours.
Our situation is that all standard asset reports and queries (SQ01) run normally except this query report.
I executed SE30 for this query (SQ01) on both 4.6C and ECC 6.0.
I found there are some differences in the select sequence logic, even though it is the same query without any changes.
I list them here for your reference.
4.6C
AQA0FI==========S2============
Open Cursor ANLP 38,702 39,329,356 = 39,329,356 34.6 AQA0FI==========S2============ DB Opens
Fetch ANLP 292,177 30,378,351 = 30,378,351 26.7 26.7 AQA0FI==========S2============ DB OpenS
Select Single ANLC 15,012 19,965,172 = 19,965,172 17.5 17.5 AQA0FI==========S2============ DB OpenS
Select Single ANLA 13,721 11,754,305 = 11,754,305 10.3 10.3 AQA0FI==========S2============ DB OpenS
Select Single ANLZ 3,753 3,259,308 = 3,259,308 2.9 2.9 AQA0FI==========S2============ DB OpenS
Select Single ANLB 3,753 3,069,119 = 3,069,119 2.7 2.7 AQA0FI==========S2============ DB OpenS
ECC 6.0
Perform FUNKTION_AUSFUEHREN 2 358,620,931 355
Perform COMMAND_QSUB 1 358,620,062 68
Call Func. RSAQ_SUBMIT_QUERY_REPORT 1 358,569,656 88
Program AQIWFI==========S2============ 2 358,558,488 1,350
Select Single ANLA 160,306 75,576,052 = 75,576,052
Open Cursor ANLP 71,136 42,096,314 = 42,096,314
Select Single ANLC 71,134 38,799,393 = 38,799,393
Select Single ANLB 61,888 26,007,721 = 26,007,721
Select Single ANLZ 61,888 24,072,111 = 24,072,111
Fetch ANLP 234,524 13,510,646 = 13,510,646
Close Cursor ANLP 71,136 2,017,654 = 2,017,654
We can see that 4.6C first opens cursor ANLP and fetches ANLP, then selects ANLC, ANLA, ANLZ, ANLB.
But in ECC 6.0 this changed: it first selects ANLA, then opens cursor ANLP, then selects ANLC, ANLB, ANLZ, and at last fetches ANLP.
Probably this is the real reason why it runs so long in ECC 6.0.
Are there any changes to the query selection logic (table join behavior) in ECC 6.0?
Which is the better backup software?
I'm puzzled as to which is a better backup software:
Carbon Copy Cloner or
Super Duper which says it preserves data that would be lost during something called a roll back.
I don't know what that means;
I am backing up to a firewire external drive, is that a good choice?
Need advice.
Thanks

I read that SuperDuper completely erases your drive and makes a duplicate of the HD. There are documents that I remove from my HD but would like to keep on the backup disc just in case I need them in the future. According to SuperDuper they would be erased because they no longer reside on the main HD. I'm looking to update new and changed items but not delete items that no longer reside on the main HD, am I making myself clear?
Yes, the free version of SuperDuper! will always erase the target drive and clone the entire source disk again. The paid version will do what you want. What I have in quotes below is directly from the registered version's menus.
1) Smart Update. "Smart Update will copy and erase what's needed to make the target drive identical to your selections from the source. The result will mimic "Erase, then copy", but will typically take a fraction of the time."
2) Copy newer files. "Any selected files already on the target drive that are older than the equivalent file on the source will be replaced. New files will also be copied; no files will be removed."
3) Copy different files. "Any selected files already on the target drive that are different (in date, size or attributes) than the same file on the source will be replaced. New files will also be copied; no files will be removed."
I have an AirPort Extreme Time Capsule. I need to extend my wifi network range. Which device is better: AirPort Express or AirPort Extreme?
Since you have the new Time Capsule, then you will need a new AirPort Extreme to match the performance capabilities of the Time Capsule.
If best quality is preferred, you will need to connect the Time Capsule and AirPort Extreme using a wired Ethernet connection between the devices. The advantage of doing it this way is that you can locate the AirPort Extreme exactly where it is needed...and there will be no loss of signal through the Ethernet cable.
A wireless connection will result in a significant drop in performance, but it might be OK for your uses, if you want to try it that way first to see if the performance is acceptable.
It is really important that the AirPort Extreme be located where it can receive a strong wireless signal from the Time Capsule.
A line-of-sight relationship between the Time Capsule and AirPort Extreme would be the goal, with the AirPort Extreme located about half way between the Time Capsule and the general area where you need more wireless signal coverage.
Which is a better source for an Indicator, physical or logical
Hi guys,
this is something I was wondering:
which is better for filtering out a value? (For example, to count measures with a certain Indicator equal to 'Yes'.)
A) Logical (check "Use existing logical columns as the source") and then use something like CASE Logical table.Indicator Measure WHEN 'YES' THEN 1 END
or
B) Physical (create a column and in Data Type select the Column mapping and map as
CASE Physical table.Indicator Measure When 'YES' Then 1 END
Physical table is the same.
I know it's always best to push things back to the DB. But in this case it would be the same; it isn't apparent to me which option is better at this point.
By the way, both of them work.
Thanks
Message was edited by:
wildmight

Both should be the same. You can verify this by checking the physical SQL generated by both methods. Most likely it will be identical.