SQLDeveloper 1.5.4 Table browsing performance issue
Hi all,
I have read previous posts about SQL Developer 1.5.3 table browsing performance issues. I downloaded and installed version 1.5.4, and the problem appears to have gotten worse!
It takes ages to display rows of this particular table (the structure is shown below). Viewing it in Single Record format takes even longer, and exporting the data is another frustrating exercise. By the way, TOAD does not seem to have this problem, so I guess it is a SQL Developer bug.
Can someone help with any workarounds?
Thanks
Chiedu
Here is the table structure:
create table EMAIL_SETUP (
APPL_ID VARCHAR2(10) not null,
EML_ID VARCHAR2(10) not null,
EML_DESC VARCHAR2(80) not null,
PRIORITY_NO_DM NUMBER(1) default 3 not null
constraint CC_EMAIL_SETUP_4 check (
PRIORITY_NO_DM in (1,2,3,4,5)),
DTLS_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_5 check (
DTLS_YN in ('0','1')),
ATT_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_6 check (
ATT_YN in ('0','1')),
MSG_FMT VARCHAR2(5) default 'TEXT' not null
constraint CC_EMAIL_SETUP_7 check (
MSG_FMT in ('TEXT','HTML')),
MSG_TMPLT VARCHAR2(4000) not null,
MSG_MIME_TYPE VARCHAR2(500) not null,
PARAM_NO NUMBER(2) default 0 not null
constraint CC_EMAIL_SETUP_10 check (
PARAM_NO between 0 and 99),
IN_USE_YN VARCHAR2(1) not null
constraint CC_EMAIL_SETUP_11 check (
IN_USE_YN in ('0','1')),
DFLT_USE_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_12 check (
DFLT_USE_YN in ('0','1')),
TAB_NM VARCHAR2(30) null ,
FROM_ADDR VARCHAR2(80) null ,
RPLY_ADDR VARCHAR2(80) null ,
MSG_SBJ VARCHAR2(100) null ,
MSG_HDR VARCHAR2(2000) null ,
MSG_FTR VARCHAR2(2000) null ,
ATT_TYPE_DM VARCHAR2(4) null
constraint CC_EMAIL_SETUP_19 check (
ATT_TYPE_DM is null or (ATT_TYPE_DM in ('RAW','TEXT'))),
ATT_INLINE_YN VARCHAR2(1) null
constraint CC_EMAIL_SETUP_20 check (
ATT_INLINE_YN is null or (ATT_INLINE_YN in ('0','1'))),
ATT_MIME_TYPE VARCHAR2(500) null ,
constraint PK_EMAIL_SETUP primary key (EML_ID)
);
Check Tools | Preferences | Database | Advanced Parameters and post the value you have there.
Try setting it to a small number and report if you see any improvement.
-Raghu
Similar Messages
-
Multiple table select Performance Issue
Hi,
I would like an opinion on which of these two queries is faster, and whether either has a performance issue:
SELECT EMP_ID, NAME, DEPT_NAME
FROM EMP, DEPT
WHERE EMP_ID = DEPT_ID;
or
SELECT EMP_ID, NAME, (SELECT DEPT_NAME FROM DEPT WHERE ID = P_ID)
FROM EMP
WHERE EMP_ID = P_ID;
Let's say that EMP_ID on the DEPT table is linked to the EMP_ID column on EMP.
Well... I don't get your design, but the two queries may return different results.
Comparing the performance doesn't make sense.
Nevertheless, the only way is to run them both and see which one is faster or see which one has the lowest IO.
There's no way we can tell you which is faster by just looking at the text of the queries.
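To see why the two forms are not interchangeable, here is a small sketch (SQLite via Python, with made-up EMP/DEPT data) where the inner-join form drops a row that the scalar-subquery form keeps:

```python
import sqlite3

# Hypothetical tables loosely based on the EMP/DEPT example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp  (emp_id INTEGER, name TEXT);
    CREATE TABLE dept (dept_id INTEGER, dept_name TEXT);
    INSERT INTO emp  VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO dept VALUES (1, 'Sales');   -- no dept row matching emp_id 2
""")

# Inner-join form: Bob disappears because there is no matching dept row.
join_rows = conn.execute("""
    SELECT e.emp_id, e.name, d.dept_name
    FROM emp e JOIN dept d ON e.emp_id = d.dept_id
""").fetchall()

# Scalar-subquery form: Bob survives, with a NULL dept_name.
scalar_rows = conn.execute("""
    SELECT e.emp_id, e.name,
           (SELECT d.dept_name FROM dept d WHERE d.dept_id = e.emp_id)
    FROM emp e
""").fetchall()

print(join_rows)    # [(1, 'Alice', 'Sales')]
print(scalar_rows)  # [(1, 'Alice', 'Sales'), (2, 'Bob', None)]
```

Only when every outer row is guaranteed exactly one match do the two return the same rows, which is why comparing their speed in isolation is meaningless.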
Post some explain plans or traces. -
Sending Audio to Browser - Performance issue
Hi,
I have written a simple jsp page that retrieves an OrdAudio object from the database and sends it to the browser. I have done this before with images, which worked perfectly. For the audio files, I simply changed the image specific code to work with audio.
The problem is that for an approximately 100KB audio file, it takes a good two minutes for it to play in the browser. Over a broadband connection this seems pretty slow. Is there a way to improve this?
here is part of my code:
// create OrdAudio instance
OrdAudio media = (OrdAudio)result.getCustomDatum(1, OrdAudio.getFactory());
// set page context and send audio to browser
handler.setPageContext(pageContext);
handler.sendAudio(media);
This doesn't sound right. One thing that caught my eye is that you are measuring the time that the audio "plays in the browser." Is there any chance that this 100KB file has a 100-second audio duration? How long should it take to play this audio file?
One way to debug is to change the mid-tier logic to save the file to local disk instead of sending it back in the HTTP response. You can use the method getDataInFile(java.lang.String filename), where filename is a string representing the full path to a local file that you have permission to write to. If you time this call, you can measure how long it takes to fetch the data from the database.
The other issue concerns the client side HTML coding. What are you coding to accept the audio data? Browsers accept images (JPEG and GIF) natively but plugins are required to accept audio data. How is this being handled? -
CREATE TABLE AS - PERFORMANCE ISSUE
Hi All,
I am creating a table CONTROLDATA from existing tables PF_CONTROLDATA & ICDSV2_AutoCodeDetails as per the below query.
CREATE TABLE CONTROLDATA AS
SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, STRVALUE
FROM PF_CONTROLDATA CD1
JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL)
AND CD1.AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID);
The above statement takes around 10 minutes to create the table CONTROLDATA, which is not acceptable in our environment. Can anyone please suggest a way to improve the performance of the above query so that it creates CONTROLDATA in under a minute?
PF_CONTROLDATA has 15,000,000 (15 million) rows and has a composite index (XIF16PF_CONTROLDATA) on the CONTEXTID, AUDITORDER columns and one more index (XIE1PF_CONTROLDATA) on the CONTROLID column.
ICDSV2_AutoCodeDetails has only 6 rows and no indexes.
After the CREATE TABLE statement, CONTROLDATA will have around 1,000,000 (1 million) records.
Can someone give any suggestions to improve the performance of the above query?
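One common rewrite for this kind of "latest AUDITORDER per CONTEXTID" filter is to compute the per-group maximum once and join back, rather than probing MAX() once per outer row (on Oracle 10g, an analytic MAX() OVER (PARTITION BY contextid) is another option worth testing). A toy sketch using SQLite from Python, with made-up data and only the columns relevant to the filter, just to show the two forms agree:

```python
import sqlite3

# Stand-in for PF_CONTROLDATA: keep, per CONTEXTID, the rows carrying
# the maximum AUDITORDER. Column names follow the post; data is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pf_controldata (contextid INTEGER, auditorder INTEGER, strvalue TEXT);
    INSERT INTO pf_controldata VALUES
        (1, 1, 'old'), (1, 2, 'new'),
        (2, 5, 'only');
""")

# Correlated form (as in the post): one MAX probe per outer row.
correlated = conn.execute("""
    SELECT contextid, auditorder, strvalue FROM pf_controldata cd1
    WHERE auditorder = (SELECT MAX(auditorder) FROM pf_controldata cd2
                        WHERE cd2.contextid = cd1.contextid)
    ORDER BY contextid
""").fetchall()

# Join-to-aggregate form: compute each context's max once, then join back.
joined = conn.execute("""
    SELECT cd.contextid, cd.auditorder, cd.strvalue
    FROM pf_controldata cd
    JOIN (SELECT contextid, MAX(auditorder) AS mx
          FROM pf_controldata GROUP BY contextid) m
      ON m.contextid = cd.contextid AND m.mx = cd.auditorder
    ORDER BY cd.contextid
""").fetchall()

print(correlated)  # [(1, 2, 'new'), (2, 5, 'only')]
print(joined)      # [(1, 2, 'new'), (2, 5, 'only')]
```

Whether the rewrite actually wins on the 15-million-row table depends on the optimizer and statistics, so it needs to be verified with tkprof, but it gives the optimizer the chance to do one pass over the table instead of repeated MIN/MAX index probes.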
oracle version is : 10.2.0.3
Tkprof output is:
create table CONTROLDATA2 as
SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
FROM PF_CONTROLDATA CD1
JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.03 2 2 0 0
Execute 1 15.25 593.43 211688 4990786 6617 1095856
Fetch 0 0.00 0.00 0 0 0 0
total 2 15.25 593.47 211690 4990788 6617 1095856
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40
********************************************************************************
Explain plan output is:
Plan hash value: 2771048406
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | CREATE TABLE STATEMENT | | 1 | 105 | 3609K (1)| 14:02:20 |
| 1 | LOAD AS SELECT | CONTROLDATA2 | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID | PF_CONTROLDATA | 178K| 9228K| 55344 (1)| 00:12:55 |
| 4 | NESTED LOOPS | | 891K| 89M| 55344 (1)| 00:12:55 |
| 5 | TABLE ACCESS FULL | ICDSV2_AUTOCODEDETAILS | 5 | 260 | 4 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 7 | BITMAP OR | | | | | |
| 8 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 9 | INDEX RANGE SCAN | XIE1PF_CONTROLDATA | | | 48 (3)| 00:00:01 |
| 10 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 11 | INDEX RANGE SCAN | XIE1PF_CONTROLDATA | | | 48 (3)| 00:00:01 |
| 12 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 13 | INDEX RANGE SCAN | XIE1PF_CONTROLDATA | | | 48 (3)| 00:00:01 |
| 14 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 15 | INDEX RANGE SCAN | XIE1PF_CONTROLDATA | | | 48 (3)| 00:00:01 |
| 16 | SORT AGGREGATE | | 1 | 16 | | |
| 17 | FIRST ROW | | 1 | 16 | 3 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN (MIN/MAX) | XIF16PF_CONTROLDATA | 1 | 16 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("AUDITORDER"= (SELECT MAX("AUDITORDER") FROM "PF_CONTROLDATA" "CD2" WHERE
"CD2"."CONTEXTID"=:B1))
9 - access("CD1"."CONTROLID"="AC"."MODTERM_CONTROL")
11 - access("CD1"."CONTROLID"="AC"."FAILED_CTRL")
13 - access("CD1"."CONTROLID"="AC"."CODE_CTRL")
15 - access("CD1"."CONTROLID"="AC"."SYNONYM_CTRL")
18 - access("CD2"."CONTEXTID"=:B1)
Note
- dynamic sampling used for this statement
********************************************************************************
I tried changing the logic to an INSERT statement with the APPEND hint, but it still takes the same time.
Please suggest.
Edited by: 867546 on Jun 22, 2011 2:42 PM
Hi user2361373,
I tried using NOLOGGING as well, but it still takes the same amount of time. Please find the tkprof output below.
create table CONTROLDATA2 NOLOGGING as
SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
FROM PF_CONTROLDATA CD1
JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
call count cpu elapsed disk query current rows
Parse 1 0.03 0.03 2 2 0 0
Execute 1 13.98 598.54 211963 4990776 6271 1095856
Fetch 0 0.00 0.00 0 0 0 0
total 2 14.01 598.57 211965 4990778 6271 1095856
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40
********************************************************************************
Edited by: 867546 on Jun 22, 2011 3:09 PM
Edited by: 867546 on Jun 22, 2011 3:10 PM -
Browser performance issue when calling a BI Publisher report from an OBIEE dashboard
Hi Techies,
We installed OBIEE 10.1.3.3 on a SunOS 5.9 Unix box. This is our production environment. We call a BI Publisher report from the OBIEE dashboard by clicking the BIP report link; after clicking the link, the BI Publisher report opens in another browser window, but it takes a long time.
Please tell me how we can reduce the time taken to open the BI Publisher report window when we click the report name in the OBIEE dashboard.
Thanks, Suresh
There is no out of the box method to get the EBS report to OBIEE. You could look into moving the EBS report to a mountable web server directory after it completes and then show it in OBIEE via a link or html object. You will need to rename the report of course, every time you run it its going to change names.
In an after report trigger you could call a small java class to move and rename the file for you.
Alternatively, recreate the report in the standalone version. Be aware that you will have to set orgs yourself; BIP is not aware of orgs outside of EBS.
Tim -
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the cost-based optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben -
Performance issue of frequently data inserted tables
Hi all,
Have a table named raw_trap_store having columns as trap_id(number,PK),Source_IP(varchar2), OID(varchar2),Message(CLOB) and received_time(date).
This table is partitioned across 24 partitions where partitioning column being received_time. (every hour's data will be stored in each partition).
This table is getting inserted with 40-50 records/sec on an average. Overall number of records for a day will be around 2.8-3 million. Data will be retained for 2 days.
No updates will be happening on this table.
Performance issue:
Need a report which involves selection of records from this table based on certain values of Source IP (filtering condition on source_ip column).
Need a report which involves selection of records from this table based on certain values of OID (filtering condition on OID column).
But if I create indexes on the Source_IP and OID columns, inserts become slow. (I created normal indexes, not partitioned indexes.)
Please help me to address the above issue.
Given the nature of your report (based on Source_IP and OID) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision in creating these particular indexes as normal (b-tree, global) indexes rather than locally partitioned ones. If you had partitioned them locally, your reports would not eliminate partitions (because they do not include the partition key in their WHERE clause), and hence your index range scans would scan all 24 partitions, generating a lot of logical I/O.
That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has decreased insert performance, then you may set them to an unusable state before the insert and rebuild them afterwards. But this is good advice only if the volume of data to be inserted is much bigger than the volume existing before the insert.
And if you are not deleting from the table and the table does not contain triggers and integrity constraints (like FK constraint) then you can opt for a direct path insert using the hint /*+ append */
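The disable-load-rebuild pattern described above can be sketched outside Oracle. A minimal illustration using SQLite from Python (Oracle would use ALTER INDEX ... UNUSABLE and ALTER INDEX ... REBUILD instead of DROP/CREATE; the table and index names are made up to echo the post):

```python
import sqlite3

# Toy stand-in for the raw_trap_store table with a secondary index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trap_store (trap_id INTEGER, source_ip TEXT)")
conn.execute("CREATE INDEX ix_source_ip ON raw_trap_store (source_ip)")

# 1. Take the secondary index out of play before the bulk load...
conn.execute("DROP INDEX ix_source_ip")

# 2. ...run the heavy insert without per-row index maintenance...
conn.executemany("INSERT INTO raw_trap_store VALUES (?, ?)",
                 [(i, f"10.0.0.{i % 256}") for i in range(10_000)])

# 3. ...then rebuild the index once, in a single pass over the data.
conn.execute("CREATE INDEX ix_source_ip ON raw_trap_store (source_ip)")

count = conn.execute("SELECT COUNT(*) FROM raw_trap_store").fetchone()[0]
indexes = [row[1] for row in conn.execute("PRAGMA index_list('raw_trap_store')")]
print(count, indexes)  # 10000 ['ix_source_ip']
```

The trade-off is exactly as stated in the reply: the one-time rebuild cost must be smaller than the accumulated per-row maintenance cost, which is only true when the batch is large relative to the table.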
Best regards
Mohamed Houri
<mod. action: removed unecessary blog ref.>
Message was edited by: Nicolas.Gasparotto -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the <font face="courier">base_query</font> function. Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something? The <font face="courier">processor</font> function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are are no performance issues with the query itself.
Many thanks in advance.
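For intuition about where the slowdown can come from, the pattern can be mimicked outside the database. A rough Python analogy (not Oracle, purely illustrative): relaying rows one at a time through an intermediate stage adds per-row overhead that a single set-based pass avoids, even though both produce the same result:

```python
import time

N = 500_000

def base_query():
    # Stand-in for the ref cursor: a plain iterable of rows.
    return range(N)

def processor(rows, limit=100):
    # Pipelined-table-function analogy: re-emit rows one at a time,
    # fetched in batches of `limit`, adding per-row relay overhead.
    batch = []
    for r in rows:
        batch.append(r)
        if len(batch) == limit:
            yield from batch
            batch.clear()
    yield from batch

t0 = time.perf_counter()
total_direct = sum(base_query())            # "no_pipeline": one direct pass
t1 = time.perf_counter()
total_piped = sum(processor(base_query()))  # "with_pipeline": row-by-row relay
t2 = time.perf_counter()

print(f"direct {t1 - t0:.3f}s  piped {t2 - t1:.3f}s")
```

In the PL/SQL case the per-row cost is the SQL-to-PL/SQL context switching around each PIPE ROW, which only pays off when the processing stage does real work or runs in parallel; a tracing run as suggested in the reply below is what actually pins it down.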
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issue when using select count on large tables
Hello Experts,
I have a requirement where I need to get a count of data from a database table. Later on, I need to display the count in ALV format.
As per my requirement, I have to use this SELECT COUNT inside nested loops.
Below is the count snippet:
LOOP at systems assigning <fs_sc_systems>.
LOOP at date assigning <fs_sc_date>.
SELECT COUNT( DISTINCT crmd_orderadm_i~header )
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client "MANDT is referred to as client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
INTO w_sc_count
WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
AND <fs_sc_date>-end_timestamp
AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
endloop.
endloop.
In the above code snippet,
<fs_sc_systems>-sys_name is having the system name,
<fs_sc_date>-start_timestamp is having the start date of month
and <fs_sc_date>-end_timestamp is the end date of month.
Also the data in tables crmd_orderadm_i and bbp_pdigp is very large and it increases every day.
Now, the above SELECT query is taking a lot of time to return the count, due to which I am facing performance issues.
Can anyone please help me optimize this code?
Thanks,
Suman
Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
INTO TABLE it_header " internal table
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
FOR ALL ENTRIES IN date
WHERE crmd_orderadm_i~created_at BETWEEN date-start_timestamp
AND date-end_timestamp
AND bbp_pdigp~zz_scsys EQ date-sys_name.
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
COMPARING header.
describe table it_header lines v_lines.
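The SORT plus DELETE ADJACENT DUPLICATES plus DESCRIBE TABLE sequence above is simply a client-side distinct count. In Python terms (with made-up header values), the same idea looks like:

```python
# Fetched "header" values, duplicates included (invented sample data).
rows = ["hdr1", "hdr2", "hdr2", "hdr3", "hdr1"]

# Sort, then drop adjacent duplicates, mirroring the ABAP snippet:
deduped = []
for h in sorted(rows):
    if not deduped or deduped[-1] != h:
        deduped.append(h)

# Equivalent one-step form: a set already holds only distinct values.
assert len(deduped) == len(set(rows))
print(len(deduped))  # 3
```

The trade-off versus SELECT COUNT( DISTINCT ... ) is the one being proposed in the reply: one bulk fetch plus client-side deduplication instead of one aggregate query per loop iteration, at the cost of transferring every matching row.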
Hope this information helps.
Regards,
José -
Cache and performance issue in browsing SSAS cube using Excel for first time
Hello Group Members,
I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (via Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users' systems (8 GB RAM), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens quickly, within 10 seconds.
We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cubes, of which 3 get a full refresh and 1 an incremental refresh. We have seen that after the daily cube refresh, it takes 10-odd minutes to open the cube on end users' systems; from the next time onwards, it opens really fast, within 10 seconds. After the cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
Is there, any way we could reduce the time taken for first attempt ?
Best Regards, Arka Mitra.
Thanks Richard and Charlie,
We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting this to be deployed in UAT environment to note down the actual performance and time improvement while browsing the cube for the
first time after daily cube refresh.
Guys,
This is what we have done:
We have 4 cube databases and each cube db has 1-8 cubes.
1. We are doing daily cube refresh using SQL jobs as follows:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
<Object>
<DatabaseID>FINANCE CUBES</DatabaseID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
CREATE CACHE FOR [Profit Analysis] AS
{[Measures].members}
*[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
GO
I will update the post after I receive the actual improvement figures from the UAT/Production environment.
Best Regards, Arka Mitra. -
Performance issue in browsing SSAS cube using Excel for first time after cube refresh
Hello Group Members,
This is a continuation of my earlier blog question -
https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
As that thread is marked as answer, but my issue is not resolved, I am creating a new thread.
I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (via Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users' systems (8 GB RAM, but around 4 GB available), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens quickly, within 10 seconds.
We have daily ETL process running in high end servers. The configuration of dedicated SSAS cube server is 8 core, 64GB RAM. In total we have 4 cube DB - out of which for 3 is full cube refresh and 1 is incremental refresh. We have seen after daily cube
refresh, it takes 10 odd minutes to open the cube in end users system. From next time onwards, it opens up really fast with 10 secs. After cube refresh, in server systems (32 GB RAM, around 4GB available RAM), it takes 2 odd minutes to open the cube.
Is there, any way we could reduce the time taken for first attempt ?
As mentioned in my previous thread, we have already implemented a cube wraming cache. But, there is no improvement.
Currently, the cumulative size of the all 4 cube DB are more than 9 GB in Production and each cube DB having 4 individual cubes in average with highest cube DB size is 3.5 GB. Now, the question is how excel works with SSAS cube after
daily cube refresh?
Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so does it need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client will take significant time depending on the bandwidth of the network connection.
Does it depend in any way on client system RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system has 8 GB RAM, the available free RAM is around 4 GB. So what will happen then?
Best Regards, Arka Mitra.
Could you run the following two DMV queries, filling in the name of the cube you're connecting to? Then please post back the row count returned by each of them (e.g. by copying the results into Excel and counting the rows).
I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
select [HIERARCHY_UNIQUE_NAME]
from $system.mdschema_hierarchies
where CUBE_NAME = 'YourCubeName'
select [LEVEL_UNIQUE_NAME]
from $system.mdschema_levels
where CUBE_NAME = 'YourCubeName'
Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
http://artisconsulting.com/Blogs/GregGalloway -
Performance issue in the table
Hi All
I have one table which is based on a VO. This VO has 150 attributes, and I am facing a performance issue.
My first question: I need to display only 15 attributes in the table, but the other attributes are also required.
So I have two options:
1. make the others form values
or
2. make them messageTextInput items and set rendered to false.
I just want to know which will perform better.
Please help.
These attributes have some default values.
I need to pass these values to the API on some action. If I don't keep them on my page, will row.getAttribute("XYZ") still carry its value?
Performance issue with MSEG table in Production
Hi,
I have written a report with 4 select queries.
First i am selecting data from VBRK table in i_vbrk. Then for all entries in i_vbrk, i am fetching records from VBRP into i_vbrp table. Then for all entries in i_vbrp, records are fetched from MKPF into i_mkpf. Then, finally for all entries in i_mkpf, records are fetched from MSEG into i_mseg table.
Performance of this report is good in Quality system, but it is very poor in Production systems. It is taking more than 20 mins to get executed. MSEG table query is taking most of the time.
I have done indexing and packet sizing on the MSEG table, but the performance issue still persists. Can you please let me know if there is any way the program's performance can be improved?
Please help.
Thanks,
Archana
Hi Archana,
I was having the same issue with MKPF and MSEG; I am using an INNER JOIN condition.
SELECT
mkpf~mblnr
mkpf~mjahr
mkpf~budat
mkpf~usnam
mkpf~bktxt
mseg~zeile
mseg~bwart
mseg~prctr
mseg~matnr
mseg~werks
mseg~lgort
mseg~menge
mseg~meins
mseg~ebeln
mseg~sgtxt
mseg~shkzg
mseg~dmbtr
mseg~waers
mseg~sobkz
mkpf~xblnr
mkpf~frbnr
mseg~lifnr
INTO TABLE xmseg
FROM mkpf
INNER JOIN mseg
ON mkpf~mandt EQ mseg~mandt AND
mkpf~mblnr EQ mseg~mblnr AND
mkpf~mjahr EQ mseg~mjahr
WHERE mkpf~vgart IN se_vgart
AND mkpf~budat IN se_budat
AND mkpf~usnam IN se_usnam
AND mkpf~bktxt IN se_bktxt
AND mseg~bwart IN se_bwart
AND mseg~matnr IN se_matnr
AND mseg~werks IN se_werks
AND mseg~lgort IN se_lgort
AND mseg~sobkz IN se_sobkz
AND mseg~lifnr IN se_lifnr
%_HINTS ORACLE '&SUBSTITUTE VALUES&'.
But I still have a performance issue. Can anybody give some suggestions, please?
Regards,
Shiv -
Not Updating Customized Table when System having Performance Issue
Hi,
This is actually the same topic as "Not Updating Customized Table when System having Performance Issue" which is posted last December by Leonard Tan regarding the user exit EXIT_SAPLMBMB_001.
Recently we changed the function module z_mm_save_hide_qty to run in an update task. However, this caused even more data to go un-updated, so we put back the old version (without the update task). But now it is not working as it used to (e.g. version 1: 10 records not updated; version 2 with update task: 20 records not updated; back to version 1: 20 records not updated).
I tried debugging the program; however, whenever I debug, nothing goes wrong and the data is updated correctly.
Please advise if anyone has any idea why is this happening. Many thanks.
Regards,
Janet
Hi Janet,
you are right. This is a basic rule not to do any COMMIT or RFC calls in a user exit.
Have a look at SAP note 92550. Here they say that exit EXIT_SAPLMBMB_001 is called in the update routine MB_POST_DOCUMENT. And this routine is already called in UPDATE TASK from FUNCTION 'MB_UPDATE_TASKS' IN UPDATE TASK.
SAP also tells us not to do any updates on SAP system tables like MBEW, MARD, MSEG.
Before the exit is called, they now call BAdI 'MB_DOCUMENT_BADI' with methods MB_DOCUMENT_BEFORE_UPDATE and MB_DOCUMENT_UPDATE. Possibly you will have more success implementing the BAdI.
I don't know your situation and goal so this is all I can tell you now.
Good luck!
Regards,
Clemens