cm:select performance problem with multiple LIKE clauses in the query
I have a query like:
listItem like '*abc.xml*' && serviceId like '*xyz.xml*'
Can cm:select contain the two LIKE clauses shown above? The query executes successfully but takes too long to process.
Can this query be simplified, or is there another solution? Please help me with this issue.
Thanks & Regards,
Murthy Nalluri
Similar Messages
-
Performance problem with relatively simple query
Hi, I have a statement that takes over 20s; it should take 3s or less. The statistics are up to date, so I created an explain plan and a tkprof. However, I don't see the problem. Maybe somebody can help me with this?
explain plan
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 16718 | 669K| | 22254 |
| 1 | SORT UNIQUE | | 16718 | 669K| 26M| 22254 |
| 2 | FILTER | | | | | |
|* 3 | HASH JOIN | | 507K| 19M| | 9139 |
| 4 | TABLE ACCESS FULL | PLATE | 16718 | 212K| | 19 |
|* 5 | HASH JOIN | | 507K| 13M| 6760K| 8683 |
|* 6 | HASH JOIN | | 216K| 4223K| | 1873 |
|* 7 | TABLE ACCESS FULL | SDG_USER | 1007 | 6042 | | 5 |
|* 8 | HASH JOIN | | 844K| 11M| | 1840 |
|* 9 | TABLE ACCESS FULL| SDG | 3931 | 23586 | | 8 |
| 10 | TABLE ACCESS FULL| SAMPLE | 864K| 6757K| | 1767 |
|* 11 | TABLE ACCESS FULL | ALIQUOT | 2031K| 15M| | 5645 |
| 12 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 13 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 14 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 15 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
Predicate Information (identified by operation id):
3 - access("SYS_ALIAS_2"."PLATE_ID"="SYS_ALIAS_1"."PLATE_ID")
5 - access("SYS_ALIAS_3"."SAMPLE_ID"="SYS_ALIAS_2"."SAMPLE_ID")
6 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
7 - filter("SDG_USER"."U_CLIENT_TYPE"='QC')
8 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
9 - filter("SYS_ALIAS_4"."STATUS"='C' OR "SYS_ALIAS_4"."STATUS"='P' OR "SYS_ALIA
S_4"."STATUS"='V')
11 - filter("SYS_ALIAS_2"."PLATE_ID" IS NOT NULL)
Note: cpu costing is off
tkprof
TKPROF: Release 9.2.0.1.0 - Production on Mon Sep 22 11:09:37 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautp\udump\nautp_ora_5708.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 61
SELECT distinct p.name
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a, lims_sys.plate p
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND sa.sample_id = a.sample_id
AND a.plate_id = p.plate_id
AND sd.status IN ('V','P','C')
AND sdu.u_client_type = 'QC'
call count cpu elapsed disk query current rows
Parse 1 0.09 0.09 0 3 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 7.67 24.63 66191 78732 0 500
total 3 7.76 24.72 66191 78735 0 500
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
Rows Row Source Operation
500 SORT UNIQUE
520358 FILTER
520358 HASH JOIN
16757 TABLE ACCESS FULL PLATE
520358 HASH JOIN
196632 HASH JOIN
2402 TABLE ACCESS FULL SDG_USER
834055 HASH JOIN
3931 TABLE ACCESS FULL SDG
864985 TABLE ACCESS FULL SAMPLE
2037373 TABLE ACCESS FULL ALIQUOT
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
select 'x'
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 12 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 12 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 3 0.09 0.09 0 3 0 0
Execute 4 0.00 0.00 0 12 0 1
Fetch 2 7.67 24.63 66191 78735 0 501
total 9 7.76 24.72 66191 78750 0 502
Misses in library cache during parse: 3
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 48 0.00 0.00 0 0 0 0
Execute 54 0.00 0.01 0 0 0 0
Fetch 65 0.00 0.00 0 157 0 58
total 167 0.00 0.01 0 157 0 58
Misses in library cache during parse: 16
4 user SQL statements in session.
48 internal SQL statements in session.
52 SQL statements in session.
Trace file: d:\oracle\admin\nautp\udump\nautp_ora_5708.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
4 user SQL statements in trace file.
48 internal SQL statements in trace file.
52 SQL statements in trace file.
20 unique SQL statements in trace file.
500 lines in trace file.
Edited by: RZ on Sep 22, 2008 2:27 AM
A few notes:
1. You seem to have either a VPD policy active or you're using views that add some more predicates to the query, according to the plan posted (the access on the PK_OPERATOR_GROUP index). Could this make any difference?
2. The estimates of the optimizer are really very accurate - actually astonishing - compared to the tkprof output, so the optimizer seems to have a very good picture of the cardinalities and therefore the plan should be reasonable.
3. Did you gather index statistics as well (using COMPUTE STATISTICS when creating the index or "cascade=>true" option) when gathering the statistics? I assume you're on 9i, not 10g according to the plan and tkprof output.
4. Looking at the amount of data that needs to be processed it is unlikely that this query takes only 3 seconds, the 20 seconds seems to be OK.
If you are sure that for a similar amount of underlying data the query took only 3 seconds in the past it would be very useful if you - by any chance - have an execution plan at hand of that "3 seconds" execution.
One thing that I could imagine is that due to the monthly data growth that you've mentioned one or more of the tables have exceeded the "2% of the buffer cache" threshold and therefore are no longer treated as "small tables" in the buffer cache. This could explain that you now have more physical reads than in the past and therefore the query takes longer to execute than before.
I think that this query could only be executed in 3 seconds if it is somewhere using a predicate that is more selective and could benefit from an indexed access path.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Select Performance problems using the 'like' operator
I have a PL/SQL procedure that uses a cursor which contains a 'like' operator in the where clause. I have two database instances that are theoretically the same, however this code processes about 100,000 rows in 5 minutes on one database and 100,000 rows in several weeks on the other database. I know it is the 'like' operator that is causing the problem, but I don't know where to look in the database setup parameters as to what could be different between the two. Can someone point me in the right direction?
I tried to think of another way to write the query, but I really need to use the wildcard option on the data I'm searching for. The system I'm working with attaches a suffix to the end of every ID (i.e. '214-222-1234-0'). The suffix increments ('-1', '-2', etc...) but the rest of the ID stays the same, and I want to find all of the rows where the first 12 characters are the same, so I strip off the suffix and use a wildcard '%' in its place. I tried adding the SUBSTR() function to the left-hand column of the where clause, but it was even slower than using the 'like' operator. I know it's a sound query, I just can't figure out why it works fine on one database and not the other.
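A quick way to see why the wildcard placement matters: a prefix-only pattern can be resolved with an index range scan, while a leading wildcard forces a scan of every row. The sketch below (hypothetical table and data, not the poster's schema) illustrates this with SQLite's EXPLAIN QUERY PLAN; Oracle's optimizer makes the analogous choice between an INDEX RANGE SCAN and a full scan.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA case_sensitive_like = ON")  # required for SQLite's LIKE-to-range optimization
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY)")
db.executemany("INSERT INTO orders VALUES (?)",
               [(f"214-222-1234-{i}",) for i in range(5)])

def plan(sql):
    # Collapse the EXPLAIN QUERY PLAN detail column into one string
    return " ".join(row[3] for row in db.execute("EXPLAIN QUERY PLAN " + sql))

prefix_plan = plan("SELECT id FROM orders WHERE id LIKE '214-222-1234-%'")
leading_plan = plan("SELECT id FROM orders WHERE id LIKE '%1234%'")
print(prefix_plan)   # a SEARCH: the pattern became a range on the index
print(leading_plan)  # a SCAN: every row must be inspected
```

This is also why the SUBSTR() attempt was slow: wrapping the column in a function disables the plain index just as the leading wildcard does.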
-
Problems with VO's query, clause: QRSLT
Hi Gurus,
I have a problem with a VO's query: the query runs fine in SQL Developer and Toad, but when I run it in the VO it doesn't return any results. What can I do to fix this problem?
Please, I hope you can help me with this one; I'll be waiting for your answer.
Here's the query that runs fine in SQL Developer or Toad and returns a lot of results:
SELECT ss.actual_sin, ss.idsin
FROM xx.xx_seg_ctrl_da d
INNER JOIN xx.xx_seg_sin ss
ON (ss.idsin = d.idsin)
WHERE d.status = 0 AND ss.idsin IN ('102_d','487_b','101_m','201_d')
ORDER BY DECODE(ss.actual_sin, 'RED','a', 'YELLOW','b','GREEN','c')
and here's the query as it runs in the VO component, which doesn't return any results:
SELECT * FROM (SELECT ss.actual_sin, ss.idsin
FROM xx.xx_seg_ctrl_da d
INNER JOIN xx.xx_seg_sin ss
ON (ss.idsin = d.idsin)
WHERE d.status = 0
ORDER BY DECODE(ss.actual_sin, 'RED','a', 'YELLOW','b','GREEN','c')) QRSLT WHERE IDSIN IN ('102_d','487_b','101_m','201_d')
have a nice one.
Best Regards,
mentor
The second query is for the rowcount. Enable the logger and see what value is getting passed as the bind variable.
-
Performance problem with query on bkpf table
hi, good morning all,
I have a performance problem with the below query on the bkpf table.
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat.
Is there any possibility to improve the performance by using an index?
Please help me.
Thanks in advance,
regards,
srinivas
Hi,
If you can add bukrs as an input field, or if you have bukrs as part of any other internal table to filter out the data, you can use, for example:
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat
and bukrs in s_bukrs.
or
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
for all entries in itab
WHERE budat IN s_budat
and bukrs = itab-bukrs.
Just see if it is possible to do any one of the above. It has to be verified against your requirement. -
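The suggestion can be illustrated with a small sketch (Python/SQLite as a stand-in, hypothetical index name): once the selective, equality-filtered bukrs is in the WHERE clause, an index that leads with bukrs becomes usable; a date range on budat alone cannot use it.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bkpf (bukrs TEXT, belnr TEXT, gjahr TEXT, budat TEXT)")
db.execute("CREATE INDEX ix_bkpf ON bkpf (bukrs, budat)")

def plan(sql):
    # Concatenate the plan detail column for easy inspection
    return " ".join(r[3] for r in db.execute("EXPLAIN QUERY PLAN " + sql))

date_only = plan("SELECT belnr FROM bkpf "
                 "WHERE budat BETWEEN '20070101' AND '20070131'")
with_bukrs = plan("SELECT belnr FROM bkpf "
                  "WHERE bukrs = '1000' AND budat BETWEEN '20070101' AND '20070131'")
print(date_only)   # no usable index leading column -> table scan
print(with_bukrs)  # bukrs bound -> index search via ix_bkpf
```

The same principle holds on the database behind BKPF: an index is only helpful when its leading columns appear in the selection.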
Performance Problem with Query load
Hello,
after the upgrade to SPS 23 we have some problems with loading a query. Before the upgrade the query ran in 1-3 minutes; now it takes more than 40 minutes.
Does anyone have an idea?
Regards
Marco
Hi,
I suggest executing the query in the RSRT transaction with the option 'Execute + Debugger' to further analyze where exactly the query is taking time.
Make sure to choose the appropriate 'Query Display' option (List / BEx Analyzer / HTML) before executing the query in debugger mode, since the display option also affects the query run time.
Hope this info helps!
Bala Koppuravuri -
Performance problem with sdn_nn - new 10g install
I am having a performance problem with sdn_nn after migrating to a new machine. The old Oracle version was 9.0.1.4. The new is 10g. The new machine is faster in general. Most (non-spatial) batch processes run in half the time. However, the below statement is radically slower. The below statement ran in 45 minutes before. On the new machine it hasn't finished after 10 hours. I am able to get a 5% sample of the customers to finish in 45 minutes.
Does anyone have any ideas on how to approach this problem? Any chance something isn't installed correctly on the new machine (the nth version of the query finished, albeit 20 times slower)?
Appreciate any help. Thanks.
- Jack
create table nearest_store
as
select /*+ ordered */
a.customer_id,
b.store_id nearest_store,
round(mdsys.sdo_nn_distance(1),4) distance
from customers a,
stores b
where mdsys.sdo_nn(
b.geometry,
a.geometry,
'sdo_num_res=1, unit=mile',
1
) = 'TRUE';
Dan,
customers 110,000 (involved in this query)
stores 28,000
Here is the execution plan on the current machine:
CREATE TABLE STATEMENT cost = 81947
LOAD AS SELECT
PX COORDINATOR
PX SEND QC (RANDOM) :TQ10000
ROW NESTED LOOPS
1 1 PX BLOCK ITERATOR
1 1ROW TABLE ACCESS FULL CUSTOMERS
1 3 PARTITION RANGE ALL
1 3 TABLE ACCESS BY LOCAL INDEX ROWID STORES
DOMAIN INDEX STORES_SIDX
I can't capture the execution plan on the old database. It is gone. I don't remember it being any different from the above (full scan customers, probe stores index once for each row in customers).
I am trying the query without the create table (just doing a count). I'll let you know on that one.
I am at 10.0.1.3.
Here is how I created the index:
create index stores_sidx
on stores(geometry)
indextype is mdsys.spatial_index LOCAL
Note that the stores table is partitioned by range on store type. There are three store types (each in its own partition). The query returns the nearest store of each type (three rows per customer). This is by design (based on the documented behavior of sdo_nn).
In addition to running the query without the CTAS, I am also going to try running it on a different machine tonight. I'll let you know how that turns out.
The reason I ask about the install, is that the Database Quick Installation Guide for Solaris says this:
"If you intend to use Oracle JVM or Oracle interMedia, you must install the Oracle Database 10g Products installation type from the Companion CD. This installation optimizes the performance of those products on your system."
And, the Database Installlation Guide says:
"If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These libraries are required to improve the performance of the products on your platform."
Based on that, I am suspicious that maybe I have the product installed on the new machine, but not correctly (forgot to set fast=true).
Let me know if you think of anything else you'd like to see. Thanks.
- Jack -
Performance problem with table COSS...
Hi
Has anyone encountered performance problems with these tables: COSS, COSB, COEP?
Best Regards>
gsana sana wrote:
> Hi Guru's
>
> this is the select query which is taking much time in production. So please help me to improve the performance with BSEG.
>
> this is my select query:
>
> select bukrs
> belnr
> gjahr
> bschl
> koart
> umskz
> shkzg
> dmbtr
> ktosl
> zuonr
> sgtxt
> kunnr
> from bseg
> into table gt_bseg1
> for all entries in gt_bkpf
> where bukrs eq p_bukrs
> and belnr eq gt_bkpf-belnr
> and gjahr eq p_gjahr
> and buzei in gr_buzei
> and bschl eq '40'
> and ktosl ne 'BSP'.
>
> UR's
> GSANA
Hi,
This is what I know; if any expert thinks it's incorrect, please do correct me.
BSEG is a cluster table with BUKRS, BELNR, GJAHR and BUZEI as its key, whereas the other fields are stored in the database as raw data; SAP has to unpack that raw data first if we use non-key fields in the where condition. Hence, I suggest using only fields up to BUZEI in the where condition and filtering the other conditions at internal-table level, e.g. with a DELETE statement. Hope this helps.
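A sketch of that strategy (Python with SQLite as a stand-in; table contents and column positions are purely illustrative): push only the key fields into the database WHERE clause, then drop the non-key mismatches client-side, just as the ABAP DELETE on the internal table would.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bseg (bukrs TEXT, belnr TEXT, gjahr TEXT,"
           " buzei INTEGER, bschl TEXT, ktosl TEXT)")
db.executemany("INSERT INTO bseg VALUES (?, ?, ?, ?, ?, ?)",
               [("1000", "0001", "2008", 1, "40", "ABC"),
                ("1000", "0001", "2008", 2, "40", "BSP"),
                ("1000", "0002", "2008", 1, "50", "ABC")])

# Step 1: hit the database only on (cluster-)key fields.
fetched = db.execute("SELECT * FROM bseg WHERE bukrs = ? AND gjahr = ?",
                     ("1000", "2008")).fetchall()

# Step 2: filter the non-key conditions in the "internal table"
# (the equivalent of ABAP's DELETE gt_bseg WHERE ...).
kept = [r for r in fetched if r[4] == "40" and r[5] != "BSP"]
```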
Regards,
Abraham -
CONNECT BY, Performance problems with hierarchical SQL
Hello,
I have a performance problem with the following SQL:
table name: testtable
columns: colA, colB, colC, colD, colE, colF
Following hierarchical SQL:
SELECT colA||colB||colC AS A1, colD||colE||colF AS B1, colA, colB, colC, level
FROM testable
CONNECT BY PRIOR A1 = B1
START WITH A1 = 'aaa||bbbb||cccc'
With big tables the performance of this construct is very bad. I can't use function-based indexes to create an index on "colA||colB||colC" and "colD||colE||colF".
Has anyone an idea how I can get really better performance for this hierarchical construct if I have to combine multiple columns? Or is there a better way (with PL/SQL or a view/trigger) to solve something like this?
Thanks in advance for your investigation :)
Carmen
Why not
CONNECT BY PRIOR colA = colD
and PRIOR colB = colE
and ...
? It is not the same thing, but I suspect my version is correct:-) -
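As a sketch of that multi-column alternative (hypothetical data; SQLite's recursive CTE standing in for CONNECT BY): comparing the columns pairwise avoids both the function-based-index problem and the false matches concatenation can produce ('ab'||'c' equals 'a'||'bc').

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (colA TEXT, colB TEXT, colD TEXT, colE TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
               [("a1", "b1", "a0", "b0"),   # child of the (a0, b0) root
                ("a2", "b2", "a1", "b1"),
                ("a3", "b3", "a2", "b2")])

# Walk the hierarchy by joining on the individual columns,
# never on a concatenated key.
rows = db.execute("""
    WITH RECURSIVE h(colA, colB, lvl) AS (
        SELECT colA, colB, 1 FROM t WHERE colD = 'a0' AND colE = 'b0'
        UNION ALL
        SELECT t.colA, t.colB, h.lvl + 1
        FROM t JOIN h ON t.colD = h.colA AND t.colE = h.colB
    )
    SELECT colA, lvl FROM h ORDER BY lvl
""").fetchall()
print(rows)
```

In Oracle the same join conditions go into the CONNECT BY clause with PRIOR on each column, which also lets the optimizer consider plain indexes on those columns.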
Performance Problems with "For all Entries" and a big internal table
We have big Performance Problems with following Statement:
SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
FOR ALL ENTRIES IN gt_zmon_help
WHERE
status = 'IAI200' AND
logdat IN gs_dat AND
ztrack = gt_zmon_help-ztrack.
In the internal table gt_zmon_help are over 1000000 entries.
Anyone an Idea how to improve the Performance?
Thank you!>
Matthias Weisensel wrote:
> We have big Performance Problems with following Statement:
>
>
SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
> FOR ALL ENTRIES IN gt_zmon_help
> WHERE
> status = 'IAI200' AND
> logdat IN gs_dat AND
> ztrack = gt_zmon_help-ztrack.
>
> In the internal table gt_zmon_help are over 1000000 entries.
> Anyone an Idea how to improve the Performance?
>
> Thank you!
You can't expect miracles. With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab? How many records is the select bringing back? I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table.
In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time. -
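With a million-entry driver table, one practical tactic (sketched here in Python with SQLite, illustrative names only) is to de-duplicate the keys up front, as FOR ALL ENTRIES itself would, and then fetch in bounded chunks so each statement stays a reasonable size:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE zmon (ztrack INTEGER, status TEXT)")
db.executemany("INSERT INTO zmon VALUES (?, ?)",
               [(i, "IAI200" if i % 2 == 0 else "OTHER") for i in range(1000)])

keys = [i % 1000 for i in range(5000)]  # stand-in for the huge itab, with duplicates
unique_keys = sorted(set(keys))          # de-duplicate before hitting the database
CHUNK = 200

found = []
for start in range(0, len(unique_keys), CHUNK):
    chunk = unique_keys[start:start + CHUNK]
    placeholders = ",".join("?" * len(chunk))
    sql = (f"SELECT ztrack FROM zmon "
           f"WHERE status = 'IAI200' AND ztrack IN ({placeholders})")
    found.extend(r[0] for r in db.execute(sql, chunk))
```

The chunking mirrors what the ABAP runtime does internally with FOR ALL ENTRIES; shrinking the driver table beforehand is what actually saves time.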
Performance problem with recordset object
Hi,
I have a performance problem with record set object. Is it true using RecordSet object as argument to the method will cause any perfomance degradation?
here is my code.
finStmtList = selectFinancialStatements(rs, shortFormat, latestOnly, true);
I changed the code by populating the recordset in the method that called the above statement. previously the population of valueobject is within the select FinancialStatements.
Now the method looks like
finStmtList = selectFinancialStatements(ValueObject, shortFormat, latestOnly, true);
Will this fix result in any performance gain? Note that the RecordSet object contains a large number of records: up to 1000 rows with 30 columns. I tested the application, but the performance varies from time to time.
Is there any other way to fine tune it? -
Performance problem with Oracle
We are currently getting a system developed in Unix/Weblogic/Tomcat/Oracle environment. We have developed a screen that contains 5 or 6 different parameters to select from. We could select multiple parameters in each of these selections. The idea behind the subsequent screens is to attach information to already existing data/ possible future data that matches the selection criteria.
Based on these selections, existing data located within the system in a table is searched and the rows that match are selected. New rows are also created in the table for combinations that do not currently have a match. Frequently multiple parameters are selected, and 2000 different combinations need to be searched in the table. Of these, only about 100 or 200 combinations will be available in existing data, so the system has to insert 1800 rows while the user waits for the system to come up with data based on their selections. The user is not willing to wait more than 30 seconds to get to the next screen, but in the above scenario the system takes more than an hour to insert the new records and bring the information up. We need suggestions to see if the performance can be improved this drastically. If not, what are the alternatives? Thanks
The #1 cause for performance problems with Oracle is not using it correctly.
I find it hard to believe that with the small data volumes mentioned you can have performance problems.
You need to perform a sanity check. Are you using Oracle correctly? Do you know what bind variables are? Are you using indexes correctly? Are you using PL/SQL correctly? Is the instance set up correctly? What about storage: are you using SAME (RAID10) or something else? Etc.
Fact: Oracle performs exceptionally well.
Simple example from a benchmark I did on this exact same subject. App-tier developers not understanding and not using Oracle correctly. Incorrect usage of Oracle doing a 100,000 SQL statements. 24+ minutes elapsed time. Doing those exact same 100,000 SQL statement correctly (using bind variables) - 8 seconds elapsed time. (benchmark using Oracle 10.1.0.3 on a Sunfire V20z server)
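The bind-variable point can be sketched in a few lines (SQLite via Python here, purely illustrative): the literal version generates 100 distinct statement texts that must each be parsed, while the bound version reuses one parsed statement.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO emp VALUES (?, ?)",
               [(i, f"name{i}") for i in range(100)])

# Literals: 100 unique SQL texts, so the engine must parse each one
# ("hard parse" in Oracle terms).
hard = [db.execute(f"SELECT name FROM emp WHERE id = {i}").fetchone()[0]
        for i in range(100)]

# Bind variable: one SQL text, parsed once and re-executed from the
# statement cache -- same results, a fraction of the parse work.
soft = [db.execute("SELECT name FROM emp WHERE id = ?", (i,)).fetchone()[0]
        for i in range(100)]
```

At 100,000 statements the parse overhead, plus the shared-pool churn it causes in Oracle, is exactly the 24-minutes-versus-8-seconds difference described above.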
But then you need to use Oracle correctly. Are you familiar with the Oracle Concepts Guide? Have you read the Oracle Application Developer Fundamentals Guide? -
Performance problem with CR SDK
Hi,
I'm currently on a customer site and I have the following problem:
The client has a performance problem with a J2EE application which calls a Crystal report via the CR SDK. To reproduce the problem on the local machine (the CR server), I developed a little JSP page which uses the Crystal SDK to open a Crystal report on the server (the report is based on an XML data source), set the new data source (with a new XML data flow) and refresh the report in PDF format.
The problem is that the first 2 steps take about 5 seconds each (5 seconds to open the report and 5 seconds to set the data source). In total the process takes about 15 seconds to open and refresh the document, which is very long for a little document.
The document is a 600Ko file, the xml source is a 80Ko file.
My jsp page is directly deployed on the tomcat of the Crystal Report Server (CRXIR2 without Service Pack).
The Filestore and the MySQL database are on the CR server.
The server is a 4 quadripro (16 proc) with 16Go of RAM and is totally dedicated to Crystal Report. For the moment, there is no activity on the server (it is also used for the test).
The main JSP calls are the following:
IEnterpriseSession es = CrystalEnterprise.getSessionMgr().logon("administrator", "", "EDITBI:6400", "secEnterprise");
IInfoStore infoStore = (IInfoStore) es.getService("", "InfoStore");
IInfoObjects infoObjects = infoStore.query("SELECT * FROM CI_INFOOBJECTS WHERE SI_NAME='CPA_EV' AND SI_INSTANCE=0 ");
IInfoObject report = (IInfoObject) infoObjects.get(0);
IReportAppFactory reportAppFactory = (IReportAppFactory)es.getService("RASReportFactory");
ReportClientDocument reportClientDoc = reportAppFactory.openDocument(report.getID(), 0, null);
IXMLDataSet xmlDataSet = new XMLDataSet();
xmlDataSet.setXMLData(new ByteArray(ligne_data_xml));
xmlDataSet.setXMLSchema(new ByteArray(ligne_schema_xml));
DatabaseController db = reportClientDoc.getDatabaseController();
db.setDataSource(xmlDataSet, "", "");
ByteArrayInputStream bt = (ByteArrayInputStream)reportClientDoc.getPrintOutputController().export(ReportExportFormat.PDF);
My question is: is this the right way to do this?
Thanks in advance for your help.
Best regards,
Emmanuel
Hi,
My problem is not resolved and I haven't had any news from support.
If you have any idea/info, don't forget me.
Thanks in advance
Emmanuel -
Performance problem with Integration with COGNOS and Bex
Hi Gems
I have a performance problem with some of my queries when integrating with the COGNOS
My query is simple; it gets the data for the date interval:
From Date: 20070101
To date:20070829
When executing the query in BEx it takes 2 minutes, but when it is executed in Cognos it takes almost 10 minutes or more.
Is there anywhere we can debug how the data is sent to Cognos, like debugging the OLE DB layer?
And how can we increase the performance of the query in Cognos?
Thanks in advance
Regards
AK
Hi,
Please check the following CA Unicenter config files on the SunMC server:
- is the Event Adapter (ea-start) running? Without this daemon no event forwarding is done to CA Unicenter, nor does discovery from CA Unicenter work.
How to debug:
- run ea-start in debug mode:
# /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
- check whether the Event Adapter has been set up:
# /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
- check the CA log file
# /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
After all that is fine, check this site; it explains how to discover a SunMC agent from CA Unicenter.
http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
Kind Regards -
I don't know why, but when I select a folder with multiple docs contained within it, or multiple files, the Compress option is grayed out under the File menu, nor is it available when I right-click. Please help.
Yes it is, but I have the same problem: the option does not appear on right-click (I'm actually control-clicking), and Compress is grayed out under the File menu. I'd like to fix it.
Hmm. After finding the Archive Utility,* I tried archiving the folder. It came out as .cpgz, ugh. Found there was a preference to make .zip the default. It worked!
And now… the menus are not grayed out. ?!?! OK.
Hope this helps someone.
*You cannot find this via search; apparently it is a hidden file. You have to click from the root drive through to /System/Library/CoreServices (thanks for the file path, Alberto!)