RE: Case 59063: performance issues w/ C TLIB and Forte 3M

Hi James,
Could you give me a call? I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jmin@brio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to a commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried a prepared statement with open cursor in your ISQL
session (see the sketch after question 3)? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
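For question 1, a minimal ISQL sketch of the cursor test (table and column names are placeholders, not from this case):

    -- Declare, open, and fetch from a cursor the way a CT-Lib client does,
    -- instead of sending the query as an execute-immediate batch
    declare cur1 cursor for
        select id, name from test_table where name like 'ABC%'
    go
    open cur1
    go
    fetch cur1
    go
    close cur1
    go
    deallocate cursor cur1
    go

If this form is also slow in ISQL, the problem is on the server side rather than in Forte's use of the API.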
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file (character format to make it portable; see the bcp sketch after this list).
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL statement that you issue in ISQL for comparison.
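For item a), a sketch of the character-format export (database, table, server, and login are placeholders):

    -- -c selects character format, which keeps the file portable across platforms
    bcp mydb..mytable out mytable.dat -c -Usa -Smyserver

A matching 'bcp mydb..mytable in' on our side would reload the flat file into a table created by the script from item b).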
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance issues w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but the exact statement behind the cursor it is
trying to open is not clear.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit any statement coming
to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
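For reference, the "see what it is doing" check described above is usually read from Sybase ASE's system tables; a minimal sketch (the program_name filter is illustrative, not from this case):

    -- spid, status, cmd, blocked, and program_name are standard columns of
    -- master..sysprocesses in Sybase ASE; cmd is where OPEN CURSOR shows up
    select spid, status, cmd, blocked
    from   master..sysprocesses
    where  program_name like 'forte%'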
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html

Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
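A minimal sketch of such a trace with wait events (level 8 of event 10046 includes waits; the tracefile identifier is a placeholder):

    alter session set tracefile_identifier = 'pipeline_test';
    alter session set events '10046 trace name context forever, level 8';
    -- run the with_pipeline and no_pipeline procedures here
    alter session set events '10046 trace name context off';
    -- then format the raw trace file, e.g.:
    -- tkprof <tracefile>.trc pipeline_test.txt sys=no sort=exeela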

Similar Messages

  • MacBook Pro performance issues w/2nd monitor and FCP7

    I have this MacBook Pro bought brand-new in January 2010:
      Model Name: MacBook Pro
      Model Identifier: MacBookPro5,2
      Processor Name: Intel Core 2 Duo
      Processor Speed: 3.06 GHz
      Number Of Processors: 1
      Total Number Of Cores: 2
      L2 Cache: 6 MB
      Memory: 8 GB
      Bus Speed: 1.07 GHz
    and until today had never attached a second monitor to it. Today I hooked up my Samsung 24" to do some dual screen editing in Final Cut 7.0.3. I was unable to play back my video at full speed in the second monitor, and after a few seconds of skippy playback I'd get that error message about unable to play back video at full speed and to check my RT settings. I was using a Mini DisplayPort to DVI adapter. My computer has no issues playing the video in the laptop's monitor at any resolution and any quality settings (I've never changed the RT settings or anything else in the menu ever but I tried every combination this time). I then tried using my TV as a 2nd monitor with an HDMI adapter. Same performance issues. I then tried my friend's newer 13" MBP 8,1 and it performed flawlessly with the same project & footage. I feel like my $3,000 computer should outperform a $1,200 one even if mine is a year and a half older. Any advice?
    Chris

    Wow, you posted this perfectly to coincide with an identical problem, albeit using Logic Pro 9.1.5 rather than FCP.
    Last week, I purchased a 23" external monitor to use alongside my "flagship" 2011 15" hi-res, 2.3 i7 Macbook Pro with 8Gb of RAM.
    It is connected via a mini-DVI to D-sub analog (not that that should matter?) and all appeared fine.
    The first issue I had was with my MBP's fan now running CONSTANTLY, when I have the second monitor attached. Even when the machine is completely idle.
    When using the machine to record audio, this is a fairly hefty problem and not something I had anticipated - indeed why would I anticipate such a thing?
    What is far, far worse though is that over the last few days I have had repeated problems with performance drop-outs and errors in Logic, and I have been trying to fathom out why. Realising that the only major system change made was the above monitor connection, I ran some tests.
    I restarted my MBP, no other apps were running and with my new 23" monitor attached acting as main display with MBP built in display on as secondary
    I loaded up a fairly demanding Logic project which was hitting 40% to 60% CPU usage when using the built in MBP display last week
    I ran activity monitor and had CPU usage history open
    The above project now repeatedly overloads and playback halts in a given 8 bar section - with CPU at 80% most of the time
    I disconnected the external display, no shut down, I just let the machine switch to the built in 15".
    Started the same project, the same 8 bar section and hey presto - CPU usage back down to 40% to 60%
    The above was reflected in the CPU usage history with the graph showing CPU use down by about a half, when running this Logic project WITHOUT the external display.
    There is a very useful benchmark Logic project that has been used as a test by many users to gauge Logic performance on given Apple hardware.
    The project has about 100 tracks pre-configured with CPU intensive plugins, designed to tax the CPU.
    The idea is that you load up the project with tracks muted, press play and then unmute the tracks steadily until Logic is unable to play continuously because of a system performance error.
    On my MBP, with the external monitor NOT attached, I can play back around 50 of the audio tracks in this benchmark project.
    With the monitor attached, I can get about 22 tracks playing.... which is actually a far worse performance drop (-50% I think!?) than with the first example!
    I did also try with just the external monitor attached and not the MBP display and performance was about 10% better than with dual monitors - so still extremely poor, to say the least.
    This machine is the flagship MBP and has a dedicated AMD Radeon HD6750 GPU which should take care of most if not ALL graphics processing - I mean it's capable of running some pretty demanding games!
    Putting aside the issue of constant fan noise, there is no reason AT ALL, why using an external monitor should tax the i7 CPU this way - it's not as though Logic is graphically demanding... far from it.
    I am on 10.6.8, Logic 9.1.5, all apps up to date via "Software Update".
    I will of course, be contacting Apple...

  • Performance issues involving tables S031 and S032

    Hello gurus,
    I am having some performance issues. The program involves accessing data from S031 and S032.  I have pasted the SELECT statements below.  I have read through the forums for past postings regarding performance, but I wanted to know if there is anything that stands out as being the culprit of very poor performance, and how it can be corrected.  I am fairly new to SAP, so I apologize if I've missed an obvious error.  From debugging the program, it seems the 2nd select statement is taking a very long time to process. 
    GT_S032: approx. 40,000 entries
    S031:    approx. 90,000 entries
    MSEG:    approx. 115,000 entries
    MKPF:    approx. 100,000 entries
    MARA:    approx. 90,000 entries
    SELECT
      vrsio          "Version
      werks          "Plant
      lgort          "Storage Location
      matnr          "Material
      ssour          "Statistic(s) origin                  
    FROM s032
    INTO TABLE gt_s032
    WHERE ssour = space
    AND   vrsio = c_000
    AND   werks = gw_werks.
    IF sy-subrc = 0.
      SELECT
        vrsio        "Version
        werks        "Plant
        spmon        "Period to analyze - month
        matnr        "Material
        lgort        "Storage Location
        wzubb        "Valuated stock receipts value
        wagbb        "Value of valuated stock being issued
      FROM s031
      INTO TABLE gt_s031
      FOR ALL ENTRIES IN gt_s032
      WHERE ssour = gt_s032-ssour                                     
      AND   vrsio = gt_s032-vrsio                                     
      AND   spmon IN r_spmon
      AND   sptag = '00000000'                                      
      AND   spwoc = '000000'                                          
      AND   spbup = '000000'                               
      AND   werks = gt_s032-werks
      AND   matnr = gt_s032-matnr
      AND   lgort = gt_s032-lgort
      AND   ( wzubb <> 0 OR wagbb <> 0 ).
    ELSE.
      WRITE: 'No data selected'(m01).
      EXIT.
    ENDIF.
    SORT gt_s032 BY vrsio werks lgort matnr.
    SORT gt_s031 BY vrsio werks spmon matnr lgort.
    SELECT
      p~werks          "Plant
      p~matnr          "Material
      p~mblnr          "Document Number
      p~mjahr          "Document Year
      p~bwart          "Movement type
      p~dmbtr          "Amount in local currency
      t~shkzg          "Debit/Credit indicator
    INTO TABLE gt_scrap
    FROM mkpf AS h
    INNER JOIN mseg AS p
       ON h~mblnr = p~mblnr
      AND h~mjahr = p~mjahr
    INNER JOIN mara AS m
       ON p~matnr = m~matnr
    INNER JOIN t156 AS t
       ON p~bwart = t~bwart
    WHERE h~budat >= gw_duepr-begda
      AND h~budat <= gw_duepr-endda
      AND p~werks = gw_werks.
    Thanks so much for your help,
    Jayesh

    Issue with table s031 and with for all entries.
    Hi,
    I have the following code, in which the select statement on S031 takes a long
    time and then short dumps because it exceeds the time limit for executing an
    ABAP program. What should I do?
    TYPES:
      BEGIN OF TY_MTL,  " Material Master
        MATNR TYPE MATNR,   " Material Code
        MTART TYPE MTART,   " Material Type
        MATKL TYPE MATKL,   " Material Group
        MEINS TYPE MEINS,   " Base unit of Measure
        WERKS TYPE WERKS_D, " Plant
        MAKTX TYPE MAKTX,   " Material description (Short Text)
        LIFNR TYPE LIFNR,   " vendor code
        NAME1 TYPE NAME1_GP, " vendor name
        CITY  TYPE ORT01_GP, " City of Vendor
        Y_RPT TYPE P DECIMALS 3, "Yearly receipt
        Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
        M_OPG TYPE P DECIMALS 3, "Month opg
        M_OPG1 TYPE P DECIMALS 3,
        M_RPT TYPE P DECIMALS 3, "Month receipt
        M_ISS TYPE P DECIMALS 3, "Month issue
        M_CLG TYPE P DECIMALS 3, "Month Closing
        D_BLK TYPE P DECIMALS 3, "Block Stock,
        D_RPT TYPE P DECIMALS 3, "Today receipt
        D_ISS TYPE P DECIMALS 3, "Day issues
        TL_FL(2) TYPE C,
        STATUS(4) TYPE C,
    END OF TY_MTL,
    BEGIN OF TY_OPG     , " Opening File
           SPMON TYPE SPMON,   " Period to analyze - month
           WERKS TYPE WERKS_D, " Plant
           MATNR TYPE MATNR,   " Material No
           BASME TYPE MEINS,
           MZUBB TYPE MZUBB,   " Receipt Quantity
           WZUBB TYPE WZUBB,
           MAGBB TYPE MAGBB,   " Issues Quantity
           WAGBB TYPE WAGBB,
    END OF TY_OPG,
    DATA :
           T_M  TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
           WA_M TYPE TY_MTL,
           T_O  TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
           WA_O TYPE TY_OPG.
    DATA: smonth1      TYPE spmon.  
    SELECT
      a~matnr
      a~mtart
      a~matkl
      a~meins
      b~werks
      INTO TABLE t_m FROM mara AS a
      INNER JOIN marc AS b
      ON a~matnr = b~matnr
    *  WHERE a~mtart EQ s_mtart
      WHERE a~matkl IN s_matkl
      AND b~werks IN s_werks
      AND b~matnr IN s_matnr.
    SELECT spmon
           werks
           matnr
           basme
           mzubb
           WZUBB
           magbb
           wagbb
            FROM s031 INTO TABLE t_o
            FOR ALL ENTRIES IN t_m
            WHERE matnr = t_m-matnr
            AND werks IN s_werks
              AND spmon le smonth1
              AND basme = t_m-meins.

  • Performance Issue in Query using = and =

    Hi,
    I have a performance issue in using condition like:
    SELECT * FROM A WHERE ITEM_NO>='M-1130' AND ITEM_NO<='M-9999'.
    Item_No is a varchar2 field and the field contains numerical as well as string values.
    Can anyone help to solve the issue.
    Thanks and Regards

    How can you say it is a performance issue with the condition? Do you have an execution plan? If yes, post it between [pre] and [/pre] tags. Like this.
    [pre]SQL> explain plan for
      2  select sysdate
      3  from dual
      4  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1546270724
    | Id  | Operation        | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  FAST DUAL       |      |     1 |     2   (0)| 00:00:01 |
    8 rows selected.
    SQL>
    [/pre]
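    Applied to the query in question, the same check looks like this (table name as posted; the plan shows whether the range predicate on ITEM_NO uses an index):
    [pre]SQL> explain plan for
      2  select * from a where item_no >= 'M-1130' and item_no <= 'M-9999'
      3  /
    SQL> select * from table(dbms_xplan.display)
      2  /
    [/pre]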

  • Performance issue with high CPU and IO

    Hi guys,
    I am encountering huge user response time on a production system and I don’t know how to solve it.
    Doing some extra tests and using the instrumentation that we have in the code we concluded that the DB is the bottleneck.
    We generated some AWR reports and noticed that CPU was among the top wait events. We also noticed that, in a random manner, some simple SQLs take a long time to execute. We activated SQL trace on the system and saw that very simple SQLs (unique index access on one table) have huge exec times: 9s.
    In the trace file, the huge time was in the fetch phase: 9.1s CPU and 9.2s elapsed.
    And no or very small waits for this specific SQL.
    It seems like the bottleneck is the CPU, but at that point there were very few processes running on the DB. Why can we have such a big CPU cost on a simple select? This is a machine with 128 cores. We have quicker responses on machines smaller/busier than this.
    We noticed that we had a huge db_cache_size (12G), and after we scaled it down we noticed some improvements, but not enough. How can I prove that there is a link between high CPU and a big cache_size (there was no wait involved in the SQL execution)? What can we do if we really need a big DB cache size?
    The second issue is that I tried to execute an sql on a big table (FTS on a big table. no join). Again on that smaller machine it runs in 30 seconds and on this machine it runs in 1038 seconds.
    Also generated a trace for this SQL on the problematic machine:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1    402.08    1038.31    1842916    6174343          0           1
    total        3    402.08    1038.32    1842916    6174343          0           1
      db file sequential read                     12419        0.21         40.02
      i/o slave wait                             135475        0.51        613.03
      db file scattered read                     135475        0.52        675.15
      log file switch completion                      5        0.06          0.18
      latch: In memory undo latch                     6        0.00          0.00
      latch: object queue header operation            1        0.00          0.00
    ********************************************************************************
    The high CPU is present here also, but here I have a huge wait on db file scattered read.
    Looking at the session running the select, the average wait for db file scattered read was 0.5; on the other machine it is more like 0.07.
    I thought this was an IO issue. I did some IO tests at OS level and it seems like the read and write operations are very fast…much faster than on the machine that has the smaller average wait. Why the difference in waits?
    One difference between these two DBs is that the problem one has the db block size = 16k and the other one has 8k.
    I received some reports done at OS level on CPU and IO usage on the problematic machine (in normal operations). It seems like the CPU is very used and the IO stays very low.
    On the other machine, the smaller and faster one, it is the other way around.
    What is the problem here? How can I test further? Can I link the high CPU to low/slow IO?
    We have 10g on Sun OS with ASM.
    Thanks in advance.

    Yes, there are many things you can and should do to isolate this. But first check that MOS note Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM) [ID 1018855.1] isn't messing you up, to start.
    Also, be sure and post exact patch levels for both Oracle and OS.
    Be sure and check all your I/O settings and see what MOS has to say about those.
    Are you using ASSM? See Long running update
    Since it got a little better with shrinking the SGA size, that might indicate (wild speculation here, something like) one of the problems is simply too much thrashing within the SGA, as oracle decides "small" objects being full scanned in memory is faster than range scans (or whatever) from disk, overloading the cpu, not allowing the cpu to ask for other full scans from I/O. Possibly made worse by row level locking, or some other app issue that just does too much cpu.
    You probably have more than one thing wrong. High fetch count might mean you need to adjust the array size on the clients.
    Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Edit: Oh, see Solaris 10 memory management conflicts with Automatic PGA Memory Management [ID 460424.1] too.
    Edited by: jgarry on Nov 15, 2011 1:45 PM
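    As a starting point for linking the CPU time to (or ruling out) waits, a minimal sketch against a standard dynamic view (the wait_class column exists from 10g on):
        -- Top non-idle waits since instance startup
        select *
        from (select event, total_waits, time_waited_micro / 1e6 seconds_waited
                from v$system_event
               where wait_class <> 'Idle'
               order by time_waited_micro desc)
        where rownum <= 10;
    If CPU time dwarfs everything this returns, the AWR "DB CPU" time-model line, not a wait event, is the thing to chase.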

  • Performance issue related to Wrapper and variable value retrievel

    If I have an array of int (a primitive array) and on the other hand an array of its corresponding wrapper class, is there any performance difference between these 2 cases when working with them? If in my code I am doing the conversion from primitive to wrapper object, is that affecting my performance, as there is already a concept of auto-boxing?
    Another issue: if I access the value of a variable name (defined in the superclass) in a subclass by 'this.getName()' rather than 'this.name', is there any performance difference between the 2 cases?

    If I have an array of int (a primitive array) and on the other hand an array
    of its corresponding wrapper class, is there any performance difference
    between these 2 cases when working with them? If in my code I am doing the
    conversion from primitive to wrapper object, is that affecting my
    performance, as there is already a concept of auto-boxing?
    I'm sure there is. It's probably not worth worrying about until you profile your application and determine it's actually an issue.
    Another issue: if I access the value of a variable name (defined in the
    superclass) in a subclass by 'this.getName()' rather than 'this.name',
    is there any performance difference between the 2 cases?
    Probably, but that also depends on what precisely getName() is doing, doesn't it? This is a rather silly thing to be worrying about.

  • Performance issue with select query and for all entries.

    Hi,
    I have a report to be performance tuned.
    The database table has around 20 million entries and 25 fields.
    The report fetches the distinct values of two fields using one select query;
    this first select query fetches around 150 entries from the table for the 2 fields.
    Then it applies some logic, eliminates some entries, and brings the count down to around 80-90...
    and then it applies another select query on the same table, using FOR ALL ENTRIES on the internal table with those 80-90 entries...
    In short,
    it accesses the same database table twice.
    So I tried to get the database table into an internal table and apply the logic there, deleting the unwanted entries... but it gave me a memory dump; it won't take that huge amount of data into ABAP memory...
    Is around 80-90 entries too much for using "for all entries"?
    The logic applied to eliminate the entries from the internal table is too long, and hence cannot be converted into a where clause to turn this into a single select...
    I really can't find the way out...
    Please help.

    chinmay kulkarni wrote:
    Chinmay,
    Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
    It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
    >
    > so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
    >
    It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
    > the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
    >
    That is fine. Actually, it is better (performance-wise) to do much of the work in ABAP than to write a complex WHERE clause that might bog down the database.

  • Performance Issue on Traditional Import and Export

    Scenario
    =====
    Oracle 9i Enterprise Edition (9.2.0.8)
    Windows Server 2003 (32 bit)
    --- to ---
    Oracle 11g Enterprise Edition (11.2.0.3)
    Windows Server 2008 Standard R2 (64 bit)
    Hi to all
    I'm doing an upgrade from 9i to 11g and I am using native imp/exp to migrate the data. For my 1st round of testing, I have done the following:
    1) Full DB Export from 9i. exp system/<pwd>@db FULL=Y FILE=export.dmp log=export.log
    Encountered warning "EXP-00091: Exporting questionable statistics." (In hindsight, I know that I need to set the characterset as per my server before exporting.) Nevertheless, I proceeded with this 8.4GB dmp file that carries the EXP-00091 warning. The characterset in 9i is US7ASCII. My export took 1 hour; my 9i database is actually only a small 26GB.
    2) Full import to 11g. My 11g is a newly created DB with characterset WE8MSWIN1252. I know that schemas and objects will be automatically created in 11g.
    3) However, the problem I face is that this importing of data has been running for the past 4 hours and counting and it is still not done.
    My question is:
    Is it the difference between the characterset in the dmp file and in 11g that is causing this import to be slow? Could it be that characterset conversion is causing high overhead?
    OR
    Is it because I exported all the indexes from 9i and now, during import, it is taking a long time to create those indexes in 11g? Should I have done the full export with indexes excluded (INDEXES=N) and created an index creation script from 9i to run in 11g, so as to save time?
    OR
    Are both of the above causing the import to keep on running? Or ??
    Edited by: moslee on Nov 21, 2012 11:54 PM
    Edited by: moslee on Nov 22, 2012 12:01 AM
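    For the characterset question, the target's settings can be read from a standard dictionary view; a minimal check:
        -- Compare these 11g values against the 9i source (US7ASCII)
        select parameter, value
          from nls_database_parameters
         where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
    Since US7ASCII is a strict subset of WE8MSWIN1252, the conversion itself is cheap and is unlikely on its own to explain a 4-hour import.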

    Hi to all
    All my tablespaces in the Oracle 9i database are on a 4096 block size... but I pre-created those same tablespaces in 11g and they are on an 8192 block size..
    Am I right to say that it is the difference in block size that is slowing down this import?
    If so, how can I solve this problem? If I use "IGNORE=Y" in my import statement, will it help? Thanks..
    I think I will follow my 9i db and create the tablespaces in 11g with a 4096 block size...
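    If you do go the 4096-block-size route, a hedged sketch (db_4k_cache_size must be set before a 4K tablespace can exist in an 8K-default database; the file name and sizes are placeholders):
        -- Reserve a buffer cache for the nonstandard 4K block size first;
        -- without it, CREATE TABLESPACE ... BLOCKSIZE 4096 raises ORA-29339
        alter system set db_4k_cache_size = 64m scope = both;
        create tablespace users_4k
          blocksize 4096
          datafile 'D:\ORADATA\USERS_4K.DBF' size 512m
          extent management local;
    Note that temporary and undo tablespaces generally must use the database's standard block size, so the failing UNDOTBS/TEMP statements in the log below would need editing rather than a cache parameter.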
    Here's the new server (11g) specs (I know this is not officially supported by Oracle yet):
    Win Server 2012
    HP Proliant DL360p
    Intel Xeon CPU @ 2GHz
    8GB Ram
    ======
    Logfile
    ======
    C:\>imp system/<pwd>@dbnew full=y file=export.dmp
    Import: Release 9.2.0.1.0 - Production on Thu Nov 22 11:29:08 2012
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
    Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V09.02.00 via conventional path
    Warning: the objects were exported by OEM, not by you
    import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    IMP-00017: following statement failed with ORACLE error 29339:
    "CREATE UNDO TABLESPACE "UNDOTBS" BLOCKSIZE 4096 DATAFILE 'F:\ORADATA\UNDO\"
    "UNDOTBS.DBF' SIZE 5120 , 'F:\ORADATA\UNDO\UNDOTBS1.DBF' SIZE 5120 "
    " EXTENT MANAGEMENT LOCAL "
    IMP-00003: ORACLE error 29339 encountered
    ORA-29339: tablespace block size 4096 does not match configured block sizes
    IMP-00017: following statement failed with ORACLE error 29339:
    "CREATE TEMPORARY TABLESPACE "TEMP" BLOCKSIZE 4096 TEMPFILE 'D:\ORADATA\TEM"
    "P\TEMP.DBF' SIZE 7524 AUTOEXTEND ON NEXT 20971520 MAXSIZE 8192M EXTE"
    "NT MANAGEMENT LOCAL UNIFORM SIZE 10485760"
    IMP-00003: ORACLE error 29339 encountered
    ORA-29339: tablespace block size 4096 does not match configured block sizes
    IMP-00015: following statement failed because the object already exists:
    "REVOKE "OEM_MONITOR" FROM SYSTEM"
    IMP-00015: following statement failed because the object already exists:
    "CREATE ROLE "HS_ADMIN_ROLE""
    IMP-00015: following statement failed because the object already exists:
    "REVOKE "HS_ADMIN_ROLE" FROM SYSTEM"
    . importing O3's objects into O3
    Edited by: moslee on Nov 22, 2012 6:45 PM
    Edited by: moslee on Nov 22, 2012 7:07 PM
    Edited by: moslee on Nov 22, 2012 7:13 PM
    Edited by: moslee on Nov 22, 2012 7:28 PM

  • Endeca Server performance issue: dgraph is slow and not always available

    Hi, I'm experiencing a problem with Endeca Server. It's very slow and the dgraph process takes about 97% of the CPU even when no queries are running.
    When I try to open the Studio Page, the cpu utilization of the dgraph process increases up to 200%, and if I try to launch a simple query from Integrator the result is:
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
         <env:Header/>
         <env:Body>
              <env:Fault>
                   <faultcode>env:Server</faultcode>
                   <faultstring>Error contacting the conversation service on data store 'SocialIDData' at port 7011: Error parsing data version from DGraph response.</faultstring>
                   <detail>
                        <ns2:Fault xmlns:ns2="http://www.endeca.com/MDEX/conversation/2/0" xmlns:ns3="http://www.endeca.com/MDEX/lql_parser/types"/>         
                   </detail>
              </env:Fault>
         </env:Body>
    </env:Envelope>
    If I try to ping the Data Store by visiting the url http://192.168.2.16:7001/endeca-server/admin/SocialIDData?op=ping sometimes the response is positive, and sometimes it's negative.
    I've also checked the log file in ${ENDECA_SERVER_DOMAIN}/EndecaServer/logs/SocialIDData.out and I've found this message:
    MALLOC:   4656726016 ( 4441.0 MB) Heap size
    MALLOC:   4617801000 ( 4403.9 MB) Bytes in use by application
    MALLOC:     23912448 (   22.8 MB) Bytes free in page heap
    MALLOC:      2146304 (    2.0 MB) Bytes unmapped in page heap
    MALLOC:      4865568 (    4.6 MB) Bytes free in central cache
    MALLOC:       198144 (    0.2 MB) Bytes free in transfer cache
    MALLOC:      7802552 (    7.4 MB) Bytes free in thread caches
    MALLOC:        27450              Spans in use
    MALLOC:            7              Thread heaps in use
    MALLOC:     19922944 (   19.0 MB) Metadata allocated
    MALLOC:         4096              Tcmalloc page size
    Endeca fatal error detected in dgraph.
    Segmentation violation detected.
    testbi.localdomain:/u01/app/oracle/Middleware/user_projects/domains/endeca_domain/EndecaServer/logs
    Register state:
    Stack trace:
    Please report this problem to Endeca and include the following information
      1. Endeca software version number
      2. Operating system, operating system version, and hardware platform
      3. Any other information that you can provide (including core files)
      4. The contents of this message
    Process map:
    00400000-049b4000 r-xp 00000000 fd:04 2590868                            /u01/app/oracle/Middleware/EndecaServer7.5.1_1/endeca-server/dgraph/bin/dgraph
    04bb4000-04c11000 rw-p 045b4000 fd:04 2590868                            /u01/app/oracle/Middleware/EndecaServer7.5.1_1/endeca-server/dgraph/bin/dgraph
    04c11000-04c69000 rw-p 00000000 00:00 0
    As you can see at rows 16-17-18, there's a problem with the dgraph, but I have no idea how to resolve it.
    My system configuration is:
    OS: Oracle Linux 6
    OEID 3
    Can anyone help me? Thank you.
    Samuele Scattolini

    Please contact Oracle Support at http://support.oracle.com for assistance.  Thanks.

  • Performance issue with XLA tables and GL tables R12

    Hi all,
    I have one SQL that joins all the XLA tables with GL tables to get invoice related encumbrance data.
    My problem is that for some reason the SQL is going to GL_JE_LINES first (from the explain plan). As
    a result my SQL is taking some 25 min to finish.
    I am pretty sure that if I can manage to force the SQL to use the XLA tables first, it will finish in a couple of
    minutes. I even tried the LEADING hint. But it didn't work.
    Can someone help me?
    SELECT poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7,
                        SUM (NVL (gjl.entered_dr, 0) - NVL (gjl.entered_cr, 0))
                   FROM apps.up_po_encumb_relief_tmp_nb TMP,
                        apps.po_headers_all POH,
                        apps.po_distributions_all pod,
                        apps.ap_invoice_distributions_all APID,
                        xla.xla_transaction_entities XTE,
                        xla_events XE,
                        apps.xla_ae_headers XAH,
                        apps.xla_ae_lines XAL,
                        apps.gl_import_references GIR, -- DOUBLE CHECK JOIN CONDITIONS ON THIS TO INCLUDE OTHER COLS
                        apps.gl_je_lines GJL,
                        apps.gl_je_headers GJH,
                        apps.gl_code_combinations GCC
                  WHERE     POH.segment1 = TMP.PO_NUMBER
                        AND POH.PO_HEADER_ID = POD.PO_HEADER_ID
                        AND POD.Po_distribution_id = APID.po_distribution_id
                        AND XTE.APPLICATION_ID = 200                           -- Payables
                        AND XTE.SOURCE_ID_INT_1 = APID.INVOICE_ID       --POH.po_header_id
                        AND XTE.ENTITY_ID = XE.ENTITY_ID
                        AND XTE.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAH.ENTITY_ID = XE.ENTity_ID
                        AND XAH.EVENT_ID = XE.EVENT_ID
                        AND XAH.APPLICATION_ID = XE.APPLICATION_ID
                        AND XAL.AE_HEADER_ID = XAH.AE_HEADER_ID
                        AND XAL.APPLICATION_ID = XAH.APPLICATION_ID
                        AND GIR.gl_sl_link_table = XAL.gl_sl_link_table
                        AND GIR.gl_sl_link_id = XAL.gl_sl_link_id
                        AND GJL.je_header_id = GIR.je_header_id
                        AND GJL.je_line_num = GIR.je_line_num
                        AND GJH.je_header_id = GJL.je_header_id
                        AND GJH.status = 'P'
                        AND POD.code_combination_id = GJL.code_combination_id
                        AND GJL.code_combination_id = GCC.code_combination_id
                        AND GCC.enabled_flag = 'Y'
                        AND GJH.je_source = 'Payables'
                        AND GJH.je_category = 'Purchase Invoices'
                        AND GJH.encumbrance_type_id IN (1001, 1002)
                        AND GJH.actual_flag = 'E'
                        AND GJH.status = 'P'
                        AND (NVL (GJL.entered_dr, 0) != 0 OR NVL (GJL.entered_cr, 0) != 0)
               GROUP BY poh.segment1,
                        tmp.closed_code,
                        gcc.segment1,
                        gcc.segment2,
                        gcc.segment3,
                        gcc.segment4,
                        gcc.segment5,
                        gcc.segment6,
                        gcc.segment7;

    Hi,
    did you
    - check table statistics (have the affected tables been analyzed recently)?
    - check the explain plan for full table scans? You are using NVL on gjl.entered_dr
      and gjl.entered_cr, which may lead to a full table scan; as far as I know, there
      is no (standard) functional index on those columns.
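    For the statistics check, a minimal sketch (owner and table are placeholders; on E-Business Suite, FND_STATS / Gather Schema Statistics may be the mandated route instead):
        begin
          dbms_stats.gather_table_stats(ownname => 'GL',
                                        tabname => 'GL_JE_LINES',
                                        cascade => true);
        end;
        /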
    Regards

  • Performance Issue : 10g is faster and 9i slower

    Hi,
    Please find below the query which behaves differently in 10g and 9i
    SELECT
    BMS_FACMAS_ACPMAS.FACILITY,
    BMS_FACMAS_ACPMAS.ADDRESS,
    SUBSTR(BMS.VENMAS.NAME, 1, 50),
    BMS.ACPMAS.ACP_NO,
    BMS.ASCCOD.VALUE,
    BMS.ACPPAY.PAYMENT_AMOUNT,
    BMS.ACPACC.CHARGES,
    BMS.ACPPAY.START_DATE,
    --TRUNC(BMS.ACPPAY.END_DATE),
    ENDDATE.ENDDT,
    BMS.ACPMAS.STATUS,
    BMS.ACPMAS.STATUS_DATE,
    BMS.ASVCOD.VALUE,
    BMS_FACMAS_ACPMAS.GROSS_FEET,
    BMS.AFRCOD.VALUE,
    nvl(BMS.PROCOD.VALUE,'N/A'),
    ACP_Correct.CORP_ID,
    BMS_FACMAS_ACPMAS.GROSS_YARDS,
    BMS_FACMAS_ACPMAS.CITY || ',' || BMS_FACMAS_ACPMAS.STATE
    FROM
    BMS.FACMAS BMS_FACMAS_ACPMAS,
    BMS.ACPPAY,
    BMS.VENMAS,
    BMS.CONMAS,
    BMS.ACPACC,
    BMS.ACPMAS,
    BMS.EMPMAS ACP_Correct,
    BMS.EMPMAS ACP_AreaManager,
    BMS.EMPMAS ACP_Director,
    BMS.EMPMAS ACP_Reviewer,
    BMS.PROCOD,
    BMS.ASVCOD,
    BMS.AFRCOD,
    BMS.ASCCOD,
    ( Select Max(END_DATE) ENDDT, ACPMAS.ACP_ID from BMS.ACPPAY, BMS.ACPMAS where
    BMS.ACPMAS.ACP_ID=BMS.ACPPAY.ACP_ID
    GROUP BY ACPMAS.ACP_ID
    ) ENDDATE
    WHERE
    ( BMS.VENMAS.STATUS='A' )
    AND ( BMS.ACPMAS.STATUS <> 'X' )
    AND (
    BMS.ASVCOD.VALUE IN 'JANITORIAL'
    AND SUBSTR(BMS.VENMAS.NAME, 1, 50) LIKE 'UNIVERSAL%'
    AND BMS.ACPMAS.STATUS IN ('I', 'A', 'T')
    AND ACP_Correct.CORP_ID LIKE 'ME5077'
    AND ACP_AreaManager.CORP_ID LIKE '%'
    AND ACP_Director.CORP_ID LIKE '%'
    AND ACPPAY.START_DATE <= SYSDATE
    AND ACPPAY.END_DATE >= SYSDATE
    AND ENDDATE.ACP_ID=ACPMAS.ACP_ID
    AND (( ACP_Correct.SUPERVISOR_ID=ACP_AreaManager.EMPLOYEE_ID )
    OR ACP_Correct.EMPLOYEE_ID=ACP_AreaManager.EMPLOYEE_ID
    AND (( ACP_Director.EMPLOYEE_ID=ACP_AreaManager.SUPERVISOR_ID )
    OR ACP_Director.EMPLOYEE_ID=ACP_AreaManager.Employee_ID
    AND ( BMS.ASVCOD.CODE_ID=BMS.ACPMAS.SERVICE_TYPE_ID )
    AND ( BMS.ACPMAS.CORRECT_ID=ACP_Correct.EMPLOYEE_ID )
    AND ( BMS.CONMAS.VENDOR_ID=BMS.VENMAS.VENDOR_ID )
    AND ( BMS.ACPMAS.FREQUENCY_ID=BMS.AFRCOD.CODE_ID )
    AND ( BMS.ACPACC.ACP_ID=BMS.ACPMAS.ACP_ID )
    AND ( BMS.ACPMAS.ACP_ID=BMS.ACPPAY.ACP_ID )
    AND ( BMS.ACPPAY.SCHEDULE_ID=BMS.ASCCOD.CODE_ID )
    AND ( BMS.CONMAS.CONTRACT_ID=BMS.ACPMAS.CONTRACT_ID )
    AND ( BMS_FACMAS_ACPMAS.FACILITY_ID=BMS.ACPMAS.FACILITY_ID )
    AND ( BMS_FACMAS_ACPMAS.PROPERTY_ID=BMS.PROCOD.CODE_ID(+) )
    AND BMS.ACPMAS.REVIEWER_ID = ACP_Reviewer.EMPLOYEE_ID(+)
    In 10g the query takes 5 secs while in 9i it takes more than 3 minutes.
    Also find below the Explain Plan for both the versions

    Hi and welcome to the forum,
    Please post:
    1) the output of this query, from both databases:
    SQL> select name, value from v$parameter where name like '%optim%';
    2) explain plans from the query, again: from both databases
    edit
    And read:
    [How to post a tuning request | http://forums.oracle.com/forums/thread.jspa?threadID=863295]
    [When your query takes too long | http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]
    edit2
    And bookmark (for future reference ;) )
    http://tahiti.oracle.com
    http://asktom.oracle.com
    edit3
    And always put your code between the codetags:   in order to preserve indentation.
    ( See the OTN [FAQ | http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ] )
    Edited by: hoek on Jun 19, 2009 1:26 PM

  • Hyperion Interactive reporting performance issue.

    Hi,
    We created a report in Hyperion Interactive Reporting using a Hyperion Essbase database connection file.
    Report performance was good in Interactive Reporting Studio; we don't have any problem in Studio.
    When we open the report in Hyperion Workspace we face a performance issue, and when I hit the refresh button to refresh data in the Workspace, I get the following error message:
    *"An Interactive Reporting Service error has occurred - Failed to acquire requested service. Error Code : 2001"*
    Any suggestions to resolve this will be really helpful.
    Thanks in advance
    Thanks
    Vamsi
    Edited by: user9363364 on Aug 24, 2010 7:49 AM
    Edited by: user9363364 on Sep 1, 2010 7:59 AM

    Hi
    I also faced such an issue and then I found the answer on Metalink:
    Error: "An Interactive Reporting Service Error has Occurred. Failed to Acquire Requested Service. Error Code: 2001" when Processing a bqy Report in Workspace. [ID 1117395.1]     
    Applies to:
    Hyperion BI+ - Version: 11.1.1.2.00 and later [Release: 11.1 and later ]
    Information in this document applies to any platform.
    Symptoms
    Obtaining the following error when trying to process a BQY that uses an Essbase data source in Workspace:
    "An Interactive Reporting Service error has occurred. Failed to acquire requested service. Error Code: 2001".
    Cause
    The name of the data source in the CMC contained the machine name in fully qualified name format whereas the OCE contained the machine name only. This mismatch in machine names caused the problem. Making the machine name identical in both cases resolved the problem.
    Solution
    Ensure that the name of the data source as specified in the OCE in Interactive Reporting Studio matches the name specified in the CMC tool in the field "Enter the name of the data source".
    In fact, all fields need to match between the OCE and the CMC Data Source.
    regards
    alex

  • Performance issue in Portal Reports

    Hi
    We are experiencing a serious performance issue in a report and need an urgent fix.
    The report is a Reports From SQL Query report. I need to find a way to dynamically create the where clause; otherwise I have to write the statement in a way that excludes the use of indexes.
    A full table scan is not a valid option here; the number of records is simply too high (several million; it's a data warehouse solution). In the Developer package we can build the where clause dynamically; this basic yet extremely important feature is essential to any database application.
    We need to know how to do it, and if this functionality is not natively supported, then it should be one of the priority-one features to implement in future releases.
    However, what do I do for now?
    Thanks in advance

    I have found a temporary workaround, by editing the where clause in the stored procedure manually. However, this fix has to be redone every time a change is committed in the wizard, so it is still not a solution to go for indefinitely, but it's OK for now.
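    For reference, a Reports From SQL Query report accepts bind variables, which is one way to keep a selective predicate without hand-editing the where clause; a hedged sketch (table, column, and parameter names are hypothetical):
        -- :p_region becomes a report parameter; when it is left null the filter
        -- is skipped. Whether the optimizer still uses an index on region_code
        -- with this optional-filter pattern depends on version and statistics.
        select order_id, customer_id, order_total
          from orders
         where (:p_region is null or region_code = :p_region)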

  • Performance Issue Tracking In Database Level.

    Hi All,
    I am sorry, actually I don't know whether this is the right question to ask in this forum. Below is my question.
    We are working on Oracle 10g and are supposed to move to 11g. My question is: which text book is the best one for getting a broad knowledge of database performance issues and of monitoring and resolving them? Please suggest.
    Edited by: 930254 on Aug 22, 2012 7:56 AM

    Troubleshooting Oracle Performance (Apress) is one book I found very useful, along with the Oracle documentation (http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10822/toc.htm).
    btw, please mark the thread as 'answered', if you feel you got your question answered. This will save the time of others who search for open questions to answer.
    regards,
    CSM

  • Performance issue with Crystal when upgrading Oracle to 11g

    Dear,
    I am facing a performance issue with Crystal Reports and Oracle 11g, as below:
    On the report server I created an ODBC data source that connects to another Oracle 11g server. Also on the report server I created and published a folder containing all of my Crystal reports. These reports connect to Oracle 11g via that ODBC source.
    And I have a Tomcat server to run my application; in the application I refer to the report folder on the report server.
    This setup works with SQL Server and Oracle 9i or 10g, but it has a performance issue with Oracle 11g.
    Please let me know the root cause.
    Notes: the report server and Tomcat server are Win 32-bit, but Oracle is on Win 64-bit. I have upgraded to DataDirect Connect ODBC version 6.1 but the issue is not resolved.
    Please help me to solve it.
    Thanks so much,
    Anh

    Hi Anh,
    Use a third party ODBC test tool now. SQL Plus will be using the Native Oracle client so you can't compare performance.
    Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
    Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
    I believe the issue is that the Oracle client is 64 bit; you should install the 32 bit Oracle client. When using the 64 bit client, the client must thunk (convert 64 bit data to 32 bit format), which takes more time.
    Using OLE DB or the Oracle Server driver (the native driver) should be faster. ODBC puts another layer on top of the Oracle client, so it too takes time to communicate between the layers.
    Thank you
    Don
