Important Performance Query

Hi,
I have just started using TopLink ORM 10.1.3.3.
When I log at the finest level, I see some entries which could be big performance issues when I put the app in production. Has someone else noticed the same thing, or am I doing something terribly wrong?
[TopLink Fine]: 2008.08.24 08:35:47.231 --SELECT SEQ.NEXTVAL FROM DUAL
[TopLink Finest]: 2008.08.24 08:35:49.394 --sequencing preallocation for SEQ: objects: 50, first: 4,602, last: 4,651
[TopLink Fine]: 2008.08.24 08:35:49.805 --SELECT SYSTIMESTAMP FROM DUAL
[TopLink Finest]: 2008.08.24 08:35:52.018 --Assign return row DatabaseRecord(MY_TABLE.MODIFIED_DATE => 2008-08-25 06:05:50.404)
The above logs correspond to a single record insert in my application: one query is for the sequence and the other fetches the timestamp for optimistic locking. A simple operation like this is taking about 2 seconds (just on the server!), and the timestamp fetch will happen on every write because of optimistic locking, so I am a little worried about performance.
Has anyone faced a similar issue?
Regards,
Ani

Thanks for the reply.
As you rightly mentioned, the time taken for sequence preallocation does not bother me much, since it is amortized across the preallocation batch size.
I need a timestamp field in my tables for auditing purposes, and I am trying to reuse it for optimistic locking.
In the non-ORM world, the system time is typically obtained using DB functions. But I guess an ORM tool has to make a call to get the current timestamp from the DB, set it in the object, and only then persist the object.
That is, I was expecting that to insert an entity, SQL like insert into mytable(col1, col2) values(123, sysdate) would be formed. But instead of using sysdate or something similar, the timestamp is fetched from the DB first, set into the object, and then persisted.
The reason for this behavior could be that the timestamp must be set in the object copy without having to perform a read after the save.
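If the extra SELECT SYSTIMESTAMP round trip ever proves costly, the version-locking policy can reportedly be switched to use the application server's clock instead of the database's. Below is a minimal sketch of a descriptor amendment, assuming the TopLink 10.1.3 oracle.toplink.descriptors API (the class and method names here match the later EclipseLink line, so treat them as an assumption to verify):

    import oracle.toplink.descriptors.ClassDescriptor;
    import oracle.toplink.descriptors.TimestampLockingPolicy;

    public class MyTableAmendment {
        // Registered as a descriptor amendment method in the TopLink mappings.
        public static void amend(ClassDescriptor descriptor) {
            TimestampLockingPolicy policy = new TimestampLockingPolicy();
            policy.setWriteLockFieldName("MODIFIED_DATE");
            // Use the JVM clock rather than issuing SELECT SYSTIMESTAMP FROM DUAL
            // on every insert/update; the trade-off is trusting the app server's time.
            policy.useLocalTime();
            descriptor.setOptimisticLockingPolicy(policy);
        }
    }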
I have not run any performance tests on my usage yet. As mentioned earlier, we have just started development, and I was trying to explore the optimal way to use TopLink right from the beginning.
Regards,
Ani

Similar Messages

  • Importing a query from BW to BO Universe

    Hi all,
    I have a few questions about Universe Designer.
    1) I have about 40 queries that perform well, created in BEx Query Designer. If I have to migrate these queries to BO, what would be the best way: a) import the whole cube into the universe, or b) import just the queries into the universe?
    2) When I import a query into the universe, it brings in all query elements except the fiscal year. In that case, do I have to modify the universe manually? If yes, how would I map the newly created values to fiscal year on the BW side?

    Hi,
    Please have a look at the following documents, which may help with your requirements:
    [https://portal.wdf.sap.corp/irj/portal?NavigationTarget=navurl://eaeb8478be0da5e948e0e9f319c9c499&NavigationContext=navurl://7f6f53bebf57a991a023fca93dc3f9cd]
    [https://portal.wdf.sap.corp/irj/go/km/docs/room_project2/cm_stores/documents/workspaces/e0722765-2c0f-2b10-7eb0-f40944b3d00b/businessobjects%20xi%203.0%20-%20olap%20universes%20enhancements]
    [https://portal.wdf.sap.corp/irj/go/km/docs/room_project2/cm_stores/documents/workspaces/e0722765-2c0f-2b10-7eb0-f40944b3d00b/olap%20universes%3a%20how%20to%2c%20samples%20and%20recommendations.doc]
    Regards,
    Didier

  • Getting an error while importing the query into the quality server

    Hi all,
    I am transporting a user group, InfoSet, and query from the development system to the quality system. I am able to export them successfully, but at the time of import into the quality system I get the following error.
    Please find the error description below:
    Query z_it0023 of user group HR_CPC (Import option replace)
    compare query z_it0023/ HR_CPC <-> Infoset ZHR_REPORT : Field p0002-ZMIDNM is missing
    Import fro query Z_IT0023 / HR_CPC cancelled RC = 08.
    I do not understand why this error occurs, as the field P0002-ZMIDNM is already present in both the infotype and the InfoSet.
    Could someone please help me out? It would be a great help.
    thanks,
    Khush

    Hi Khush,
    You need to check whether the field 'P0002-ZMIDNM' is part of an append structure or an include structure. If it is, ensure that the main request for the append/include structure of table P0002 is transported first. If it has been saved in a local request, convert the development to a transportable request and assign a package to it. Transport the changes to table P0002 first, and then this request.
    Regards,
    Pranav.

  • FRM-40505: ORACLE error: unable to perform query in Oracle Forms 10g

    Hi,
    I get the error FRM-40505: ORACLE error: unable to perform query on an Oracle form in a 10g environment, but the same form works properly in 6i.
    Please let me know what I need to do to correct this problem.
    Regards,
    Priya

    Hi everyone,
    I have a block created on the view V_LE_USID_1L (which gives the error FRM-40505). We don't need any updates on this block, so the 'Update Allowed' property is set to 'No'.
    To fix this error I changed the 'Key Mode' property from 'Automatic' to 'Updateable'. This change solved the FRM-40505 problem, but it leads to another one.
    The data block V_LE_USID_1L now lets the user enter text (i.e., update the fields). When the data is saved, no message is shown, and when the screen is refreshed, the change made previously on the block is gone (because 'Update Allowed' is set to 'No'). How do we stop the fields of the block from being editable?
    We don't want to go ahead with this workaround, as there may be several similar screens and it is difficult to modify each of them individually. If they work properly in 6i, why don't they in 10g? Does it require some registry setting?
    Regards,
    Priya

  • Export & import of query (SQ01/SQ02/SQ03)

    Hi,
    How can I export/import a SAP Query (SQ01/SQ02/SQ03) without any transport request?
    Waiting for your reply.
    Thanks in Advance,
    Pranab

    Hi,
    If you don't want to create a transport request, you need to download/upload the query instead of exporting/importing it.
    Please follow the steps below to download/upload the query.
    (Go to the system from which you want to export the query.)
    1. Go to transaction SQ02.
    2. Click on the 'Transports' icon (Ctrl+F3).
    3. Under 'Transport action', select 'Download'.
    4. Select the fourth radio button, 'Transport Queries'.
    5. Enter the names of your user group and query, and click Execute.
    6. Give the file name and location where you want to save the query backup, and click Save.
    (Now go to the system into which you want to import the query.)
    1. Go to SQ02 and click 'Transports'.
    2. Under 'Transport action', select 'Upload' and click Execute.
    3. Enter/select the name of the file you downloaded before and click Open.
    Please let me know if this is helpful.
    Regards,
    Priya Bhat

  • "Error #2032" while trying to import BEx query through SAP Netweaver BW connection in Dasboard

    Hi Team,
    BO server version: 4.1 SP3 Edge edition
    Dashboards Version: 4.1 SP3
    BI Version: NW 7.4 SP7 (ABAP only)
    We are getting Error #2032 while trying to import a BEx query through a SAP NetWeaver BW connection in Dashboards (Xcelsius).
    Please suggest how to solve this error.
    Regards,
    Vinay Shrimali

    There are several notes about this error - please see http://service.sap.com/sap/support/notes/1856691
    http://service.sap.com/sap/support/notes/1801130

  • TIPS(49) : IMPORT PERFORMANCE TIPS

    Product: ORACLE SERVER
    Date written: 2003-06-10
    TIPS(49): IMPORT PERFORMANCE TIPS
    ==================================
    PURPOSE
    [Import performance]
    To reduce the long runtime of an Oracle import, try applying the following.
    Explanation
    1) System-level changes
    - If the database is being re-created, increase DB_BLOCK_SIZE. With a larger block size, fewer I/O cycles occur. If this change is permanent, weigh all of its effects against the previous configuration.
    - Create one large rollback segment and take all other rollback segments offline.
    - Size that rollback segment at about 50% of the largest table to be imported. Import essentially executes insert into table_name values (...), in which case only rowids go into the rollback segment, so a rollback segment with two equally sized extents is enough.
    - Keep the database in NOARCHIVELOG mode until the import finishes. This removes the overhead of generating archive logs.
    - As with the rollback segment, create large redo log files. The larger they are, the fewer log switches occur, which is good for import. Take small redo logs offline. The message 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete' in alert.log indicates that larger or more redo log files are needed.
    - If possible, place the tables, rollback segments, and redo log files on different disks. This reduces I/O contention.
    2) Init.ora parameter changes
    - Set LOG_CHECKPOINT_INTERVAL larger than the redo log file size. This number is in OS blocks, which are 512 bytes on Unix. Increasing it reduces log switch time.
    - Increase SORT_AREA_SIZE. Even if indexes have not been created yet, unique and primary keys still require sorting. How much to increase it depends on what else runs on the machine and how much free memory is available, but 5-10 times the usual value is typical. If the machine starts swapping or paging, increase it further.
    3) Import option changes
    - Use the COMMIT=N option. With it, each object (table) is committed after all of its data has been inserted, rather than committing after each buffer of data. If the rollback segment is small, this option cannot be used.
    - Use a large BUFFER size. This, too, depends on other system activity and the database size. A larger value reduces the number of accesses to the export file.
    - Use the INDEXES=N option during import. If the indexes are then created, SORT_AREA_SIZE should be even larger.
    Reference Documents
    none
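    The option changes in section 3, taken together, might look like the following imp invocation; this is a sketch, and the username, file name, and buffer size are placeholders to adapt:
        imp scott/tiger file=full_export.dmp full=y commit=n buffer=10485760 indexes=n log=imp.log
    The indexes skipped with INDEXES=N can be created afterwards from a script generated with the INDEXFILE= option, once the larger SORT_AREA_SIZE is in place.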


  • Can we import Bex query, SPO, MP, INFOSET, OHD to HANA studio

    Hi all,
    Can we import a BEx query, SPO, MP, InfoSet, or OHD into HANA Studio?
    If yes, then how?
    Note: We are already on BW on HANA, SP6.
    Can anyone please shed some light on this?
    Regards
    Pavnete Rana

    Can we import a Composite Provider, Hybrid Provider, Virtual Provider, Advanced DSO, or Transient Provider into HANA Studio?

  • Mail [12721] Error 1 performing query: WHERE clause too complex...

    Console keeps showing this about a zillion times in a row, a zillion times a day: "Mail [12721] Error 1 performing query: WHERE clause too complex: no more than 100 terms allowed".
    I can't find any search results about this anywhere online.
    There are lots of stalls and freezes in Mail, Finder/OS X, and Safari, and frequent failures to maintain a broadband connection (multiple times every day).
    All apps are slow and cranky, with interminable beach balls, and it keeps getting worse.
    Does anyone know what the heck is going on?

    Try rebuilding the mailbox to see if that helps.
    Also, how much disk space is available on your boot drive?

  • Is this the best-performing query?

    Hi Guys,
    Is this the best-performing query, or can I still improve it?
    I am new to SQL performance tuning; please help me get the best performance out of this query.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'ASH'
    2 FOR
    3 SELECT /*+ FIRST_ROWS(30) */ PSP.PatientNumber, PSP.IntakeID, U.OperationCenterCode OpCenterProcessed,
    4 PSP.ServiceCode, PSP.UOMcode, PSP.StartDt, PSP.ProvID, PSP.ExpDt, NVL(PSP.Units, 0) Units,
    5 PAS.Descript, PAS.ServiceCatID, PSP.CreatedBy AuthCreatedBy, PSP.CreatedDateTime AuthCreatedDateTime,
    6 PSP.AuthorizationID, PSP.ExtracontractReasonCode, PAS.ServiceTypeCode,
    7 NVL(PSP.ProvNotToExceedRate, 0) ProvOverrideRate,
    8 prov.ShortName ProvShortName, PSP.OverrideReasonCode, PAS.ContractProdClassId
    9 ,prov.ProvParentID ProvParentID, prov.ProvTypeCd ProvTypeCd
    10 FROM tblPatServProv psp, tblProductsAndSvcs pas, tblProv prov, tblUser u, tblGlMonthlyClose GLMC
    11 WHERE GLMC.AUTHORIZATIONID >= 239
    12 AND GLMC.AUTHORIZATIONID < 11039696
    13 AND PSP.AuthorizationID = GLMC.AUTHORIZATIONID
    14 AND PSP.Authorizationid < 11039696
    15 AND (PSP.ExpDt >= to_date('01/03/2000','MM/DD/YYYY') OR PSP.ExpDt IS NULL)
    16 AND PSP.ServiceCode = PAS.ServiceCode(+)
    17 AND prov.ProvID(+) = PSP.ProvID
    18* AND U.UserId(+) = PSP.CreatedBy
    19 /
    Explained.
    Elapsed: 00:00:00.46
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    Plan hash value: 3602678330
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 8503K| 3073M| 91 (2)| 00:00:02 |
    |* 1 | HASH JOIN RIGHT OUTER | | 8503K| 3073M| 91 (2)| 00:00:02 |
    | 2 | TABLE ACCESS FULL | TBLPRODUCTSANDSVCS | 4051 | 209K| 16 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 31 | 6200 | 75 (2)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 30 | 5820 | 45 (3)| 00:00:01 |
    |* 5 | HASH JOIN RIGHT OUTER | | 30 | 4950 | 15 (7)| 00:00:01 |
    | 6 | TABLE ACCESS FULL | TBLUSER | 3444 | 58548 | 12 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL | TBLPATSERVPROV | 8301K| 585M| 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| TBLPROV | 1 | 29 | 1 (0)| 00:00:01 |
    |* 9 | INDEX UNIQUE SCAN | PK_TBLPROV | 1 | | 0 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | PK_W_GLMONTHLYCLOSE | 1 | 6 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("PSP"."SERVICECODE"="PAS"."SERVICECODE"(+))
    5 - access("U"."USERID"(+)="PSP"."CREATEDBY")
    7 - filter(("PSP"."EXPDT">=TO_DATE('2000-01-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss') OR
    "PSP"."EXPDT" IS NULL) AND "PSP"."AUTHORIZATIONID">=239 AND "PSP"."AUTHORIZATIONID"<11039696)
    9 - access("PROV"."PROVID"(+)="PSP"."PROVID")
    10 - access("PSP"."AUTHORIZATIONID"="GLMC"."AUTHORIZATIONID")
    filter("GLMC"."AUTHORIZATIONID">=239 AND "GLMC"."AUTHORIZATIONID"<11039696)
    28 rows selected.
    Elapsed: 00:00:00.42

    Thanks a lot for your reply.
    Here are the indexes on those tables.
    table --> TBLPATSERVPROV ---> index PK_TBLPATSERVPROV ---> column AUTHORIZATIONID
    table --> TBLPRODUCTSANDSVCS ---> index PK_TBLPRODUCTSANDSVCS ---> column SERVICECODE
    table --> TBLUSER ---> index PK_TBLUSER ---> column USERID
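    Since the plan shows a full scan of TBLPATSERVPROV (estimated at 8301K rows) for the AUTHORIZATIONID range predicate, two cheap experiments are fresh statistics and an explicit index hint. A sketch to try, not a verified fix:
        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TBLPATSERVPROV', cascade => TRUE);
        EXPLAIN PLAN FOR
        SELECT /*+ INDEX(psp PK_TBLPATSERVPROV) */ COUNT(*)
          FROM tblPatServProv psp
         WHERE psp.AuthorizationID >= 239
           AND psp.AuthorizationID < 11039696;
        SELECT * FROM TABLE(dbms_xplan.display);
    With a range as wide as 239 to 11,039,696 the full scan may well remain the cheaper plan; the point of the experiment is to let the optimizer compare both options with accurate statistics.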

  • FRM-40505 Oracle Error: Unable to perform query (URGENT)

    Hi, I developed a form with a control block and a table block (based on a table)
    on the same canvas.
    Based on the values in the control block, pressing the Find button queries the detail block.
    Control block:
    text item named "payment_type", char type
    text item named "class_code", char type
    push button "find"
    Base table: payment_terms (termid, payment_type, class_code, other columns)
    The table block is based on the above table.
    Now I have written this WHEN-BUTTON-PRESSED trigger on the Find button:
    declare
      l_search varchar2(100);
    begin
      l_search := 'payment_type=' || :control_block.payment_type || ' AND class_code=' || :control_block.class_code;
      set_block_property('table_block', DEFAULT_WHERE, l_search);
      go_block('table_block');
      execute_query;
    exception
      when others then
        null;
    end;
    I am getting:
    FRM-40505 Oracle Error: Unable to perform query
    Please help.

    You don't need to build the DEFAULT_WHERE at run time. Just hard-code the block's WHERE Clause property as:
        column_x = :PARAMETER.X
    But if, for some compelling reason, you MUST do it at run time, this should work:
        Set_Block_Property('MYBLOCK', DEFAULT_WHERE,
            'COLUMN_X = :PARAMETER.X');
    Note that there are NO quotes except for the first and last. If you get some sort of error when you query, you should actually see :PARAMETER.X replaced with :1 when you do Help > Display Error.
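    For reference, the likely immediate cause of the FRM-40505 in the original trigger is that payment_type and class_code are character items, so the concatenated WHERE clause is missing quotes around their values. A corrected sketch of that trigger (the WHEN OTHERS THEN NULL handler is also dropped, because it hides the underlying ORA- error):
        declare
          l_search varchar2(200);
        begin
          -- Quote the character values so the generated WHERE clause is valid SQL.
          l_search := 'payment_type = ''' || :control_block.payment_type || ''''
                   || ' AND class_code = ''' || :control_block.class_code || '''';
          set_block_property('table_block', DEFAULT_WHERE, l_search);
          go_block('table_block');
          execute_query;
        end;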

  • Bulk import performance

    Hi Folks,
    We're getting ready to migrate our first teams into our new Vibe environment, but I'm wondering if we can do anything to improve the performance of bulk-imported documents (drag and drop into the web client).
    Our environment: Vibe 3.3 on SLES11 SP2 (3 virtual servers)
    Server 1 (Tomcat):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 2 (Lucene):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 3 (MySQL):
    2x CPU
    4GB RAM
    When I do a bulk import, the files import and everything is fine, but I'm concerned about the time it takes. During my test imports the only process with any significant utilization is Java on the Tomcat server, and it bounces between 30-90% of a single CPU. If needed I can easily crank these VMs up to 8 CPUs and 64GB RAM. That said, if I can't tax the current servers, there's no need to increase resources.
    Does anyone know a way to improve the import performance? I want to be able to peg all my CPUs at 100% if need be.

    Hi John,
    Many thanks for your reply. The import and statistics gathering are done through an application. I have only the following information about what is supposed to happen when we run the statistics-gathering process:
    "Optimization - Uses sub-commands for database optimization (DBMS_STATS*). These are important for the Oracle optimizer to generate optimal execution plans.
    Optimize Schema - Runs the DBMS_STATS.GATHER_SCHEMA_STATS(USER, cascade=>true) command. This procedure gathers (not estimates) statistics for all objects in a schema and on the indexes.
    Optimize the schema after you import an industry model from a dump file, and run the Optimize command whenever you have extensive changes to the data, such as imports or major updates.
    Optimize Feature Classes - Runs the DBMS_STATS.GATHER_TABLE_STATS([USER], [Table], cascade=>true) command for the feature class tables. This procedure gathers table, column, and index statistics."
    The application we use allows gathering statistics in two different places. I now realise that we have only used one of the two so far, and if my understanding of the documentation is right, the one we have used does not gather all statistics. With your explanation the observed behaviour makes sense. Next time I will gather statistics using the second function to see if that one gathers all statistics at once.
    Many thanks again, Rob

  • Import structure query

    Hi gurus,
    In the SE37 Function Builder I have a structure in the import parameters,
    i.e., I am getting a list of clients in the import.
    How can I write the query to get the details for the list of clients?
    It gives the error:
    "it" must be a flat structure. You cannot use internal
    tables, strings, references, or structures as components.
    Senthil.

    You have to write the SELECT condition as mentioned in my last post.
    The second option is to loop at the internal table holding the clients and, inside the loop, fetch the client details one by one, like this:
    loop at itab.
      select single * from kna1 into corresponding fields of itab1 where mandt = itab-mandt.
    endloop.
    The second option will degrade your performance, so use the first one only; see the set-oriented sketch below.
    Amit
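    The set-oriented alternative is a single SELECT ... FOR ALL ENTRIES, which avoids one database round trip per client. A sketch, with the itab/itab1 names carried over from the post above:
    " Guard the driver table: FOR ALL ENTRIES with an empty table selects all rows.
    IF itab[] IS NOT INITIAL.
      SELECT * FROM kna1
        INTO CORRESPONDING FIELDS OF TABLE itab1
        FOR ALL ENTRIES IN itab
        WHERE mandt = itab-mandt.
    ENDIF.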

  • Bad Performing Query

    One of our important reports is performing very badly today, and a lot of users are asking for the reason.
    We have a 10gR2 database on RHEL 4.
    Statistics are updated properly.
    The present execution plan shows a good cost, just like last week.
    The report is related to stock, and we need a solution as early as possible.
    Thanks in advance

    When your query takes too long ...
    This might be useful.
    Sidhu

  • Error While importing SAP query into quality system

    Hi,
    When I tried to import the dataset (transport request) generated in the development system into the quality system, I got the following error:
    Query already exists, and InfoSet contains a structure which is not in the data dictionary.
    How can I overcome this error so the import into the quality system succeeds?
    thanks

    Hi,
    You need to transport your Z tables to the quality system.
    Make sure you transport all the data elements, domains, etc. to quality as well.
    That is why it is giving you the error:
    it does not find the Z tables in the quality system.
    Regards,
    Vinod
