Regarding the insert time

Hello,
I have deployed a BC4J application on Oracle 9iAS. When I insert data, or for that matter run any query against the Oracle9i database, I have to wait a long time (about 35 seconds) before I receive the thank-you page. The same insert page running under JRun submits instantly. Can anyone tell me why this is and what I need to do? I thought 9i was supposed to be really quick.
Thanks in advance.
nikhil

To help narrow down your problem, I'd recommend the following:
1. Create a new project
2. Default an EO, VO, and AM for the EMP sample table in the SCOTT schema
3. Default a BC4J JSP Browse-Edit page for the "EmpView"
4. Test it in the local JDeveloper, and try the insert
5. Deploy to OC4J
6. Repeat the test above
Does even this simple one-table app have insert performance issues in your environment?
We're not seeing a general problem of insert performance here. Thanks.
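If even this simple test shows the delay, it can help to time the bare insert outside BC4J/OC4J, for example in SQL*Plus against the same instance (a minimal sketch using the SCOTT.EMP demo table from the steps above; the EMPNO value is arbitrary):
SET TIMING ON
INSERT INTO scott.emp (empno, ename, job) VALUES (9999, 'TEST', 'CLERK');
COMMIT;
If this completes instantly, the time is being spent in the middle tier rather than in the database.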

Similar Messages

  • Regarding the product time capsule...is the modem the same as airport extreme and is the disk drive always running? I'm worried about it lasting for at least five years.

    Regarding the product Time Capsule... is the modem the same as the AirPort Extreme's, and is the disk drive always running? I'm worried about it lasting at least five years.

    John,
    I'd pay good money to bet it wouldn't last 5 years... I don't rate the built-in power supply, and as for "server grade hard disk" - hmmm... The failure rate of all HDs on the market after 3 years is 60%.
    Regards,
    Shawn

  • What is Microsoft's official policy regarding the processing time for HCK2.1 Driver Submissions?

    What is Microsoft's official policy regarding the processing time for HCK2.1 Driver Submissions?
    Can someone point me to a document that states the official policy on maximum review time? This info used to be in the WLK 1.6 FAQ, but I don't see it for the HCK 2.1 suite.
    Thanks!
    Al

    Ian,
    Thanks for your reply. Yes, I'm sure LabVIEW uses the (default) Windows timer. And yes, 1 ms is not guaranteed due to the preemptive nature of Windows (and even "RTOSs", to varying degrees), which is why I see about plus or minus 2 ms.
    Apparently the Windows timer resolution can be set by API calls. See: http://www.lucashale.com/timer-resolution/. Here's a screen shot of his TimerResolution.exe on a Windows 7 PC:
    Here it is on my Windows XP PC after I set it to "Maximum" (initially it was 15.625 ms):
    Notice that it sets the Maximum to less than 1 ms, which is supposed to be the max, so there are some bugs. Also, the Default button does not reset it on XP, but it does work on Windows 7 or 8. (I know this is not the place to "debug" non-LabVIEW applications!)
    I'll bet LabVIEW sets it, too. The only caveat, as I said, is that it looks like another application can change it, since the hardware timer is a "global" timer. I have not seen this issue in my LabVIEW applications; have you?
    I guess I need to do some more digging to find the code that sets the timer, but it looks like the developers of LabVIEW have it figured out.
    (FYI, I did notice that running my LabVIEW app (which gives about 2 ms resolution) or a C# app (which gives 15.625 ms resolution) does not affect what TimerResolution.exe reports, so I'm not sure it's really working correctly. If I figure it out I'll post the results.)
    Ed

  • Regarding the 'Dispatch time' settings for output type...

    Hi Experts,
    May I know what 'Send with application own transaction' means when configuring the 'Dispatch time' of an output type in SPRO?
    I know that if we set it to 'Send immediately', the output type is triggered upon saving the order.
    But I don't know what that option is for. Could you kindly help me with this?
    What does it mean if we set the dispatch time to 'Send with application own transaction' for a delivery output?
    Thank you very much.

    Hi,
    If you choose 'Send with application own transaction' when maintaining the condition record for an output type, then when the order, delivery, or invoice is saved the output's status stays yellow. That means the output will not go to the spool unless you select it and trigger the printing yourself; in simple words, the output is for manual printing.
    Let me know if you need any clarifications on this..
    Regards,
    Chandra

  • Regarding the run time analysis

    Hi All,
    To check the performance of a program, we go to Runtime Analysis under Utilities. The analysis includes a graphical breakdown of ABAP, database, and system time.
    My question is: when we give a small range of values on the selection screen the database hit is small, but when we give a large range of values the database hit is larger.
    So how do we analyse the program's performance in this scenario?
    Please reply,
    Thanks,
    Rohit.

    that is exactly the point:
    Check SQL Trace first,
    SQL trace:
    /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
    with a small selection and a large selection. You can compare the summary by SQL statement; the real problems will appear in both cases.
    The table connected to the SELECTION-SCREEN will return different amounts of data; write these numbers down.
    Then run the runtime trace for both examples:
    SE30
    /people/siegfried.boes/blog/2007/11/13/the-abap-runtime-trace-se30--quick-and-easy
    Assume the amounts of data are 10 and 100 values; then your program should need no more than about N*log N, i.e. (100*log 100)/(10*log 10) = 20 times longer, for the larger example.
    This should hold for the total time, but also for all individual lines in the SE30 trace.
    The program
    Z_SE30_COMPARE
    /people/siegfried.boes/blog/2008/01/15/a-tool-to-compare-runtime-measurements-zse30compare
    compares 2 SE30 traces.
    And
    Nonlinearity Check
    /people/siegfried.boes/blog/2008/01/24/nonlinearity-check-using-the-zse30compare
    explains how you find programming bugs.
    Siegfried

  • Inserting time in motion 1

    Hi,
    I'm working in Motion 1 and need to lengthen a section. I can insert time, but all regions after the insertion are then moved into new layers suffixed with '1'. If I do this several times, my project will be a complete mess. The insert time command also just inserts time - it doesn't lengthen the regions, so I have to select all of the later regions and try to match them up with the end of the cut, which is a pain.
    I suppose I could export to Final Cut and find a way to lengthen the sections there.
    This program is constantly crashing on me as well. Maybe I need to reinstall. I'd love to buy a new version of Motion... but I can't. I don't have a grand lying around to blow on the entire suite. I don't see why Apple can't sell separates like everyone else.

    Hi,
    But in this case the user has to input the entire date and time. Actually I have three fields:
    schedule_date - this should store the date - data type: DATE
    from_time - this should store the start time for the schedule date - data type: TIMESTAMP
    to_time - this should store the end time for the schedule date - data type: TIMESTAMP
    e.g. 18-MAR-2009, 9.10, 11.25.
    I want the user to enter only the time in from_time and to_time.
    I have tried setting the format to HH24:MI, but then it stores 01-MAR-2009 9:10:00.0000 AM in from_time, whereas it should store 18-MAR-2009 9:10:00.0000 AM, and the same with to_time.
    Regards
    Trusha
    Edited by: trusha on Mar 18, 2009 5:01 PM
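    One way to let the user enter only the time while still storing the full date is to combine the entered time with schedule_date before converting (a minimal sketch; the table name schedule_t and the literal values are assumptions for illustration):
    INSERT INTO schedule_t (schedule_date, from_time, to_time)
    VALUES (DATE '2009-03-18',
            TO_TIMESTAMP('18-MAR-2009 09:10', 'DD-MON-YYYY HH24:MI'),
            TO_TIMESTAMP('18-MAR-2009 11:25', 'DD-MON-YYYY HH24:MI'));
    With a bare HH24:MI mask Oracle fills in the first day of the current month, which is why 01-MAR-2009 was stored; prefixing the date portion taken from schedule_date avoids that.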

  • Delete all records in a table, but the insert speed does not change.

    I have an empty table, and inserting a record takes 100 ms.
    When the table has 400,000 records, inserting a record takes 1 s. That is OK, because I do a comparison based on an index before inserting a record, so more records need more time.
    The problem is that when I delete all the records in the table, the insert time is still 1 s; it does not go back down to 100 ms. Why?

    Hello,
    Read through this portion of the Oracle documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#CNCPT004
    The reason it is still taking 1 s is the high water mark (HWM), the boundary between used and unused space in a segment. When you inserted the 400K records the HWM moved up, and when you deleted all the records the HWM stayed at the same position - a DELETE does not reset it to 0. So when you insert one record, Oracle still goes looking for free space below the HWM before inserting the data. If you TRUNCATE your table and try again, it will be faster because the TRUNCATE resets the HWM to 0.
    Regards
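    A quick way to see the difference described above (a minimal sketch; your_table is a placeholder name):
    -- DELETE removes the rows but leaves the high water mark where it was:
    DELETE FROM your_table;
    COMMIT;
    -- TRUNCATE deallocates the space and resets the high water mark:
    TRUNCATE TABLE your_table;
    After the TRUNCATE, a single-row insert should be back to roughly the empty-table timing, because the free-space search no longer has to work below the old HWM.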

  • Insert times grow exponentially

    Hello everyone
    We want to store (lots of) performance data in a database and we are also evaluating Berkeley DB JE. Basically there are devices and performance data types with an m:n relationship, so n types can be measured per device. Values of the performance data (KPIs) are stored together with the timestamp of the measurement. What I want to get back from the database is the performance data of one type for one device over a time range (i.e. the values for an objId/kpiId pair between two timestamps).
    The classes I use are the following:
    @Persistent
    public class KpiKey implements Serializable {
         @KeyField(1) Long objId;
         @KeyField(2) Long kpiId;
         @KeyField(3) Date ts;
    }
    @Entity
    public class KpiEntry implements Serializable {
         @PrimaryKey
         private KpiKey kpiKey;
         private Double value;
    }
    Now I found out that inserting new values is linear in time as long as the KpiKey stays ascending when inserting new values, i.e. when objId and kpiId are constant and only ts grows (~3.5 s per 100,000 inserts).
    When I start to randomize objId and kpiId and keep ts ascending, the insert times grow exponentially, starting at 3.5 s per 100,000 entries for the first 2,000,000 and growing to 165 s per 100,000 entries between 2 and 4 million entries.
    Do you have any recommendations on how to optimize this?
    Thank you very much in advance.
    James

    >
    So I think this is more a conceptual issue than a configuration issue.
    Could another approach with a different primary key configuration maybe solve this issue? Since I don't want to store entities, but values, in the DB, a composite key will probably be the only solution - or am I wrong?
    >
    Each record needs a unique key. Since you need to perform lookups by all the fields in your key, I believe your schema is correct.
    One thing that may be working against you is that you are storing very small records, and there is a per-record overhead. If you were to group multiple values into a single record (same kpiId, same objId, but a range of timestamps) you may see a performance benefit. However, this only makes sense if you always operate on a range of timestamps, and don't need to delete or modify individual values. Just a thought, in case you hadn't tried it.
    --mark

  • Insert Time Help

    Here's what I want to do: insert time into the middle of a project so I can add some new content. I understand the cmd-opt drag to make a region to insert time. But what I would like to have happen is a kind of "ripple insert time", meaning everything following the inserted time would move without splitting everything into new regions after the insert. Also, I'd like any keyframes and markers to move to stay in sync with all the moved content. Essentially like dropping a slug into a Final Cut project.
    As near as I can tell you can't get there from here. Anyone know otherwise?

    For this reason (Motion is not an editor), I usually do my build and layering in an FCP timeline, then send it to Motion. I often don't use the Embed Motion content checkbox, then I place the resultant Motion project on the timeline in FCP above the elements. If I need to make changes to the relationship of the elements, I copy the timeline, make the changes and send them back to Motion.
    It may be some extra work, but it's easier to do the arranging/timing in FCP for me...
    Patrick

  • Can we use both INSERT and UPDATE at the same time in JDBC Receiver

    Hi All,
    I would like to know whether it is possible to use both INSERT and UPDATE at the same time in one interface, because I have a requirement in which I have to perform both tasks.
    The user sends a file which contains both new and old records, and I need to save those in an MS SQL database.
    If the record exists then use UPDATE, otherwise use INSERT.
    I looked on SDN but didn't find any blog which performs both things at the same time.
    Interface requirement:
    FILE -> PI -> JDBC (INSERT & UPDATE)
    I am thinking of using a JDBC lookup, but I am not sure whether that is good to use for bulk records.
    Can somebody please suggest something or send me the link of a blog or anything else that solves this problem.
    Thanks,

    Hi,
    If I have understood the scenario properly, you are not performing an insert and an update together. As you posted:
    "If the record exists then use UPDATE, otherwise use INSERT."
    Thus you perform either an insert or an update, which depends on the outcome of a search for whether the record already exists in the database. Obviously, to search the table you need a "select * from ... where ..." query. If your query returns some rows you proceed with the update, since that means the old record is already in the database. If your query returns no rows you proceed with "insert into tablename ...", since there is no old record present in the database.
    Now perhaps the best way to do the search, take the decision to insert or update, and finally perform the insert or update is a stored procedure in the MS SQL database. A stored procedure is a subroutine available to applications accessing a relational database system; here the application is the PI server. If you need further help on how to write and call a stored procedure in MS SQL, you can look into these links:
    http://www.daniweb.com/web-development/databases/ms-sql/threads/146829
    http://www.sqlteam.com/article/stored-procedures-parameters-inserts-and-updates
    [ This part you can ignore, since it is not certain that you will face this situation.
    Still, you might face some problems while your scenario runs. Consider this: after the stored procedure searches the database it finds no rows, so you proceed with an insert. If your database table is also being accessed by applications (or users) other than yours, it is quite possible that after your search completed with a null result, an insert/update was performed by some other application with the same primary key. Now when you try to insert another row with the same primary key you get an error like "duplicate entry not possible for same primary key value". So you need to be careful in this respect. MS SQL has a feature called "exclusive locks". Look into these links for more details on the subject:
    http://msdn.microsoft.com/en-us/library/aa213039(v=sql.80).aspx
    http://www.mssqlcity.com/Articles/Adm/SQL70Locks.htm
    http://www.faqs.org/docs/ppbook/r27479.htm
    http://msdn.microsoft.com/en-US/library/ms187373.aspx
    http://msdn.microsoft.com/en-US/library/ms173763.aspx
    http://msdn.microsoft.com/en-us/library/e7z8d5hf(v=vs.80).aspx
    http://mssqlserver.wordpress.com/2006/11/08/locks-in-sql/
    http://www.mollerus.net/tom/blog/2008/03/using_mssqls_nolock_for_faster_queries.html
    There must be other methods to avoid this problem, but the point is that you need to be sure all insert/update accesses to the database are properly isolated. ]
    regards
    Anupam
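    If the target is SQL Server 2008 or later, the search-then-decide logic above can also be collapsed into a single MERGE statement inside the stored procedure (a minimal sketch; the table, column, and parameter names are hypothetical):
    MERGE INTO dbo.CustomerTable AS target
    USING (SELECT @CustomerId AS CustomerId, @Name AS Name) AS source
       ON target.CustomerId = source.CustomerId
    WHEN MATCHED THEN
        UPDATE SET target.Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name) VALUES (source.CustomerId, source.Name);
    Because the existence check and the insert/update happen in one statement, it also narrows the window for the duplicate-key race described above.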

  • Issues when Insert and Analyze at the same time

    Hi All,
    We have a weekend job that used to take 12-15 hours and inserts around 8 million records into a table from an external table. Last weekend the same load took 60 hours, ran into Monday, and there was some business impact.
    While investigating the issue we identified that there is another weekend job that analyzes all tables and indexes; it took around the same time, and we noticed that the table into which we inserted the 8 million records alone took around 15 hours. This table now has around 91 crore records.
    We need to identify why these jobs took almost 4 times as long as in the normal scenario.
    I would like to know what exactly happens when an analyze and an insert hit the table at the same time, and what issues can occur because of this.
    Best Regards,
    Shijo

    Ok,
    It is unclear what version you are using, as you forgot to post that.
    Furthermore it is unclear what 'Analyze' is, as ANALYZE for gathering statistics was already deprecated in 9i.
    Secondly, most people in this forum speak English, and they are accustomed to international units only.
    Sybrand Bakker
    Senior Oracle DBA
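    For reference, the documented replacement for ANALYZE-based statistics gathering is DBMS_STATS, which also makes it easier to schedule the statistics job away from the load window (a minimal sketch; the owner and table names are placeholders):
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'YOUR_SCHEMA',
            tabname => 'YOUR_TABLE',
            cascade => TRUE);  -- gather the index statistics as well
    END;
    /
    Running this after the weekend load completes, rather than in parallel with it, avoids the two jobs competing for the same table.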

  • How to insert, as an input variable within formula, the total time window on scope?

    How to insert, as an input variable within formula, the total time window on scope (i.e. 20ms/div x 10 div = 200ms)?
    BELOW IS AN EXAMPLE OF ISSUE:
    FORMULA FOR ACTION INTEGRAL:
    STATISTICS:
    Input variable: DPO4034(CH1);
    Check Box: number of samples.
    FORMULA
    Input variable 0: DPO4034(CH1); alias: x0
    Input variable 1: "Total Time Window on Scope???"; alias: x1
    Input variable 2: Number of Samples (CH1); alias: x2
    Under Operation Setup: Formula
    Y= (x0^2)*(x1/x2)
    Output: Processed Data 1 (CH1)
    THEN USING STATISTICS:
    Input signal: Processed Data 1 (CH1)
    Check Box: SUM
    Output: CH1 Action Integral [A^2s]

    Hi again Catherine,
    I have now added another TekScope (a TDS3032B) alongside the DPO4034 and run the same work-around on the TDS3032B, using CH1 as the "real" signal channel and CH2 as the "burst width" channel. However, the value returned for CH2 is nominally 99E+36 (min 99E+36, max 99E+36), with very few retrievals of the correct burst width (~200 ms). It seems the SignalExpress program is unable to consistently retrieve the actual burst width (the scope's time scale/window) from the TDS3032B and defaults to the 99E+36 value. Any ideas on what is occurring and how to make it work? Attached are some screen captures to help guide the discussion.
    Regards,
    Michael
    Attachments:
    TDS3032B - incorrect burst width.png 301 KB
    TDS3032B - correct burst width.png 287 KB
    DPO4034 - always correct burst width.png 302 KB

  • How to insert the sysdate time into the database

    hi all,
    when I execute the following query,
    insert into table_name
    (date_field)
    values
    (to_date(sysdate, 'yyyy/mm/dd hh24:mi:ss'));
    The value is inserted as 08-02-12 12:00:00. This query always stores the default time of 12 a.m.
    But I need to insert the actual system time, not the default one.
    Please help me rectify it.

    I do not understand why you are using the TO_DATE function on SYSDATE, since SYSDATE is already a DATE. TO_DATE expects a character value, so Oracle first implicitly converts the DATE to a string using your session's NLS_DATE_FORMAT (typically DD-MON-RR, which carries no time portion), and that is why the time is lost.
    If date_field is of the data type date, then the following dml should work.
    insert into table_name(date_field) values (sysdate);
    In case date_field is a varchar2 field and you want to store the date and time in a string format, then the following statement should work.
    insert into table_name (date_field)
    values (to_char(sysdate, 'yyyy/mm/dd hh24:mi:ss'));
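    To verify that the time component was captured, a quick check against the same table (names as in the statements above):
    SELECT TO_CHAR(date_field, 'yyyy/mm/dd hh24:mi:ss') AS stored_value
      FROM table_name;
    This should now show the actual system time instead of 12:00:00.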

  • Running a mid 2009 iMac on 10.7.5, 3 gb memory, 320 gb hd.    Suddenly the computer stopped reading DVDs and the only time I can read a CD is if I restart.  Quite often when I insert a CD it gets stuck and I cannot get it out until I restart and it shows u

    Running a mid-2009 iMac on 10.7.5, 3 GB memory, 320 GB HD.
    Suddenly the computer stopped reading DVDs, and the only time I can read a CD is if I restart. Quite often when I insert a CD it gets stuck and I cannot get it out until I restart, at which point it shows up on the desktop where I can then eject it.
    I have checked and double-checked the Finder prefs and all looks normal, showing a check mark on CDs, DVDs, etc. (the ones I want to show up on the desktop).
    I have reset the PRAM and repaired permissions with both the disk utility on the computer and the disk utility when I start up from the Recovery Disk. I did notice that sometimes the permissions repair repeats the same correction several times before it moves on, and sometimes it doesn't. I have Windows installed on a partition, but I keep it unmounted until it is needed for my wife's work. The dock seems to be just fine and all the apps seem to run just fine. When I insert a photo CD iPhoto does not open, but when I insert a music CD iTunes does open.
    Also, almost every time I open iPhoto it takes a long time (sometimes as long as 2 minutes) to load.
    Sometimes my Mail (Mail 5.3) does not post new mail but most of the time it does. 
    Once in a while it seems like the computer slows way down, but then it seems OK ten minutes later.
    All of these 'things' seem to have happened suddenly, and I have not downloaded anything from the internet in some time.
    Of course the warranty and extended warranty are both no longer in effect having had this computer for more than three years.
    I am running Java and Adobe Player because some of the sites I go to a lot require both.

    I believe that insufficient RAM may be the source of some of your problems. With somewhere between 4 and 8 GB of RAM you will experience smoother computing. 3 GB doesn't seem right, so you might want to learn more by going to this site:
    http://www.crucial.com/store/drammemory.aspx
    I don't know what's happening with your optical drive, but it seems you use it quite a bit. In that case, look into a lens cleaner for your machine. It's inexpensive and works quite well.
    I hope you'll post here with your results!

  • CLR trigger - handling multiple inserts at the same time

    Hi
    I've developed a CLR trigger which operates on inserts performed on a staging table. The trigger implements some business logic and then inserts or updates a record in a target table. Whether an insert or update is performed depends on whether
    a record with the same ID already exists in the target (i.e. a select * from target where ID = 123).
    This works fine in most scenarios, but occasionally I am getting duplicates in the target table and have noticed that this seems to occur when inserts on the staging table happen at exactly the same time (i.e. multiple inserts for the same ID at
    the same time). In this situation duplicates are created in the target table because at the time of the inserts, no record with that ID exists in the target table (i.e. the select returns no records), therefore a new record is created for each.
    Is there a known way to deal with this scenario? For example, would locking the target table on insert make the subsequent selects against the target table wait until the target table had been updated, so that the select would then return a record
    for the given ID?
    I didn't really want to lock the whole target table on insert, because there are potentially other users reading that table (selects) and these would also have to wait for the insert to complete.
    I'd appreciate any thoughts on how to deal with this and avoid duplicates in the target table. I'm unable to change the way the data is coming in to the staging table, so my trigger code must deal with the above scenario.
    Thanks in advance.
    John

    First, if you do not want any duplicate values in a column (or combination of columns), you should add a constraint to ensure this is never possible. A unique index like this should do the trick:
    CREATE UNIQUE NONCLUSTERED INDEX [IX_yourIndexName] ON [dbo].[YourTableName]
    (
        [yourColumn1] ASC
        -- add more columns that make up the unique combination you don't want repeated
    );
    You can then add a try/catch block in your trigger code. If you get an exception based on this index, then the record was created by another executing instance of this trigger, and in that case you should do an update (or not, not sure what the rest of your logic is) in your catch block. This is the easiest solution and does not involve table locks. The only drawback is that the first one to commit the insert will win, and you have no guarantee which process or data set that will be. Also, I have no idea how big the table is, how frequently changes are made, or what the data types are, so keep this in mind when creating your index so you don't run into unexpectedly high index fragmentation, which can lead to performance problems when executing updates and inserts.
    You could also create a named transaction with scope SERIALIZABLE around your insert/update block and execute your reads using a NOLOCK hint, which should allow them to retrieve uncommitted writes and not create a long wait. The downside is that the data might not be 100% accurate, depending on whether a transaction fails when there happens to be an update at the same time as a select, but maybe this is not a big deal to the calling code.
    -Igor
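    A minimal sketch of the try/catch fallback described above, written as plain T-SQL (inside the CLR trigger it would be a C# try/catch around the SqlCommand instead); the table and column names are hypothetical, the unique index from above is assumed, and THROW needs SQL Server 2012 or later:
    BEGIN TRY
        INSERT INTO dbo.TargetTable (ID, SomeValue)
        VALUES (@ID, @SomeValue);
    END TRY
    BEGIN CATCH
        -- 2601 / 2627 are the duplicate-key errors raised by the unique index,
        -- meaning a concurrent insert won the race, so fall back to an update.
        IF ERROR_NUMBER() IN (2601, 2627)
        BEGIN
            UPDATE dbo.TargetTable
               SET SomeValue = @SomeValue
             WHERE ID = @ID;
        END
        ELSE
        BEGIN
            THROW;  -- re-raise anything else
        END
    END CATCH;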

Maybe you are looking for

  • Positioning a vertical line in Pages document??

    I have a Pages word processing document that has three columns (like a classified ad page in a newspaper). I want to add vertical lines between the columns but I want to center and position the vertical lines myself between the columns. What setting

  • Word 2011 for Mac Did Not Save My Work despite many save presses. Why?

    I have been working on a document all week on word 2011 for Mac. I initially saved the document and named it. It is still there in the directory folder I originally created it in. I pressed the save icon every 20 minutes when working on my word docum

  • One Iphone, one itunes, two computers

    Can anyone tell me..I now have two computers authorized under my one account of itunes I added my work PC today). My iphone 3G has only been synced with my home computer. When I sync to my work computer now, will my library transfer to itunes at work

  • OIM 11G WEBLOGIC START ERROR

    Hi, I am getting the following error when I start the IDM SERVER. PLEASE HELP ME. [2010-09-15T16:51:20.481+04:00] [OJDL] [NOTIFICATION:16] [ODL-52001] [oracle.core.ojdl.FileLogWriter] [org: Oracle] [host: idmapps] [nwaddr: 127.0.0.1] [tid: [ACTIVE].E

  • About the created column of dba_users

    The manual says that dba_users.created holds the date the user was created. Am I right in thinking that this means the created date is not updated even if the user's password, default tablespace, and so on are changed? Also, if there are any other factors that would change the value of created, please let me know. I am asking in this thread because I have no environment in which to verify the above. Thank you in advance. Edited by: user1816871 on 2011/06/06 22:39
    マニュアルを見ますと.dba_usersのcreatedはユーザーの作成日付が挿入されると記載されています. これは.つまりユーザーのパスワードやデフォルト表領域等を変更してもcreatedの日付は更新されないと 考えてよいでしょうか. また.他にもcreatedの値を変更する要因があれば.御教示下さい. 上記を検証できる環境が無い為.本スレッドで問い合わせています. 宜しくお願い致します. Edited by: user1816871 on 2011/06/06 22:39