Insert 1000 records per second

I need to insert 1000 records per second into a MySQL database. What is the best approach?
The records arrive as requests through an HTTP URL; currently we are using an HTTP server to handle the incoming requests.
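A common starting point for this kind of load is JDBC batching with one commit per batch. The sketch below is a minimal illustration, not an accepted answer from this thread; the table name, columns, connection details, and the rewriteBatchedStatements flag (a MySQL Connector/J option that rewrites a batch into multi-row INSERTs) are assumptions for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchInserter {
    // Hypothetical table and columns, for illustration only.
    private static final String SQL =
            "INSERT INTO request_log (payload, received_at) VALUES (?, NOW())";

    public static void insertBatch(List<String> payloads) throws Exception {
        // rewriteBatchedStatements=true lets Connector/J collapse the batch
        // into multi-row INSERT statements, which matters at ~1000 rows/s.
        String url = "jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            conn.setAutoCommit(false); // one commit per batch, not per row
            try (PreparedStatement ps = conn.prepareStatement(SQL)) {
                for (String payload : payloads) {
                    ps.setString(1, payload);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}

Buffering the incoming HTTP requests in a queue and flushing them in batches of a few hundred rows amortizes the network round trips and commit overhead that dominate row-at-a-time inserts.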

This thread was originally posted to the Java Database Connectivity (JDBC) forum.
It appears to be off-topic for that forum.
It has been moved to a database sub-forum, hopefully for closer topic alignment.

Similar Messages

  • Formatting output field to derive Record Throughput (Records Per Second)

    It's been so long since I've done this that I am seeking some guidance.
    The business problem I have is to derive a new calculated field to determine throughput (records per second).
    Formula: (((batch_job_execution.end_time - batch_job_execution.start_time) * (24*60))) / (write_count)
    The result of the above is basically a time-formatted field, and I am seeking an integer field.
    I take it I need to do a CAST or other type of formatting function(s) and it most likely needs to be broken down
    within the calculated field above.
    If someone can provide some guidance as how best to approach this problem, that'll be GREAT!!
    If I need to provide some more information, please let me know as well.
    Sincerely,
    George

    Hello again,
    I attempted to use the TIMESTAMPDIFF function and I am receiving error ORA-00904: invalid identifier.
    I do not find this function in my Oracle SQL book, so I'm assuming it might be a function that was created internally
    within your enterprise. Please correct me if I am wrong. This may have to be the path I take (create a FUNCTION to perform the TIMESTAMPDIFF processing).
    Again, this is just some of my thoughts based on my research and I could be far off.
    Please advise or confirm my suspicions.
    Thanks
    George
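    An editorial note on the formula above: assuming end_time and start_time are Oracle DATE values (so their difference is in days), multiplying by 24*60 yields elapsed minutes, and dividing by write_count gives minutes per record rather than records per second. The throughput being asked for is the inverse, scaled to seconds:
    records_per_second = write_count / ((end_time - start_time) * 86400)
    That would also explain the ORA-00904: TIMESTAMPDIFF is a MySQL/DB2 built-in, not an Oracle SQL function, so a small user-defined function (or the arithmetic above) is the likely path.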

  • Script for inserting 1000 record in a table...

    Hello Gurus,
    I have a table structure like this....
    USERID  USERNAME   USERPWD     EMAILID               FIRSTNAME  LASTNAME  ISACTIVE
    1       superuser  Pyramid123  [email protected]   a          b         1
    21      neha       Pyramid123  [email protected]        s          s         1
    I need to write a script to insert 1000 dummy records into this table.
    Your help would be appreciated.
    Thanks,
    HP

    Hi,
    Hope the below solves your problem.
    SQL> CREATE TABLE t (USERID NUMBER,
      2                  USERNAME VARCHAR2(20),
      3                  USERPWD VARCHAR2(10),
      4                  EMAILID VARCHAR2(20),
      5                  FIRSTNAME VARCHAR2(10),
      6                  LASTNAME VARCHAR2(10),
      7                  ISACTIVE NUMBER);
    Table created
    SQL> INSERT INTO t
      2    SELECT srl,
      3           name,
      4           pwd,
      5           LOWER(SUBSTR(name, 1, 10)) || '@abc.com',
      6           SUBSTR(name, 1, 10),
      7           SUBSTR(name, 11, 20),
      8           1
      9      FROM (
    10    SELECT level srl,
    11           dbms_random.string('U', 20) name,
    12           dbms_random.string('A', 10) pwd
    13      FROM DUAL
    14   CONNECT BY LEVEL <= 1000);
    1000 rows inserted
    SQL> commit;
    Commit complete
    SQL> select count(1) from t;
      COUNT(1)
          1000
    SQL> select * from t where rownum < 10;
        USERID USERNAME             USERPWD    EMAILID              FIRSTNAME  LASTNAME     ISACTIVE
           342 JLMPNCRYRZYLEGVVKLQT ypsFEvtYOg [email protected]   JLMPNCRYRZ YLEGVVKLQT          1
           343 UINEJWHGFHCBOUXWQWEL OSBmpXSSDp [email protected]   UINEJWHGFH CBOUXWQWEL          1
           344 TLGFDHHLMACMMENWRMZG RIrPTdotaX [email protected]   TLGFDHHLMA CMMENWRMZG          1
           345 QARLMGJVFJXTJRQUFRFU lkbvEGACDi [email protected]   QARLMGJVFJ XTJRQUFRFU          1
           346 TYMDMPTWASFOGIYZYBZP SadCSlHiZc [email protected]   TYMDMPTWAS FOGIYZYBZP          1
           347 XDTRMJICNQNKFMDRRMZB lSchkFigpz [email protected]   XDTRMJICNQ NKFMDRRMZB          1
           348 DQZUKSXOLMQLMFBMEGNI psBCKgLVPP [email protected]   DQZUKSXOLM QLMFBMEGNI          1
           349 JMTNKXDDAPDHYLHUVSWF WXYrBQNKJk [email protected]   JMTNKXDDAP DHYLHUVSWF          1
           350 ZHAFZAJPJCBHNLTCQWTB rhtoGTpBle [email protected]   ZHAFZAJPJC BHNLTCQWTB          1
    9 rows selected
    Regards
    Ameya

  • Need a method to insert 1000 records in oracle in once

    Hi All,
    I want to insert more than 1000 records in oracle database in once. Please let me know the way to do this. It's urgent..........
    Regards,
    Puneet Pradhan

    More than 1000?
    So, how about 10000?
    Use the CONNECT BY LEVEL clause to generate records:
    insert into table
    select level --or whatever
    from dual
    connect by level <= 10000;
    It's urgent..........
    Since it's your first post, I recommend you not use the 'U-word', or you'll be made fun of...

  • How to send 1000 records per each time through JDBC adapter

    Hi all,
    In my JDBC to File scenario, the SQL Server database has 10,000 records. I want to split these into batches of 1000 and process 1000 records each time. How can we do this in the JDBC adapter? Are there any options in the JDBC adapter for it? Please reply; many thanks to all of you.

    Hi all,
    Thanks for your response. I am very happy.
    If it were a sender JDBC adapter, we could write the query. But it is a receiver JDBC adapter, and I want to send the records 1000 at a time.
    You may advise using the RecordsetPerMessage option on the sender side, but here I want to read all the records and then send only 1000 at a time.
    Please give me a valuable answer.
    Thanks in advance...

  • 10,000 Records Per Second (In EJB 3.0)

    hi all,
    i have some mission-critical tasks in my project. is it possible to persist 10,000 records per second?
    1. AS - JBoss Application Server 4.0.4GA
    2. Database - Oracle 10g 10.2.0.1
    3. EJB - 3.0 Framework
    4. OS - SunOS 5.10
    5. Server - Memory: 16G phys mem, 31G swap, 16 CPU
    i know that i need performance
    here is my configurations about performance
    1. JVM Config Into JBoss
    JAVA_OPTS="-server -Xmx3168m -Xms2144m -Xmn1g -Xss256k -d64 -XX:PermSize=128m -XX:MaxPermSize=256m
       -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
        -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
        -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts
        -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution"
    2. also i configure my database.xml file:
    <?xml version="1.0" encoding="UTF-8"?>
    <datasources>
      <xa-datasource>
        <jndi-name>XAOracleDS</jndi-name>
        <track-connection-by-tx/>
        <isSameRM-override-value>false</isSameRM-override-value>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
        <xa-datasource-property name="URL">jdbc:oracle:thin:@192.168.9.136:1521:STR</xa-datasource-property>
        <xa-datasource-property name="User">SRVPROV</xa-datasource-property>
        <xa-datasource-property name="Password">SRVPROV</xa-datasource-property>
        <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
        <min-pool-size>50</min-pool-size>
        <max-pool-size>200</max-pool-size>    
        <metadata>
             <type-mapping>Oracle9i</type-mapping>
          </metadata>
      </xa-datasource>
      <mbean code="org.jboss.resource.adapter.jdbc.vendor.OracleXAExceptionFormatter"
             name="jboss.jca:service=OracleXAExceptionFormatter">
        <depends optional-attribute-name="TransactionManagerService">jboss:service=TransactionManager</depends>
      </mbean>
    </datasources>
    3. Also i have one simple Stateless Session Bean:
    @Stateless
    @Remote(UsageFasade.class)
    public class UsageFasadeBean implements UsageFasade {
         @PersistenceContext(unitName = "CustomerCareOracle")
         private EntityManager oracleManager;
         @TransactionAttribute(TransactionAttributeType.REQUIRED)
         public long createUsage(UsageObject usageObject, UserContext context)
                   throws UserManagerException, CCareException {
              try {
                   oracleManager
                             .createNativeQuery("INSERT INTO USAGE "
                                       + " (ID, SESSION_ID, SUBSCRIBER_ID, RECDATE, STARTDATE, APPLIEDVERSION_ID, CHARGINGPROFILE_ID, TOTALTIME, TOTALUNITS, IDENTIFIERTYPE_ID, IDENTIFIER, PARTNO, CALLTYPE_ID, USAGETYPE, APARTY, BPARTY, CPARTY, IMEI, SPECIFICCALLTYPE, APN, SOURCELOCATION, SMSCADDRESS, MSC_ID, ENDREASON, USAGEORIGIN, BILL_ID, CONTRACT_ID) "
                                       + " VALUES(SEQ_USAGE_ID.NEXTVAL, NULL, NULL, SYSDATE, SYSDATE, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL) ")
                             .executeUpdate(); // the original snippet built the query but never executed it
                   return 1;
              } catch (Exception e) {
                   // exception handling was truncated in the original post
                   return 0;
              }
         }
    }
    4. and on the client side i have 200 threads, each of which tried to call this method 50 times
    my result is that i can persist 10,000 records in 20 seconds without hibernate; with hibernate i got a worse result :(
    also i hear that it is a good idea to use a JDBC 3.0 driver for performance,
    so i downloaded the newest oracle jdbc jar file from the oracle site
    http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html
    is this jar file a JDBC 3.0 driver?
    is there any hibernate performance configuration?
    is there any more performance tuning for JBoss or EJB with entity beans?
    can anybody help me? or is there any doc which can help me?
    Regards,
    Paata,
    Message was edited by:
    paata

    What makes you think that your database, just the database (with the box that it is on) can handle that rate?
    What makes you think that your network can handle that?
    While this is going on is this the ONLY traffic that will be on the network?

  • What is the most Frames Per Second NI-CAN can do?

    My goal is to send 1000 frames per second on my CAN bus using the NI-CAN PCI 2-slot card I have. However, the closest I have been able to get is 666 frames per second, sending 8 frames every 12 ms using an edited readmult example. Is there a way to do this with writemult? Or is there a hardware limit that I am trying to go past?
    What can I adjust to get more frames? Increase the baud rate? Decrease the size of the frames? (I've tried both of those.)
    Other questions that should probably go in other posts  (Frame API):
    Is there a way to send/read the frames at the bit-level?  I have found ways to manipulate Arbitration ID, Remote Frame, Data Length, and Data, but there are several other bits in a frame.
    Is there a way to send a bad frame, one that would raise/cause an error frame?

    Yes, I did break 1,000 Frames Per Second.  I got up to 1,714 and 1,742 using two different methods.  This is at 250 kbps, if you used 500 or 1 Mbps, you could get more frames.  If you have 125 kbps, you might not be able to break 1,000 Frames per Second.
    ncWriteMult is the key.  You can load 512 frames in a queue at a time.  I would put 256 on at a time and check to see if there was less than 256 frames left in the queue and if there was, load it up, that way the queue would never be empty.  I went about it 2 ways, one was using ncGetAttribute to determine space left, and that got the faster method, however, I was also trying to read the messages to verify that it worked, and I had problems with logging every frame.  It would also send the first 40 ncWriteMults quickly, as if the queue it was filling was much larger than 512.
    The other way, was using trial and error to determine how many MS to sleep before writing the next batch of 256 frames.  There are variables outside of my control that determined the time it would take and it would vary a few ms.  I wanted a stable environment that could send forever without filling the queue so I went with a value that would wait 2 or 3 ms, depending on conditions before writing again.  The value I used was 142 ms, I think.  Your Mileage May Vary.
    There is also a way to do some error handling that I did not try to utilize.  Instead of the process crashing, there is a way to tell it to wait if this error is returned.  That might be the best way for me to implement what I wanted to do, but I was assigned another task before I tried to get that to work.
    There is a timing element in ncWriteMult's documentation I didn't look into very much, but that would space the frames out and could send 1,000 frames a second evenly distributed, instead of sending them as quickly as possible, wait some ms then send another batch.
    If anyone could link us, or provide some code snippets of the error handling, or proper usage of ncGetAttribute, or some way to read faster, that would be greatly appreciated.
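    A sanity check on the numbers above, assuming standard 11-bit-identifier data frames with 8 data bytes (roughly 111 bits each, up to about 135 bits with worst-case bit stuffing and interframe space):
    max_frames_per_second ≈ 250,000 bits/s / 135 bits/frame ≈ 1,850
    which is consistent with the observed 1,714-1,742 frames per second at 250 kbps, and explains why higher bit rates (500 kbps, 1 Mbps) raise the ceiling proportionally.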

  • Re: cooL, it takes 14 seconds to insert 721 records to SQL Server 7.0

    I'm always a little worried when someone is concerned about performance and then asks the question(s); how can I make this faster? or Is this normal?
    There is no standard that can be applied to performance (although you may find limits and constraints published by vendors). I'm connected to a SUN 15000 with 16 processors, 32 gigabytes of memory, and several terabytes of storage spread across several hundred disk drives; I'm guessing I could do inserts "faster" than what you are stating. But that doesn't really matter. What really matters is: how fast do you need the inserts to be? Answer that, and now you have a goal. Without that goal, performance tuning is a complete waste of time. So, what's your goal (and what are you willing to do about it)?
    There are some general coding guidelines that can be used to make sure you are not hurting performance; things like using PreparedStatements, batching together inserts, using efficient string concatenation as described by a previous post, and using multiple threads when inserting into multiple tables. There may be specific database procedures you can try; for instance, if you are inserting these into a table for the first time (an initial load), some databases allow you to turn off logging, which can increase your performance significantly. As mentioned, there may be 3rd-party drivers that allow for enhanced performance of inserts. I don't know of any specific API calls that will categorically help insert statement performance, but that doesn't mean there isn't something there.
    There are other things that can be done, usually from within the database itself, mostly to do with memory utilization, changes in checkpoint frequency and optimal logging that can also enhance insert performance, but that is probably beyond the scope of what you are asking and definitely out of the scope of this forum.
    Once you know your goal (i.e. 200 inserts per second, or complete the process within 10 seconds ...) then you need to determine where you are spending your time. Do you know how much time you are spending in the database vs. reading from the file system, or in the processing in between? Wouldn't it be awkward if you spent time optimizing the code for inserts only to find out later that it was 2 seconds for the inserts and 12 for reading the records off the file system?
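    A hedged illustration of that measurement step, before any tuning: split the elapsed time into its phases. The method names below are hypothetical stand-ins for the poster's file-read and insert code, not code from this thread.

    import java.util.List;

    public class PhaseTimer {
        public static void main(String[] args) throws Exception {
            long t0 = System.nanoTime();
            List<String> records = readRecords("input.dat"); // file-system phase
            long t1 = System.nanoTime();
            insertRecords(records);                          // database phase
            long t2 = System.nanoTime();
            System.out.printf("read: %d ms, insert: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }

        // Hypothetical stand-ins for the real I/O and JDBC code.
        static List<String> readRecords(String path) throws Exception {
            return java.nio.file.Files.readAllLines(java.nio.file.Paths.get(path));
        }

        static void insertRecords(List<String> records) {
            // ... PreparedStatement batch insert would go here ...
        }
    }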
    It's possible that someone may read this and say, this is ridiculous and way too much work, why would I do that? The answer is, if performance and tuning is too much work, then you simply don't require the additional performance so let it go. If you actually need better performance, and you always know when that's true, then these suggestions and the suggestions of the other posters are not nearly so onerous and may be helpful.
    When talking about application performance, I sometimes say that people don't know how good it could be, and I think that results from people not setting and working towards specific performance goals. To be clear, "faster", "normal" and "optimized" are not practical performance goals; "x% faster" and "less than x seconds and x inserts per second" are.

    MartinHilpert,
    You said that my response was "not really" nice, I guess implying that I was wrong or my reply was not of any value, or maybe you just meant it wasn't nice. I'm not really sure what you meant, I don't know your background, and I mean no disrespect, but I disagree with what you wrote and I do have a couple of comments.
    I do believe that working on performance initiatives without goals (as you described) can teach you a bit about performance and performance tuning techniques, but I do not consider an ungraded learning initiative to be an effective methodology for the tuning of an application.
    but if it's just a little slow ...
    they won't complain but just talk behind the backs
    I disagree that paranoia is a reason for looking at performance. If for some reason you are "worried" that performance is "slow", all you have to do is ask the users; there is no reason to assume anything. I've found users in my world to be very forthcoming about their perceptions of performance, and given the opportunity to voice their opinion, very willing to complain on even the smallest concern. I would be surprised if your users would not do the same unless you or your group is unusually unapproachable or perhaps intimidating for some reason. But I cannot imagine why that would be; we all serve at the pleasure of the users.
    So, when your app works and you are finished and you have time, try to improve performance anyway.
    So when you are finished, you are not really finished? Sounds like you are using a consulting project methodology, the source for most never-ending projects (please don't get prickly, I'm a consultant). To me this philosophy sounds a bit impractical and not at all trackable from a project management or funding perspective.
    So, when your app works and you are finished and you have time, try to improve performance anyway.
    Your company has time and money to spend on never-ending performance initiatives (without goals, they are by definition never-ending), but no time to determine if the performance initiatives save money or enhance the user experience or, more formally, move application performance into SLA compliance?
    That doesn't sound correct to me, and I doubt if a company would want to be perceived this way.
    In retrospect, it is possible that yours is my dream company with my dream job, one without funding or time constraints, and no one holding me statistically responsible for the performance of my applications. Although I must admit I would not like people talking behind my back. :)
    Regards.

  • Trigger for every 1000 record insert

    Hi
    I am working in oracle 9i / Aix 5.3
    I need a trigger which should fire whenever my temp_table grows past 1000, 2000, etc. rows.
    The notification should reach me in the body of an email,
    Like
    Hi
    The temp_table count has been reached to 1000.
    Thanks
    second time execution...
    Hi
    The temp_table count has been reached to 2000.
    Thanks
    etc.,
    How can i achieve this? I'm OK with a shell script for the above functionality as well.
    Thanks
    Raj

    Why do you want to do this?
    SQL> create table temp_table (x number);
    Table created.
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace function temp_table_cnt return number is
      2    pragma autonomous_transaction;
      3    v_cnt number;
      4  begin
      5    select count(*) into v_cnt from temp_table;
      6    return v_cnt;
      7* end;
    SQL> /
    Function created.
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace trigger trg_a_temp_table
      2  after insert on temp_table
      3  for each row
      4  declare
      5    v_cnt number;
      6  begin
      7    v_cnt := temp_table_cnt();
      8    if mod(v_cnt,1000) = 0 then
      9      dbms_output.put_line('Email Sent for '||v_cnt||' records.');
    10    end if;
    11* end;
    SQL> /
    Trigger created.
    SQL> set serverout on
    SQL> ed
    Wrote file afiedt.buf
      1  begin
      2    for i in 1..3456
      3    loop
      4      insert into temp_table values (i);
      5      commit;
      6    end loop;
      7* end;
    SQL> /
    Email Sent for 0 records.
    Email Sent for 1000 records.
    Email Sent for 2000 records.
    Email Sent for 3000 records.
    PL/SQL procedure successfully completed.
    SQL>
    ... however I wouldn't consider this good design, as it requires each of the rows to be committed so that the autonomous transaction function can count the number of rows in the table. Of course, if the rows are being inserted through, let's say, user input and are committed on a one-by-one basis anyway, then it's perfectly acceptable, but I wouldn't use it for bulk insertions.

  • How many of these objects should I be able to insert per second?

    I'm inserting these objects using default (not POF) serialization with putAll(myMap). I receive about 4000 new quotes per second to put in the cache. I try coalescing them to various degrees, but my other apps still slow down when these inserts are taking place. The applications are listening to the cache where these inserts are going using CQCs. The apps may also be doing get()s on the cache. What is the ideal size for the putAll? If I chop up myMap into batches of 100 or 200 objects, then it increases the responsiveness of other apps but slows down the overall time to complete the putAll. Maybe I need a different cache topology? Currently I have 3 storage-enabled cluster nodes and 3 proxy nodes. The quotes go to a distributed-scheme cache. I have tried both having the quote-inserting app use Extend and having it become a TCMP cluster member. Similar issues either way.
    Thanks,
    Andrew
    import java.io.Serializable;
    public class Quote implements Serializable {
        public char type;
        public String symbol;
        public char exch;
        public float bid = 0;
        public float ask = 0;
        public int bidSize = 0;
        public int askSize = 0;
        public int hour = 0;
        public int minute = 0;
        public int second = 0;
        public float last = 0;
        public long volume = 0;
        public char fastMarket; //askSource for NBBO
        public long sequence = 0;
        public int lastTradeSize = 0;
        public String toString() {
            return "type='" + type + "'\tsymbol='" + symbol + "'\texch='" + exch + "'\tbid=" +
                    bid + "\task=" + ask +
                    "\tsize=" + bidSize + "x" + askSize + "\tlast=" + lastTradeSize + " @ " + last +
                    "\tvolume=" + volume + "\t" +
                    hour + ":" + (minute<10?"0":"") + minute + ":" + (second<10?"0":"") + second + "\tsequence=" + sequence;
        }
        // Identity is symbol + exchange; the remaining fields are mutable quote state.
        public boolean equals(Object object) {
            if (this == object) {
                return true;
            }
            if ( !(object instanceof Quote) ) {
                return false;
            }
            final Quote other = (Quote)object;
            if (!(symbol == null ? other.symbol == null : symbol.equals(other.symbol))) {
                return false;
            }
            if (exch != other.exch) {
                return false;
            }
            return true;
        }
        public int hashCode() {
            final int PRIME = 37;
            int result = 1;
            result = PRIME * result + ((symbol == null) ? 0 : symbol.hashCode());
            result = PRIME * result + (int)exch;
            return result;
        }
        public Object clone() throws CloneNotSupportedException {
            Quote q = new Quote();
            q.type = this.type;
            q.symbol = this.symbol;
            q.exch = this.exch;
            q.bid = this.bid;
            q.ask = this.ask;
            q.bidSize = this.bidSize;
            q.askSize = this.askSize;
            q.hour = this.hour;
            q.minute = this.minute;
            q.second = this.second;
            q.last = this.last;
            q.volume = this.volume;
            q.fastMarket = this.fastMarket;
            q.sequence = this.sequence;
            q.lastTradeSize = this.lastTradeSize;
            return q;
        }
    }

    Well, firstly, I'm surprised you are using "float" fields in a financial object, but that's a different debate... :)
    Second, why aren't you using POF? Much more compact from my testing; better performance too.
    I've inserted similar objects (but with BigDecimal for the numeric types) and seen insert rates in the 30-40,000/second range (single machine, one node). Obviously you take a whack when you start the second node (backups being maintained, plus that node is probably on a separate server, so you are introducing network latency). Still, I would have thought 10-20,000/second would be easily doable.
    What are the thread counts on the services you are using? I've found this to be quite a choke point on high-throughput caches. What stats are you getting back from JMX for the Coherence components? What stats from the server (CPU, memory, swap, etc.)? What spec of machines are you using? Which JVM are you using? How is the JVM configured? What are the GC times looking like? Are your CQC queries using indexes? Are your get()s using indexes, or just using keys? Have you instrumented your own code to get some stats from it? Are you doing excessive logging? So many variables here... Very difficult to say what the problem is with so little info/insight into your system.
    Also, maybe look at using a multi-threaded "feeder" client program for your trades. That's what I do (as well as upping the thread-count on the cache service thread) and it seems to run fine (with smaller batch sizes per thread, say 50.) We "push" as well as fully "process" trades (into Positions) at a rate of about 7-10,000 / sec on a 4 server set-up (two cache storage nodes / server; two proxies / server.) Machines are dual socket, quad-core 3GHz Xeons. The clients use CQC and get()'s, similar to your set-up.
    Steve
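    For reference, a hedged sketch of what a POF version of the Quote class might look like, using Coherence's com.tangosol.io.pof interfaces. Only a few of the fields are shown, the property indexes (0-4) are arbitrary choices for the example, and a POF class also has to be registered with a type id in pof-config.xml, which is omitted here.

    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    // Sketch only: Quote fields serialized via POF instead of java.io.Serializable.
    public class PofQuote implements PortableObject {
        public char type;
        public String symbol;
        public char exch;
        public float bid;
        public float ask;

        public PofQuote() { } // POF deserialization needs a no-arg constructor

        public void readExternal(PofReader in) throws IOException {
            type   = in.readChar(0);
            symbol = in.readString(1);
            exch   = in.readChar(2);
            bid    = in.readFloat(3);
            ask    = in.readFloat(4);
        }

        public void writeExternal(PofWriter out) throws IOException {
            out.writeChar(0, type);
            out.writeString(1, symbol);
            out.writeChar(2, exch);
            out.writeFloat(3, bid);
            out.writeFloat(4, ask);
        }
    }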

  • Commit for every 1000 records in  Insert into select statment

    Hi I've the following INSERT into SELECT statement .
    The SELECT statement (which has joins) returns around 6 crores (60 million) rows of data. I need to insert that data into another table.
    Please suggest me the best way to do that .
    I'm using the INSERT into SELECT statement , but i want to use commit statement for every 1000 records .
    How can i achieve this ..
    insert into emp_dept_master
    select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
       from emp e , dept d
      where e.deptno = d.deptno       ------ how to use commit for every 1000 records
    Thanks

    Smile wrote:
    Hi I've the following INSERT into SELECT statement .
    The SELECT statement (which has joins) has around 6 crores of data. I need to insert that data into another table.
    Does the other table already have records, or is it empty?
    If it's empty, then you can drop it and create it as:
    create table your_another_table
    as
    <your select statement that returns 60000000 records>
    Please suggest me the best way to do that .
    I'm using the INSERT into SELECT statement , but i want to use commit statement for every 1000 records .
    That is not the best way. Frequent commits may lead to an ORA-01555 error.
    [url http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923]A nice artical from ASKTOM on this one
    How can i achieve this ..
    insert into emp_dept_master
    select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
    from emp e , dept d
    where e.deptno = d.deptno       ------ how to use commit for every 1000 records .
    It depends on the reason behind you wanting to split your transaction into small chunks. Most of the time there is no good reason for that.
    If you are trying to improve performance by doing so, then you are wrong; it will only degrade the performance.
    To improve the performance you can use the APPEND hint in the insert, you can try PARALLEL DML, and if you are on 11g and above you can use [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH]DBMS_PARALLEL_EXECUTE to break your insert into chunks and run it in parallel.
    So if you can tell us the actual objective, we could offer some help.
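    To make the single-transaction approach concrete, here is a hedged JDBC sketch of the APPEND (direct-path) version with one commit at the end; the table and column names come from the question above, while the connection details are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DirectPathInsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    // One INSERT ... SELECT and one commit: avoids the
                    // commit-inside-loop pattern that invites ORA-01555.
                    int rows = st.executeUpdate(
                            "INSERT /*+ APPEND */ INTO emp_dept_master " +
                            "SELECT e.ename, d.dname, e.empno, e.empno, e.sal " +
                            "  FROM emp e, dept d " +
                            " WHERE e.deptno = d.deptno");
                    conn.commit();
                    System.out.println(rows + " rows inserted");
                }
            }
        }
    }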

  • Trying to find a camcorder that records in 60 frames per second and will upload onto  ipad2 through camera connector

    Trying to find a camcorder that records in 60 frames per second and will upload onto  ipad2 through camera connector

    Old forum discussion, message now gone, but here's the summary
    Matt with Grass Valley Canopus in their tech support department stated that the model 110 will suffice for most hobbyists. If a person has a lot of tapes that were played often, the tape stretches and the magnetic coating diminishes. If your goal is to encode tapes in good shape, buy the 110; if you will be encoding old tapes of poor quality, buy the model 300.
    Both the 110 and 300 are two way devices so you may output back to tape... if you don't need that, look at the model 55
    http://www.grassvalley.com/products/advc55 One Way Only to Computer
    http://www.grassvalley.com/products/advc110 for good tapes, or
    http://www.grassvalley.com/products/advc300 better with OLD tapes
    Or
    ADS Pyro http://www.adstechnologies.com

  • New Updates - Video recording/more frames per second?

    Just wondering if any future updates for the iPhone could include a video recording function? or is this impossible to do without changing hardware in the actual phone?
    Also could it be possible for the camera to improve? mine at present captures a very low number of frames per second and you have to hold it extremely still or else it will take a blurred photo. Anyone think it will be possible for apple to improve the camera functions in an update?
    Regards,

    dherron wrote:
    Just wondering if any future updates for the iPhone could include a video recording function? or is this impossible to do without changing hardware in the actual phone?
    There have been hackers that have created a video capture program for the iPhone. It's just a proof of concept right now, but it does work.

  • When i open EMC on 2010 cas server i get "the system load quota of 1000 requests per 2 seconds has been exceeded"

    When I open the EMC on a 2010 CAS server, I get "the system load quota of 1000 requests per 2 seconds has been exceeded"
    and it won't load.

    When I open the EMC on a 2010 CAS server, I get "the system load quota of 1000 requests per 2 seconds has been exceeded"
    and it won't load.
    Close the EMC and PowerShell and run iisreset.

  • Oracle 10g AQ - Required insert rates of at least 8000 inserts per second

    Hey Guys,
    I am setting up AQ in Oracle 10g and I require insert rates of at least 8000 per second. Is it possible, and if so, how do I implement it?
    Any help would be helpful.
    Thanks
    Stan

    Can you give me some examples?
    I am trying to use AQ messaging: how do I use Perl scripts for the AQ messaging?
    In any case, if you can give me some detailed examples of how you have coded
    to attain insert rates of 8000 messages per second,
    I would appreciate that.
    Thanks
    stan kumar
