Inserting Millions of Records - Please Help!

Hi All,
I have a scenario where I have to query MARA, filter out some articles, then query the WLK1 table (article/site combination) and insert the records into a custom (Z) table. The result may be millions of records.
Can anyone tell me an efficient way to insert such a large number of records? This is urgent. Please help.
Warm Regards,
Sandeep Shenoy

This is sample code I am using in one of my programs. You can follow a similar approach and insert into the custom table on every loop pass.
I am considering 2000 records at a time; you can decide the number and code accordingly.
  if not tb_bkpf[] is initial.
* Fetch the data from BSEG for each block of 2000 BKPF entries to
* reduce the overhead of database extraction.
    clear l_lines .
    describe table tb_bkpf lines l_lines.
    if l_lines >= 1.
      clear: l_start, l_end.
      do.
        l_start = l_end + 1.
        l_end = l_end + 2000.
        if l_end > l_lines.
          l_end = l_lines.
        endif.
        append lines of tb_bkpf from l_start to l_end to tb_bkpf_temp.
* Populate tb_bseg_tmp with the fields in the order of the database table
        select bukrs
               belnr
               gjahr
               buzei
               shkzg
               dmbtr
               hkont
               matnr
               werks
          from bseg
          appending table tb_bseg_tmp
          for all entries in tb_bkpf_temp
          where bukrs = tb_bkpf_temp-bukrs and
                belnr = tb_bkpf_temp-belnr and
                gjahr = tb_bkpf_temp-gjahr and
                hkont in s_hkont.
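* The answer above suggests inserting into the custom (Z) table on every
* loop pass. A minimal sketch, assuming the custom table (ZTARGET is only a
* placeholder name) has the same line type as tb_bseg_tmp:
        insert ztarget from table tb_bseg_tmp accepting duplicate keys.
        commit work.
        refresh tb_bseg_tmp.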
        refresh tb_bkpf_temp.
        if l_end >= l_lines.
          exit.
        endif.
      enddo.
    endif.

Similar Messages

  • Microphone of my iPhone is not working: when I call, nobody can hear me. It works with headphones and with video recording. Please help me solve this problem.

    The microphone of my iPhone is not working: when I call anyone, nobody can hear me. It does work with headphones and with video recording. Please help me solve this problem.

    Try this:
    1. Reset the iPhone by pressing and holding the sleep/wake and home buttons, releasing them when the Apple logo appears on the display. Now test again.
    2. If the issue persists: restore the iPhone without any of your own content, using this description from Apple: http://support.apple.com/kb/HT4137
    3. If the issue persists, your iPhone needs repair.

  • Best way to insert millions of records into the table

    Hi,
    From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
    Also, please guide me on how to implement it in the easiest way with the best performance.
    Thanks,
    Orahar.

    Orahar wrote:
    Its Distributed data. No. of clients and N no. of Transaction data fetching from the database based on the different conditions and insert into another transaction table which is like batch process.
    Sounds contradictory.
    If the source data is already in the database, it is centralised.
    In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
    It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
    For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an insert statement.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
    Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed). Using triggers - well, that may not be the best approach (for example, using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid using triggers - I would rather have that code residing in a PL/SQL API for manipulating data in that table.
    The type of table also plays a role. Make sure that the decision about the table structure, hashed, indexed, partitioned, etc, is the optimal one for the data structure that is to reside in that table.
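    A minimal sketch of the INSERT..SELECT / direct-path approach described above; the table, column names and date filter are illustrative placeholders, not from this thread:
    -- Direct-path insert: APPEND writes blocks above the high-water mark and
    -- bypasses the buffer cache; the statement must be committed before the
    -- table can be queried again in the same session.
    INSERT /*+ APPEND */ INTO target_table (id, created_dt, amount)
    SELECT id, created_dt, amount
    FROM   source_table
    WHERE  created_dt >= DATE '2011-01-01';
    COMMIT;
    Combined with NOLOGGING on the target table this also minimises redo, at the cost of needing a fresh backup of the affected datafiles afterwards.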

  • What is the best approach to insert millions of records?

    Hi,
    What is the best approach to insert millions of records into a table?
    If an error occurs while inserting, how can I know which record failed?
    Thanks & Regards,
    Sunita

    Hello 942793
    There isn't a "best approach" if you do not provide us with the requirements and the environment...
    It depends on what "best" means for you.
    Questions:
    1.) Can you disable the Constraints / unique Indexes on the table?
    2.) Is there a possibility to run parallel queries?
    3.) Do you need to know which rows could not be inserted while the constraints are enabled? Or is that not necessary?
    4.) Do you need it fast, or do you have time to do it?
    What does "best approach" mean for you?
    Regards,
    David
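    On the "which record has failed" part of the question, one hedged option (available from 10gR2 onwards) is DML error logging; the table names below are placeholders:
    -- one-time setup: create the error-logging companion table
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_TABLE');
    END;
    /
    -- rows that violate constraints are diverted to ERR$_TARGET_TABLE
    -- instead of aborting the whole insert
    INSERT INTO target_table
    SELECT * FROM staging_table
    LOG ERRORS INTO err$_target_table ('batch 1') REJECT LIMIT UNLIMITED;
    -- see exactly which rows failed and why
    SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$
    FROM   err$_target_table;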

  • How to restore/view the deleted records - Please help me on this regard

    Hi All,
    Please help me restore/view deleted data.
    I deleted 2 records from a table without a backup and committed. Now I want to restore/view them; can you please guide me?
    Oracle Version: 10g
    OS: Windows XP
    Database in Archive Mode.
    With Regards,
    Jamearon

    Aman.... wrote:
    <snip>
    If all you want is to view the data, you can use Flashback's AS OF query, which lets you go back either by SCN or by timestamp. If you want to restore those rows back in time (and lose the changes that have happened since), you can use the Flashback Table option. An example of this is given below,
    http://www.oracle-base.com/articles/10g/Flashback10g.php
    HTH
    Aman....
    As promised, here's one way to use flashback to restore the one deleted row without having to impact the rest of the table with a general FLASHBACK TABLE.
    [oracle@vmlnx01 ~]$ sqlplus /nolog
    SQL*Plus: Release 10.2.0.4.0 - Production on Sun Jan 23 15:18:11 2011
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    SQL> @doit
    SQL> col col_ts for a15
    SQL> conn scott/tiger
    Connected.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    5 rows selected.
    SQL> --
    SQL> alter session set nls_timestamp_format='hh24:mi:ss.FF';
    Session altered.
    First we will create a test table and populate it. Pay close attention to the row identified by col_id=2.
    SQL> drop table flashtest;
    Table dropped.
    SQL> create table flashtest
      2   (col_id number(1),
      3    col_ts timestamp,
      4    col_txt varchar2(10)
      5   );
    Table created.
    SQL> --
    SQL> insert into flashtest
      2    values (1, systimestamp, 'r1 v1');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_lock.sleep(5);
    PL/SQL procedure successfully completed.
    SQL> --
    SQL> insert into flashtest
      2    values (2, systimestamp, 'r2 v1');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_lock.sleep(5);
    PL/SQL procedure successfully completed.
    SQL> --
    SQL> insert into flashtest
      2    values (3, systimestamp, 'r3 v1');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_lock.sleep(5);
    PL/SQL procedure successfully completed.
    SQL> --
    SQL> select * from flashtest;
        COL_ID COL_TS          COL_TXT
             1 15:18:14.896167 r1 v1
             2 15:18:21.841682 r2 v1
             3 15:18:28.772038 r3 v1
    3 rows selected.
    SQL> --
    SQL> update flashtest
      2   set col_ts = systimestamp,
      3       col_txt = 'r2 v2'
      4   where col_id = 2;
    1 row updated.
    SQL> commit;
    Commit complete.
    SQL> select * from flashtest;
        COL_ID COL_TS          COL_TXT
             1 15:18:14.896167 r1 v1
             2 15:18:35.929847 r2 v2
             3 15:18:28.772038 r3 v1
    3 rows selected.
    SQL> exec dbms_lock.sleep(5);
    PL/SQL procedure successfully completed.
    So at this point we can see that we have 3 rows, and row 2 has been modified from its original values.
    Now we will delete that row.
    SQL> --
    SQL> delete from flashtest
      2  where col_id=2;
    1 row deleted.
    SQL> commit;
    Commit complete.
    SQL> --
    SQL> select * from flashtest
      2  order by col_id;
        COL_ID COL_TS          COL_TXT
             1 15:18:14.896167 r1 v1
             3 15:18:28.772038 r3 v1
    2 rows selected.
    Now let's do a SELECT...VERSIONS and see what flashback knows about the row.
    SQL> --
    SQL> select col_id,
      2         col_ts,
      3         col_txt,
      4         nvl(versions_startscn,0) START_SCN,
      5         versions_endscn END_SCN,
      6         versions_xid xid,
      7         versions_operation operation
      8  from flashtest
      9  versions between scn minvalue and maxvalue
    10  where col_id=2
    11  order by col_id, start_scn;
        COL_ID COL_TS          COL_TXT     START_SCN    END_SCN XID              O
             2 15:18:21.841682 r2 v1               0    2802287
             2 15:18:35.929847 r2 v2         2802287    2802292 0200260082060000 U
             2 15:18:35.929847 r2 v2         2802292            0A002300A4060000 D
    3 rows selected.
    SQL> --
    And having seen the above, we can use a more selective form to provide the values for an INSERT statement to put the row back.
    SQL> insert into flashtest
      2     select col_id,
      3            col_ts,
      4            col_txt
      5     from flashtest
      6     versions between scn minvalue and maxvalue
      7     where col_id=2
      8       and versions_operation = 'D'
      9  ;
    1 row created.
    SQL> --
    SQL> select * from flashtest
      2  order by col_id;
        COL_ID COL_TS          COL_TXT
             1 15:18:14.896167 r1 v1
             2 15:18:35.929847 r2 v2
             3 15:18:28.772038 r3 v1
    3 rows selected.
    SQL>
    One could also query FLASHBACK_TRANSACTION_QUERY to get the actual DML needed to UNDO an operation on a single table.
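    A hedged sketch of that last suggestion, using the delete transaction's XID from the VERSIONS query above (the view requires the SELECT ANY TRANSACTION privilege, and UNDO_SQL is only fully usable when supplemental logging is enabled):
    SELECT undo_sql
    FROM   flashback_transaction_query
    WHERE  xid = HEXTORAW('0A002300A4060000')
    AND    table_name = 'FLASHTEST';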

  • Need help / advice: managing millions of records daily; please help me :)

    Hi all,
    I've only 2 years of experience in Oracle DBA. I need advice from Experts:)
    To begin, the company I work for has decided to save about 40 million records daily in our Oracle database, in our only (user) table. These records should be imported daily from CSV or XML feeds into that one table.
    This is a project that needs:
    - Study the performance
    - Study What is required in terms of hardware
    As a leader in the market, Oracle is the only DBMS that could support this size of data, but what is Oracle's limit in this case? Can Oracle support and perfectly manage 40 million records daily, for many years? We need all the data in this table; we cannot assume that after some period the history is no longer needed. We must keep all data, without purging the history, for many years. You can imagine: 40 million records daily, for many years!
    Then we need to consolidate from this table different views (or maybe materalized view) for each department and business inside the company, one other project that need study!
    My questions are (using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    1- Can Oracle support and perfectly manage 40 million records daily, for many years?
    2- Studying the performance: which solutions and techniques could I use to improve the performance of
    - daily loading of 40 million records from CSV or XML files?
    - daily consolidating/managing the different views / materialized views built from this big table?
    3- What is required in terms of hardware and features/technologies (maybe clusters...)?
    I hope the experts can help and advise me! Thank you very much for your attention :)

    1- Can Oracle support and perfectly manage 40 million records daily, for many years?
    Yes.
    2- Studying the performance: which solutions and techniques could I use to improve the performance?
    Send me your email, and I can send you a performance tuning methodology PDF. You can see my email on my profile.
    - Daily loading of 40 million records from CSV or XML files?
    Direct load.
    - Daily consolidating/managing different views / materialized views from this big table?
    You can use table partitions, one partition for each day.
    Regards,
    Francisco Munoz Alvarez
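    A hedged sketch of Francisco's "one partition per day" suggestion (table and column names are placeholders; on 10g each day's partition has to be pre-created, since interval partitioning only arrives in 11g):
    CREATE TABLE daily_feed (
      load_date DATE NOT NULL,
      record_id NUMBER,
      payload   VARCHAR2(4000)
    )
    PARTITION BY RANGE (load_date) (
      PARTITION p20080101 VALUES LESS THAN (DATE '2008-01-02'),
      PARTITION p20080102 VALUES LESS THAN (DATE '2008-01-03')
    );
    -- before each daily load, add the next day's partition
    ALTER TABLE daily_feed ADD PARTITION p20080103 VALUES LESS THAN (DATE '2008-01-04');
    Each day's 40 million rows can then be direct-path loaded into their own partition (SQL*Loader with DIRECT=TRUE, or an external table plus INSERT /*+ APPEND */), leaving older partitions untouched.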

  • HT4085 Some videos taken with my iPhone's camera have no sound; this problem started today and affects all videos I record. Please help.

    Some videos taken with my iPhone's camera have no sound. This problem started today and affects all videos I have recorded since. Please help.

    It also takes time for the photos to be transferred out of your iPhone and into Photo stream on Apple's iCloud servers.
    Note that this can't happen if your iPhone is not connected to a WiFi network:
    When you enable My Photo Stream on your devices, all new photos you take or import to those devices will be automatically added to your photo stream.
    iOS devices: New photos you take are automatically uploaded to your photo stream when you leave the Camera app and are connected to Wi-Fi. My Photo Stream does not push photos over cellular connections.
    Macs: Any new photos you import to iPhoto or Aperture begin uploading automatically when you have a Wi-Fi or Ethernet connection. Or you can change your iPhoto or Aperture preferences so that only photos you manually add to My Photo Stream are uploaded.
    PC with iCloud Control Panel 2.0 or later: Open a Windows Explorer window and under Favorites select iCloud Photos if you are using iCloud Control Panel 3.0 (or Photo Stream if you are using 2.0 to 2.1.2). Open My Photo Stream. Click the "Add photos" button. Select the photos to import to My Photo Stream, then click Open.
    from here: http://support.apple.com/kb/ht4106

  • DELETE MILLIONS OF RECORDS - PLEASE HELP ME

    Hi all;
    On my database I have an unnecessary table; it contains 38 million records. I want to purge this table, but I am afraid of destroying my database.
    This is the structure of my table:
    -- Create table
    create table MANAGEMENT.MC_COMPUTERS
    GUID VARCHAR2(24),
    GROUPID NUMBER(5),
    QUOVACOUNTRY VARCHAR2(30),
    CONNECTIONTYPE VARCHAR2(255),
    CREA_MONTH VARCHAR2(10) not null,
    ANTIVIRUS VARCHAR2(100),
    WINVERSION VARCHAR2(100),
    ACTF_1_DAY NUMBER,
    ACTF_7_DAY NUMBER,
    ACTF_14_DAY NUMBER,
    ACTF_21_DAY NUMBER,
    ACTF_28_DAY NUMBER,
    INSTALL NUMBER,
    DATE_CONSO VARCHAR2(24)
    partition by (CREA_MONTH)
    partition MC200701
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200702
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200703
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200704
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200705
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200706
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200707
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200708
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200709
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    partition MC200710
    tablespace DATA
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    -- Create/Recreate indexes
    create index MANAGEMENT.INX_MC_MONTH on MANAGEMENT.MC_COMPUTERS (CREA_MONTH)
    tablespace INDX
    pctfree 10
    initrans 2
    maxtrans 255
    storage
    initial 64K
    minextents 1
    maxextents unlimited
    Should I use TRUNCATE TABLE ...; or DELETE FROM ... WHERE CREA_MONTH = ''; or maybe another SQL statement?

    Well, you may not need to worry about destroying your database, but you might find that truncating this table will destroy your application.
    A TRUNCATE statement is regular Oracle DDL; running it will certainly not crash your database. However, you need to talk to the application users to figure out whether this table is really unnecessary.
    Before you do the truncate, better make a backup.
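    Since MC_COMPUTERS is partitioned by CREA_MONTH, a hedged sketch of the options being discussed (the partition names come from the posted DDL; the CREA_MONTH literal format is a guess):
    -- remove all 38 million rows and deallocate the segments (DDL, cannot be rolled back)
    TRUNCATE TABLE management.mc_computers;
    -- or clear one month at a time
    ALTER TABLE management.mc_computers TRUNCATE PARTITION mc200701;
    -- a DELETE by month also works, but generates undo/redo for every row
    DELETE FROM management.mc_computers WHERE crea_month = '200701';
    COMMIT;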

  • CTAS, INSERT millions of records

    Hi All,
    I have a table with approximately 20 million records, and I will delete about 10 million of them. After some research, I decided to perform CTAS + TRUNCATE rather than DELETE.
    I will be executing the following statements; TEMP_TABLE will retain the records I wish to keep.
    1. CREATE TABLE TEMP_TABLE AS ( SELECT * FROM ACTUAL_TABLE WHERE DATE BETWEEN X AND Y );
    2. TRUNCATE TABLE ACTUAL_TABLE;
    3. INSERT /*+ append */ INTO ACTUAL_TABLE (SELECT * FROM TEMP_TABLE);
    4. DROP TABLE TEMP_TABLE;
    My production database is running on Oracle 10g. I am worried the INSERT operation might fail due to the large number of records to be transferred (est. approx. 10 million records).
    I have read that adding the APPEND hint might help during the insert, and that the UNDO tablespace might not be sufficient for huge inserts.
    How can I improve the process and avoid possible failures?
    Please kindly advise.
    Thank you.

    What do you mean by "your INSERT will fail"? Is your UNDO tablespace not set to autoextend?
    Another option: instead of doing the insert, you can
    1. Drop the ACTUAL_TABLE
    2. Rename the TEMP_TABLE to ACTUAL_TABLE
    3. Re-create the indexes, constraints and triggers on the table.
    Edited by: Karthick_Arp on Apr 29, 2009 10:15 PM
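    A hedged sketch of Karthick's drop-and-rename alternative (index and constraint names are placeholders; grants, triggers and statistics also need to be recreated or re-gathered):
    CREATE TABLE temp_table AS
      SELECT * FROM actual_table
      WHERE  some_date BETWEEN :x AND :y;
    DROP TABLE actual_table;
    ALTER TABLE temp_table RENAME TO actual_table;
    -- re-create what CTAS does not carry over
    ALTER TABLE actual_table ADD CONSTRAINT actual_table_pk PRIMARY KEY (id);
    CREATE INDEX actual_table_ix1 ON actual_table (some_date);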

  • Adobe audition cc crashes after a few times of recording(please help)

    Okay, so Adobe Audition is incredibly useful, or should I say WAS... It was great until one day it just crashed after a few recording sessions. I thought maybe something had gone wrong and I would just redo it. But no: for months I have been trying, uninstalling and reinstalling, updating, closing all other running programs, but nothing has worked. Audition is really important to my music career and lately I haven't been able to do anything because it doesn't work. I'm begging, can someone PLEASE try to help me, I'm desperate.
    heres a pic of what happens when it crashes:

    I notice from the picture that you have four files in the middle of processing. Do you remember what they were doing when the crash happened? Were they saving, or did you have some effects processing going on?
    Also either Audition or Windows may have saved a crash report somewhere which could help to diagnose the problem. I am not sure where they may have been saved but others might remember where to look for them.

  • SOS problem using one Entity bean for editing records. please help me

    Hello
    I have a big problem using an entity bean 2.1; I have been working on this problem for several days and I cannot solve it.
    I am using this EJB 2.1 to add new records to an Oracle database and to edit records.
    My big problem is with editing records.
    I have an object that mirrors one of my database tables (PersonaObj.java).
         //personaObj.java
         public class PersonaObj implements Serializable{
          private Integer id_persona; // this is the database table's auto-numeric key
              private java.lang.String nombre;
          now the set/get methods.
          One field of this object (bean) is the primary key of the table, and this field is part of the PK of the entity bean.
          From a session bean I call the entity bean, passing the object.
          In the entity bean's interface I have these methods:
              public getPersona getT56aaat04();
         public void setPersona(PersonaObj obj);
          In the entity bean class I have:
         public PersonaEntityPK ejbCreate(PersonaObj obj) throws CreateException {
              this.setT56aaat04(obj);
              public abstract Integer getId_persona();
         public abstract void setId_persona(Integer Id_persona);
         public abstract java.lang.String getNombre();
         public abstract void setNombre(java.lang.String Nombre);
         public PersonaObj getPersona() {
              PersonaObj obj = new PersonaObj();
              obj.setId_persona(getId_persona());
              obj.setNombre(getNombre());
              return obj;
         public void setPersona(PersonaObj obj) {
              setId_persona(obj.getId_persona());
              setNombre(obj.getNombre());
    But I have a big problem.
    Adding a new record to the database works well, but if I want to edit a record this error appears:
    => Error <=
    java.rmi.RemoteException: EJB Exception: ; nested exception is: javax.ejb.EJBException: EJB Exception: : java.lang.IllegalStateException:[EJB:010144]The setXXX method for a primary key field may only be called during ejbCreate.
    at PersonasEJB_a43o8n__WebLogic_CMP_RDBMS.setId_persona(PersonasEntityEJB_a43o8n__WebLogic_CMP_RDBMS.java:328)
    at PersonasEntityBean.setPersona(PersonasEntityBean.java:114)
    at PersonasEntityEJB_a43o8n_ELOImpl.setPersona(PersonasEntityEJB_a43o8n_ELOImpl.java:45)
    at PersonasSessionBean.editarPersona(PersonasSessionBean.java:849)
    at PersonasSessionBean_bszo9t_EOImpl.editarPersona(PersonasSessionBean_bszo9t_EOImpl.java:208)
    at PersonasSessionBean_bszo9t_EOImpl_WLSkel.invoke(Unknown Source)
    In the session bean I do this (the session bean receives a PersonaObj obj):
    T56aContactosEntityLocal personaLocal ;
                   try {
                        personaLocal = personaHome.findByPrimaryKey(new T56aContactosEntityPK(obj.getId_a04()));
                        T56aaat04Obj objTmp = new T56aaat04Obj();
                        objTmp.setAp1_a04(obj.getAp1_a04());
                        objTmp.setEstado_a04("false");
                        personaLocal.setT56aaat04(obj);
    Please can you help me to solve this problem?

    Hello Werner,
    The mappings seem to be alright at a first glance.
    Have you tried out un- and redeploying your application? Sometimes values appear to be cached in the server. So if you have deployed the application before you entered DB_BANK as alias this might solve the problem...
    BR
    Daniel

  • How to insert millions of records

    Dear All,
    I need your expert opinion.
    To insert 1 million records (this will be called in 15 parallel jobs), can we use
    INSERT INTO table1 (col1, col2, col3) SELECT col1, col2, col3 FROM table2;
    Concern: buffer memory, or any error such as running out of rollback segment.
    We have 10-15 columns of almost 15 bytes each, and it is a production environment.
    And what if the records number nearer 15 million?
    Edited by: ma**** on 06-Jun-2012 05:34

    845712 wrote:
    Hi,
    We can use bulk collect and forall; I think it could be easier.
    Thanks,
    Brij
    I'm sorry, but this is not correct:
    Check what Tom Kyte recommends here: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:760210800346068768
    the mantra:
    o You should do it in a single SQL statement if at all possible.
    o If you cannot do it in a single SQL Statement, then do it in PL/SQL.
    o If you cannot do it in PL/SQL, try a Java Stored Procedure.
    o If you cannot do it in Java, do it in a C external procedure.
    o If you cannot do it in a C external routine, you might want to seriously
    think about why it is you need to do it…
    So, go with the first option, a single SQL statement... with /*+ APPEND */ if you understand what that does and whether it applies....
    Regards.
    Al
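    A hedged sketch of the single SQL statement Al recommends, with parallel DML since the question mentions 15 parallel jobs (the hints and degree of parallelism are illustrative only):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t, 8) */ INTO table1 t (col1, col2, col3)
    SELECT /*+ PARALLEL(s, 8) */ col1, col2, col3
    FROM   table2 s;
    COMMIT;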

  • Oracle:JDBC Call returns no results, SQL*Plus returns 1 record, Please help

    Any help would be greatly appreciated.
    Running 9.2.0.5.0, and using latest 9.2 JDBC 1.4_g drivers in thin mode.
    Executing the following query from SQL*Plus returns one row; from JDBC using a PreparedStatement, I get no results. Here's the query, table def, record, etc.:
    Query:
    SELECT
    ID_WEB_FRM,ID_WEB_SIT,CDE_LVL_1_FUNC,
    CDE_LVL_2_FUNC,NUM_WEB_FUNC_PG,NUM_WEB_PG_ID
    FROM
    WEB_FRM
    WHERE
    ID_WEB_FRM = ' '
    OR
    (ID_WEB_SIT = 'test' AND CDE_LVL_1_FUNC = ' '
    AND CDE_LVL_2_FUNC = 'u2T' AND NUM_WEB_FUNC_PG = 1
    AND NUM_WEB_PG_ID = 0)
    Record returned from SQL*Plus:
    ID_WEB_FRM ID_WEB_SIT CDE CDE NUM_WEB_FUNC_PG NUM_WEB_PG_ID
    NfRRmc5XZu test u2T 1 0
    In both the data returned and the query, the fields that look blank actually contain a single space (hard to see in the message here).
    Java code:
    int count = 1;
    findDBNameStatement.setString(count++," ");
    findDBNameStatement.setString(count++,form.getSiteID());
    findDBNameStatement.setString(count++," ");
    findDBNameStatement.setString(count++, form.getFunctionID());
    findDBNameStatement.setInt(count++,form.getPageNumber());
    findDBNameStatement.setInt(count++,form.getSectionNumber());
    ResultSet resultSet = findDBNameStatement.executeQuery();
    ResultSetMetaData metaData = resultSet.getMetaData();
    resultSet.next() returns false
    DB table:
    CREATE TABLE web_frm (
    ID_WEB_FRM varchar2(10) NOT NULL,
    ID_WEB_SIT varchar2(20) NOT NULL,
    NAM_WEB_FRM varchar2(40),
    TXT_EMAIL_SUBJ varchar2(50),
    CDE_LVL_1_FUNC char(3),
    CDE_LVL_2_FUNC char(3) NOT NULL,
    NUM_WEB_FUNC_PG int NOT NULL,
    NUM_WEB_PG_ID smallint NOT NULL,
    DTE_WEB_FRM_EFF date NOT NULL,
    DTE_WEB_FRM_TRM date,
    CDE_VLDT_RUL char(3),
    DTE_LAST_EXPRT date,
    TXT_CNFRMN_MSG varchar2(4000),
    IND_UPDT_ALWD char(1) NOT NULL,
    TXT_RECAP_HDR varchar2(4000),
    TXT_RECAP_FTR varchar2(4000),
    CDE_WEB_OBJ char(3),
    NUM_MAX_FRM_WIDTH number(4,0),
    IND_RECAP_PG char(1) NOT NULL,
    IND_CNFRM_PG char(1) NOT NULL,
    IND_DSPL_CNFRM_NUM char(1) NOT NULL,
    CNT_SUBM_MAX int,
    TXT_CHCE_ADD_MSG varchar2(255),
    TXT_CHCE_MOD_MSG varchar2(255),
    TXT_WEB_HDR varchar2(4000),
    TXT_WEB_FTR varchar2(4000),
    TXT_WAIT_LIST_MSG varchar2(255),
    FORMOBJECTHEIGHT int NOT NULL,
    FORMOBJECTWIDTH int NOT NULL
    ALTER TABLE web_frm ADD ( CONSTRAINT PK_web_frm PRIMARY KEY (ID_WEB_FRM));
    ALTER TABLE web_frm ADD ( CONSTRAINT UK_web_frm UNIQUE (ID_WEB_SIT,CDE_LVL_1_FUNC,CDE_LVL_2_FUNC,NUM_WEB_FUNC_PG,NUM_WEB_PG_ID)) ;
    Thanks,
    Matt

    That's not quite right. From the javadocs:
    next
    public boolean next()
    throws SQLException
    Moves the cursor down one row from its current position. A ResultSet cursor is initially positioned before the first row; the first call to the method next makes the first row the current row; the second call makes the second row the current row, and so on.
    If an input stream is open for the current row, a call to the method next will implicitly close it. A ResultSet object's warning chain is cleared when a new row is read.
    Returns:
    true if the new current row is valid; false if there are no more rows
    Throws:
    SQLException - if a database access error occurs

  • JDBC Call returns no results, SQL*Plus returns 1 record, Please help!

    Any help would be greatly appreciated.
    Running 9.2.0.5.0, and using latest 9.2 JDBC 1.4_g drivers in thin mode.
    Executing the following query from SQL*Plus returns one row; from JDBC using a PreparedStatement, I get no results. Here's the query, table def, record, etc.:
    Query:
    SELECT
    ID_WEB_FRM,ID_WEB_SIT,CDE_LVL_1_FUNC,
    CDE_LVL_2_FUNC,NUM_WEB_FUNC_PG,NUM_WEB_PG_ID
    FROM
    WEB_FRM
    WHERE
    ID_WEB_FRM = ' '
    OR
    (ID_WEB_SIT = 'test' AND CDE_LVL_1_FUNC = ' '
    AND CDE_LVL_2_FUNC = 'u2T' AND NUM_WEB_FUNC_PG = 1
    AND NUM_WEB_PG_ID = 0)
    Record returned from SQL*Plus:
    ID_WEB_FRM ID_WEB_SIT CDE CDE NUM_WEB_FUNC_PG NUM_WEB_PG_ID
    NfRRmc5XZu test u2T 1 0
    In both the data returned and the query, the fields that look blank actually contain a single space (hard to see in the message here).
    Java code:
    int count = 1;
    findDBNameStatement.setString(count++," ");
    findDBNameStatement.setString(count++,form.getSiteID());
    findDBNameStatement.setString(count++," ");
    findDBNameStatement.setString(count++, form.getFunctionID());
    findDBNameStatement.setInt(count++,form.getPageNumber());
    findDBNameStatement.setInt(count++,form.getSectionNumber());
    ResultSet resultSet = findDBNameStatement.executeQuery();
    ResultSetMetaData metaData = resultSet.getMetaData();
    resultSet.next() returns false
    DB table:
    CREATE TABLE web_frm (
    ID_WEB_FRM varchar2(10) NOT NULL,
    ID_WEB_SIT varchar2(20) NOT NULL,
    NAM_WEB_FRM varchar2(40),
    TXT_EMAIL_SUBJ varchar2(50),
    CDE_LVL_1_FUNC char(3),
    CDE_LVL_2_FUNC char(3) NOT NULL,
    NUM_WEB_FUNC_PG int NOT NULL,
    NUM_WEB_PG_ID smallint NOT NULL,
    DTE_WEB_FRM_EFF date NOT NULL,
    DTE_WEB_FRM_TRM date,
    CDE_VLDT_RUL char(3),
    DTE_LAST_EXPRT date,
    TXT_CNFRMN_MSG varchar2(4000),
    IND_UPDT_ALWD char(1) NOT NULL,
    TXT_RECAP_HDR varchar2(4000),
    TXT_RECAP_FTR varchar2(4000),
    CDE_WEB_OBJ char(3),
    NUM_MAX_FRM_WIDTH number(4,0),
    IND_RECAP_PG char(1) NOT NULL,
    IND_CNFRM_PG char(1) NOT NULL,
    IND_DSPL_CNFRM_NUM char(1) NOT NULL,
    CNT_SUBM_MAX int,
    TXT_CHCE_ADD_MSG varchar2(255),
    TXT_CHCE_MOD_MSG varchar2(255),
    TXT_WEB_HDR varchar2(4000),
    TXT_WEB_FTR varchar2(4000),
    TXT_WAIT_LIST_MSG varchar2(255),
    FORMOBJECTHEIGHT int NOT NULL,
    FORMOBJECTWIDTH int NOT NULL
    ALTER TABLE web_frm ADD ( CONSTRAINT PK_web_frm PRIMARY KEY (ID_WEB_FRM));
    ALTER TABLE web_frm ADD ( CONSTRAINT UK_web_frm UNIQUE (ID_WEB_SIT,CDE_LVL_1_FUNC,CDE_LVL_2_FUNC,NUM_WEB_FUNC_PG,NUM_WEB_PG_ID)) ;
    Thanks,
    Matt

    I have verified the parameters, and if I run this as a Statement instead of a PreparedStatement, the query works fine. After some more troubleshooting, I narrowed the problem down, but I am not sure of the fix.
    If I eliminate the extra parameters and simplify things to:
    SELECT ID_WEB_FRM FROM WEB_FRM WHERE ID_WEB_SIT = 'test' AND CDE_LVL_2_FUNC = 'u2T' AND NUM_WEB_FUNC_PG = 1 AND NUM_WEB_PG_ID = 0
    the code works. But if I add the additional WHERE clause CDE_LVL_1_FUNC = ' ' (a single space), it returns no data (the record in the DB has a single space in this column). Query that does not work:
    SELECT ID_WEB_FRM FROM WEB_FRM WHERE ID_WEB_SIT = 'test' AND CDE_LVL_1_FUNC = ' ' AND CDE_LVL_2_FUNC = 'u2T' AND NUM_WEB_FUNC_PG = 1 AND NUM_WEB_PG_ID = 0
    It appears the JDBC driver is trimming the parameter when it is a single space set through PreparedStatement.setString(2, " ").
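    One hedged workaround on the SQL side: CDE_LVL_1_FUNC is CHAR(3), so the stored value is blank-padded while the bound VARCHAR parameter is not; casting the bind back to CHAR restores blank-padded comparison semantics (the Oracle driver's OraclePreparedStatement.setFixedCHAR is the driver-side equivalent):
    SELECT ID_WEB_FRM
    FROM   WEB_FRM
    WHERE  ID_WEB_SIT = ?
    AND    CDE_LVL_1_FUNC = CAST(? AS CHAR(3))
    AND    CDE_LVL_2_FUNC = 'u2T'
    AND    NUM_WEB_FUNC_PG = 1
    AND    NUM_WEB_PG_ID = 0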

  • I lost all the music I bought on iTunes. How do I get it back? Does iTunes keep a record? Please help me.

    Can someone help me find the music I bought on iTunes? I lost it all; I deleted it all. Does iTunes have a backup somewhere? Do they keep records of purchased music?

    Yeah, but if I bought a CD I wouldn't have to pay 30 cents extra to have the complete use of any tune on the CD would I?  ten songs on a CD means you pay an extra $3.00 for the CD that you didn't agree to but its held for ransom.
    What? No idea what this has to do with anything (or even what it means)...
    So if you wanna be FAIR itunes isn't fair.
    Don't purchase from iTunes if you think it is unfair. There are many other places to purchase music.
    kcsummerkc wrote:
    And if you don't have an IPod but some other equipment?  Then what?
    You make a backup even if you have no other equipment or an iPod.
