SQL to extract large data

I am using Sybase to extract the contents of a table that contains 20,000 rows. The front-end application needs to be able to view all the data, but I want to avoid the overhead of extracting everything when only part of the data is viewed at a time.
I am currently using the following to create a scrollable ResultSet:
ResultSet rst = null;
try {
    if (conn == null) connect();
    Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                          ResultSet.CONCUR_READ_ONLY);
    // Execute the query
    rst = stmt.executeQuery(strRequest);
} catch (SQLException e) {
    e.printStackTrace();
}
Will this be enough to scroll through the data, or should I create a stored procedure that uses cursors to return 1,000 rows at a time?

It should be enough. The driver will not read all data at once.
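On the JDBC side, Statement.setFetchSize() can also be used to hint the driver to fetch rows in blocks rather than all at once. And if you do later find the driver buffering too much, the cursor/batching route you mention remains an option; a minimal sketch in Sybase T-SQL, assuming a table my_table with a unique ascending key id (all names here are illustrative):
set rowcount 1000     -- return at most 1000 rows per batch
select id, col1, col2
from my_table
where id > @last_id   -- @last_id: assumed variable holding the highest key from the previous batch
order by id
set rowcount 0        -- reset the limit afterwards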
kaj

Similar Messages

  • Using XPath with SQL to extract XML data

    Given data such as this:
    <?xml version="1.0"?>
    <ExtendedData>
       <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
    </ExtendedData>
    I normally use the extractValue function, as shown below, to extract a value; for example, to get the value of the last parameter in the data above:
    select extractValue(extended_data,'/ExtendedData/Parameter[@name="RealProv"]/BooleanValue') "my_column_alias" from table
    Any ideas on how I may return the value of the xsi:nil attribute from this node:
    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
    I'd like to extract the true in xsi:nil="true"...
    Thanks,
    Edited by: HouseofHunger on May 15, 2012 2:13 PM

    extractValue() has a third parameter we can use to declare namespace mappings:
    SQL> with sample_data as (
      2    select xmltype('<?xml version="1.0"?>
      3  <ExtendedData>
      4    <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      5    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      6    <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      7    <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
      8  </ExtendedData>') doc
      9    from dual
    10  )
    11  select extractvalue(
    12           doc
    13         , '/ExtendedData/Parameter[@name="BARRING_PASSWORD"]/@xsi:nil'
    14         , 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
    15         )
    16  from sample_data
    17  ;
    EXTRACTVALUE(DOC,'/EXTENDEDDAT
    true
    If you're on 11.2.0.2 or later, extractValue() is deprecated.
    One should use XMLCast/XMLQuery instead:
    SQL> with sample_data as (
      2    select xmltype('<?xml version="1.0"?>
      3  <ExtendedData>
      4    <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      5    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      6    <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      7    <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
      8  </ExtendedData>') doc
      9    from dual
    10  )
    11  select xmlcast(
    12           xmlquery('/ExtendedData/Parameter[@name="BARRING_PASSWORD"]/@xsi:nil'
    13            passing doc
    14            returning content
    15           ) as varchar2(5)
    16         )
    17  from sample_data
    18  ;
    XMLCAST(XMLQUERY('/EXTENDEDDAT
    true
    Note: the xsi prefix is predefined when using Oracle XQuery, so in this case we don't have to declare it explicitly.
    Edited by: odie_63 on May 15, 2012 15:23

  • How to extract incremental data from SQL server to oracle tables in ODI

    Hi All,
    In my ODI installation, my source is in SQL Server and my target is in Oracle.
    I need to create an interface mapping that extracts incremental data from SQL Server to Oracle.
    There is a datetime (with timestamp) field in SQL Server, and I need to pull the incremental data based on that datetime.
    Example: tablename.DateTime > (select '1-jan-11' from dual). I am using this query, but it is not working; the error is: Invalid object name "dual".
    We are not going to use incremental IKM and LKM.
    Request you to please provide any suggestions.
    Thanks,
    Lony

    You can do that via a variable.
    In the interface mapping, create a filter on Tablename.DateTime and put the condition like this:
    Tablename.DateTime BETWEEN #VAR
    In the variable, use this query on the refreshing tab, against the Oracle schema:
    SELECT max(start_time)||' AND '||max(END_TIME)+1 from audit_table where ETL_JOB_CODE = '20'
    In the package, call the above variable in refresh mode and then the interface.
    This way the query supplies both dates of the BETWEEN condition to the interface, so that SQL Server fetches the data between those two bounds.
    Note: you might need to tweak the date format so that SQL Server can understand it.
    Hope this helps.
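    (To make the mechanics concrete: if the refresh query returned, say, 01-JAN-11 AND 02-JAN-11, the filter would expand to something like the line below; the dates are invented for illustration, and, as noted, the quoting and format may need tweaking for SQL Server.)
    Tablename.DateTime BETWEEN '01-JAN-11' AND '02-JAN-11'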

  • Query taking a long time to extract the data (more than 24 hours)

    Hi,
    This query has been extracting data for more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please advise.
    SQL> explain plan for
      select a.account_id, round(a.account_balance,2) account_balance,
             nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
             to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
             to_char(nvl(i.payment_due_date,
                         to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
             ah.current_balance - ah.previous_balance amount,
             decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
      where a.account_id = ah.account_id
        and a.account_type_id = 1000002
        and round(a.account_balance,2) > 0
        and (ah.invoice_id is not null or ah.adjustment_id is not null)
        and ah.current_balance > ah.previous_balance
        and ah.invoice_id = i.invoice_id(+)
        and a.account_balance > 0
      order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. From INVOICE last, because it seems to be the least restricted, it is the biggest, and it has the outer join condition, so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus, each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • Best Way To Extract Large Text Files From Query

    Using CF7/SQL Server 2000.  I need to extract about 50,000 records from another server and save them to a .CSV file.  I'm not sure what the best method is for this.  I've got the query written, and it will pull all the records.  But I don't want to display them on the screen, just store them in a .CSV file somewhere on the server.
    I've looked at CFFILE and CFDIRECTORY, and those don't seem to be the best tools for this.  What about using CFCONTENT with CFHEADER?  I use that to output to Excel files:
    <CFCONTENT type="application/vnd.ms-excel">
    <CFHEADER NAME="Content-Disposition" VALUE="attachment; filename=c:\mydir\myfile.xls">
    But when trying to use it, it doesn't save the file on my drive.  Would appreciate advice on the best tools/tags for extracting large volumes of data from other servers, and saving to a local file.
    Once working, ultimately, I'd like to set it up, so the .CFM job runs daily, automatically, extracts the data and FTP's it to another server.  I've seen the CFFTP tag, but until I can get a file saved on my drive, or another server's drive, there's nothing to FTP.  Thanks for any help/advice.
    Gary

    Thanks, but I can't find any good examples of using CFFILE (just the syntax of the tag, which doesn't make a lot of sense).  I've been writing CF code for 8 years, and feel I can make it sing and dance.  But I didn't understand how CFFILE worked from reading the online documentation and my Ben Forta books.  There are no good examples.  Do you still write the query with CFQUERY, then "substitute" CFFILE for CFOUTPUT?  Or enclose CFFILE inside CFOUTPUT?  I can't find any examples that explain this.
    I just need to see a basic example of CFFILE "in action."  Starting with a simple query, and a simple output that writes the query results to a file.
    Lastly, once you have the data written to a file, using CFFILE, is that when you can use CFFTP, to FTP that file, (or any file on the hard drive for that matter) to another server?
    Thanks for help, and any simple examples, just to get me started.  Thanks.
    Gary

  • Use LINQ to extract the data from a file...

    Hi,
    I have created a subprocedure CreateEventList which populates an EventsComboBox with the current day's events (if any).
    I need to store the events in a generic List communityEvents, which is a collection of CommunityEvent objects. This List needs to be created and assigned to the instance variable communityEvents.
    This method should call the helper method ExtractData, which will use LINQ to extract the data from my file. The specified day is the date selected on the calendar control. This method will be called from CreateEventList.
    This method should clear all data from List communityEvents.
    A LINQ query that creates CommunityEvent objects should select the events scheduled for the selected day from the file. The selected events should be added to List communityEvents.
    See code below.
    Thanks,
    Thanks,
    public class CommunityEvent
    {
        private int day;
        public int Day
        {
            get { return day; }
            set { day = value; }
        }

        private string time;
        public string Time
        {
            get { return time; }
            set { time = value; }
        }

        private decimal price;
        public decimal Price
        {
            get { return price; }
            set { price = value; }
        }

        private string name;
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        private string description;
        public string Description
        {
            get { return description; }
            set { description = value; }
        }
    }

    private void eventComboBox_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (eventComboBox.SelectedIndex == 0)
            descriptionTextBox.Text = "2.30PM. Price 12.50. Take part in creating various types of Arts & Crafts at this fair.";
        if (eventComboBox.SelectedIndex == 1)
            descriptionTextBox.Text = "4.30PM. Price 00.00. Take part in cleaning the local Park.";
        if (eventComboBox.SelectedIndex == 2)
            descriptionTextBox.Text = "1.30PM. Price 10.00. Take part in selling goods.";
        if (eventComboBox.SelectedIndex == 3)
            descriptionTextBox.Text = "12.30PM. Price 10.00. Take part in a game of rounders in the local Park.";
        if (eventComboBox.SelectedIndex == 4)
            descriptionTextBox.Text = "11.30PM. Price 15.00. Take part in an Egg & Spoon Race in the local Park";
        if (eventComboBox.SelectedIndex == 5)
            descriptionTextBox.Text = "No Events today.";
    }

    Any help here would be great.
    Look, you have to make the file an XML file type -- Somefilename.xml.
    http://www.xmlfiles.com/xml/xml_intro.asp
    You can use NotePad XML to make the XML and save the text file.
    http://support.microsoft.com/kb/296560
    Or you can just use Notepad (standard), if you know the basics of how to create XML, which is just text data that can created and saved in a text file, which, represents data.
    http://www.codeproject.com/Tips/522456/Reading-XML-using-LINQ
    You can do a (select new CommunityEvent) just like the example is doing a (select new FileToWatch), and load the XML data into the CommunityEvent properties.
    So you need to learn how to make a manual XML text file with XML data in it, and you need to learn how to use LINQ to read the XML. LINQ is not going to work against some flat text file you created. There are plenty of examples out on Bing and Google on how to use LINQ-to-XML.
    http://en.wikipedia.org/wiki/Language_Integrated_Query
    <copied>
    LINQ extends the language by the addition of query expressions, which are akin to SQL statements, and can be used to conveniently extract and process data from arrays, enumerable classes, XML documents, relational databases, and third-party data sources. Other uses, which utilize query expressions as a general framework for readably composing arbitrary computations, include the construction of event handlers or monadic parsers.
    <end>

  • Extract Master Data to Flat File

    Hello
    Do you know an easy way to extract master data into a flat file?  I've tried using maintain master data, but my master table is too large to display this way.

    Dear Satya,
    1) You can simply write an ABAP program to access the P table ( /BIC/P<InfoObject> )
    2) Use InfoSpokes:
    RSBO -> ZDOWNLOAD -> Create -> choose the data source as InfoObject with attributes,
    then select your fields and selections, specify the flat file destination, and that's it.
    regards,
    Hari
    Message was edited by: Hari Kiran Y

  • SQL*Loader and binary data

    I have a C routine that builds SQL*Loader input files. The input files contain multiple records, with a couple of integer columns and a raw(1400) field. The control file specifies a record separator of '|', which seems weird (having text in the middle of raw data) but is also somewhat co-operative (see below).
    So I basically write each integer to the file, then a short (2-byte) length value for the raw field, then the raw field, then the '|' separator.
    I've noticed that if the size of the raw field is 400 bytes or less, everything works fine; I get the correct number of records in the database.
    Unfortunately, with a size of 401 or more, SQL*Loader parses the input into twice as many records as it should. So if I've written 3 records to my input data file, with each record's raw field at 400 bytes or less, I get 3 records loaded. But for any with a raw field of 401+, I get two records each.
    Any ideas why, and how to correct this? Also, any ideas on a better way to do this? All the examples of large data in the online doc and the O'Reilly book favor large character data, which does me not much good.
    TIA. John
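    A side note on the layout described above: SQL*Loader has a VARRAW datatype that reads exactly this encoding (a 2-byte binary length subfield followed by that many bytes of raw data), which may remove the need for the '|' separator and its false record breaks. A hedged control-file sketch; the file, table, and column names are illustrative, and the "var" record format (a leading character count per record) is only one assumed way to delimit binary records:
    LOAD DATA
    INFILE 'raw_records.dat' "var 5"   -- assumes each record starts with a 5-character length field
    INTO TABLE my_raw_table
    ( int_col1  INTEGER,               -- native binary integer
      int_col2  INTEGER,
      raw_col   VARRAW(1400)           -- 2-byte binary length, then the raw bytes
    )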

  • SQL LOADER Problem when data is loaded but not come in standard formate

    Hi guys,
    I have a problem when the SQL*Loader job runs: the data loads successfully into the table, but the UOM data does not come through in a standard format.
    The UOM column contains the unit-of-measure data, but in my Excel sheet it looks like this:
    EXCEL SHEET DATA:
    1541GAFB07080          0     Metres
    1541GAFE10040          109.6     Metres
    1541GAFE10050          594.2     Metres
    1541GAFE10070          126.26     Metres
    1541GAFE14040          6.12     Metres
    1541GAFE14050          0     Metres
    1541SAFA05210          0     Metres
    1541SAFA07100          0     Metres
    1551EKDA05210          0     Nos
    1551EKDA07100          0     Nos
    1551EKDA07120          0     Nos
    1551EKDA07140          0     Nos
    1551EKDA07200          0     Nos.
    1551EKDA08160          0     Nos.
    1551EKDA08180          0     Nos.
    1551EKDA08200          0     Nos.
    1551EKDA10080          41     Nos.
    1551EKDA10140          85     Nos.
    .ctl file:
    OPTIONS (silent=(header,feedback,discards))
    LOAD DATA
    INFILE *
    APPEND
    INTO TABLE XXPL_PO_REQUISITION_STG
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY'"'
    TRAILING NULLCOLS
    ( ITEM_CODE CHAR,
    ITEM_DESCRIPTION CHAR "TRIM(:ITEM_DESCRIPTION)",
    QUANTITY,
    UOM,
    NEED_BY_DATE,
    PROJECT,
    TASK_NAME,
    BUYER,
    REQ_TYPE,
    STATUS,
    ORGANIZATION_CODE,
    LOCATION,
    SUBINVENTORY,
    LINE_NO,
    REQ_NUMBER,
    LOADED_FLAG CONSTANT 'N',
    SERIAL_NO "XXPL_PRREQ_SEQ.NEXTVAL",
    CREATED_BY,
    CREATION_DATE SYSDATE,
    LAST_UPDATED_BY,
    LAST_UPDATED_DATE,
    LAST_UPDATED_LOGIN
    )
    Some of the output came into the table like this:
    W541WDCA05260 0 Metres|
    W541WDCA05290 3 Metres|
    W541WDCA05264 4 Metres|
    W541WDCA05280 8 Metres|
    1551EADA04240 0 Nos|
    1551EADA07100 0 Nos|
    1551EKDA10080 0 Nos.|
    1551EKDA10080 41 Nos.|
    The problem is the '|' delimiter. How do I remove the '|' from my table when the SQL*Loader program runs? Where do I change this, in the .ctl file or in the Excel file? It's urgent, guys, please help.
    Thanks

    Hi,
    How are you extracting the data to the Excel sheet?
    Please check the format type of the UOM column in the Excel sheet.
    There is no issue in the SQL*Loader control file; the issue is in your source Excel file. (Try using a different method to extract the data to the Excel sheet.)
    Regards,
    Yuvaraj.C
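    (A hedged aside: if fixing the source file is not an option, the stray character can also be stripped at load time, mirroring the TRIM already applied to ITEM_DESCRIPTION in the control file above:
    UOM CHAR "REPLACE(:UOM, '|')"
    Oracle's REPLACE with the third argument omitted removes every occurrence of '|' from the incoming value.)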

  • Regarding large date range

    Hi all,
    I am using Forms [32 Bit] Version 6.0.8.24.1 (Production)
    Oracle Database 10g Release 10.2.0.1.0 - Production
    I am selecting some fields from some tables with a certain condition,
    for example:
    select a,b,c
    from table1
    where trunc(col) between from_date and to_date
    I am accessing these selected columns with the help of a cursor.
    The problem is that if I run it for a small date range (from 01-jan-2009 to 10-jan-2009) it gives the correct result, but if I run it for a large date range (for example, 2005 to 2011) I get error ORA-302000.
    I have written this code in a button on the front end.
    What could be the reason behind this?
    Please help to sort it out.
    Thanks..

    I appreciate your views. However, here is why it works...
    The default date format is DD-MON-RR (RR Date Format)
    Oracle does an implicit convertion from CHAR to DATE when we specify in this format 'DD-MON-RR'
    Here is how it works...
    SELECT * FROM EMP WHERE TRUNC(HIREDATE) = '17-DEC-80'
    This SQL will give me this record from the emp table:
    SQL> SELECT * FROM EMP WHERE TRUNC(HIREDATE) = '17-DEC-80';
    EMPNO     ENAME     JOB     MGR     HIREDATE     SAL     COMM     DEPTNO
    7369     SMITH     CLERK     7902     17-12-1980     800          20
    Regarding specifying 2 digits for the year: Oracle is intelligent enough to decode it based on its built-in internal functionality...
    Quoting from Oracle Docs..
    •If the specified two-digit year is 00 to 49, then
        ◦If the last two digits of the current year are 00 to 49, then the returned year has the same first two digits as the current year.
        ◦If the last two digits of the current year are 50 to 99, then the first 2 digits of the returned year are 1 greater than the first 2 digits of the current year.
    •If the specified two-digit year is 50 to 99, then
        ◦If the last two digits of the current year are 00 to 49, then the first 2 digits of the returned year are 1 less than the first 2 digits of the current year.
        ◦If the last two digits of the current year are 50 to 99, then the returned year has the same first two digits as the current year.
    To know more about implicit year decoding, you can refer to this:
    Look for The RR Datetime Format Element at this page http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements004.htm#SQLRF00210
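    (A hedged aside: to avoid relying on RR decoding at all, the comparison can spell out a four-digit year with an explicit format mask, for example:
    SELECT * FROM EMP
    WHERE HIREDATE >= to_date('17-12-1980', 'dd-mm-yyyy')
    AND HIREDATE < to_date('18-12-1980', 'dd-mm-yyyy');
    This matches the same row as the TRUNC() version above, while keeping the date boundaries unambiguous.)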
    Regards,
    Rakesh
    Edited by: Rakesh Desai on Mar 28, 2011 8:11 PM

  • Extract of Data from Oracle in a Flat File with Column and Record seperator

    I have a case where I have to extract all the data from an Oracle table, where the columns should be separated by |@| and the records by ^*^.
    The reason for this is that my data has spaces and newlines in it, so for my program to recognize each column and record, I want them separated by special characters.
    I tried this, but it did not help much:
    set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on;
    spool on;
    set colsep '|@|';
    set recsepchar '^*^';
    spool "T_COMPLAINT.dat";
    select * from T_COMPLAINT where ROWNUM < '100' order by cptoid;
    spool off;

    Having '@' and '*' characters in the data will not make any difference if you are using a combined column separator of '|@|', provided any process you use subsequently can handle it.
    However, the recsepchar parameter appears to be restricted to a single character which is repeated right across the page, so I don't think you could produce a single instance of your multi-character record separator using this method:
    SQL> set  newpage 0 space 0 pagesize 0 feed off head off trimspool on
    SQL> set recsep EACH
    SQL> set recsepchar '*'
    SQL> set colsep '|@|'
    SQL> select * from testa;
    A         |@|1@        |@|22-JUN-2010
    B         |@|2*        |@|22-JUN-2010
    B         |@|2*        |@|22-JUN-2010
    ********************************************************************************
    Edited by: LindaA on 23-Jun-2010 08:33
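    One hedged workaround, given that recsepchar cannot produce a multi-character separator: concatenate the columns and separators explicitly, so '|@|' and '^*^' each appear literally. The column names below are illustrative, since select * cannot be used when each column must be listed:
    set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
    spool T_COMPLAINT.dat
    select cptoid || '|@|' || col_b || '|@|' || col_c || '^*^'
    from T_COMPLAINT
    where rownum < 100
    order by cptoid;
    spool off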

  • Cannot change SQL command text in Data Flow Task

    I have an SSIS package that extracts data from Teradata into a SQL Server DB.
    I'm using SQL Server 2008 R2 EE x64, and Visual Studio 2008 PE (BIDS) supplied with it, accessing Teradata v13.
    In the Integration Services project, I have a Data Flow Task; it has an ADO.NET source which has Data access mode set to “SQL Command”.
    This worked for a while when I initially entered a SQL statement to select data.
    But, when I change the existing SQL Command text and save the package, the changes are lost.
    It keeps going back to the original SQL Statement.
    It is currently “select col1, col2, … col10 from view1”.
    When I change it to anything else, like “select col5 from view1”, and save, and then double click the ADO NET source again, I find that the SQL command text is still the old one. It goes back to the previous statement.
    I’ve tried other statements like “exec macro” for Teradata, etc. but the same thing keeps happening - changes are not saved.
    Does anyone have any ideas on this, or have you seen this before?

    This is odd, but it seems to be component metadata corruption, so why don't you:
    1) Delete the Teradata connector and source component, then re-add them, and see if it now accepts the new SQL statement. If this does not work,
    2) Abandon this package and create a new one replicating the functionality, incorporating the new SQL.
    If #1 and #2 both fail, post here any errors observed, and find out whether you are missing any updates to either the Teradata provider or SQL Server.
    Arthur

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc., anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry, and a dozen other provider types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned: searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be very easily implemented using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.
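    (A hedged aside: the keyword flagging described above is also within reach of a short SQL query, if your BI group works against a database copy of the claims file; a sketch with invented table and column names:
    select provider_name,
           case when regexp_like(provider_name,
                                 '(AMBULANCE|AMB|RESCUE|FIRE DEPT|FD|E ?M ?T|E ?M ?S)', 'i')
                then 'N/A'
           end as ambulance_flag
    from medical_claims;
    The same pattern list can be extended for dentists, vision, and the other provider types.)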

  • Best way to extract XML data to DB

    Hello,
    In our work we need to extract data from XML documents into a database. Here are some extra notes:
    + There is no constant schema for the XML documents.
    + Some processing on the XML files is required.
    + The XML documents are very big.
    + The database is Oracle 8i or 9i (Enterprise Edition both) and our framework is .NET.
    Our questions are:
    1. What is the best way to extract the data into the database under the above circumstances ?
    2. In case there was a constant schema for the XML documents, would there be a better way ?
    3. Is writing the data to text files first, and then loading them via SQL*Loader, an effective way?
    Any thoughts would be welcome. Thanks.

    Hi Nicolas,
    The answer depends on your actual storage method (binary, object-relational, CLOB?) and db version.
    You can try XMLTable; it might be better in this case:
    SELECT x.elem1, x.elem2, ... , x.elem20
    FROM your_table t
       , XMLTable(
          '/an/xpath/to/extract'
          passing t.myxmltype
          columns elem1  varchar2(30) path 'elem1'
                , elem2  varchar2(30) path 'elem2'
                , elem20 varchar2(30) path 'elem20'
         ) x
    ;

  • Extracting data from ECC to BW, but data loading takes a long time

    Hi All,
    I am extracting the data from ECC to the BI system, but the data load is taking a long time. The InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set the status to red, deleted the request, and applied a repeat of the last delta, but the same problem occurs. The status shows that the background job has not finished at the source system. We asked Basis, and they killed that job; we scheduled the chain again, and the same problem came back. How can I solve this issue?
    Thanks,
    chandu

    Hi,
    There are different places to track your job. Once your job is triggered in BW, you can track where exactly your load job is taking more time, and why. Follow the steps below:
    1) After the InfoPackage is triggered, take the request number and go to the source system to check the extraction job status.
    You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Give the request number with a leading '*' and a trailing '*', and give '*' for the user name:
    Job name:  *REQ_XXXXXX*
    User Name: *
    Check whether the job is completed, cancelled, or short-dumped. If the job is still running, check in SM66 whether you can see any process; if not, check in ST22 or SM21 in ECC accordingly. If the job is complete, then do the same check on the BW side.
    2) Check whether the data arrived in PSA; if not, check whether the transfer routines or start routines contain bad SQL or code. Similarly for the update rules.
    3) Once it is through in the source system (ECC), the transfer rules, and the update rules, the next task, updating the data, might sometimes take longer, depending on some parameters (the number of parallel processes updating the database). Check whether updating the database is taking more time; you may need to check with the DBA as well.
    At all times you should see at least one process running in SM66 until your job completes. If not, you will see a log in ST22.
    Let me know if you still have questions.
    Assigning points is the only way of saying thanks in SDN.
    Thanks,
    Kumar.

    Hello: We are planning a project in which we are intending to use GigE vision cameras and embedded vision processing. Our client is interested in the cRIO-9068 as the embedded platform. The question is: Is there support for GigE Vision on the cRIO-90