Sqlldr - getting all nulls in target table

Database: 10gR2 on WinXP
I am trying to load about 4,000 of the 7.5 million lines in a 390 MB file into the database (there are 1,522 of these files in total).
Goal:
1. When the line starts with "ScanHeader" or "position = ", just insert the line into TEST1.
2. When the line starts with "Packet #", break it up into columns and insert into TEST2.
In fact, I only want three values out of the line. For example, for the line:
"Packet # 2, intensity = 0.000000, mass/position = 50.250000"
I want "2", "0.000000", and "50.250000" to be inserted into a table (and only if the intensity (the second value) is greater than 25).
(Note: I was thinking I would just load them all and then delete the ones I don't want, but if there is a way to exclude them up front, I would love to see it).
The loading into TEST1 seems to be working, but my first several stabs at loading into TEST2 resulted in the correct number of rows with source_file and seq set properly, but NULL for all the other columns. I suspect this is an easy one, but the correct search terms are eluding me.
My rust-encrusted loader skills were somewhat limited in the first place, so any suggestions beyond fixing the main issue are appreciated.
Thanks!
-Tom
CONTROL FILE:
OPTIONS(DIRECT=TRUE, ROWS=100000)
UNRECOVERABLE
LOAD DATA
INFILE '081017102.txt'
BADFILE '081017102.bad'
TRUNCATE
INTO TABLE TEST1
WHEN (1:10) = 'ScanHeader'
(source_file constant '011017102.txt',
content position(1:200) char,
SEQ recnum)
INTO TABLE TEST1
WHEN (1:10) = 'position ='
(source_file constant '011017102.txt',
content position(1:200) char,
SEQ recnum)
INTO TABLE TEST2
WHEN (1:8) = 'Packet #'
fields terminated by ' '
trailing nullcols
(source_file constant '011017102.txt',
seq recnum,
fpacket,
fpoundsign,
packet_number,
fintensity,
fequals1,
fcontent,
fmplabel,
fequals2,
mp)
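A note on the all-NULL symptom: with multiple INTO TABLE clauses in one load, SQL*Loader does not rewind its field-scanning pointer for the second and later tables, so the relative (terminated-by) fields in TEST2 start scanning wherever the previous field list left off. The documented fix is to anchor the first field of the later table with POSITION(1). A sketch against the control file above:

```
INTO TABLE TEST2
WHEN (1:8) = 'Packet #'
fields terminated by ' '
trailing nullcols
(source_file constant '011017102.txt',
 seq recnum,
 fpacket position(1),   -- rewind the scan to column 1 for this table
 fpoundsign,
 packet_number,
 fintensity,
 fequals1,
 fcontent,
 fmplabel,
 fequals2,
 mp)
```

This also explains why reordering the INTO TABLE clauses can appear to fix the problem: when the delimited table comes first, its scan starts at column 1 by default.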
SAMPLE DATA FROM ONE FILE (partial file):
RunHeaderInfo
dataset_id = 0, instrument_id = 0
first_scan = 1, last_scan = 318, start_time = 0.002282, end_time = 0.997597
low_mass = 50.000000, high_mass = 1000.000000, max_integ_intensity = 41286.089844, sample_volume = 0.000000
sample_amount = 0.000000, injected_volume = 0.000000, vial = 0, inlet = 0
err_flags = 0, sw_rev = 1
Operator =
Acq Date =
comment1 =
comment2 =
acqui_file =
inst_desc =
sample volume units =
sample amount units =
Injected volume units =
Packet Position = 38630
Spectrum Position = 14603130
ScanHeader # 1
position = 1, start_mass= 50.000000, end_mass = 1000.000000
start_time = 0.002282, end_time = 0.000000, packet_type = 0
num_readings = 11400, integ_intens = 15376.646484, data packet pos = 0
uScanCount = 0, PeakIntensity = 1098.594238, PeakMass = 75.333328
Scan Segment = 0, Scan Event = 0
Precursor Mass
Collision Energy
Isolation width
Polarity negative, Profile Data, Full Scan Type, MS Scan
SourceFragmentation Off, Type Ramp, Values = 0, Mass Ranges = 0
Turbo Scan Off, IonizationMode Electrospray, Corona Any
Detector Any, Value = 0.00, ScanTypeIndex = -1
DataPeaks
Packet # 0, intensity = 0.000000, mass/position = 50.083333
saturated = 0, fragmented = 0, merged = 0
Packet # 1, intensity = 0.000000, mass/position = 50.166667
saturated = 0, fragmented = 0, merged = 0
Packet # 2, intensity = 0.000000, mass/position = 50.250000
saturated = 0, fragmented = 0, merged = 0
(...11,900 more packets for each of 220 ScanHeaders...)
LOG FILE FROM LOADING A PARTIAL FILE:
SQL*Loader: Release 10.2.0.1.0 - Production on Tue Oct 21 11:24:11 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: pos.ctl
Data File: 081017102.txt
Bad File: 081017102.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct
Silent options: FEEDBACK and DISCARDS
Load is UNRECOVERABLE; invalidation redo is produced.
Table TEST1, loaded when 1:10 = 0X5363616e486561646572(character 'ScanHeader')
Insert option in effect for this table: TRUNCATE
Column Name Position Len Term Encl Datatype
SOURCE_FILE CONSTANT
Value is '011017102.txt'
CONTENT 1:200 200 CHARACTER
SEQ RECNUM
Table TEST2, loaded when 1:8 = 0X5061636b65742023(character 'Packet #')
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
SOURCE_FILE CONSTANT
Value is '011017102.txt'
SEQ RECNUM
FPACKET NEXT * WHT CHARACTER
FPOUNDSIGN NEXT * WHT CHARACTER
PACKET_NUMBER NEXT * WHT CHARACTER
FINTENSITY NEXT * WHT CHARACTER
FEQUALS1 NEXT * WHT CHARACTER
FCONTENT NEXT * WHT CHARACTER
FMPLABEL NEXT * WHT CHARACTER
FEQUALS2 NEXT * WHT CHARACTER
MP NEXT * WHT CHARACTER
Table TEST1, loaded when 1:10 = 0X706f736974696f6e203d(character 'position =')
Insert option in effect for this table: TRUNCATE
Column Name Position Len Term Encl Datatype
SOURCE_FILE CONSTANT
Value is '011017102.txt'
CONTENT 1:200 200 CHARACTER
SEQ RECNUM
Table TEST1:
1 Row successfully loaded.
0 Rows not loaded due to data errors.
34234 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Table TEST2:
11400 Rows successfully loaded.
0 Rows not loaded due to data errors.
22835 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Table TEST1:
1 Row successfully loaded.
0 Rows not loaded due to data errors.
34234 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes:20971520
Total logical records skipped: 0
Total logical records read: 34235
Total logical records rejected: 0
Total logical records discarded: 22833
Total stream buffers loaded by SQL*Loader main thread: 5
Total stream buffers loaded by SQL*Loader load thread: 0
Run began on Tue Oct 21 11:24:11 2008
Run ended on Tue Oct 21 11:24:12 2008
Elapsed time was: 00:00:01.09
CPU time was: 00:00:00.10

I don't think that is the issue, but thanks for the suggestion.
I eliminated the directives to load data into TEST1 from the control file and then, surprise surprise, the load into TEST2 started working. I added the TEST1 directives back in after the TEST2 directive and all appears to be working now. I don't understand exactly why, but at this point I will accept the result.
I have changed the control file to what you see below. For the TEST2 load, any suggestions on how to skip the records where the intensity is less than 25?
REVISED CONTROL FILE:
OPTIONS(DIRECT=TRUE, ROWS=100000)
UNRECOVERABLE
LOAD DATA
INFILE '081017102-partial.txt'
BADFILE '081017102.bad'
TRUNCATE
INTO TABLE TEST2
WHEN (1:8) = 'Packet #'
fields terminated by ' '
trailing nullcols
(source_file constant '011017102',
seq recnum,
fpacket filler,
fpoundsign filler,
packet_number "replace(:packet_number, ',', '')",
fintensity filler,
fequals1 filler,
intensity "replace(:intensity, ',', '')",
fmplabel filler,
fequals2 filler,
mass_position)
INTO TABLE TEST1
WHEN (1:10) = 'ScanHeader'
(source_file constant '011017102',
content position(1:200) char,
SEQ recnum)
INTO TABLE TEST1
WHEN (1:10) = 'position ='
(source_file constant '011017102',
content position(1:200) char,
SEQ recnum)
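On skipping records with intensity below 25: a WHEN clause can't express it, since SQL*Loader WHEN supports only = and <> comparisons, evaluated as strings. If loading everything and deleting afterwards is unappealing, one alternative is an external table over the same file, where an ordinary WHERE clause does the numeric filter up front. A sketch; the directory name and the TEST2 column list are assumptions:

```sql
-- Directory path and TEST2 columns are assumptions; adjust to taste.
CREATE OR REPLACE DIRECTORY scan_dir AS 'C:\scan_files';

CREATE TABLE packets_ext (
  fpacket    VARCHAR2(10),  -- "Packet"
  fpound     VARCHAR2(5),   -- "#"
  packet_no  VARCHAR2(20),  -- "2,"
  fint       VARCHAR2(20),  -- "intensity"
  feq1       VARCHAR2(5),   -- "="
  intensity  VARCHAR2(20),  -- "0.000000,"
  fmp        VARCHAR2(20),  -- "mass/position"
  feq2       VARCHAR2(5),   -- "="
  mass_pos   VARCHAR2(20)   -- "50.250000"
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY scan_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    LOAD WHEN (1:8) = 'Packet #'
    FIELDS TERMINATED BY ' '
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('081017102.txt')
)
REJECT LIMIT UNLIMITED;

INSERT /*+ APPEND */ INTO test2
  (source_file, packet_number, intensity, mass_position)
SELECT '081017102',
       replace(packet_no, ','),
       replace(intensity, ','),
       mass_pos
FROM   packets_ext
WHERE  to_number(replace(intensity, ',')) > 25;
```

The two-argument replace() strips the trailing commas the same way the control-file expressions do, and the filler words are simply columns you never select.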

Similar Messages

  • How get all rows of a table with a BAPI

    Hi,
how is it possible to get more than one row by calling a BAPI from the WD? In my application I need the rows of a table coming from the R/3 system. How is it possible to get all the rows after the first call? What is the logic behind it? My purpose is also to create my own BAPI.
    regards,
    Sharam

    Hi,
If I understand, you don't want to display the result in a Web Dynpro table. If so, after the execution, the result of your request is stored in the context, so you don't really need to transfer the data from your context to a Java array.
    But if you want to do it, here is the code :
(assuming your result node is called nodeResult)
Vector myVector = new Vector();
for (int i = 0; i < wdContext.nodeResult().size(); i++) {
   myVector.add(wdContext.nodeResult().getElementAt(i));
}
I hope this answers your question.
    Regards

  • Urgent solution needed for the Problem ( get all the combination from table

    we are having a table in following format
    day | grpID | pktID
    sun | 1 | 001
    sun | 1 | 002
    sun | 1 | 003
    sun | 2 | 007
    sun | 2 | 008
    sun | 2 | 009
    mon | 1 | 001
    mon | 1 | 002
    mon | 1 | 003
    mon | 2 | 007
    mon | 2 | 008
    mon | 2 | 009
    tue | 1 | 001
    tue | 1 | 002
    tue | 1 | 003
    tue | 2 | 007
    tue | 2 | 010
1. We have a combination of pktIDs related to a specific grpID for a particular day.
Ex: For Sunday, we have two combination lists: for grpID = 1 it is (001,002,003) and for grpID = 2 it is (007,008,009).
2. We need to get all the available combined pktIDs for each group ID across all the days.
E.g. the expected result needed from the above table:
    (001,002,003)
    (007,008,009)
    (007,010)

    SQL> with tbl as
      2  (select 'sun' d, 1 grp, '001' pk from dual union all
      3   select 'sun' d, 1 grp, '002' pk from dual union all
      4   select 'sun' d, 1 grp, '003' pk from dual union all
      5   select 'sun' d, 2 grp, '007' pk from dual union all
      6   select 'sun' d, 2 grp, '008' pk from dual union all
      7   select 'sun' d, 2 grp, '009' pk from dual union all
      8   select 'mon' d, 1 grp, '001' pk from dual union all
      9   select 'mon' d, 1 grp, '002' pk from dual union all
    10   select 'mon' d, 1 grp, '003' pk from dual union all
    11   select 'mon' d, 2 grp, '007' pk from dual union all
    12   select 'mon' d, 2 grp, '008' pk from dual union all
    13   select 'mon' d, 2 grp, '009' pk from dual union all
    14   select 'tue' d, 1 grp, '001' pk from dual union all
    15   select 'tue' d, 1 grp, '002' pk from dual union all
    16   select 'tue' d, 1 grp, '003' pk from dual union all
    17   select 'tue' d, 2 grp, '007' pk from dual union all
    18   select 'tue' d, 2 grp, '010' pk from dual) -- end of data sample
    19  select distinct '('||ltrim(max(c1) keep (dense_rank last order by lv),',')||')'
    20  from   (select d,grp,level lv,sys_connect_by_path(pk,',') c1
    21          from   tbl
    22          connect by d=prior d and grp = prior grp and pk > prior pk)
    23  group by d,grp;
    '('||LTRIM(MAX(C1)KEEP(DENSE_RANKLASTORDERBYLV),',')||')'
    (001,002,003)
    (007,008,009)
    (007,010)
SQL>
That works on 9i. Other possibilities exist on 10g.
    Nicolas.

  • How do I get the rowcount of target table using Sunopsis API in ODI 10g ?

    Hi guys,
Actually I want to send an alert mail once the interface is run from a package. I have included an OdiSendMail alert which sends a mail once the interface is run.
Could anyone please tell me how to get the number of rows inserted in the target table from the Sunopsis API?
I tried using <%=odiRef.getNbRows( )%> but this did not work for me. Since I'm a beginner, could you please help me out?
    This is my ODI send mail format
    " Data population has been completed successfully at <%=odiRef.getSysDate( )%>
    Total rows in target table are: <-- need some API code --> ''
    Regards,
    Clinton
    Edited by: LawrenceClinton on Feb 25, 2013 8:53 PM

    Hi
    Create project variable with below details
    Variable_name: Total_Row_Count
    Variable Type: Refresh Variable
    Definition Tab:
    Datatype: Numeric
    Action: Not Persistent
    Refreshing Tab:
Schema: provide your Work Repository schema
SELECT log.nb_row
FROM snp_step_log log, snp_scen_step step
WHERE log.nno = step.nno
AND step.scen_no = ( SELECT scen_no FROM snp_scen_step WHERE step_name = '<%=odiRef.getPrevStepLog( "STEP_NAME" )%>' )
AND log.sess_no = '<%=odiRef.getSession( "SESS_NO" )%>'
AND step.step_name = '<%=odiRef.getPrevStepLog( "STEP_NAME" )%>'
Note: Add this variable after the interface step in your package (anywhere after the interface); you can add it before the ODISendEmailNotification step in your package.
Call this variable *#Total_Row_Count* in the ODISendEmail notification,
    eg:
    Data has been populated successfully at <%=odiRef.getSysDate( )%>
    Total no of rows populated are : *#Total_Row_Count*
    It will work
    Regards,
    Phanikanth
    Edited by: Phanikanth on Feb 28, 2013 1:13 AM
    Edited by: Phanikanth on Feb 28, 2013 1:14 AM

  • Get all rows from a table control

    Hi All,
    I have a table control with one column. What function should I use to retrieve all the rows ? Do I need to iterate row by row and read each row or is it possible to do it in one function ?
    Thanks,
    Kanu
    Solved!
    Go to Solution.

Supposing the cells in the column all have the same data type, you can retrieve the whole column with a single instruction:
GetTableCellRangeVals (panel, control, VAL_TABLE_COLUMN_RANGE (1), array, VAL_COLUMN_MAJOR);
The array passed must be large enough to retrieve all the data. Alternatively, you may substitute the macro VAL_TABLE_COLUMN_RANGE with the appropriate MakeRect instruction. In case your table was dynamically built, you can obtain the number of rows using GetNumTableRows and dimension your array accordingly.
The above macro is defined in userint.h together with some other useful macros that can be used to access data in a table.
There are some precautions to take in the case of string values or certain cell types (ring, combo box, button...) that are described in the help for the function.
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?

  • How to get all the client independent tables in SCM

    Hi Guys,
I want to know all the client-independent tables in SCM. Can you guys help me please?
    Regards,
    Kumar

    Hi Kumar,
Generally, SAP system control data, language indicators, and transaction codes are stored in client-independent tables.
A table attribute in the Data Dictionary indicates whether a table is client-specific or client-independent.
    Regards
    R. Senthil Mareeswaran.

  • Count of all nulls in a Table

    Hi all,
    Is there a way to find out the count of all the null fields in a table?
    One way is obvious:-
    SQL> select * from test6152;
         A B      DD
         1 one    <null>
         2 two    <null>
         3 tri    <null>
         3 dri    <null>
         1 <null> 01-JAN-08
         1 ch     01-JAN-08
6 rows selected.
    SQL> select a + b + c from
      2  (
      3  select sum(case when a is null then 1 else 0 end) a ,
      4  sum(case when b is null then 1 else 0 end) b ,
      5  sum(case when dd is null then 1 else 0 end) c
      6  from
      7  test6152
      8  )
      9  /
         A+B+C
             5
Is there any other viable way to find the count of null fields in a table with, say, 80 fields?
    Thanks in advance.

Your solution is already a good one.
I personally would prefer DECODE instead of CASE here, but that's only a matter of style:
DECODE(a,null,1,0)
Another possibility is first to string all the columns together and count the number of nulls per row, then add everything up.
    E.g. (not tested!)
SELECT SUM(
                LENGTH(
                             decode(a,null,'x','')
                          || decode(b,null,'x','')
                          || decode(c,null,'x','')
                          || decode(d,null,'x','')
                      )
          )
FROM test6152
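For the 80-field case, the query text itself can be generated from the data dictionary rather than typed out by hand. A sketch (the table name is taken from the example; this only builds the statement text, which you then run):

```sql
-- Emits one SUM(...) term per column; paste the output together
-- and append "FROM test6152" to get the full query.
SELECT CASE column_id WHEN 1 THEN 'SELECT ' ELSE '     + ' END
       || 'SUM(DECODE(' || column_name || ',NULL,1,0))' AS term
FROM   user_tab_columns
WHERE  table_name = 'TEST6152'
ORDER  BY column_id;
```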

  • Query to get all tables related to a given table directly or indirectly

    Does anyone have a query to get a list of tables that are directly or indirectly related to a given table by foreign keys?
    I used a CONNECT BY clause to get a list of all child tables and their child tables and so forth by going against the ALL_CONSTRAINTS table. I then UNIONed the result with another query that got all the parents/grandparents etc. for this table.
    I realized that I then need to get all the other child tables of these parent tables and their children and it sort of ran into an endless number of unions because we're traversing up and down a tree and are starting out from the middle.
    Is there a better way to do this?
    Thanks,
    Mohit

    Hello!
    Do you know transaction SUIM. From there you can find all roles with a given authorization. If you need a function maybe you can look which functions are used behind SUIM. For the tables check the table behind the fields in the transaction with F1.
    Best regards,
    Peter

  • How do I update/insert into a target table, rows after date X

    Hi all
    I have a mapping from source table A to target table B. Identical table structure.
    Target table A updates rows and inserts rows daily. Every week I want to synchronize this with table B.
    I have CREATION_DATE and LAST_UPDATE_DATE on both tables. I want to pass in a parameter to this mapping of date X which tells the mapping:
    "if CREATION_DATE is past X then do an insert of this row in B, if LAST_UPDATE_DATE is past X then do an update of this row in B"
    Please can you help me work out how to map this correctly as I am new to OWB.
    Many thanks
    Adi

    Hi,
    You can achieve this by -
    1. Create a control table, say Control_Table, with structure
    Map Name, last_load_date. Populate this table with the mappings that synchronizes your Table B.
    2. Alter mapping, that loads Table B to use the above control table to get all the records from Table A, you have to join Table A and Control_Table with the condition -
    Control_Table.Map_Name = < mapping name>
    AND ( TableA.Creation_Date > Control_Table.last_load_date
    OR TableA.Last_Update_Date > Control_Table.last_load_date )
    3. Then use UPDATE/INSERT on the Table B based on the Keys. This should take care of INSERT ( if not present) / UPDATE (if the row already exists).
    4. Schedule the mapping to run on weekly basis.
    5. You have to maintain the Control_Table to keep changing the values for Last_Load_Date to pick the data since the last time Table B is synchronized.
    HTH
    Mahesh
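Steps 2 and 3 above can also be collapsed into a single MERGE statement. A sketch, where the key column (id) and the payload column names are assumptions:

```sql
MERGE INTO table_b b
USING (SELECT *
       FROM   table_a a
       WHERE  a.creation_date    > :last_load_date
          OR  a.last_update_date > :last_load_date) src
ON (b.id = src.id)                          -- assumed key column
WHEN MATCHED THEN UPDATE
  SET b.payload          = src.payload,     -- assumed payload column
      b.last_update_date = src.last_update_date
WHEN NOT MATCHED THEN INSERT
  (id, payload, creation_date, last_update_date)
  VALUES (src.id, src.payload, src.creation_date, src.last_update_date);
```

MERGE handles the "update if present, insert if not" decision in one pass, so no separate existence check is needed.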

  • Target Table Column Changes .....?

    Dear All,
    If the target table column gets changed, how can we implement in OWB Mappings? For instance today I have a column name as CTYNAM in the CITY_DIM, a couple of days after in the database it is renamed as CTY_NAME. How can I do the changes in OWB. Assume that this column is mapped to n number of transformations when extracted from source tables.
    Thanks in Advance
    Regards
    Kishan

    Good morning Kishan,
    As an addition, if the datatype of columns has changed, the reconcile inbound might correct it for the reconciled table operator, but not for any other operator in your mappings.
    Check all operators that use columns of which the datatype has changed, unfortunately that's a manual job (unless you're handy with OMBPlus, then you could use scripting to alter properties for all occurrences).
    Good luck, Patrick

  • Sorted Target table

    Hi all,
where/how can I use an ORDER BY clause to get sorted data in the target table?
I tried it via the expression editor in the mapping on some column in the target datastore, but received something like "... the key word 'From' has not been found in the expected place ...".
Forgive me this stupid question, I am just an old man new to ODI.
    Thx

    Hi,
    Try with COL_NAME,
    <%=snpRef.getColList("", " [COL_NAME]", ", ", "", "UD1") %>
    FYR: Re: please suggest me
    Thanks,
    Guru

  • How can i get all values from jtable with out selecting?

I have one input table and two output tables (call them output1 and output2). Selected rows from the input table are displayed in the output1 table. The data in the output1 table is temporary (it isn't stored in the database; it's just for display).
Actually, what I want is: how can I get all values from the output1 table into the output2 table without selecting the data in the output1 table?
    thanks in advance.
    raja

    You could set the table's data model to be the same:
    output2.setModel( output1.getModel() );

  • Null value in target table

    Hi All
    I am using ODI 10.1.13
I have declared the target datastore
and mapped the target datastore to the source in the interface.
But when ODI creates the target table, all the columns are created accepting NULL,
whereas I have declared some as NULL and some as NOT NULL.
When I checked the IKM Oracle Incremental Update (MERGE) -> create target table step,
the code is
create table <%=odiRef.getTable("L", "TARG_NAME", "A")%>
     <%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] NULL", ",\n\t", "")%>
due to which the target table is created with all fields having the NULL property.
But I want the columns to have the properties declared in the datastore, e.g. some columns with NULL and some with NOT NULL.
    Please help..
    Gourisankar

In the IKM Merge, change the NULL to + odiRef.getInfo("DEST_DDL_NULL"), so finally something like this:
<%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] " + odiRef.getInfo("DEST_DDL_NULL"), ",\n\t", "")%>
For bulk inserts, try SQL Control Append.

  • Using % to get all including null or empty values

    dear all;
    Please find below the following sample data
create table t1
(
  id          varchar2(200),
  time_create date
);
insert into t1 (id, time_create) values ('A', sysdate);
insert into t1 (id, time_create) values ('B', sysdate);
insert into t1 (id, time_create) values (null, sysdate);
I have the following SQL statement:
select * from t1
where t1.id like decode(:id, 'ALL', '%', :id);
Now, I have a situation where, if the user input is ALL, I would like to get all results including the null or empty values. How can I modify the DECODE statement to do that?
This is the output I want for ALL:
ID  TIME_CREATE
A   5/23/2011 11:14:23 PM
B   5/23/2011 11:14:24 PM
    5/23/2011 11:14:25 PM
All help is appreciated. Thank you.

    Both work in exactly the same manner.
    The OR condition says either boolean expression needs to be true for the row to be returned, so when :id = 'ALL' (and bind variable is set to 'ALL')- all rows are returned regardless.
    When id = :ID, only values that are equal to your bind variable will be returned.
    You could modify to this to provide even more flexibility
select * from t1
where :ID = 'ALL'
OR t1.id like :ID;
    Scott
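A side note on why the plain DECODE version misses the third row: LIKE never matches NULL, so NULL LIKE '%' is not true. If you prefer to keep the DECODE style, NVL with a sentinel works too; the sentinel value below is an assumption and must never occur as a real ID:

```sql
-- '~none~' is an arbitrary placeholder for NULL ids (an assumption).
select *
from   t1
where  nvl(t1.id, '~none~') like decode(:id, 'ALL', '%', :id);
```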

  • How to get source table name according to target table

    hi all
    another question:
once a map is created and deployed, the corresponding information is stored in the repository and the rtr repository. My question is how to find the source table name according to the target table, and in which tables these records are recorded.
    somebody help me plz!!
    thanks a lot!

    This is a query that will get you the operators in a mapping. To get source and targets you will need some additional information but this should get you started:
    set pages 999
    col PROJECT format a20
    col MODULE format a20
    col MAPPING format a25
    col OPERATOR format a20
    col OP_TYPE format a15
    select mod.project_name PROJECT
    , map.information_system_name MODULE
    , map.map_name MAPPING
    , cmp.map_component_name OPERATOR
    , cmp.operator_type OP_TYPE
    from all_iv_xform_maps map
    , all_iv_modules mod
    , all_iv_xform_map_components cmp
    where mod.information_system_id = map.information_system_id
    and map.map_id = cmp.map_id
    and mod.project_name = '&Project'
    order by 1,2,3
    Jean-Pierre
