Compare floating points for all columns in parallel

I have a data set that needs processing, and I'm wondering what the best way to handle it is.
What I have:
1) A text file with 300+ columns (all numbers, float or integer)
2) Rows ≈ 1M (this is the actual data)
3) One more text file with the same 300+ column names as #1, giving the boundary conditions for each column (explained below)
Example: this is the content of file 1 (300 columns, ~1M rows):
Col1 Col2 Col3...... Col300
1.1 4.22 6 8.1234
1.23 4.5 7
1.0 4.7 11
1.3 4.1 13
2nd File
ColumnName Low High
Col1 1.0 2.0
Col2 4.1 4.5
Col3 5 10
What I want:
Compare each column of file 1 against the Low/High range for that column in file 2: output Fail if the value is below Low or above High, and Pass if Low <= value <= High.
Output:
Col1 Col2 Col3
Pass Pass Pass
Pass Fail Pass
Pass Pass Fail
Questions:
1) I want to do this in parallel, since there are a million rows and hundreds of columns - I need speed.
2) I do it in Perl right now and it takes about 30-40 minutes for this 3GB data file.
3) If there is a good way to do it in Oracle, I can create 2 tables for these 2 files and write a PL/SQL function that runs in parallel and creates the output in a 3rd table.
a) Is this the right way to do it?
b) What would the PL/SQL look like, and how do I run it in parallel?
c) Since I will be starting from scratch, I'm open to any new design ideas for the implementation.
4) Once this is done, I would like to compute other statistics on the first file (MIN, MAX, AVG, MEDIAN, etc.) and store them in a different output table (see the sketches below).
Appreciate any help.
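Assuming both files are loaded into tables (hypothetical names: DATA_TBL with the 300 value columns, LIMITS_TBL holding ColumnName/Low/High from file 2), the per-column check is a plain CASE expression. A minimal sketch for the first three columns only, not a full solution:

-- Hypothetical tables: DATA_TBL(col1, ..., col300) holds the ~1M rows,
-- LIMITS_TBL(column_name, low, high) holds the boundaries from file 2.
SELECT CASE WHEN d.col1 BETWEEN l1.low AND l1.high THEN 'Pass' ELSE 'Fail' END AS col1,
       CASE WHEN d.col2 BETWEEN l2.low AND l2.high THEN 'Pass' ELSE 'Fail' END AS col2,
       CASE WHEN d.col3 BETWEEN l3.low AND l3.high THEN 'Pass' ELSE 'Fail' END AS col3
FROM   data_tbl d
       CROSS JOIN (SELECT low, high FROM limits_tbl WHERE column_name = 'Col1') l1
       CROSS JOIN (SELECT low, high FROM limits_tbl WHERE column_name = 'Col2') l2
       CROSS JOIN (SELECT low, high FROM limits_tbl WHERE column_name = 'Col3') l3;

In practice the 300 CASE arms would be generated from LIMITS_TBL rather than hand-written, and wrapping the query in CREATE TABLE result_tbl PARALLEL NOLOGGING AS SELECT ... gives the parallel third-table load asked about in 3).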

However, I will have to check with our DBA, but the answer would most likely be no.
It would be very interesting to know the reason(s) why your DBA would say no to external tables, so please keep us posted on that.
He or she can easily create a directory object that points to a specific physical directory on the server, allowing you access to only that directory.
By simply querying your files you can use the power of SQL (analytics, for example) for set-based processing; you'll crunch a million rows within a minute.
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6611962171229
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5448988086977
http://www.oracle-developer.net/display.php?id=325
How will you get your data into your fixed 'playtables' in the first place anyway?
I use cluster computing so things can be done in parallel, but there are too many files to handle.
If those files all have an identical layout, then the external table can query them all at once.
You could do this dynamically.
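To make the directory-object approach concrete, here is a minimal sketch, assuming a directory object DATA_DIR, a whitespace-delimited file named file1.txt with a header row, and only 3 of the 300 columns spelled out; names and access parameters are illustrative, not tested against the real files:

CREATE OR REPLACE DIRECTORY data_dir AS '/directory/granted/by/your/dba';

CREATE TABLE file1_ext (
  col1 NUMBER,
  col2 NUMBER,
  col3 NUMBER
  -- ... through col300
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE SKIP 1
    FIELDS TERMINATED BY WHITESPACE
  )
  LOCATION ('file1.txt')
)
REJECT LIMIT UNLIMITED
PARALLEL;

-- The follow-up statistics in question 4) then become single set-based queries:
SELECT MIN(col1), MAX(col1), AVG(col1), MEDIAN(col1) FROM file1_ext;

With identically laid-out files, LOCATION ('file1.txt', 'file2.txt', ...) lets one external table read them all at once, which is the point made above.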

Similar Messages

  • Method_opt = 'FOR ALL COLUMNS SIZE REPEAT'

    Hi all Gurus,
    We have a script to gather statistics (see below); it runs every day, and all the tables have the MONITORING option, in a 9.2.0.5 database.
    My question concerns method_opt => 'FOR ALL COLUMNS SIZE REPEAT', OPTIONS => 'GATHER EMPTY', estimate_percent => 5.
    So, for a new table with columns and indexes (with the MONITORING option), will it create statistics or histogram statistics when the script runs for the first time on the table? And then, will it continue with or without histograms?
    begin
      DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
      DBMS_STATS.GATHER_DATABASE_STATS(degree => 4, granularity => 'ALL', cascade => TRUE,
        method_opt => 'FOR ALL COLUMNS SIZE REPEAT', options => 'GATHER EMPTY', estimate_percent => 5);
      DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => 5, options => 'GATHER STALE',
        method_opt => 'FOR ALL COLUMNS SIZE REPEAT', degree => 4, cascade => TRUE,
        stattab => 'TABLA_ESTADISTICAS', statid => to_char(sysdate, 'yymmdd'), statown => 'OPER');
    end;
    Regards,

    I have taken the following explanation from the documentation:
    METHOD_OPT - The value controls column statistics collection and histogram creation. It accepts either of the following options, or both in combination:
    FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
    FOR COLUMNS [size clause] column|attribute [size_clause] [,column|attribute [size_clause]...]
    size_clause is defined as size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}
    column is defined as column := column_name | (extension)
    - integer : Number of histogram buckets. Must be in the range [1,254].
    - REPEAT : Collects histograms only on the columns that already have histograms.
    - AUTO : Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
    - SKEWONLY : Oracle determines the columns to collect histograms based on the data distribution of the columns.
    - column_name : name of a column
    - extension : can be either a column group in the format of (column_name, column_name [, ...]) or an expression
    The default is FOR ALL COLUMNS SIZE AUTO.
    GATHER EMPTY: Gathers statistics on objects which currently have no statistics, and returns a list of the objects found to have no statistics.
    Reference: http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_stats.htm
    Please go through the link; it will give you a clearer picture of DBMS_STATS.
    Regards,
    S.K.
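    As an illustration of how SIZE REPEAT behaves on a single table (owner and table names are placeholders): on a brand-new table no column has a histogram yet, so per the REPEAT definition above a first run gathers base column statistics but creates no histograms, and subsequent runs keep it that way.
    begin
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => user,
        tabname    => 'NEW_TABLE',
        method_opt => 'FOR ALL COLUMNS SIZE REPEAT',
        cascade    => TRUE);
    end;
    /
    -- Columns with num_buckets > 1 have a histogram; after a first run of
    -- SIZE REPEAT on a fresh table, none should.
    select column_name, num_buckets
    from   user_tab_col_statistics
    where  table_name = 'NEW_TABLE';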

  • Create LOV's for all columns at the time of folder creation

    Hi,
    I know we can automatically create the LOVs for all the columns of a database table when creating the folder in the EUL. Is this good practice, or should I create LOVs on demand? What is the disadvantage of creating more LOVs than will be used, other than the disk storage? I also know it is not a good idea to have an LOV on a field that has a lot of values. I am tempted not to create the LOVs in the beginning, but then it becomes a high-maintenance issue if I have to come back and create many LOVs later. Thank you for your input.

    Yes, it is not a good idea to create LOVs for all the columns. LOVs should be created only for the columns where parameters will be used. The more LOVs there are, the slower the response when opening your Discoverer reports, and if you are using Discoverer Viewer, the page might take a long time to open and frustrate the users.
    It is also good practice to define LOVs on lookup tables, or on columns with distinct values, rather than on the tables where those values are merely used.
    regards
    http://www.infocaptor.com <--- Free Discoverer Monitoring Dashboards

  • Max stock level and reorder point for all the materials

    Dear experts,
    Is there any standard MM report that will give the details of the maximum stock level and reorder point for all materials?
    Thanks in advance,
    Regards,
    Krish

    Your requirement can be pulled from table MARC. I don't think there is a standard report to meet it, but you can meet it through SAP Query.

  • Compare floating point values in a column

    I have a column defined as NUMBER. The values are, say, 1.2345 and 1.23443. Visually I can say they are almost the same, but if I do select count(distinct colA) from table it gives two values, which is perfectly right. What I want is: if they differ by less than 0.0001, count them as the same value.
    You could argue I should use ROUND or TRUNC to the nth decimal place, but I see an issue there even at two decimal places: if the 3rd digit after the decimal is greater than 5, it rounds the 2nd digit, and if I use TRUNC to n places, the last digit might end up smaller than in the other value, so the same issue again.
    However, my problem is how to do this when the values sit in 2 or 3 different rows of the same column. If they were in two columns, I could compute (COLA - COLB) and, if the value is < 0.00001, ignore or filter it from the query result.
    Example:
    ColA , ColB
    Variable1, 1.23453876
    Variable1, 1.2342736
    Variable1, 1.23420087
    Variable2, 10.250
    Variable2, 15.775
    Variable3, -1.93815994262695
    Variable3, -1.93815803527832
    Output: count of unique values for each variable in column A
    Variable1, 1
    Variable2, 2
    Variable3, 1
    OR
    Output: display only those variables whose values differ by more than 0.0001 (4 digits after the decimal)
    Variable2, 2
    I am using Oracle 10.2 Enterprise Edition and accessing the data via TOAD.
    Thanks

    Peter,
    Thank you for the SQL; I had not tried the LAG function before, so it was new to me. I modified it slightly to lag by 'partition query'. The issue I see is that it computes the delta only between consecutive values, so if there are 5 values in 5 rows, each step can be off by just under 0.0001 (some small number) and we will never catch those variables, because the case statement evaluates to '0' based on the if condition.
    However, for my case it works perfectly fine, since I wanted to separate small deltas from the really obvious big ones, as in the case below where the delta was 5.XX, which narrows it down to Variable2.
    with
    tab as
    (select 'Variable1' cola, 1.23453876 colb from dual union all
    select 'Variable3', -1.93815994262695 from dual union all
    select 'Variable1', 1.2342736 from dual union all
    select 'Variable2', 10.250 from dual union all
    select 'Variable2', 15.775 from dual union all
    select 'Variable1', 1.23420087 from dual union all
    select 'Variable3', -1.93815803527832 from dual union all
    select 'Variable4', 47.41209 from dual union all
    select 'Variable4', 47.41207 from dual)
    select *
    from
    (select
    cola, colb,
    lag(colb) over (partition by cola order by colb) prev_value,
    round(abs(colb - lag(colb) over (partition by cola order by colb)), 4) prev_delta,
    case when colb - nvl(lag(colb) over (partition by cola order by colb), colb) < 0.0001 then 0
    else 1 end dif_count
    from tab)
    --order by cola
    --group by cola
    COLA       COLB          PREV_VALUE    PREV_DELTA  DIF_COUNT
    Variable1  1.23420087                              0
    Variable1  1.2342736     1.23420087    0.0001      0
    Variable1  1.23453876    1.2342736     0.0003      1
    Variable2  10.25                                   0
    Variable2  15.775        10.25         5.525       1
    Variable3  -1.938159943                            0
    Variable3  -1.938158035  -1.938159943  0           0
    Variable4  47.41207                                0
    Variable4  47.41209      47.41207      0           0
    You guys rock! thank you.
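    For the multi-row drift caveat mentioned above (several values in a chain, each step under the tolerance), one variation is to start a new group only when the gap to the previous value exceeds the tolerance, then count groups per variable. A sketch against the same sample data (tab is the WITH clause from the query above):
    with deltas as
    (select cola, colb,
            colb - lag(colb) over (partition by cola order by colb) gap
     from tab)
    select cola,
           1 + sum(case when gap > 0.0001 then 1 else 0 end) values_within_tol
    from deltas
    group by cola;
    Note this still chains: a run of steps each under 0.0001 collapses into one group even if the endpoints differ by more, which is exactly the limitation discussed above.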

  • Premiere CC somehow erased all the in/out points for all project subclips

    I've been editing a project for over a couple of weeks now, and the first thing I did was go through all of the raw footage (a lot of it interview footage) and make subclips out of it for the 'beats'/segments, and to separate the b-roll footage in long files. This worked great, and I almost completely had the project edited in the sequence and was down to little tweaks to clean it up, when I opened the project today and wanted to go back and add a subclip I hadn't used. I clicked on the subclip to open it in the Source viewer, and the source file opened without the in/out markers. Just the subclip's source. No subclip.
    I started clicking through all of the others too. Hundreds of subclips, some used in the sequence, some not - all suddenly stripped of all in/out data. All of them merely referenced the entire, long source clips.
    How in the hell is this even possible? That's at least a couple of days' worth of logging work just GONE.
    I'm using fully updated versions of CC apps, as well as the Mac OS on a 2011 MacBook Pro and editing off a LaCie Rugged Drive.

    Nested sequences can do that, since they are references to another sequence. Specifically, if you open a nest by double-clicking on it in the new sequence and change the in and out points to make another edit, your changes may be reflected on your new timeline.

  • How do I set up my MacBook Pro (2009) as a wireless access point for all my wireless devices to access the Internet using a USB wireless adapter?

    I connect to the Internet with a USB Super Antenna from CCrane.
    I want to establish a network on my Mac to allow my other wireless devices to share that USB connection.
    Any ideas?

    Yes, that would work, however it would cost me $2000 for the monitors.

  • Sorting a Collection with dynamic columns using a custom compare function for multiple columns

    I need help and ideas on how to sort a ListCollectionView.  My problem is complicated by 3 requirements-
         1. The column values contain HTML tags that needs to be removed before sorting (use custom compareFunction to strip HTML)
         2. The columns are dynamic, so dataField names are not known at compile time (need a single compareFunction for all columns)
         3. The data is used in an AdvancedDataGrid so multi-column sorting is required
    I have figured out how to solve any 2 of the 3 requirements.  However, I am having difficulties supporting all 3 requirements.
    Any help or ideas would be greatly appreciated.  Thanks.

    After playing with this some more, I think I've figured out a solution. It seems to work in initial testing. Also, there is no need to capture the current sort column in the headerRelease event, which many offered solutions suggested. Another benefit of this solution is that keyboard-initiated sorting is handled as well, whereas the headerRelease event is only triggered by a mouse click on the column header, and special handling is required if the user uses the keyboard to reach the column header.
    One point that I don't understand is how ascending/descending order is determined. The behavior seems to differ between a single SortField and multiple SortFields. Notice how the compareResults are handled in the different situations. Anyone out there know why?
    private function colSortCompareFunction(obj1:Object, obj2:Object, fields:Array = null):int {
        var compareResults:int = 0;
        var newObj1:Object = new Object();
        var newObj2:Object = new Object();
        // should not be a condition that is met
        if (_dataProviderDetails.sort.fields == null) {
            var s:Sort = new Sort();
            var f:Function = s.compareFunction;
            return f.call(null, obj1, obj2, fields);
        }
        // when a single column is selected for sorting
        else if (_dataProviderDetails.sort.fields.length == 1) {
            var firstFld:SortField = _dataProviderDetails.sort.fields[0];
            newObj1[firstFld.name] = stripHTML(obj1[firstFld.name] as String);
            newObj2[firstFld.name] = stripHTML(obj2[firstFld.name] as String);
            compareResults = ObjectUtil.compare(newObj1[firstFld.name], newObj2[firstFld.name]);
            return compareResults;
        }
        // when multiple columns are selected for sorting
        else {
            for each (var fld:SortField in _dataProviderDetails.sort.fields) {
                newObj1[fld.name] = stripHTML(obj1[fld.name] as String);
                newObj2[fld.name] = stripHTML(obj2[fld.name] as String);
                compareResults = ObjectUtil.compare(newObj1[fld.name], newObj2[fld.name]);
                if (compareResults != 0) {
                    return fld.descending ? compareResults * -1 : compareResults;
                }
            }
            return compareResults;
        }
    }
    Does anyone see any problems with this solution?
    NOTE: stripHTML(String) is a simple function using a regular expression to remove HTML tags.
    Thx

  • Gather_table_stats with a method opt of "for all indexed columns size 0"

    I have 9 databases I support that contain the same structure and very similar data concentrations. We are seeing inconsistent performance in the different databases due to bind variable peeking. I have tracked it down to the MIN and MAX values that are gathered during the analyze. I analyze on one cluster and export/import those statistics into the other clusters, then lock down the gathered stats. Some of the statistics are on tables that contain transient data (the older data is purged, and new data gets a new PK sequence number).
    Since I am gathering statistics with 'FOR ALL INDEXED COLUMNS SIZE 1', a min and max value are grabbed. These values are only appropriate for a short period of time, and only for a specific database. I do want Oracle to know the density to help its calculations, but I don't want cardinality estimates based on whether the current bind values fall in this range.
    Example
    COLUMN PK
    When I analyze, the min is 1 and the max is 5. I then let the database run, and the new min is 100 and the max is 105: the same number of rows, but a different min/max. At first, select * from table where pk >= 1 and pk <= 5 would return a cardinality of 5; later, select * from table where pk >= 100 and pk <= 105 would return a cardinality of 1.
    Any ideas how to avoid this, other than trying to set the min and max to something myself (like min = 1, max = 99999999)?

    MarkDPowell wrote:
    The Oracle documentation on bind variable peeking said it did not peek without histograms, and I cannot remember ever seeing on 9.2 where the trace showed otherwise.
    Mark,
    see this simple test case run on 9.2.0.8. No histograms, but bind variable peeking nonetheless, as you can see from the fact that the EXPLAIN PLAN output generated by AUTOTRACE differs from the estimated cardinality of the actual plan used at runtime.
    Which documentation do you refer to?
    SQL>
    SQL> alter session set nls_language = 'AMERICAN';
    Session altered.
    SQL>
    SQL> drop table bind_peek_test;
    Table dropped.
    SQL>
    SQL> create table bind_peek_test
      2  as
      3  select
      4             100 as n1
      5           , cast(dbms_random.string('a', 20) as varchar2(20)) as filler
      6  from
      7             dual
      8  connect by
      9             level <= 1000;
    Table created.
    SQL>
    SQL> exec dbms_stats.gather_table_stats(null, 'bind_peek_test', method_opt=>'FOR ALL COLUMNS SIZE 1')
    PL/SQL procedure successfully completed.
    SQL>
    SQL> variable n number
    SQL>
    SQL> variable n2 number
    SQL>
    SQL> alter system flush shared_pool;
    System altered.
    SQL>
    SQL> exec :n := 1; :n2 := 50;
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from bind_peek_test where n1 >= :n and n1 <= :n2;
    no rows selected
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1000 Bytes=24
              000)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'BIND_PEEK_TEST' (Cost=2 Card=100
              0 Bytes=24000)
    Statistics
            236  recursive calls
              0  db block gets
             35  consistent gets
              0  physical reads
              0  redo size
            299  bytes sent via SQL*Net to client
            372  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SQL>
    SQL> set autotrace off
    SQL>
    SQL> select
      2             cardinality
      3  from
      4             v$sql_plan
      5  where
      6             cardinality is not null
      7  and      hash_value in (
      8    select
      9            hash_value
    10    from
    11            v$sql
    12    where
    13            sql_text like 'select * from bind_peek_test%'
    14    );
    CARDINALITY
              1
    SQL>
    SQL> alter system flush shared_pool;
    System altered.
    SQL>
    SQL> exec :n := 100; :n2 := 100;
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from bind_peek_test where n1 >= :n and n1 <= :n2;
    1000 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1000 Bytes=24
              000)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'BIND_PEEK_TEST' (Cost=2 Card=100
              0 Bytes=24000)
    Statistics
            236  recursive calls
              0  db block gets
            102  consistent gets
              0  physical reads
              0  redo size
          34435  bytes sent via SQL*Net to client
           1109  bytes received via SQL*Net from client
             68  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
           1000  rows processed
    SQL>
    SQL> set autotrace off
    SQL>
    SQL> select
      2             cardinality
      3  from
      4             v$sql_plan
      5  where
      6             cardinality is not null
      7  and      hash_value = (
      8    select
      9            hash_value
    10    from
    11            v$sql
    12    where
    13            sql_text like 'select * from bind_peek_test%'
    14    );
    CARDINALITY
           1000
    SQL>
    SQL> spool off
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
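    Back to the original question's fallback of setting min and max manually: DBMS_STATS supports exactly that through PREPARE_COLUMN_VALUES and SET_COLUMN_STATS. A minimal sketch, with MY_TABLE and PK as placeholder names:
    declare
      srec    dbms_stats.statrec;
      numvals dbms_stats.numarray;
    begin
      -- the two endpoint values become the column's stored low/high
      numvals := dbms_stats.numarray(1, 99999999);
      srec.epc := 2;
      srec.bkvals := null;
      dbms_stats.prepare_column_values(srec, numvals);
      dbms_stats.set_column_stats(
        ownname => user,
        tabname => 'MY_TABLE',
        colname => 'PK',
        srec    => srec);
    end;
    /
    Combined with the stats locking already in place, this keeps the stored low/high from drifting as the PK range moves.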

  • Multi-column Index vs One index for each column

    Hello everyone,
    I have one table of about 20,000,000 rows; some developers have to generate reports on it, and I want to create indexes on the table.
    The table has 34 columns, no primary key, no unique keys.
    The WHERE clause of the reports usually uses 8 columns, but some reports use those 8 plus some other columns.
    Can anyone advise me on what kind of indexes I should create?
    1. one index for each column used in the WHERE clause
    2. one index for the 8 columns and some other indexes for the other used columns
    3. one index for all columns
    or something else, etc.
    br flag

    What's the version of your database? What kind of database do you have, DWH or OLTP? The answer might depend on it, as bitmap indexes might or might not suit, depending on whether you are running a DWH or an OLTP kind of database.
    Let me suppose that you are running an OLTP database and you have a where clause with 8 columns.
    1) Are all those where-clause predicates equalities (where col1 = ... and col2 = ...), or are there inequalities?
    2) Could you evaluate which columns are the most repetitive?
    3) Do you know which column has the best clustering factor (the column which most closely follows the physical order of the rows in the table)?
    Based on that, I would suggest creating one b-tree index on the 8 columns (even though that seems high to me). This index should follow these points:
    1) put the most repetitive column at the leading edge (and compress the index if necessary)
    2) put the columns that are used in equality predicates first
    3) put the column having the best clustering factor first
    The more precise the index, the better the access you can gain.
    Of course, you have to know that an index access is not always good and a FULL table scan is not always bad.
    Best regards
    Mohamed Houri
    www.hourim.wordpress.com
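    A sketch of the kind of composite index described above, with placeholder table/column names (equality, repetitive, well-clustered columns leading; the range column trailing):
    -- leading columns: repetitive, equality-predicate, good clustering factor;
    -- trailing column: the range/inequality predicate
    create index rpt_ix on big_table
      (col_eq1, col_eq2, col_eq3, col_range1)
      compress 2;  -- compress the repetitive leading columns
    -- the clustering factor can be checked afterwards:
    select index_name, clustering_factor
    from   user_indexes
    where  index_name = 'RPT_IX';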

  • Compare floating point numbers

    Hello,
    I'm using TestStand and LabVIEW.
    What is the best way to compare floating point numbers in a TestStand flow?
    Thanks in advance.

    If the numbers are already available in TestStand and they matter for the report, I suggest you make the comparison in TestStand.
    The simplest way might be to use a Numeric Limit Step when reading the data from the second instrument. I think this is what you are really looking for.
    In regard to the comparison: you should define a maximum "jitter" (or hysteresis, or whatever you call it) around your desired value and check for this "corridor". "Greater or equal / less or equal" would be the comparison type for the Numeric Limit Step.
    You should not alter the value in LabVIEW if you require the value, the limits, and the result of this comparison to be part of the report.
    hope this helps,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Saving floating point images

    I currently use averaging to get a noise-mitigated image in single precision. I do all calculations with the data in array format, but I would like to save the averaged images in floating point for posterity's sake. I have noticed that I can display images of datatype Grayscale (SGL) no problem, but I have found no way to save the image without rounding the data to integer format. I think I'm out of luck, but I wanted to see if anyone knew of another way.

    MikeBoso wrote:
    Whoops, I forgot to mention a key detail in that the images would need to be viewed by users without LabVIEW.
    When wonky stuff is observed we use ImageJ for image manipulation.
    That was indeed a key detail. Have you had a look at the TIFF format? I know that TIFF can handle FP images, but I don't know whether the corresponding readers exist (Photoshop?). Of course, you'll have to develop your own file saver, since the IMAQ Vision TIFF file save is restricted to integers.
    Chilly Charly    (aka CC)
             E-List Master - Kudos glutton - Press the yellow button on the left...        

  • How can I write the below code using "For all entries"

    Hi
    How can we write the below code using "for all entries" and avoid the joins?
    Please help
    SELECT a~aufnr a~objnr a~auart a~txjcd a~pspel
           a~gstrp a~werks c~arbpl c~werks
      INTO TABLE t_caufv
      FROM caufv AS a
      INNER JOIN afih AS b
        ON a~aufnr = b~aufnr
      INNER JOIN crhd AS c
        ON b~gewrk = c~objid
       AND c~objty = 'D'
      WHERE ( a~pspel = space
           OR a~txjcd = space
           OR NOT a~objnr IN
              ( SELECT objnr FROM cobrb AS e
                 WHERE objnr = a~objnr ) )
        AND a~werks IN s_plant
        AND a~auart IN s_wtype
        AND NOT a~objnr IN
            ( SELECT objnr FROM jest AS d
               WHERE objnr = a~objnr
                 AND ( d~stat = 'A0081' OR d~stat = 'A0018' )
                 AND d~inact <> 'X' ).
    Reward points for all helpful answers.
    Thanks
    Ammi.

    Hi,
    SELECT objnr objid aufnr
            from afih
            into table t_afih.
    SELECT objnr
            from JEST
            into table t_JEST
            where stat = 'A0045'
               OR stat = 'A0046'
               AND inact <> 'X'.
    SELECT objnr
            from COBRB
            into table t_cobrb.
    SELECT arbpl werks objid objty
          from crhd
          INTO table it_crhd
          FOR ALL ENTRIES IN it_afih
          WHERE objty eq 'D'
          AND gewrk = it_afih-objid.
    SELECT aufnr objnr auart txjcd pspel gstrp werks
            FROM caufv
            INTO table t_caufv
            FOR ALL ENTRIES IN it_afih
            WHERE aufnr = it_afih-aufnr
              AND pspel = ' '
              AND txjcd = ' '
              AND objnr NE it_crhd-objnr
              AND auart IN s_wtype
              AND werks IN s_plant
              AND objnr NE it_jest-objnr.
    Don't use NE in the SELECT statements; it may also affect performance. Instead, use IF statements inside loops:
    loop at t_caufv.
      read table it_crhd ............
      if t_caufv-objnr ne it_crhd-objnr.
        read table it_jest ..........
        if t_caufv-objnr ne it_jest-objnr.
          (proceed further).
        endif.
      endif.
    endloop.
    Hope this helps. Reward if useful.
    Regards,
    Anu

  • Floating Point # in MIDP & CLDC...

    What is the way to use floating point numbers in MIDP & CLDC?
    MIDP and CLDC have no built-in support for them.
    Please help me...

    Simple... don't :P
    You've got fixed-point libs available, but they're not exactly fast.
    It is possible to recompile the KVM and enable floating point support, but none of the embedded implementations have this support, so it's a pointless exercise.
    What do you need floating point for?

  • ALV: F4 help for a column wanted, but without any check

    Hello,
    I have an ALV and I use a DDIC structure for the field catalog. My requirement is that for all columns in the ALV, the F4 help should appear normally (using the DDIC information), but I do not want an automatic check of the value entered by the user.
    So e.g. if the possible values in F4 are 1, 2, 3, the ALV throws an error if the user leaves the field blank (because blank is not a possible value for the domain of the field). How can I bypass this automatic check by the ALV (while still keeping the F4 help)?
    Regards,
    Mohit

    Hi, I hope you know the difference between a value table and a check table.
    In this case you have to give the value table as the F4 reference, not the check table.
    Consider the field Plant: if you define Plant with reference to the master table T001W and you enter a value that is not in T001W, it gives an error, because T001W is a check table there. Whereas if you define Plant with reference to MARC, then MARC is a value table, and no error is raised if you enter a plant value that is not in MARC.
    Try to find the value tables for your fields and use them; then it will work.
