Query on Value Table creation

Dear Experts,
I am working on an ECC (sender Business System) <----> SAP PI 7.31 single-stack (Java-only) <----> JDBC receiver (Business Component) scenario. In the response from JDBC I have to maintain a value table for Gender.
The JDBC values and the corresponding values I have to maintain for ECC are:
JDBC    ECC
1       Male
2       Female
3       Trans
4       Not Declared
I referred to the document "How to perform Value Mapping - A Walkthrough" by Sharath Chandra and other related documents, but I still have issues with the basic understanding. When I created a Value Mapping from the Tools option in the ID, it opened a pop-up window asking for Agency and Scheme for Sender and Receiver.
1. What value should I enter for Agency? Is it the Business System or Business Component for sender and receiver, or just a string representing a meaningful name for the source and receiver?
2. What value should I enter for Scheme? Is it the Message Type for sender/receiver, or just a string representing a meaningful name (here Gender1 and Gender2)?
3. After saving the Value Mapping, it asks for a Group Name for each value pair, i.e. 4 Group Names. If I don't provide one, it is saved as a GUID.
4. In the ESR, in the mapping part, it asks for Source and Target Schema. I assume this would be the same Scheme name that we maintain in the ID?
5. I have to create another value mapping table for Customer Number, with the same source and receiver, in the same message mapping, but with 3 columns. How do I add columns to the table?
I feel these are basic questions, but I need guidance from the experts.
Please answer my queries in sequence.
Regards
Rebecca

Hi Rebecca,
1. What value should I enter for Agency? Is it the Business System or Business Component for sender and receiver, or just a string representing a meaningful name for the source and receiver?
-->> I would suggest using the sender system name as the sender agency (e.g. JDBC, or a functional name of the system) and the receiver system name as the receiver agency (e.g. SAP).
2. What value should I enter for Scheme? Is it the Message Type for sender/receiver, or just a string representing a meaningful name (here Gender1 and Gender2)?
-->> The scheme should describe the name/purpose of the value mapping. In your case it can be Gender for both the sender and the receiver scheme.
3. After saving the Value Mapping, it asks for a Group Name for each value pair, i.e. 4 Group Names. If I don't provide one, it is saved as a GUID.
-->> It will be populated by the system if you don't provide one.
4. In the ESR, in the mapping part, it asks for Source and Target Schema. I assume this would be the same Scheme name that we maintain in the ID?
-->> Yes, it is the same as maintained in the ID.
5. I have to create another value mapping table for Customer Number, with the same source and receiver, in the same message mapping, but with 3 columns. How do I add columns to the table?
-->> A value mapping can have only two columns: key and value.
regards,
Harish

Similar Messages

  • Connecting Query variable with Table Values

    Hi all,,
    I'm trying to connect a query variable to a table column's contents. The problem is that there is an additional Date input which is single-value. As a result, the query does not get executed; it gives an error that it expects a single value for the date field.
    Am I doing something wrong. Please guide.
    Regds,
    Srinivasan.

    Hi Srinivasan,
    Sorry, but I didn't quite get what you are trying to do. Do you have a query to which you are passing multiple values from a table, or are you selecting a single row in the table and passing it to the query? How is the variable defined in the query?
    Thanks,
    Regards,
    Vikram

  • How to assign a query-retrieved value to a user-defined object in a table

    How do I assign a query-retrieved value to a user-defined object in a table?

    Rajeshwar,
    If you use the "Search" feature in this forum, you should be able to find helpful links to similar questions. You could also look at the RecordSet and DoQuery documentation in the SAP Business One SDK Help Center to assist you with your question.
    HTH,
    Eddy

  • VO query with SQL table function (MS SQL) and update of parameter value

    Hello All,
    I use VO where Query is the SQL function that returns a table (MS SQL Server):
    select * from getData('XXXX')
    where 'XXXX' should be bound to a specific ID (e.g. the selected row in the tree view on the other facet).
    More details: I have two facets. The left facet contains a tree view that displays the list of projects grouped by industry. The right facet contains a pivot table (its source query is the 'table function' from above).
    I want the pivot table (right facet) to be updated with the data specific to the project (the selected row in the tree).
    I know about the possibility to dynamically update the where clause (setNamedWhereClauseParam method), but in my case there is no ‘where’ clause. Is there any kind of ‘bind variable’ to be used to refresh the data?
    Any guidance, ideas, examples are greatly welcome.
    With best wishes,
    Anatol

    Are the columns that are being selected different each time or the same?
    If they are the same then you don't need to do any additional binding - the pivot table is connected to the VO at design time and it wouldn't care if the underlying from/where have changed.
    If on the other hand your columns change (but the number of them stay the same) then use aliases for the initial definition (select ename a, id b, salary c) and then when you change the VO change the mapping but keep the names (select dname a, moo b, foo c).
    If the whole query is changing and you don't know in advance the structure - then there is no out of the box solution with a pivot table - you'll need to code a backing bean that will provide the data model to the pivot table based on your query.
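    To make the aliasing idea concrete, here is a minimal sketch against the getData table function from the post (the column names, the 'P100'/'P200' parameters, and the dbo schema are only placeholders assumed for illustration):
    -- first shape of the VO query: the aliases a, b, c are what the pivot table binds to
    select t.project_name a, t.period b, t.amount c
    from   dbo.getData('P100') t
    -- later the query is repointed to other columns (or another parameter),
    -- but the aliases stay the same, so the pivot table binding is untouched
    select t.dept_name a, t.quarter b, t.forecast c
    from   dbo.getData('P200') t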

  • Slow table creation after upgrade from 10.2.0.3 to 11.2.0.1 using DBUA

    I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
    I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
    By chance has anyone encountered something similar?
    The problem may be related to a difference, between the 10g database and the database that was upgraded from 10g to 11g, in the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call. Currently researching this angle.
    I will be using the following table creation DDL for this abbreviated test case:
    {code}
    create table ALLIANCE  (
       ALLIANCEID           NUMBER(10)                      not null,
       NAME                 VARCHAR2(40)                    not null,
       CREATION_DATE        DATE,
       constraint PK_ALLIANCE primary key (ALLIANCEID)
               using index
           tablespace LIVE_INDEX
    )
    tablespace LIVE_DATA;
    {code}
    When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately and grant select privilege to a role on the table that was just created:
    {code}
    create or replace
    trigger select_grant
    after CREATE on schema
    declare
        l_str varchar2(255);
        l_job number;
    begin
        if ( ora_dict_obj_type = 'TABLE' ) then
            l_str := 'execute immediate "grant select on ' ||
                                         ora_dict_obj_name ||
                                        ' to select_role";';
            dbms_job.submit( l_job, replace(l_str,'"','''') );
        end if;
    end;
    {code}
    Below I've included data from two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
    The version of the database is 11.2.0.1.
    These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
    {code}
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      11.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    SQL> show parameter db_file_multi
    NAME                                 TYPE        VALUE
    db_file_multiblock_read_count        integer     8
    SQL> show parameter db_block_size
    NAME                                 TYPE        VALUE
    db_block_size                        integer     8192
    SQL> show parameter cursor_sharing
    NAME                                 TYPE        VALUE
    cursor_sharing                       string      EXACT
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                PNAME                     PVAL1 PVAL2
    SYSSTATS_INFO        STATUS                          COMPLETED
    SYSSTATS_INFO        DSTART                          03-11-2010 16:33
    SYSSTATS_INFO        DSTOP                           03-11-2010 17:03
    SYSSTATS_INFO        FLAGS                         0
    SYSSTATS_MAIN        CPUSPEEDNW           713.978495
    SYSSTATS_MAIN        IOSEEKTIM                    10
    SYSSTATS_MAIN        IOTFRSPEED                 4096
    SYSSTATS_MAIN        SREADTIM               1565.746
    SYSSTATS_MAIN        MREADTIM
    SYSSTATS_MAIN        CPUSPEED                   2310
    SYSSTATS_MAIN        MBRC
    SYSSTATS_MAIN        MAXTHR
    SYSSTATS_MAIN        SLAVETHR
    13 rows selected.
    {code}
    Output from TKPROF on the 11g SID:
    {code}
    create table ALLIANCE  (
       ALLIANCEID           NUMBER(10)                      not null,
       NAME                 VARCHAR2(40)                    not null,
       CREATION_DATE        DATE,
       constraint PK_ALLIANCE primary key (ALLIANCEID)
               using index
           tablespace LIVE_INDEX
    tablespace LIVE_DATA
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          4           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          4           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 324
    {code}
    ... large section omitted ...
    Here is the performance hit portion of the TKPROF on the 11g SID:
    {code}
    SQL ID: fsbqktj5vw6n9
    Plan Hash: 1443566277
    select next_run_date, obj#, run_job, sch_job
    from
    (select decode(bitand(a.flags, 16384), 0, a.next_run_date,
      a.last_enabled_time) next_run_date,       a.obj# obj#,
      decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job  sch_job  from
      (select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
      p.job_status job_status, p.class_oid class_oid,      p.last_enabled_time
      last_enabled_time, p.instance_id instance_id,      1 sch_job   from
      sys.scheduler$_job p   where bitand(p.job_status, 3) = 1    and
      ((bitand(p.flags, 134217728 + 268435456) = 0) or
      (bitand(p.job_status, 1024) <> 0))    and bitand(p.flags, 4096) = 0    and
      p.instance_id is NULL    and (p.class_oid is null      or (p.class_oid is
      not null      and p.class_oid in (select b.obj# from sys.scheduler$_class b
                               where b.affinity is null)))   UNION ALL   select
      q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
      q.last_enabled_time, q.instance_id, 1   from sys.scheduler$_lightweight_job
      q   where bitand(q.job_status, 3) = 1    and ((bitand(q.flags, 134217728 +
      268435456) = 0) or         (bitand(q.job_status, 1024) <> 0))    and
      bitand(q.flags, 4096) = 0    and q.instance_id is NULL    and (q.class_oid
      is null      or (q.class_oid is not null      and q.class_oid in (select
      c.obj# from sys.scheduler$_class c                          where
      c.affinity is null)))   UNION ALL   select j.job, 0,
      from_tz(cast(j.next_date as timestamp),      to_char(systimestamp,'TZH:TZM')
      ), 1, NULL,      from_tz(cast(j.next_date as timestamp),
      to_char(systimestamp,'TZH:TZM')),     NULL, 0   from sys.job$ j   where
      (j.field1 is null or j.field1 = 0)    and j.this_date is null) a   order by
      1)   where rownum = 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.47       0.47          0       9384          0           1
    total        3      0.48       0.48          0       9384          0           1
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
          1   VIEW  (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
          1    SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
    194790     VIEW  (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
    194790      UNION-ALL  (cr=9384 pr=0 pw=0 time=439235 us)
        231       FILTER  (cr=68 pr=0 pw=0 time=920 us)
        231        TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
          1        TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
          1         INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
          0       FILTER  (cr=3 pr=0 pw=0 time=0 us)
          0        TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
          0        TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
          0         INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
    194559       TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
    {code}
    and the totals at the end of the TKPROF on the 11g SID:
    {code}
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          4           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      0.00       0.00          0          0          4           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       70      0.00       0.00          0          0          0           0
    Execute     85      0.01       0.01          0         62        208          37
    Fetch       49      0.48       0.49          0       9490          0          35
    total      204      0.51       0.51          0       9552        208          72
    Misses in library cache during parse: 5
    Misses in library cache during execute: 3
       35  user  SQL statements in session.
       53  internal SQL statements in session.
       88  SQL statements in session.
    Trace file: 11gSID_ora_17721.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
          35  user  SQL statements in trace file.
          53  internal SQL statements in trace file.
          88  SQL statements in trace file.
          51  unique SQL statements in trace file.
        1590  lines in trace file.
          18  elapsed seconds in trace file.
    {code}
    The version of the database is 10.2.0.3.0.
    These are the parameters relevant to the optimizer for the test run on the 10g SID:
    {code}
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.3
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter db_file_multi
    NAME                                 TYPE        VALUE
    db_file_multiblock_read_count        integer     8
    SQL> show parameter db_block_size
    NAME                                 TYPE        VALUE
    db_block_size                        integer     8192
    SQL> show parameter cursor_sharing
    NAME                                 TYPE        VALUE
    cursor_sharing                       string      EXACT
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                PNAME                     PVAL1 PVAL2
    SYSSTATS_INFO        STATUS                          COMPLETED
    SYSSTATS_INFO        DSTART                          09-24-2007 11:09
    SYSSTATS_INFO        DSTOP                           09-24-2007 11:09
    SYSSTATS_INFO        FLAGS                         1
    SYSSTATS_MAIN        CPUSPEEDNW           2110.16949
    SYSSTATS_MAIN        IOSEEKTIM                    10
    SYSSTATS_MAIN        IOTFRSPEED                 4096
    SYSSTATS_MAIN        SREADTIM
    SYSSTATS_MAIN        MREADTIM
    SYSSTATS_MAIN        CPUSPEED
    SYSSTATS_MAIN        MBRC
    SYSSTATS_MAIN        MAXTHR
    SYSSTATS_MAIN        SLAVETHR
    13 rows selected.
    {code}
    Now for the TKPROF of a mirrored test environment running on a 10G SID:
    {code}
    create table ALLIANCE  (
       ALLIANCEID           NUMBER(10)                      not null,
       NAME                 VARCHAR2(40)                    not null,
       CREATION_DATE        DATE,
       constraint PK_ALLIANCE primary key (ALLIANCEID)
               using index
           tablespace LIVE_INDEX
    tablespace LIVE_DATA
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          2         16           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.01          0          2         16           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 113
    {code}
    ... large section omitted ...
    Totals for the TKPROF on the 10g SID:
    {code}
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      1      0.00       0.00          0          2         16           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.02          0          2         16           0
    Misses in library cache during parse: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       65      0.01       0.01          0          1         32           0
    Execute     84      0.04       0.09         20         90        272          35
    Fetch       88      0.00       0.10         30        281          0          64
    total      237      0.07       0.21         50        372        304          99
    Misses in library cache during parse: 38
    Misses in library cache during execute: 32
       10  user  SQL statements in session.
       76  internal SQL statements in session.
       86  SQL statements in session.
    Trace file: 10gSID_ora_32003.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
          10  user  SQL statements in trace file.
          76  internal SQL statements in trace file.
          86  SQL statements in trace file.
          43  unique SQL statements in trace file.
         949  lines in trace file.
           0  elapsed seconds in trace file.
    {code}

    So while this certainly isn't the most elegant of solutions, and most assuredly isn't supported by Oracle...
    I've used the DBMS_IJOB.DROP_USER_JOBS('username') procedure to remove the 194558 orphaned job entries from the job$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
    Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan:
    {code}
    CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
    {code}
    The next option would be to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. That method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
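    For example, a minimal sketch of doing the grants in one pass at the end of the rebuild script instead of from the DDL trigger (select_role is the role from the trigger above; looping over user_tables is just an assumption about how the script could finish):
    {code}
    begin
      -- grant once per table after the drop/recreate run, instead of one dbms_job per CREATE
      for t in (select table_name from user_tables) loop
        execute immediate 'grant select on "' || t.table_name || '" to select_role';
      end loop;
    end;
    /
    {code}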
    I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
    "Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table."

  • Pick up the Latest Result Date (Query involves 3 tables)

    Hello folks,
    I have a relatively simple request. It involves 3 simple tables. I want to return the Student along with the Results based on the Latest Result Date and a Flag. Any help is appreciated.
    Thanks
    Assumptions
    STUDENT_ID is unique in STUDENT_TB
    EXAM_VISIT_ID is Unique in EXAM_RESULTS_TB.
    Scripts for table creation and INSERT statements
    create table student_tb (student_id varchar2(4));
    create table exam_tb (student_id varchar2(4), exam_visit_id number, exam_date date);
    create table exam_results (exam_visit_id number, result_date date, results number, result_flag varchar2(1));
    insert into student_tb values('1001');
    insert into student_tb values('1002');
    Insert into EXAM_TB (STUDENT_ID,EXAM_DATE,EXAM_VISIT_ID) values ('1001',to_date('01-JAN-12','DD-MON-RR'),1);
    Insert into EXAM_TB (STUDENT_ID,EXAM_DATE,EXAM_VISIT_ID) values ('1001',to_date('01-APR-12','DD-MON-RR'),2);
    Insert into EXAM_TB (STUDENT_ID,EXAM_DATE,EXAM_VISIT_ID) values ('1001',to_date('01-JUL-12','DD-MON-RR'),3);
    Insert into EXAM_TB (STUDENT_ID,EXAM_DATE,EXAM_VISIT_ID) values ('1001',to_date('01-SEP-12','DD-MON-RR'),4);
    Insert into EXAM_RESULTS_TB (EXAM_VISIT_ID,RESULT_DATE,FLAG,RESULTS) values (1,to_date('01-FEB-12','DD-MON-RR'),'N',10);
    Insert into EXAM_RESULTS_TB (EXAM_VISIT_ID,RESULT_DATE,FLAG,RESULTS) values (2,to_date('01-MAY-12','DD-MON-RR'),'Y',40);
    Insert into EXAM_RESULTS_TB (EXAM_VISIT_ID,RESULT_DATE,FLAG,RESULTS) values (3,to_date('01-AUG-12','DD-MON-RR'),'Y',32);
    Insert into EXAM_RESULTS_TB (EXAM_VISIT_ID,RESULT_DATE,FLAG,RESULTS) values (4,to_date('01-DEC-12','DD-MON-RR'),'N',20);
    I would like to pick up the Student along with the RESULT_DATE and RESULTS where the RESULT_DATE (EXAM_DATE is irrelevant for this exercise) is the most recent for that Student and the flag = 'Y'. EXAM_VISIT_ID is unique. So, the record that needs to be returned from this result set is
    1001 3 01-AUG-12 32

    Hi,
    That's an example of a Top-N Query , and here's one way to do it:
    WITH got_r_num AS
    (
         SELECT  e.student_id, e.exam_visit_id
         ,       er.result_date, er.results
         ,       RANK () OVER ( PARTITION BY  e.student_id
                                ORDER BY      er.result_date  DESC
                              )               AS r_num
         FROM    exam_tb          e
         JOIN    exam_results_tb  er  ON  er.exam_visit_id  = e.exam_visit_id
         WHERE   er.flag  = 'Y'
    )
    SELECT  student_id, exam_visit_id
    ,       result_date, results
    FROM    got_r_num
    WHERE   r_num  = 1
    ;
    Thanks for posting the CREATE TABLE and INSERT statements. Please make sure they work before posting them. There is some confusion about the table (EXAM_RESULTS or EXAM_RESULTS_TB) and column (FLAG or RESULT_FLAG) names.
    What results do you want if there is a tie for the most recent exam for a given student? (E.g., if we change the result_date on this row
    Insert into EXAM_RESULTS_TB (EXAM_VISIT_ID,RESULT_DATE,FLAG,RESULTS) values (2,to_date('01-MAY-12','DD-MON-RR'),'Y',40);
    to be August 1, 2012, just like the row with exam_visit_id=3)?
    Depending on your requirements, you may want to add tie-breaking columns to the end of the analytic ORDER BY clause, and/or use ROW_NUMBER instead of RANK.
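    For instance, a minimal sketch of the ROW_NUMBER variant with a tie-breaking column (using exam_visit_id as the tie-breaker is just an assumption for illustration):
    WITH got_r_num AS
    (
         SELECT  e.student_id, e.exam_visit_id
         ,       er.result_date, er.results
         ,       ROW_NUMBER () OVER ( PARTITION BY  e.student_id
                                      ORDER BY      er.result_date    DESC
                                      ,             er.exam_visit_id  DESC   -- tie-breaker
                                    )               AS r_num
         FROM    exam_tb          e
         JOIN    exam_results_tb  er  ON  er.exam_visit_id  = e.exam_visit_id
         WHERE   er.flag  = 'Y'
    )
    SELECT  student_id, exam_visit_id, result_date, results
    FROM    got_r_num
    WHERE   r_num = 1;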

  • Table creation - order of events

    I am trying to get some help on the order I should be carrying out table creation tasks.
    Say I create a simple table:
    create table title (
    title_id number(2) not null,
    title varchar2(10) not null,
    effective_from date not null,
    effective_to date not null,
    constraint pk_title primary key (title_id)
    );
    I believe I should populate the data, then create my index:
    create unique index title_title_id_idx on title (title_id asc)
    But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
    At what point does Oracle create the index on my behalf and how do I stop it?
    Should I only apply the primary key constraint after the data has been loaded as well?
    Even then, if I add the primary key constraint will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?

    yeah but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
    SQL> select index_name, uniqueness from user_indexes
      2  where table_name = 'APC'
      3  /
    no rows selected
    SQL> insert into apc values (1)
      2  /
    1 row created.
    SQL> insert into apc values (2)
      2  /
    1 row created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  /
    Table altered.
    SQL> insert into apc values (2)
      2  /
    insert into apc values (2)
    ERROR at line 1:
    ORA-00001: unique constraint (APC.APC_PK) violated
    SQL> alter table apc drop constraint apc_pk
      2  /
    Table altered.
    SQL> insert into apc values (2)
      2  /
    1 row created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  /
    alter table apc add constraint apc_pk primary key (col1)
    ERROR at line 1:
    ORA-02437: cannot validate (APC.APC_PK) - primary key violated
    SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
    Table created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  exceptions into EXCEPTIONS
      4  /
    alter table apc add constraint apc_pk primary key (col1)
    ERROR at line 1:
    ORA-02437: cannot validate (APC.APC_PK) - primary key violated
    SQL> select * from apc where rowid in ( select row_id from exceptions)
      2  /
          COL1
             2
             2
    SQL>
    All this is in the documentation. Find out more.
    Cheers, APC
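    To put the order of events from the original question together, here is a minimal sketch of the load-first approach (table and index names reused from the post; treat it as an illustration, not a recommendation):
    -- 1. create the table without the primary key constraint
    create table title (
      title_id        number(2)    not null,
      title           varchar2(10) not null,
      effective_from  date         not null,
      effective_to    date         not null
    );
    -- 2. load the data
    -- 3. create the index explicitly, following your own naming convention
    create unique index title_title_id_idx on title (title_id asc);
    -- 4. add the constraint; Oracle reuses the existing unique index
    --    instead of silently creating its own
    alter table title add constraint pk_title primary key (title_id)
      using index title_title_id_idx;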

  • How to save the result from EXECUTE of a dynamic SQL query into another table?

    Hi everyone, 
    I have this query:
    declare @query varchar(max) = ''
    declare @par varchar(10)
    SELECT @par = col1 FROM Set
    declare @region varchar(50)
    SELECT @region = Region FROM Customer
    declare @key int
    SELECT @key = CustomerKey FROM Customer
    SET @query = 'SELECT CustomerKey FROM Customer where ' + @par + ' = '+ @key+ ' '
    EXECUTE (@query)
    With this query I want to get col1 from Set and compare it to the Region column from Customer. I would like to get the matching CustomerKey for it.
    After execution it says the commands completed successfully, but I want to save the result from @query in another table. I looked it up and most people say to use sp_executesql. I tried a few constructions from the samples and I always get this error:
    Msg 214, Level 16, State 2, Procedure sp_executesql, Line 12
    Procedure expects parameter '@statement' of type 'ntext/nchar/nvarchar'.
    So the output should be a list of CustomerKeys in another table.
    How can I save the results from EXECUTE into a variable? Then I assume I can INSERT INTO ... SELECT into another table.
    Thanks

    CREATE TABLE Customer
    (CustomerKey INT , Name NVARCHAR(100));
    GO
    INSERT dbo.Customer
    VALUES ( 1, N'Sam' )
    GO
    DECLARE @query nvarchar(max) = ''
    declare @par varchar(10) = 'Name',
    @key varchar(10) = 'Sam'
    CREATE TABLE #temp ( CustomerKey INT );
    SET @query = N'
    insert #temp
    SELECT CustomerKey
    FROM Customer
    where ' + @par + ' = '''+ @key+ ''' '
    PRINT @query
    EXEC sp_executesql @query
    SELECT *
    FROM #temp
    DROP TABLE #temp;
    DROP TABLE dbo.Customer
    Cheers,
    Saeid Hasani
    Database Consultant
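    A side note on the original error: Msg 214 was raised because @query was declared as varchar, while sp_executesql requires an nvarchar (ntext/nchar/nvarchar) statement. A minimal sketch of the same lookup with the value passed as a parameter instead of being concatenated into the string (table and column names reused from the thread):
    DECLARE @query nvarchar(max),
            @par   sysname       = N'Name',   -- column to filter on
            @key   nvarchar(100) = N'Sam';    -- value to match
    CREATE TABLE #temp (CustomerKey int);
    -- only the column name is concatenated (and quoted); the value stays a real parameter
    SET @query = N'insert into #temp
                   select CustomerKey from dbo.Customer
                   where ' + QUOTENAME(@par) + N' = @k';
    EXEC sp_executesql @query, N'@k nvarchar(100)', @k = @key;
    SELECT * FROM #temp;
    DROP TABLE #temp;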

  • Dynamic Internal Table creation and population

    Hi gurus !
    my issue refers to slide 10 of this presentation: https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b332e090-0201-0010-bdbd-b735e96fe0ae
    My example is gonna sound dumb, but anyway: I want to dynamically select from a table into a dynamically created itab.
    Let's use only EKPO, and only the field MENGE.
    For this, I use the classes cl_abap_elemdescr and cl_sql_result_set and a data reference for the table creation. But while fetching the result set, the program dumps when fields like MENGE or WRBTR are accessed. Obviously their types are not correctly taken into account by my program.
    Here it comes:
    DATA: element_ref      TYPE REF TO cl_abap_elemdescr,
          vl_fieldname     TYPE string,
          tl_components    TYPE abap_component_tab,
          sl_components    LIKE LINE OF tl_components,
          linetype_lcl     TYPE REF TO cl_abap_structdescr,
          ty_table_type    TYPE REF TO cl_abap_tabledescr,
          g_resultset      TYPE REF TO cl_sql_result_set
    ...
    CONCATENATE sg_columns-table_name '-' sg_columns-column_name INTO vl_fieldname.
    * sg_columns-table_name contains 'EKPO'
    * sg_columns-column_name contains 'MENGE'
    * getting the element as a component
    element_ref ?= cl_abap_elemdescr=>describe_by_name( vl_fieldname ).
    sl_components-name  = sg_columns-column_name.
    sl_components-type ?= element_ref.
    APPEND sl_components TO tl_components.
    * dynamic creation of internal table
    linetype_lcl = cl_abap_structdescr=>create( tl_components ).
    ty_table_type = cl_abap_tabledescr=>create(
                      p_line_type = linetype_lcl ).
    ...
    * Then I will create my field symbol table and line. Code has been cut here.
    CREATE DATA dy_line LIKE LINE OF <dyn_table>.
    ...
    * Then I will execute my query. Here it's: Select MENGE From EKPO Where Rownum = 1.
      g_resultset = g_stmt_ref->execute_query( stmt_str ).
    * Then structure for the Resultset is set
      CALL METHOD g_resultset->set_param_struct
        EXPORTING
          struct_ref = dy_line.
    * Fetching the lines of the result set => Dump...
      WHILE g_resultset->next( ) > 0.
        ASSIGN dy_line->* TO <dyn_wa>.
        APPEND <dyn_wa> TO <dyn_table>.
      ENDWHILE.
    Does anyone have a clue how to prevent the dump?
    The component for MENGE seems to be described as a P7 with 2 decimals, and the result set wants to use a QUAN type... or something like that!

    Hello
    I have expanded your sample coding for selecting three fields out of EKPO:
    *& Report  ZUS_SDN_SQL_RESULT_SET
    *& Thread: Dynamic Internal Table creation and population
    *& (SCN thread 1375510)
    *& NOTE: Coding for dynamic structure / itab creation taken from:
    *& Creating Flat and Complex Internal Tables Dynamically using RTTI
    *& https://wiki.sdn.sap.com/wiki/display/Snippets/Creating+Flat+and+
    *& Complex+Internal+Tables+Dynamically+using+RTTI
    REPORT  zus_sdn_sql_result_set.
    TYPE-POOLS: abap.
    DATA:
    go_sql_stmt       TYPE REF TO cl_sql_statement,
    go_resultset      TYPE REF TO cl_sql_result_set,
    gd_sql_clause     TYPE string.
    DATA:
      gd_tabfield      TYPE string,
      go_table         TYPE REF TO cl_salv_table,
      go_sdescr_new    TYPE REF TO cl_abap_structdescr,
      go_tdescr        TYPE REF TO cl_abap_tabledescr,
      gdo_handle       TYPE REF TO data,
      gdo_record       TYPE REF TO data,
      gs_comp          TYPE abap_componentdescr,
      gt_components    TYPE abap_component_tab.
    FIELD-SYMBOLS:
      <gs_record>   TYPE ANY,
      <gt_itab>     TYPE STANDARD TABLE.
    START-OF-SELECTION.
    continued.

  • Bad file is not created during the external table creation.

    Hello Experts,
    I have created a script for an external table in an Oracle 10g DB. Everything is working fine except that it does not create the bad file, although it does create the log file. I can't figure out what the issue is; because of it my shell script fails and the entire program fails. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
    Table Creation Script:
    create table RGIS_TCA_DATA_EXT
    (
    guid VARCHAR2(250),
    badge VARCHAR2(250),
    scheduled_store_id VARCHAR2(250),
    parent_event_id VARCHAR2(250),
    event_id VARCHAR2(250),
    organization_number VARCHAR2(250),
    customer_number VARCHAR2(250),
    store_number VARCHAR2(250),
    inventory_date VARCHAR2(250),
    full_name VARCHAR2(250),
    punch_type VARCHAR2(250),
    punch_start_date_time VARCHAR2(250),
    punch_end_date_time VARCHAR2(250),
    event_meet_site_id VARCHAR2(250),
    vehicle_number VARCHAR2(250),
    vehicle_description VARCHAR2(250),
    vehicle_type VARCHAR2(250),
    is_owner VARCHAR2(250),
    driver_passenger VARCHAR2(250),
    mileage VARCHAR2(250),
    adder_code VARCHAR2(250),
    bonus_qualifier_code VARCHAR2(250),
    store_accuracy VARCHAR2(250),
    store_length VARCHAR2(250),
    badge_input_type VARCHAR2(250),
    source VARCHAR2(250),
    created_by VARCHAR2(250),
    created_date_time VARCHAR2(250),
    updated_by VARCHAR2(250),
    updated_date_time VARCHAR2(250),
    approver_badge_id VARCHAR2(250),
    approver_name VARCHAR2(250),
    orig_guid VARCHAR2(250),
    edit_type VARCHAR2(250)
    )
    organization external
    (
    type ORACLE_LOADER
    default directory ETIME_LOAD_DIR
    access parameters
    (
    RECORDS DELIMITED BY NEWLINE
    BADFILE ETIME_LOAD_DIR:'tstlms.bad'
    LOGFILE ETIME_LOAD_DIR:'tstlms.log'
    READSIZE 1048576
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL(
    GUID
    ,BADGE
    ,SCHEDULED_STORE_ID
    ,PARENT_EVENT_ID
    ,EVENT_ID
    ,ORGANIZATION_NUMBER
    ,CUSTOMER_NUMBER
    ,STORE_NUMBER
    ,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,FULL_NAME
    ,PUNCH_TYPE
    ,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,EVENT_MEET_SITE_ID
    ,VEHICLE_NUMBER
    ,VEHICLE_DESCRIPTION
    ,VEHICLE_TYPE
    ,IS_OWNER
    ,DRIVER_PASSENGER
    ,MILEAGE
    ,ADDER_CODE
    ,BONUS_QUALIFIER_CODE
    ,STORE_ACCURACY
    ,STORE_LENGTH
    ,BADGE_INPUT_TYPE
    ,SOURCE
    ,CREATED_BY
    ,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,UPDATED_BY
    ,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,APPROVER_BADGE_ID
    ,APPROVER_NAME
    ,ORIG_GUID
    ,EDIT_TYPE
    )
    )
    location (ETIME_LOAD_DIR:'tstlms.dat')
    )
    reject limit UNLIMITED;
    Shell Script:
    version=1.0
    umask 000
    DATE=`date +%Y%m%d%H%M%S`
    TIME=`date +"%H%M%S"`
    SOURCE=`hostname`
    fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
    fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
    TXT1_PATH=/home/ac1/oracle/in/tsdata
    TXT2_PATH=/home/ac2/oracle/in/tsdata
    ARCH1_PATH=/home/ac1/oracle/in/tsdata
    ARCH2_PATH=/home/ac2/oracle/in/tsdata
    DEST_PATH=/home/custom/sched/in
    PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
    PROGNAME=`basename $0`
    PROGPATH=/home/custom/sched/scripts
    cd $TXT2_PATH
    FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
    NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
    # start with an empty combined file, then append each input file to it
    >$DEST_PATH/tstlmsedits.dat
    for i in $FILELIST2
    do
    cat $i >> $DEST_PATH/tstlmsedits.dat
    printf "\n" >> $DEST_PATH/tstlmsedits.dat
    mv $i $i.$DATE
    #mv $i $TXT2_PATH/test/.
    mv $i.$DATE $TXT2_PATH/test/.
    done
    if test $NO_OF_FILES2 -eq 0
    then
    echo " no tstlmsedits.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
    echo "-------------------------------------------" >> $PROGLOG
    fi
    NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
    FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
    >$DEST_PATH/tstlms.dat
    for i in $FILELIST1
    do
    cat $i >> $DEST_PATH/tstlms.dat
    printf "\n" >> $DEST_PATH/tstlms.dat
    mv $i $i.$DATE
    # mv $i $TXT2_PATH/test/.
    mv $i.$DATE $TXT2_PATH/test/.
    done
    if test $NO_OF_FILES1 -eq 0
    then
    echo " no tstlms.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
    fi
    cd $TXT1_PATH
    FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
    NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
    >$DEST_PATH/tstlmsedits.dat
    for i in $FILELIST3
    do
    cat $i >> $DEST_PATH/tstlmsedits.dat
    printf "\n" >> $DEST_PATH/tstlmsedits.dat
    mv $i $i.$DATE
    #mv $i $TXT1_PATH/test/.
    mv $i.$DATE $TXT1_PATH/test/.
    done
    if test $NO_OF_FILES3 -eq 0
    then
    echo " no tstlmsedits.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
    echo "-------------------------------------------" >> $PROGLOG
    fi
    NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
    FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
    >$DEST_PATH/tstlms.dat
    for i in $FILELIST4
    do
    cat $i >> $DEST_PATH/tstlms.dat
    printf "\n" >> $DEST_PATH/tstlms.dat
    mv $i $i.$DATE
    # mv $i $TXT1_PATH/test/.
    mv $i.$DATE $TXT1_PATH/test/.
    done
    if test $NO_OF_FILES4 -eq 0
    then
    echo " no tstlms.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
    fi
    #connecting to oracle to generate bad files
    sqlplus -s $fcp_login<<EOF
    select count(*) from rgis_tca_data_ext;
    select count(*) from rgis_tca_data_history_ext;
    exit;
    EOF
    #counting the records in files
    tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
    tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
    tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
    tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
    #updating log table
    echo "pl/sql block started"
    sqlplus -s $fcp_login<<EOF
    define tot_rec_in_tstlms     = '$tot_rec_in_tstlms';
    define tot_rec_in_tstlmsedits     = '$tot_rec_in_tstlmsedits';
    define tot_rec_in_tstlms_bad     = '$tot_rec_in_tstlms_bad';
    define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
    define fcp_reqid ='$fcp_reqid';
    declare
    l_tstlms_file_id number := null;
    l_tstlmsedits_file_id number := null;
    l_tot_rec_in_tstlms number := 0;
    l_tot_rec_in_tstlmsedits number := 0;
    l_tot_rec_in_tstlms_bad number := 0;
    l_tot_rec_in_tstlmsedits_bad number := 0;
    l_request_id fnd_concurrent_requests.request_id%type;
    l_start_date fnd_concurrent_requests.actual_start_date%type;
    l_end_date fnd_concurrent_requests.actual_completion_date%type;
    l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
    l_requested_by fnd_concurrent_requests.requested_by%type;
    l_requested_date fnd_concurrent_requests.request_date%type;
    begin
    --getting concurrent request details
    begin
    SELECT fcp.concurrent_program_name,
    fcr.request_id,
    fcr.actual_start_date,
    fcr.actual_completion_date,
    fcr.requested_by,
    fcr.request_date
    INTO l_conc_prog_name,
    l_request_id,
    l_start_date,
    l_end_date,
    l_requested_by,
    l_requested_date
    FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
    WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
    AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
    exception
    when no_data_found then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log, 'No data found for request_id');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    raise_application_error(-20001,
    'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
    sqlerrm);
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when retrieving request_id request_id');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    raise_application_error(-20001,
    'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
    sqlerrm);
    end;
    --calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
    begin
    rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
                   (l_tstlms_file_id,
                   'tstlms.dat',
                   l_conc_prog_name,
                   l_request_id,
                   l_start_date,
                   l_end_date,
                   &tot_rec_in_tstlms,
                   &tot_rec_in_tstlms_bad,
                   null,
                   null,               
                   null,
                   null,
                   null,
                   null,
                   null,
                   l_requested_by,
                   l_requested_date,
                   null,
                   null,
                   null,
                   null,
                   null);
    exception
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    end;
    --calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
    begin
    rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
                   (l_tstlmsedits_file_id,
                   'tstlmsedits.dat',
                   l_conc_prog_name,
                   l_request_id,
                   l_start_date,
                   l_end_date,
                   &tot_rec_in_tstlmsedits,
                   &tot_rec_in_tstlmsedits_bad,
                   null,
                   null,               
                   null,
                   null,
                   null,
                   null,
                   null,
                   l_requested_by,
                   l_requested_date,
                   null,
                   null,
                   null,
                   null,
                   null);
    exception
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    end;
    end;
    exit;
    EOF
    echo "rgis_tca_to_tlms_process.sql started"
    sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
    exit;
    echo "rgis_tca_to_tlms_process.sql ended"
    Error:
    RGIS Scheduling: Version : UNKNOWN
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    TCATLMS module: TCA To TLMS Import Process
    Current system time is 18-AUG-2011 06:13:27
    COUNT(*)
         16
    COUNT(*)
         25
    wc: cannot open /home/custom/sched/in/tstlms.bad
    wc: cannot open /home/custom/sched/in/tstlmsedits.bad
    pl/sql block started
    old 33:     AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
    new 33:     AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
    old 63:                &tot_rec_in_tstlms,
    new 63:                16,
    old 64:                &tot_rec_in_tstlms_bad,
    new 64:                ,
    old 97:                &tot_rec_in_tstlmsedits,
    new 97:                25,
    old 98:                &tot_rec_in_tstlmsedits_bad,
    new 98:                ,
    ERROR at line 64:
    ORA-06550: line 64, column 4:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null others <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql stddev sum variance
    execute forall merge time timestamp interval date
    <a string literal with character set specification>
    <a number> <a single-quoted SQL string> pipe
    <an alternatively-quoted string literal with character set specification>
    <an alternatively-q
    ORA-06550: line 98, column 4:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null others <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql st
    rgis_tca_to_tlms_process.sql started
    old 12: and concurrent_request_id = '&1';
    new 12: and concurrent_request_id = '18661823';
    old 18: and concurrent_request_id = '&1';
    new 18: and concurrent_request_id = '18661823';
    old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
    new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
    old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
    new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
    old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
    new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
    declare
    ERROR at line 1:
    ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
    no data found
    ORA-06512: at line 59
    Executing request completion options...
    ------------- 1) PRINT   -------------
    Printing output file.
    Request ID : 18661823      
    Number of copies : 0      
    Printer : noprint
    Finished executing request completion options.
    Concurrent request completed successfully
    Current system time is 18-AUG-2011 06:13:29
    ---------------------------------------------------------------------------

    Hi,
    Check the status of the batch in transaction SM35.
    If the batch is locked by mistake or has stopped with an error, you can now release it and process it again.
    To release it, use Shift+F4.
    You can also analyse the job status with the F2 button.
    Bye
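    A note on the external-table side of this question: the error output shows that both wc calls fail because the .bad files do not exist, so the shell passes empty strings into the SQL*Plus define variables, which is exactly why the anonymous block then fails with PLS-00103 at the two empty parameters. With the ORACLE_LOADER access driver the log file is written on every access, but the bad file is only created when at least one record is actually rejected, so a fully clean load legitimately produces no tstlms.bad. A minimal check, assuming the table and ETIME_LOAD_DIR directory from the post:
    -- Reading the external table is what drives the parse of the flat file;
    -- tstlms.bad appears in ETIME_LOAD_DIR only if this read rejects at least
    -- one record, while tstlms.log is written on every access.
    SELECT COUNT(*) AS rows_parsed
    FROM   rgis_tca_data_ext;
    On the shell side, defaulting the two *_bad counters to 0 when the files are missing (or creating empty .bad files before counting) would keep the substitution variables from expanding to nothing inside the PL/SQL block.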

  • Table creation with partition

    Following is the table creation script with partitioning:
    CREATE TABLE customer_entity_temp (
    BRANCH_ID NUMBER (4),
    ACTIVE_FROM_YEAR VARCHAR2 (4),
    ACTIVE_FROM_MONTH VARCHAR2 (3),
    partition by range (ACTIVE_FROM_YEAR,ACTIVE_FROM_MONTH)
    (partition yr7_1999 values less than ('1999',TO_DATE('Jul','Mon')),
    partition yr12_1999 values less than ('1999',TO_DATE('Dec','Mon')),
    it gives an error
    ORA-14036: partition bound value too large for column
    but if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that?
    Also, by creating the table in this way and populating the data into the respective partitions, all rows with a month earlier than "JUL" will go into the yr7_1999 partition and all rows with a month value between "JUL" and "DEC" will go into the second partition, yr12_1999. Where will the data with month value equal to "DEC" go?
    Please help me solve this problem.
    thanks n regards
    Moloy

    Hi,
    You declared ACTIVE_FROM_MONTH as VARCHAR2(3), yet you compare it against a date in your partitioning clause: TO_DATE('Jul','Mon'). So you should first check your data model and what you are trying to achieve exactly.
    With such a partition declaration you will not be able to insert dates from December 1998 (included) onward. The bounds are strictly less than (<), not less than or equal (<=), hence such rows can't be inserted. I'd advise you to look at the MAXVALUE wildcard value and the ENABLE ROW MOVEMENT partitioning clause.
    Regards,
    Yoann.
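    In case it helps, here is a minimal sketch of the same table partitioned on a real DATE column. The column name ACTIVE_FROM_DATE and the cut-off dates are assumptions for illustration only, not taken from the original post; with this layout, rows dated December 1999 or later land in the MAXVALUE partition instead of being rejected.
    -- Hypothetical rework: partition on a DATE column so the bounds compare
    -- correctly; MAXVALUE catches everything on or after the last cut-off.
    CREATE TABLE customer_entity_temp (
      branch_id        NUMBER(4),
      active_from_date DATE
    )
    PARTITION BY RANGE (active_from_date) (
      PARTITION yr7_1999  VALUES LESS THAN (TO_DATE('1999-07-01', 'YYYY-MM-DD')),
      PARTITION yr12_1999 VALUES LESS THAN (TO_DATE('1999-12-01', 'YYYY-MM-DD')),
      PARTITION yr_max    VALUES LESS THAN (MAXVALUE)
    );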

  • Creating SELECT query between 3 tables

    Hi there,
    I'm trying to create a SELECT query between 3 tables and it is driving me round the bend.
    I have 3 tables connected together which are:
    MODUL
    modulid
    modulname
    modulevel
    STUDENT
    studentid
    surname
    inits
    sex
    phone
    email
    logon
    STUDREGOCCUR
    modulid
    acyear
    semester
    occletter
    studentid
    result
    I want to select the fields surname, inits, studentid from the STUDENT table, modulname from the MODUL table and result from my STUDREGOCCUR table that is NOT NULL.
    I have tried SELECT STUDENT.STUDENTID, STUDENT.INITS, STUDENT.SURNAME, MODUL.MODULNAME, STUDREGOCCUR.RESULT WHERE STUDREGOCCUR.RESULT IS NOT NULL;
    I have also tried many other ways and done research, which hasn't really helped me, unfortunately.
    I'm quite new to SELECT queries and I'm not sure where I'm going wrong; I would greatly appreciate it if someone could help me solve my problem.
    Thanks for the help :)
    Edited by: user633643 on 06-Dec-2008 08:09

    If you want data from multiple tables you would need to perform a join. The general form would be:
    select a.col1, b.col2, c.col3
    from table_a a, table_b b, table_c c
    where a.key = b.key
    and b.key = c.key
    or perhaps c relates back directly to a, so it is a.key = c.key, where key can be any column whose value is equivalent in what it represents to the value it is being compared to; that is, student_number = student_number, or the columns, regardless of name, are both building numbers, room numbers, etc.
    This is not your exact solution but it should help.
    HTH -- Mark D Powell --
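    Applied to the tables in the question, the join could look like the sketch below. It assumes studentid and modulid are the linking columns between STUDENT, STUDREGOCCUR and MODUL (the column lists suggest this, but it is not stated explicitly), and it uses ANSI join syntax, which is equivalent to the comma-separated form above.
    -- Sketch: STUDREGOCCUR sits between STUDENT and MODUL, so it supplies both keys.
    SELECT s.studentid,
           s.inits,
           s.surname,
           m.modulname,
           r.result
    FROM   student      s
    JOIN   studregoccur r ON r.studentid = s.studentid
    JOIN   modul        m ON m.modulid   = r.modulid
    WHERE  r.result IS NOT NULL;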

  • Failing to refresh LOV fields added in the query panel with table

    Hi, I am using JDeveloper 11.1.1.2.0.
    I have a scenario where I need to add 2 cascaded LOVs to a search panel and, on clicking the search button, the result should be displayed in table form.
    For example:
    I have cascaded LOV fields DepartmentId and FirstName.
    The FirstName dropdown values depend on the value selected in the DepartmentId dropdown. I need to add only these 2 fields to the search panel and, on clicking the search button, the result should be displayed in the emp table.
    I have achieved this by creating a view criteria with the 2 fields and dragging and dropping it as a query panel with table. But my problem is that the FirstName LOV is not populated based on DepartmentId; it shows a static dropdown list.
    Please help me with how to achieve this. Thanks in advance.
    Regards
    Alekhya.

    Thanks for the reply. Actually, if I use those cascaded LOVs in a form, then I can set the properties you mentioned to refresh and display the values correctly in the dropdown.
    My scenario is that I need to use those fields in the query panel header.
    Code snippet:
    <af:panelHeader text="Employees" id="ph1">
    <af:query id="qryId1" headerText="Search" disclosed="true"
    value="#{bindings.EmployeesViewCriteria1Query1.queryDescriptor}"
    model="#{bindings.EmployeesViewCriteria1Query1.queryModel}"
    queryListener="#{bindings.EmployeesViewCriteria1Query1.processQuery}"
    queryOperationListener="#{bindings.EmployeesViewCriteria1Query1.processQueryOperation}"
    resultComponentId="::resId1"/>
    </af:panelHeader>
    I don't have any field names on which to add those properties.
    Regards
    Alekhya

  • How can this query avoid full table scans?

    It is difficult to avoid full table scans in the following query because the STATUS column holds only repeating numbers: there are just 10 distinct values for STATUS (1..10).
    But the table is very large; there are more than 1 million rows in it. A full table scan consumes too much time.
    How can this query avoid full table scans?
    Thank you
    SELECT SYNC,CUS_ID INTO V_SYNC,V_CUS_ID FROM CONSUMER_MSG_IDX
                      WHERE CUS_ID = V_TYPE_CUS_HEADER.CUS_ID AND
                            ADDRESS_ID = V_TYPE_CUS_HEADER.ADDRESS_ID AND
    STATUS =! 8;
    Edited by: junez on Jul 23, 2009 7:30 PM

    Your code had an extra AND. I also replaced the "not equal" operator, which has display problems with the forum software
    SELECT SYNC,CUS_ID
       INTO V_SYNC,V_CUS_ID
      FROM CONSUMER_MSG_IDX
    WHERE CUS_ID = V_TYPE_CUS_HEADER.CUS_ID AND
           ADDRESS_ID = V_TYPE_CUS_HEADER.ADDRESS_ID AND
    STATUS != 8;
    Are you sure this query is doing a table scan? Is there an index on CUS_ID, ADDRESS_ID? I would think that would be mostly unique. So I'm not sure why you think the STATUS column is causing problems. It would seem to just be a non-selective additional filter.
    Justin
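    If such an index does not exist yet, a composite index on the two selective columns is the usual way to let this lookup avoid the full scan. The sketch below is illustrative only: the index name is made up, the bind variables stand in for the V_TYPE_CUS_HEADER values, and it assumes CUS_ID plus ADDRESS_ID really is close to unique, as suggested above.
    -- Hypothetical index: with CUS_ID and ADDRESS_ID nearly unique, the optimizer
    -- can use an index range scan and apply the STATUS filter to the few rows found.
    CREATE INDEX consumer_msg_idx_n1
        ON consumer_msg_idx (cus_id, address_id);
    -- Re-check the plan afterwards:
    EXPLAIN PLAN FOR
    SELECT sync, cus_id
    FROM   consumer_msg_idx
    WHERE  cus_id     = :b_cus_id
    AND    address_id = :b_address_id
    AND    status    != 8;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);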

  • Can't  write right sql query by two tables

    Hello everyone,
    I am trying to write a SQL query over two tables.
    Table: CONTAINER
    from_dest_id     number
    ship_from_desc   varchar
    to_dest_id       number
    Table: LABEL_FORMAT (a static lookup table that does not change)
    SORT_ORDER       number
    PREFIX           varchar2
    VARIABLE_NAME    varchar2
    SUFFIX           varchar2
    LF_COMMENT       varchar2
    The SQL I need works like this:
    a. Each column of table CONTAINER should be wrapped with the matching PREFIX in front and SUFFIX behind from LABEL_FORMAT, and these wrapped columns should be concatenated together.
    Example: the query output should look like this:
    PREFIX||from_dest_id||SUFFIX || PREFIX||ship_from_desc||SUFFIX || PREFIX||to_dest_id||SUFFIX
    Every PREFIX and SUFFIX comes from the LABEL_FORMAT row whose VARIABLE_NAME matches the column (they are all different).
    Column SORT_ORDER decides the sequence; in the example above, from_dest_id has order 1, ship_from_desc has 2 and to_dest_id has 3.
    b. Table LABEL_FORMAT's column VARIABLE_NAME has the values ('from_dest_id','ship_from_desc','to_dest_id').
    If table CONTAINER had only one record I could do it myself,
    but actually there is more than one record and I do not know how to handle that.
    Maybe this needs PL/SQL, or a function, cursor or procedure;
    I am not good at these.
    Any tips will be very helpful for me
    Thanks
    Saven

    Hi, Saven,
    Presenting data from multiple rows as a single string is called String Aggregation. This page:
    http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php
    shows many ways to do it, suited to different requirements, and different versions of Oracle.
    In Oracle 10 (and up) you can do this:
    SELECT  REPLACE ( SYS_CONNECT_BY_PATH ( f.prefix || ' '
                                            || CASE f.variable_name
                                                   WHEN 'FROM_DEST_ID'   THEN from_dest_id
                                                   WHEN 'SHIP_FROM_DESC' THEN ship_from_desc
                                                   WHEN 'TO_DEST_ID'     THEN to_dest_id
                                               END
                                            || ' '
                                            || f.suffix
                                          , '~?'
                                          )
                    , '~?'
                    )  AS output_txt
    FROM        container     c
    CROSS JOIN  label_format  f
    WHERE       CONNECT_BY_ISLEAF = 1
    START WITH  f.sort_order  = 1
    CONNECT BY  f.sort_order   = PRIOR f.sort_order + 1
           AND  c.from_dest_id = PRIOR c.from_dest_id
    saven wrote: "If table CONTAINER only have one record i can do it myself, But actually it is more than one record, I do not know how to do"
    In that case, why did you post an example that only has one row in container?
    The query above assumes
    something in container (I used from_dest_id in the example above) is unique,
    you don't mind having a space after prefix and before from_dest_id,
    you want one row of output for every row in container, and
    you can identify some string ('~?' in the example above) that never occurs in the concatenated data.
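    On Oracle 11g Release 2 and later, LISTAGG offers a simpler alternative to the hierarchical trick above. The sketch below makes the same assumptions (one output row per CONTAINER row, VARIABLE_NAME stored in upper case) and wraps the numeric columns in TO_CHAR so all CASE branches return the same datatype.
    -- Hypothetical LISTAGG rework: build the wrapped value per LABEL_FORMAT row,
    -- then aggregate them in SORT_ORDER sequence for each CONTAINER row.
    SELECT LISTAGG ( f.prefix || ' '
                     || CASE f.variable_name
                            WHEN 'FROM_DEST_ID'   THEN TO_CHAR(c.from_dest_id)
                            WHEN 'SHIP_FROM_DESC' THEN c.ship_from_desc
                            WHEN 'TO_DEST_ID'     THEN TO_CHAR(c.to_dest_id)
                        END
                     || ' '
                     || f.suffix
                   , ' ' ) WITHIN GROUP (ORDER BY f.sort_order) AS output_txt
    FROM       container    c
    CROSS JOIN label_format f
    WHERE      f.variable_name IN ('FROM_DEST_ID', 'SHIP_FROM_DESC', 'TO_DEST_ID')
    GROUP BY   c.from_dest_id, c.ship_from_desc, c.to_dest_id;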

Maybe you are looking for

  • Time Capsule just for backups, not for wifi.

    I used to use my Time Capsule for both backups and wifi but constantly had issues, so I just disconnected it in favor of a simple Airport Express. No issues since switching, but now I have this Time Capsule just sitting here and don't do automatic ba

  • Transfer info from Ibook G4 10.3.9 to new MacBook Pro

    Hi How do I transfer files from my Ibook G4 10.3.9 to a new MacBook Pro? There is no migration assistant on 10.3.9. I have tried a firewire cable but it doesn't recognise the old computer. Help appreciated. Thanks Kay

  • Example Order and Invoice in the SDK examples

    Hi all, I am programming with the DI API in VB6 and I want to perform a normal order and invoice transaction! The example which comes with the SDK is in .NET! How can I do this in VB6, because at the moment I have certain problems to combine the busi

  • Issue with printing report texts in Smartform

    Hi , I developed a smartform and a print program. In the title of the smartform I need print a text depend on the company code. For this I prepared this text in the print program like the following. case rbkp-bukrs.   when 'IT10'     smartform_struct

  • Importing CSV files into Multiple Tables in One Database

    I have a web based solution using Microsoft SharePoint and SQL Server that is created to centralize data collection and reporting of program metrics used in monthly reviews. A person from each program enters data manually or by pushing the data using aut