Summary of a table in SQL

I need a query that prints a summary of a table in a single statement, without using PL/SQL.
Table list
with t as
(
  select 1 table_number, 't1' table_name from dual
  union all
  select 2 table_number, 't2' table_name from dual
  union all
  select 2 table_number, 't3' table_name from dual
)
select * from t;
Table1
with t1 as
(
  select 'US' country_code, 'SYSTEM1' source, 1 emp_num, 'Not Processed' status from dual
  union all
  select 'India' country_code, 'SYSTEM1' source, 2 emp_num, 'Valid' status from dual
  union all
  select 'India' country_code, 'SYSTEM2' source, 3 emp_num, 'Error' status from dual
  union all
  select 'India' country_code, 'SYSTEM2' source, 4 emp_num, 'Valid' status from dual
)
select * from t1;
I need a query that takes table_number as an input parameter (e.g. 1), with the output grouped by table_name, country_code and source.
Output like:
table_name  country_code  source   total_records  valid_records  error_records  not_processed_records  valid_records_percentage
t1          India         SYSTEM1  1              1              0              0                      100
t1          India         SYSTEM2  2              1              1              0                      50
t1          US            SYSTEM1  1              0              0              1                      0
Total                              4              2              1              1                      50
and I need a total for these in the last record.

Are you looking to get the table name dynamically? In that case try this.
SQL> select * from t;
TABLE_NUMBER TABLE_NAME
           1 t1
           2 t2
           2 t3
SQL> select * from t1;
COUNTRY_CODE  SOURCE     EMP_NUM STATUS
US            SYSTEM1          1 Not Processed
India         SYSTEM1          2 Valid
India         SYSTEM2          3 Error
India         SYSTEM2          4 Valid
SQL> select case when grouping_id(table_name)
                + grouping_id(country_code)
                + grouping_id(source_) = 0 then table_name
            else 'Total'
       end table_name
     , country_code
     , source_
     , count(*) total_record
     , count(decode(status, 'Not Processed', 1)) not_processed
     , count(decode(status, 'Valid', 1)) valid
     , count(decode(status, 'Error', 1)) error_
  from (
         select t.table_name
              , t1.country_code
              , t1.source_
              , t1.emp_num
              , t1.status
           from (
                  select table_name
                       , dbms_xmlgen.getxmltype('select * from ' || table_name) xml_output
                    from t
                   where table_number = 1
                ) t
              , xmltable
                (
                  '/ROWSET/ROW' passing t.xml_output
                  columns
                  country_code varchar2(100) path 'COUNTRY_CODE',
                  source_      varchar2(100) path 'SOURCE',
                  emp_num      number        path 'EMP_NUM',
                  status       varchar2(100) path 'STATUS'
                ) t1
       )
 group
    by rollup
       (
         table_name
       , country_code
       , source_
       )
having grouping_id(table_name)
     + grouping_id(country_code)
     + grouping_id(source_) in (0, 3);
TABLE_NAME  COUNTRY_CODE  SOURCE_    TOTAL_RECORD NOT_PROCESSED      VALID     ERROR_
t1          US            SYSTEM1               1             1          0          0
t1          India         SYSTEM1               1             0          1          0
t1          India         SYSTEM2               2             0          1          1
Total                                           4             1          2          1
SQL>
For this to work, all the tables that are passed must have the same structure. But then comes the question: why do you have different tables with the same structure?
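The valid_records_percentage column from the desired output is not in the query above, but it could be added to the select list along the same lines; a minimal sketch, reusing the same decode/count pattern (untested against the full query):

     , round(count(decode(status, 'Valid', 1)) / count(*) * 100) valid_records_percentage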

Similar Messages

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects - stored in a SQL data type TABLE OF <object-type> - to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the SQL TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle always uses only a single table function instance :-(
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is due to the fact that my data is not globally available, but exists only in the main session?
    Can someone confirm that it's not possible to start multiple parallel table functions selecting from a SQL data type TABLE OF <object> storage?
    Here's an example sqlplus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table
    (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
              return test_list pipelined
              parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
              return test_list pipelined
              parallel_enable(partition mycur by hash(a))
         is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myrec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myrec.a, myrec.b, myrec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a to indicate to the caller
                   --     how many SIDs have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
              return;
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted it to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application has a two-stage data selection.
    A 1st select for minimal context base data
    - mainly to evaluate the due driving data records.
    And a 2nd select for all the "real" data to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it does the stage #1 select first, then the stage #2 select - based on the stage #1 results - next.
    The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
    And for the stage #2 select, the "real work" was dispatched to multiple parallel table functions (in multiple worker sessions).
    That worked.
    However there was a flaw:
    Between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign key relation).
    That means, for 1 resulting record from the stage #1 selection, there are x records from the stage #2 selection.
    That forced me to use "cluster curStage2 by (theKey)",
    because the worker sessions need to evaluate the overall status for a context of 1 record from stage #1 and x records from stage #2
    (so they need to have the x records of stage #2 together).
    This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
    So I wanted to shift the invocation of the worker sessions to the stage #1 selection.
    Then I don't need the "cluster curStage2 by (theKey)" anymore!
    But: I also need to do an update of the primary driving data!
    So the stage #1 select is a 'select ... for update ...'.
    But you can't use such a select in a CURSOR for table functions (and I can understand why that's not possible).
    So I have to do my stage #1 selection in two steps:
    1. 'select for update' in the main session, collecting the result in a SQL collection.
    2. Pass the collected data to the parallel table functions.
    And for step 2 I recognized that it doesn't start up multiple parallel table function instances.
    As a work-around
    - if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
    I need to select again on the base tables, driven by the SQL collection data.
    But before I do so, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or whatever you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
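    A minimal sketch of that work-around, reusing the test_table / test_pkg definitions from the example above: stage the collection rows in the physical table so the parallel query slaves have a scannable source (this is exactly the case that did parallelize in the test output).
    -- stage the collection in the real table, then feed the table
    -- (not the collection) to the parallel pipelined function
    insert into test_table select * from table(cast (myList as test_list));
    commit;
    select test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));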
    - many thanks!
    rgds,
    Frank


  • Load XML File into temporary tables using sql loader

    Hi All,
    I have an XML file as below. I need to insert the contents into a temporary staging table using SQL*Loader. Please advise how I should do that.
    For example, Portfolios should go into a separate table, and all the tags inside it should be populated in the columns of the table.
    Family should go into a separate table, and all the tags inside it should be populated in the columns of the table.
    Similarly Offers, Products, etc.
    <ABSProductCatalog xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <ProductSalesHierachy>
        <Portfolios>
          <Portfolio productCode="P1">
            <Attribute name="CatalogProductName" value="Access" />
            <Attribute name="Status" value="Active" />
          </Portfolio>
          <Portfolio productCode="P2">
            <Attribute name="CatalogProductName" value="Data" />
            <Attribute name="Status" value="Active" />
          </Portfolio>
          <Portfolio productCode="P3">
            <Attribute name="CatalogProductName" value="Voice" />
            <Attribute name="Status" value="Active" />
          </Portfolio>
          <Portfolio productCode="P4">
            <Attribute name="CatalogProductName" value="Wireless" />
            <Attribute name="Status" value="Active" />
          </Portfolio>
        </Portfolios>
        <Families>
          <Family productCode="F1">
            <Attribute name="CatalogProductName" value="Internet Access Services" />
            <Attribute name="Status" value="Active" />
            <ParentHierarchy>
              <Item productCode="P1" modelType="Portfolio" />
            </ParentHierarchy>
          </Family>
          <Family productCode="F2">
            <Attribute name="CatalogProductName" value="Local Access Services" />
            <Attribute name="Status" value="Active" />
            <ParentHierarchy>
              <Item productCode="P2" modelType="Portfolio" />
            </ParentHierarchy>
          </Family>
        </Families>
        <SubFamilies>
          <SubFamily productCode="SF1">
            <Attribute name="CatalogProductName" value="Business Internet service" />
            <Attribute name="Status" value="Active" />
            <ParentHierarchy>
              <Item productCode="F1" modelType="Family" />
            </ParentHierarchy>
          </SubFamily>
        </SubFamilies>
        <ProductRefs>
          <ProductRef productCode="WSP1" modelType="Wireline Sales Product">
            <ActiveFlag>Y</ActiveFlag>
            <ProductHierarchy>
              <SalesHierarchy family="F1" subFamily="SF1" portfolio="P1" primary="Y" />
              <SalesHierarchy family="F2" portfolio="P2" primary="N" />
              <FinancialHierarchy quotaBucket="Voice" strategicProdCategory="Local Voice" />
            </ProductHierarchy>
          </ProductRef>
          <ProductRef productCode="MSP2" modelType="Handset">
            <ActiveFlag>Y</ActiveFlag>
            <ProductHierarchy>
              <SalesHierarchy portfolio="P4" primary="Y" />
            </ProductHierarchy>
          </ProductRef>
        </ProductRefs>
      </ProductSalesHierachy>
      <Offers>
        <Offer productCode="ABN">
          <OfferName>ABN</OfferName>
          <OfferDescription>ABN Description</OfferDescription>
          <Segments>
            <Segment>SCG</Segment>
            <Segment>PCG</Segment>
          </Segments>
          <OfferUpdateDate>2009-11-20</OfferUpdateDate>
          <ActiveFlag>Y</ActiveFlag>
        </Offer>
        <Offer productCode="OneNet">
          <OfferName>OneNet</OfferName>
          <OfferDescription>OneNet Description</OfferDescription>
          <Segments>
            <Segment>SCG</Segment>
            <Segment>PCG</Segment>
            <Segment>PCG2</Segment>
          </Segments>
          <OfferUpdateDate>2009-11-20</OfferUpdateDate>
          <ActiveFlag>Y</ActiveFlag>
        </Offer>
      </Offers>
      <Products>
        <Product productCode="WSP1" modelType="Wireline Sales Product">
          <ProductName>AT&amp;T High Speed Internet</ProductName>
          <ProductDescription>High Speed Internet</ProductDescription>
          <LegacyCoProdIndicator>SBC</LegacyCoProdIndicator>
          <RevenueCBLCode>1234B</RevenueCBLCode>
          <VolumeCBLCode>4567A</VolumeCBLCode>
          <SAARTServiceIDCode>S1234</SAARTServiceIDCode>
          <MarginPercentRequired>Y</MarginPercentRequired>
          <PercentIntl>%234</PercentIntl>
          <UOM>Each</UOM>
          <PriceType>OneTime</PriceType>
          <ProductStatus>Active</ProductStatus>
          <Compensable>Y</Compensable>
          <Jurisdiction>Everywhere</Jurisdiction>
          <ActiveFlag>Y</ActiveFlag>
          <Availabilities>
            <Availability>SE</Availability>
            <Availability>E</Availability>
          </Availabilities>
          <Segments>
            <Segment>SCG</Segment>
            <Segment>PCG</Segment>
          </Segments>
          <VDIndicator>Voice</VDIndicator>
          <PSOCCode>PSOC 1</PSOCCode>
          <USBilled>Y</USBilled>
          <MOWBilled>N</MOWBilled>
          <ProductStartDate>2009-11-20</ProductStartDate>
          <ProductUpdateDate>2009-11-20</ProductUpdateDate>
          <ProductEndDate>2010-11-20</ProductEndDate>
          <AliasNames>
            <AliasName>AT&amp;T HSI</AliasName>
            <AliasName>AT&amp;T Fast Internet</AliasName>
          </AliasNames>
          <OfferTypes>
            <OfferType productCode="ABN" endDate="2009-11-20" />
            <OfferType productCode="OneNet" />
          </OfferTypes>
          <DynamicAttributes>
            <DynamicAttribute dataType="String" defaultValue="2.5 Mbps" name="Speed">
              <AttrValue>1.5 Mbps</AttrValue>
              <AttrValue>2.5 Mbps</AttrValue>
              <AttrValue>3.5 Mbps</AttrValue>
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="TransportType">
              <AttrValue>T1</AttrValue>
            </DynamicAttribute>
          </DynamicAttributes>
        </Product>
        <Product productCode="MSP2" modelType="Handset">
          <ProductName>Blackberry Bold</ProductName>
          <ProductDescription>Blackberry Bold Phone</ProductDescription>
          <LegacyCoProdIndicator />
          <RevenueCBLCode />
          <VolumeCBLCode />
          <SAARTServiceIDCode />
          <MarginPercentRequired />
          <PercentIntl />
          <UOM>Each</UOM>
          <PriceType />
          <ProductStatus>Active</ProductStatus>
          <Compensable />
          <Jurisdiction />
          <ActiveFlag>Y</ActiveFlag>
          <Availabilities>
            <Availability />
          </Availabilities>
          <Segments>
            <Segment>SCG</Segment>
            <Segment>PCG</Segment>
          </Segments>
          <VDIndicator>Voice</VDIndicator>
          <PSOCCode />
          <USBilled />
          <MOWBilled />
          <ProductStartDate>2009-11-20</ProductStartDate>
          <ProductUpdateDate>2009-11-20</ProductUpdateDate>
          <AliasNames>
            <AliasName />
          </AliasNames>
          <OfferTypes>
            <OfferType productCode="ABN" />
          </OfferTypes>
          <DynamicAttributes>
            <DynamicAttribute dataType="String" name="StlmntContractType">
              <AttrValue />
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="BMG 2 year price">
              <AttrValue>20</AttrValue>
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="MSRP">
              <AttrValue>40</AttrValue>
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="BMGAvailableType">
              <AttrValue />
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="ProductId">
              <AttrValue>123456</AttrValue>
            </DynamicAttribute>
            <DynamicAttribute dataType="String" name="modelSource">
              <AttrValue>product</AttrValue>
            </DynamicAttribute>
          </DynamicAttributes>
        </Product>
      </Products>
      <CatalogChanged>Y</CatalogChanged>
    </ABSProductCatalog>

    Two options come to mind. Others exist.
    #1 - {thread:id=474031}, which is basically storing the XML in an object-relational structure for parsing
    #2 - Dump the XML into either an XMLType based table or column and use SQL (with XMLTable) to create a view that parses the data. This would be the same as the view shown in the above post.
    Don't use SQL*Loader to parse the XML. I was trying to find a post from mdrake about that but couldn't. In short, SQL*Loader was not built as an XML parser, so don't try to use it that way.
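    To illustrate option #2, here is a minimal sketch of an XMLTable-based view over the Portfolios section, assuming the document has been loaded into a table catalog_xml(doc XMLTYPE); the table and view names are illustrative only:
    CREATE OR REPLACE VIEW portfolio_v AS
    SELECT p.product_code, p.catalog_product_name, p.status
    FROM   catalog_xml c,
           XMLTABLE('/ABSProductCatalog/ProductSalesHierachy/Portfolios/Portfolio'
                    PASSING c.doc
                    COLUMNS
                      product_code         VARCHAR2(10)  PATH '@productCode',
                      catalog_product_name VARCHAR2(100) PATH 'Attribute[@name="CatalogProductName"]/@value',
                      status               VARCHAR2(20)  PATH 'Attribute[@name="Status"]/@value') p;
    A similar view per section (Families, SubFamilies, Offers, Products) would feed each staging table.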

  • Need Guide to create a table in SQL Server and Process data for JDBC

    Dear All,
    Scenario: JDBC to JDBC
    I need to practice the JDBC-to-JDBC scenario, and for that I need to create a table in SQL Server for the sender, receiver and update. I have installed SQL Server and have no idea about table creation or the connection string for PI.
    I would like each step explained: table creation, driver and connection string.
    Thanks in advance.

    Try searching in the forum and then Google. This forum is not for teaching the basics.
    VJ
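    For reference, a minimal sketch of the pieces involved (all object names are illustrative; the driver class and URL format are the standard ones for the Microsoft JDBC driver):
    -- a simple sender-side table for a JDBC-to-JDBC test scenario
    CREATE TABLE dbo.EmployeeSender (
        EmpId     INT IDENTITY(1,1) PRIMARY KEY,
        EmpName   VARCHAR(100),
        Processed CHAR(1) DEFAULT 'N'  -- flag the sender channel updates after reading
    );
    -- typical PI JDBC channel settings (shown as comments, since they are not SQL):
    -- driver:     com.microsoft.sqlserver.jdbc.SQLServerDriver
    -- connection: jdbc:sqlserver://<host>:1433;databaseName=<db>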

  • Need to create a table in SQL Server and do some calculations in the table from Oracle and SQL

    Hello All,
    I'm moving data from Oracle to SQL Server with ETL (80 tables with data), and I want to track the number of records that I move on a daily basis. So I need to create a table in SQL Server with 4 columns - Table name, OracleRowsCount, SqlRowCount,
    and Diff (OracleRowsCount - SqlRowCount) - that will tell me, for each table, how many rows I have in Oracle, how many rows I have in SQL after the ETL load, and the difference between them, something like that:
    Table Name   OracleRowsCount   SqlRowCount   Diff
    Customer     150               150           0
    Sales        2000              1998          2
    Devisions    5                 5             0
    (I can add a lot of SQL Tasks and variables per table, but it doesn't seem logical to do that; I tried to find a way to deal with it in VB but didn't find one.)
    What is the simplest way to do it?
    Thank you
    Best Regards
    Daniel

    Hi Daniel,
    According to your description, what you want is an indicator to show whether all the rows were inserted into the destination table. To achieve your goal, you can add a Row Count Transformation following the OLE DB Destination, and redirect bad rows to the Row Count Transformation. This way, we can get the count of the bad rows without redirecting these rows. Since the row count value is stored in a variable, we can create another string type variable to retrieve the row count value from the variable used by the Row Count Transformation, and then use a Send Mail Task to send the row count value in an email message body. You can also insert the row count value into the SQL Server table through an Execute SQL Task. Then, you can check whether bad rows were generated in the package by querying this table.
    Regards,
    Mike Yin
    TechNet Community Support

  • About cluster table in sql server or SAP

    Hello Gurus,
    we have a cluster table "KONV" in our SQL Server 2008. Is the cluster table feature native to SQL Server, or to SAP?
    Some professionals said there is no concept of cluster tables in SQL Server like there is in an Oracle database, so please help me with
    clarification.
    Many thanks,

    I agree, this is an ABAP Dictionary specific way of data encapsulation, not a DB-type dependent one. Basically it stores the data in RAW or LRAW format.
    Also be aware that we distinguish between [data clusters|http://help.sap.com/saphelp_nw04/helpdata/en/fc/eb3bf8358411d1829f0000e829fbfe/frameset.htm] and [cluster tables|http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f083446011d189700000e8322d00/content.htm].
    Regards
    Marcin

  • Deleted table in SQL and DBML, but the relation still remains !!!!

    Hi Dear Experts,
    I've faced a serious problem, please help me :(
    I've deleted a table from the SQL database and also from the DBML, and I created a new table instead.
    The new table is also used in the stored procedure.
    I've checked that the old table has been completely removed,
    but when I want to execute the solution (the SP), it tells me that the previous relation (foreign key)
    still remains?!
    ***THE OLD TABLE WAS IN RELATION WITH A TABLE IN THE SP, BUT NOW IT IS IN RELATION WITH THE NEW TABLE***
    Why does this error occur even though I've removed the table?
    Thanks a lot

    Error Message:
    The INSERT statement conflicted with the FOREIGN KEY constraint "FK_<tableName>_<oldtable>". The conflict occurred in database, table <old table>, column 'x'.
    thanks a lot
    Hi nasringh,
    Could you please check the relationship and the underlying records between the foreign key table and the primary key table? We must insert data into the parent table containing the primary key before attempting to insert data into the child table containing the foreign key.
    For more detailed information, please take a look at the following similar threads:
    http://stackoverflow.com/questions/2965837/insert-statement-conflicted-with-the-foreign-key-constraint
    http://www.codeproject.com/Questions/281774/The-INSERT-statement-conflicted-with-the-FOREIGN-K
    Regards,
    Elvis Long
    TechNet Community Support
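    One way to see which foreign key still references the old table is to query the catalog views; a sketch (substitute the real table name for the hypothetical dbo.OldTable):
    SELECT fk.name AS constraint_name,
           OBJECT_NAME(fk.parent_object_id) AS referencing_table
    FROM sys.foreign_keys AS fk
    WHERE fk.referenced_object_id = OBJECT_ID('dbo.OldTable');
    If this returns a row, the constraint named in the error is still defined and must be dropped or re-pointed at the new table.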

  • Table or Object type - like #temp table in SQL Server

    Hi
    I need to create a temp table to hold certain data and then validate it. What is the best way to do this in Oracle? Something similar to #temp tables in SQL Server.
    Thanks

    In Oracle, you create the temporary table once, before you start your program. Then anyone can use that definition, but the system keeps the data isolated to each user/session.
    The difference in using Oracle: all DDL, including creating temp tables, performs commits and acquires locks that you want to avoid. It creates unnecessary serialization, causes transactional consistency issues and puts Oracle's read-consistent model at risk (of ORA-01555 errors).
    So, you (or the DBA) would "CREATE GLOBAL TEMPORARY TABLE ..." with the appropriate definition you want, and indicate whether you want the data deleted on commit, or at logoff.
    Then you write your procedure, similar to the way you would do it in SQL Server, but you would not bracket it with creating/dropping the temp table - no need.
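    A minimal sketch of that approach (table and column names are illustrative):
    -- created once, by the DBA or at install time, never inside the procedure
    CREATE GLOBAL TEMPORARY TABLE temp_validation (
        emp_num NUMBER,
        status  VARCHAR2(20)
    ) ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS to keep rows until logoff
    Each session that inserts into temp_validation sees only its own rows, so the procedure can insert, validate, and let the commit (or logoff) clean up automatically.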

  • Group by languages in multilingual table in sql server

    Hi,
    I have a multilingual table in SQL Server 2008; the table has two columns, ID and Text.
    The Text column holds English, Chinese and other language texts.
    I need a result set grouped by language with a count of IDs, like:
    Language         count_of_id
    English          25
    Chinese          10
    other languages  3
    Is this possible? Can you please help me?

    Good day SqlServer_learn,
    I have a saying that I always use: anything is possible in development, if you have the appropriate resources (changing the existing solution could be part of the way to solve...).
    Regarding your question, there is a simple solution, but for most cases I highly recommend changing the table structure and adding a column for the culture of the text (like en-US for English,
    he-IL for Hebrew and so on).
    Since you are using a Unicode column like nvarchar to store multi-language text,
    we can get the language from the text itself, as long as it includes characters from that language (text which includes only numbers, for example, we consider as the default language, since it is the same in all languages).
    Step 1: First you need an accessory table (named something like UnicodeMapping) which includes all Unicode characters and the number of each character in Unicode (a Unicode mapping table). You can use ranges as well, but
    the queries will be faster if you actually have all the characters, and not just ranges.
    For example this table (I added English and Hebrew... do the same with all the languages that you need):
    create table UnicodeMapping (Charecter nchar(1), UnicodeNum int, CultureN NVARCHAR(100), CollateN NVARCHAR(100))
    GO
    -- fill the table with main Hebrew characters, using a number table
    insert UnicodeMapping (Charecter, UnicodeNum, CultureN, CollateN)
    select NCHAR(n), n, 'He-IL', 'Hebrew_CI_AS'
    from _ArielyAccessoriesDatabase.dbo.ArielyNumbers
    where
    n between 1488 and 1514 -- Hebrew
    or n between 64304 and 64330 -- Hebrew
    GO
    -- fill the table with main English characters, using a number table
    insert UnicodeMapping (Charecter, UnicodeNum, CultureN, CollateN)
    select NCHAR(n), n, 'En-US', 'SQL_Latin1_General_CP1_CI_AS'
    from _ArielyAccessoriesDatabase.dbo.ArielyNumbers
    where
    n between 97 and 122 -- En
    or n between 65 and 90 -- En
    GO
    -- Do the same with all the languages that you need, and all the UNICODE ranges for those languages
    select * from UnicodeMapping
    GO
    Step 2: You can create a function which gets NVARCHAR as input and returns the culture as output, or work directly on the data by joining your table and this table.
    Assuming that each row is in a specific language, in order to recognize the language you just need to check one character from the original string (a text character, and not a number for example, which might be in any language) and examine which language this
    single character is in, using our UnicodeMapping.
    You can check this thread to see an implementation of this idea: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ccc1d16f-926f-46c8-8579-b2eecf661e7c/sort-miultiple-language-data-in-sql-serevr-by-collation?forum=transactsql
    * Don't forget to add to the table all the characters like numbers, and choose them as your default language.
    * In the link above I just select the first character using LEFT, but if the text starts with a number, for example, then you will get the default language. If you are sure that the text must start with a real language character, then it is the best solution; if not, then
    it is better to use a user-defined function which finds the first character that is not in the default language. If the function does not find any character in a non-default language then it returns the default language; otherwise it checks the language using
    the UnicodeMapping and returns it.
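    Putting the two steps together, a hedged sketch of the grouping query itself (dbo.MultilingualTable stands in for your table with the ID and Text columns):
    SELECT m.CultureN AS [Language], COUNT(*) AS count_of_id
    FROM dbo.MultilingualTable AS t
    JOIN dbo.UnicodeMapping AS m
        ON m.UnicodeNum = UNICODE(t.[Text])  -- code point of the first character
    GROUP BY m.CultureN;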
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]

  • Cannot open table in sql server management studio express

    Hello all.
    I have uploaded a table into SQL Server Management Studio Express. However, when I right-click on the table and try to open it, I get an error message saying:
    "SQL Execution Error.
    Executed SQL statement: select columnName1, columnName2 etc....
    Error source: Microsoft. VisualStudio.DataTools
    Error Message: Exception has been thrown by the target of an invocation"
    Because of this error, I cannot manually edit the table. However, when I write a query running select * from Table X, the table does appear that way.
    Any help regarding how to open the table would be very much appreciated!!


  • Create a table in SQL Server, Export tables from Microsoft Excel to Microsoft SQL Server, Populate the created table

    Hello team,
    I have a project that I need to do, what is the best approach for each step?
    1- I have to create a table in Microsoft SQL Server.
    2- I have to import data/tables from Microsoft Excel or Access to Microsoft SQL Server. Should I use Microsoft Visual Studio to move the data from Excel or Access?
    3- I should populate the created table with the data from the imported tables.
    4- How should I add the second and third imported tables to the first table? Should I use a union query?
    After I learn these, I will bring up the code to make sure what I do is right.
    Thanks for all,
    Guity
    GGGGGNNNNN

    Hello Naomi,
    I have imported all the tables into SQL Server,
    I created a table:
    CREATE TABLE dbo.Orders
    Now I want to populate this table with the values from the imported tables; will this code take care of the task?
    INSERT INTO dbo.Orders (OrderId, OrderDate)
    SELECT OrderId, OrderDate
    FROM Sales.Orders
    UNION
    SELECT OrderId, OrderDate
    FROM Sales.Orders1
    UNION
    SELECT OrderId, OrderDate
    FROM Sales.Orders2
    If not, what is the code?
    Please advise me.
    GGGGGNNNNN
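    One detail worth noting about the sketch above: UNION silently removes duplicate rows across the three sources, so if every source row must be kept, UNION ALL is the safer choice:
    INSERT INTO dbo.Orders (OrderId, OrderDate)
    SELECT OrderId, OrderDate FROM Sales.Orders
    UNION ALL
    SELECT OrderId, OrderDate FROM Sales.Orders1
    UNION ALL
    SELECT OrderId, OrderDate FROM Sales.Orders2;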

  • Create a table in SQL with datatype equivalent to LongBlob

    I have a MySQL or phpMyAdmin table (not sure which) with longblob fields that I want to convert to SQL Server.
    Here is a link to a Rar with two files, the 'ORIGINAL CODE.sql' is the original code sample and the 'NEW_SQL_CODE.sql' is the code I am writing in SQL to create a database.
    Click to download the two files.
    I fail to make the insert in the 'NEW_SQL_CODE.sql'; it says (translated from Spanish) something like "The binary data will be truncated"
    INSERT INTO inmuebles_fotos (ci_inm, pos, foto, mini, comentario, inet, impr_cartel, impr_visita) VALUES
    (6, 0, 0xffd8ffe000104a46494600010100000100010...etc...
    I don't know if I have defined the wrong data type (image) as the equivalent of the MySQL LongBlob. All I want to do is make that insert in SQL Server and save the image as .jpg if possible. I don't know if it's not possible in SQL Server and can only
    be done in MySQL.
    Thanks for any help.

    The original table is not mine; I am just trying to save the images as .jpg on the hard drive.
    Here is the original table, which holds 500 MB of pictures; in the sample there is only 1 picture:
    CREATE TABLE IF NOT EXISTS `inmuebles_fotos` (
    `ci_inm` int(10) unsigned DEFAULT NULL,
    `pos` smallint(6) DEFAULT NULL,
    `foto` longblob,
    `mini` longblob,
    `comentario` varchar(100) DEFAULT NULL,
    `inet` tinyint(3) unsigned DEFAULT '0',
    `impr_cartel` smallint(6) DEFAULT '0',
    `impr_visita` smallint(6) DEFAULT '0'
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    And here is the equivalent table in SQL Server that I am trying to create, to import all records so I can save the pictures from SQL Server, which is what we use here.
    CREATE TABLE [dbo].[inmuebles_fotos2](
    [ci_inm] [int] NULL,
    [pos] [int] NULL,
    [foto] [image] NULL,
    [mini] [image] NULL,
    [comentario] [varchar](1) NULL,
    [inet] [int] NULL,
    [impr_cartel] [int] NULL,
    [impr_visita] [int] NULL
    )
    Sorry for the trouble; I am trying everything I can get my hands on until I manage to save those images in "0x1234567890ABCDE……." format.
    I'll try anything you suggest, but I have only used SQL Server, so that's why I'm trying this road first.
    Thanks for your help.
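    For what it's worth, a hedged sketch of the same table using varbinary(max), the usual SQL Server equivalent of MySQL's longblob (image also works but is deprecated). Note the comentario column widened to match the MySQL varchar(100); the varchar(1) in the attempt above is one likely cause of the truncation error:
    CREATE TABLE [dbo].[inmuebles_fotos2](
        [ci_inm]      [int] NULL,
        [pos]         [int] NULL,
        [foto]        [varbinary](max) NULL,  -- longblob equivalent
        [mini]        [varbinary](max) NULL,  -- longblob equivalent
        [comentario]  [varchar](100) NULL,    -- was varchar(1), which truncates the comment
        [inet]        [int] NULL,
        [impr_cartel] [int] NULL,
        [impr_visita] [int] NULL
    );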

  • People Tools Table in SQL SERVER

    Hi,
    As we have the PSDBOWNER table in Oracle or DB2, which helps us change the DB name after a production refresh,
    likewise I want to know the list of PeopleTools tables in SQL Server which need the DB name changed from PRD to TEST after a refresh.
    Rgds
    PS Admin

    PeopleTools tables will have the same names regardless of the database you use.
    The PSDBOWNER table is the only table created under the PS Oracle database user. (This table will not be present in the SYSADM schema.)
    Thanks
    Soundappan
    Edited by: Soundappan on Jan 5, 2012 5:12 PM
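    For the Oracle/DB2 case mentioned above, the refresh step is typically a single update along these lines (a sketch; verify the schema and value against your PeopleTools setup):
    UPDATE PS.PSDBOWNER SET DBNAME = 'TEST' WHERE DBNAME = 'PRD';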

  • Problem loading table in SQL server

    Hi,
    I'm trying to load a table in SQL server from another instance of SQL server.
    I have defined the physical and and logical data stores and reverse engineered the models to retrieve the tables.
    The target table was created manually..
    If I try to run the interface i get the fololowing error
    ODI-1227: Task SrcSet0 (Loading) fails on the source MICROSOFT_SQL_SERVER connection DATAWAREHOUSE.
    Caused By: java.sql.SQLException: [FMWGEN][SQLServer JDBC Driver][SQLServer]Incorrect syntax near '<'.
    at weblogic.jdbc.sqlserverbase.ddb_.b(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb_.a(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb9.b(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb9.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.v(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddq.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
    at weblogic.jdbc.sqlserver.ddj.m(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddel.e(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddel.a(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddde.a(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddel.v(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddel.r(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddde.execute(Unknown Source)
    at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:558)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:464)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
    at java.lang.Thread.run(Thread.java:662)
    It is trying to run the following SQL, and I'm not sure why it is trying to drop and create a view in the source system; the interface that I'm running above has just a source-to-target mapping.
    drop view <Undefined>.SQLDATAWH_DATAWAREHOUSEAccountDim
    Any pointers will be helpful.
    Thanks in advance...

    whirlpool wrote:
    I think I selected the MSSQL one.. but I do not have access to the server now. Is this the correct KM?
    If you have selected IKM MSSQL Incremental Update then it is the correct IKM to choose.
    To use this IKM, the staging area must be on the same data server as the target.
    What is the LKM selected?
    I right-clicked on the Reverse-Engineering (RKM) models and imported all knowledge modules.. Is that how it's done?
    It is fine.
    is that the correct one... I do not understand why the interface is trying to drop and create a view in the source system..
    It depends on the KM selected. So first get the name of the LKM and IKM used in the interface.

  • How can we find the most usage and lowest usage of table in Sql Server by T-SQL

    How can we find the most used and least used tables in SQL Server using T-SQL?
    The table has timestamp columns:
    StartedOn datetime
    EndedOn datetime

    The query below has been used, but the TextData column does not include the name of the table ServiceLog.
    SELECT databasename,
           duration
    FROM fn_trace_gettable('F:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\log_148.trc',
                           default)
    WHERE databasename = 'ZTCFUTURE'
      AND textdata IS NOT NULL
      --AND textdata LIKE 'SERVICE%'
    ORDER BY cpu DESC;
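    If the goal is simply to rank tables by how much they are read and written, a hedged alternative to reading trace files is the index-usage DMV, which accumulates counts since the last SQL Server restart; a sketch (run in the target database):
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
           SUM(s.user_updates) AS writes
    FROM sys.dm_db_index_usage_stats AS s
    WHERE s.database_id = DB_ID()
    GROUP BY OBJECT_NAME(s.object_id)
    ORDER BY reads DESC;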

    When I login to ACC, I can see the edge section and Reflow.  When I select Reflow, I am directed to the download page, however Reflow is not listed at all within the Edge section.  Is it the version of ACC I have?  There isn't even an option to add a