Mapping a BW query to a Z table in SAP ECC

Hi BW Gurus,
I have a requirement as described below, and I already have one solution in mind, but I would like to know what other alternatives exist, so please let me know.
A BI query would be created from an existing APO BI cube.
A Z program would be created in SAP.
A Z table would be created in SAP.
The Z program would map the fields in the BI cube to the fields in the Z table.
The Z program would execute the BI query through an RFC connection, pull the data, and store the data in the Z table using the mapping done previously.
This execution would be an ad hoc one, not a periodic job.
Though I have described executing the query through an RFC connection, we are open to any possible suggestion. All we need is to bring the data in the APO BI cube into the SAP Z table.
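For illustration, here is a rough sketch of the RFC step I have in mind. It assumes the remote-enabled function module RRW3_GET_QUERY_VIEW_DATA is used to execute the query; the RFC destination, query, InfoProvider, Z table and field names below are only placeholders, and the exact FM signature should be checked in SE37:

REPORT z_pull_apo_query.

" All names below (destination, query, InfoProvider, Z table, fields) are placeholders.
DATA: lt_cell_data TYPE rrws_t_cell,                       " key figure cells returned by the query
      lt_axis_data TYPE rrws_thx_axis_data,                " characteristic values per axis
      ls_cell      TYPE rrws_s_cell,
      lt_ztab      TYPE STANDARD TABLE OF ztab_apo_result, " hypothetical Z table
      ls_ztab      TYPE ztab_apo_result.

START-OF-SELECTION.
  " Execute the BI query remotely in the APO BW system.
  CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
    DESTINATION 'APO_BW_CLNT100'
    EXPORTING
      i_infoprovider        = 'ZAPO_CUBE'
      i_query               = 'ZAPO_QUERY_01'
    IMPORTING
      e_cell_data           = lt_cell_data
      e_axis_data           = lt_axis_data
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      OTHERS                = 3.
  IF sy-subrc <> 0.
    MESSAGE 'Query execution via RFC failed' TYPE 'E'.
  ENDIF.

  " Map the generic cell/axis result to the Z table fields (mapping logic is application specific).
  LOOP AT lt_cell_data INTO ls_cell.
    ls_ztab-keyfig_value = ls_cell-value.    " field names assumed for illustration
    APPEND ls_ztab TO lt_ztab.
  ENDLOOP.

  MODIFY ztab_apo_result FROM TABLE lt_ztab.
  COMMIT WORK.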
Please let me know of any other possible alternatives.
Thanks,
Shailaja

Hi Shailaja
Just a thought on this issue: it is quite simple, and there is no need to write a huge ABAP program.
Step 1: Create an APD in RSANWB, take the query as the input and a DSO for direct update as the target, and in the mapping keep only the fields you require/recommend for your R/3 table (you can restrict them with the filter option in the APD).
Step 2: Create an open hub destination (InfoSpoke) which will transfer this data to your R/3 table.
Step 3: Put the step 1 and step 2 processes together in a process chain, and in the start process use an event as the trigger.
Step 4: Trigger the cross-system event through the "BP_RAISE_EVENT" FM, i.e. develop a program in R/3 in such a way that the moment the user runs it, it automatically raises the event in the BW system and steps 1 and 2 are performed (a rough sketch of such a trigger program is shown below).
The challenge in this is: 1) if you need delta data, then you have to implement a BAdI at the InfoSpoke level.
In your case this challenge does not apply, as you specified it is an ad hoc run.
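As an illustration of the step 4 trigger program, here is a minimal sketch. It assumes the standard background-event function module (usually BP_EVENT_RAISE; verify the exact name and interface in SE37), and the RFC destination and event name are placeholders:

REPORT z_trigger_bw_chain.

" Placeholders: RFC destination of the BW system and the event used by the process chain.
CONSTANTS: gc_bw_dest TYPE rfcdest    VALUE 'BW_CLNT100',
           gc_eventid TYPE btceventid VALUE 'Z_APO_TO_R3_PULL'.

START-OF-SELECTION.
  " Raise the background event in the BW system; the process chain whose start
  " process waits for this event then runs the APD and open hub steps.
  CALL FUNCTION 'BP_EVENT_RAISE'
    DESTINATION gc_bw_dest
    EXPORTING
      eventid               = gc_eventid
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      OTHERS                = 3.
  IF sy-subrc <> 0.
    MESSAGE 'Could not raise the event in the BW system' TYPE 'E'.
  ENDIF.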
Hope this makes it a little clearer!
Thanks
K M R
"Impossible Means I M Possible"
Winners Don't Do Different things,They Do things Differently...!.

Similar Messages

  • Extracting Values of a Field from a Database Table in SAP ECC System

    Hi,
    I downloaded the 'Extracting Values of a Field from a Database Table in SAP ECC System Using MII 12.0' scenario from SDN. I am trying to do that scenario in MII 12.05, but I have a problem with section 6 on page 7 (you can get the scenario from SDN):
    "6 - Under the loop of the Repeater, use action 'Row' to append just the string part of the WA, which will display only the values for field 'Batch'"
    I did not find a WA element in the Output element of Repeater_0.
    How can I create the WA element?
    Thanks.

    Cemil,
    Set up a SAP JCo Interface action block.  Use the RFC name RFC_READ_TABLE.
    In the link editor, map the table to "MARA", set RowCount to something small (20 is a good sample size), and create an XML transaction property named FIELDS, copying the following into it:
    <?xml version="1.0" encoding="UTF-8"?><FIELDS>
          <item>
            <FIELDNAME>MATNR</FIELDNAME>
            <OFFSET/>
            <LENGTH/>
            <TYPE/>
            <FIELDTEXT/>
          </item>
          <item>
            <FIELDNAME>MTART</FIELDNAME>
            <OFFSET/>
            <LENGTH/>
            <TYPE/>
            <FIELDTEXT/>
          </item>
          <item>
            <FIELDNAME>BSTME</FIELDNAME>
            <OFFSET/>
            <LENGTH/>
            <TYPE/>
            <FIELDTEXT/>
          </item>
          <item>
            <FIELDNAME>XCHPF</FIELDNAME>
            <OFFSET/>
            <LENGTH/>
            <TYPE/>
            <FIELDTEXT/>
          </item>
          <item>
            <FIELDNAME>DATAB</FIELDNAME>
            <OFFSET/>
            <LENGTH/>
            <TYPE/>
            <FIELDTEXT/>
          </item>
        </FIELDS>
    Then link Transaction.FIELDS to SAP_JCo_Interface_0.Request{/RFC_READ_TABLE/TABLES/FIELDS}.  You may run into problems with two other fields; optionally they can be removed (set the link type to remove xml).  I usually remove them initially for testing.  The two fields are:
    SAP_JCo_Interface_0.Request{/RFC_READ_TABLE/INPUT/NO_DATA}
    SAP_JCo_Interface_0.Request{/RFC_READ_TABLE/INPUT/DELIMITER} (or you can set this to something like a semicolon ";" or tilde "~".  I find it easier to calculate position by length, but that is my own idiosyncrasy.)
    Once you get this one working, we can explore how to do filtering on the dataset.  Your output should be something like this:
    <?xml version="1.0" encoding="utf-8"?>
    <RFC_READ_TABLE>
      <INPUT>
        <DELIMITER />
        <NO_DATA />
        <QUERY_TABLE>MARA</QUERY_TABLE>
        <ROWCOUNT>20</ROWCOUNT>
        <ROWSKIPS>0</ROWSKIPS>
      </INPUT>
      <TABLES>
        <DATA>
          <item>
            <WA>000000000000000023ROH 00000000</WA>
          </item>
          <item>
            <WA>000000000000000038HALB 00000000</WA>
          </item>
          <item>
            <WA>000000000000000043HAWA 00000000</WA>
          </item>
          <item>
            <WA>000000000000000058HIBE 00000000</WA>
          </item>
          <item>
            <WA>000000000000000059HIBE 00000000</WA>
          </item>
          <item>
            <WA>000000000000000068FHMI 00000000</WA>
          </item>
          <item>
            <WA>000000000000000078DIEN 00000000</WA>
          </item>
          <item>
            <WA>000000000000000088FERT 00000000</WA>
          </item>
          <item>
            <WA>000000000000000089FERT 00000000</WA>
          </item>
          <item>
            <WA>000000000000000098HALB 00000000</WA>
          </item>
          <item>
            <WA>000000000000000170NLAG 00000000</WA>
          </item>
          <item>
            <WA>000000000000000178NLAG 00000000</WA>
          </item>
          <item>
            <WA>000000000000000188NLAG 00000000</WA>
          </item>
          <item>
            <WA>000000000000000288HALB 00000000</WA>
          </item>
          <item>
            <WA>000000000000000358HAWA 00000000</WA>
          </item>
          <item>
            <WA>000000000000000359HAWA 00000000</WA>
          </item>
          <item>
            <WA>000000000000000521HAWA 00000000</WA>
          </item>
          <item>
            <WA>000000000000000578FERT 00000000</WA>
          </item>
          <item>
            <WA>000000000000000597HAWA 00000000</WA>
          </item>
          <item>
            <WA>000000000000000598VERP 00000000</WA>
          </item>
        </DATA>
        <FIELDS>
          <item>
            <FIELDNAME>MATNR</FIELDNAME>
            <OFFSET>000000</OFFSET>
            <LENGTH>000018</LENGTH>
            <TYPE>C</TYPE>
            <FIELDTEXT>Material Number</FIELDTEXT>
          </item>
          <item>
            <FIELDNAME>MTART</FIELDNAME>
            <OFFSET>000018</OFFSET>
            <LENGTH>000004</LENGTH>
            <TYPE>C</TYPE>
            <FIELDTEXT>Material Type</FIELDTEXT>
          </item>
          <item>
            <FIELDNAME>BSTME</FIELDNAME>
            <OFFSET>000022</OFFSET>
            <LENGTH>000003</LENGTH>
            <TYPE>C</TYPE>
            <FIELDTEXT>Purchase Order Unit of Measure</FIELDTEXT>
          </item>
          <item>
            <FIELDNAME>XCHPF</FIELDNAME>
            <OFFSET>000025</OFFSET>
            <LENGTH>000001</LENGTH>
            <TYPE>C</TYPE>
            <FIELDTEXT>Batch management requirement indicator</FIELDTEXT>
          </item>
          <item>
            <FIELDNAME>DATAB</FIELDNAME>
            <OFFSET>000026</OFFSET>
            <LENGTH>000008</LENGTH>
            <TYPE>D</TYPE>
            <FIELDTEXT>Valid-From Date</FIELDTEXT>
          </item>
        </FIELDS>
        <OPTIONS />
      </TABLES>
    </RFC_READ_TABLE>
    Add a repeater sourced on:
    SAP_JCo_Interface_0.Response{/RFC_READ_TABLE/TABLES/DATA/item}
    Link your repeater output to a tracer with this:
    Repeater_0.Output{/item/WA}
    What you will see in each tracer message is a single line of data with all the field contents concatenated together.  You can look up what each field in the string represents by the length of the field as returned in the Response segment of the RFC_READ_TABLE RFC.  Then you can parse out the data you are interested in.
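    For illustration only, the same offset/length-based parsing can be done in ABAP (in MII you would do the equivalent with string functions in the BLS transaction). This sketch assumes the standard RFC_READ_TABLE interface, restricted to two fields:
    " Restrict the output to MATNR and MTART, then cut the WA string using the returned offsets/lengths.
    DATA: lt_fields  TYPE STANDARD TABLE OF rfc_db_fld,
          ls_field   TYPE rfc_db_fld,
          lt_options TYPE STANDARD TABLE OF rfc_db_opt,
          lt_data    TYPE STANDARD TABLE OF tab512,
          ls_data    TYPE tab512,
          lv_matnr   TYPE matnr,
          lv_mtart   TYPE mtart.

    ls_field-fieldname = 'MATNR'. APPEND ls_field TO lt_fields.
    ls_field-fieldname = 'MTART'. APPEND ls_field TO lt_fields.

    CALL FUNCTION 'RFC_READ_TABLE'
      EXPORTING
        query_table = 'MARA'
        rowcount    = 20
      TABLES
        options     = lt_options
        fields      = lt_fields
        data        = lt_data
      EXCEPTIONS
        OTHERS      = 1.
    IF sy-subrc <> 0.
      " handle errors (table not found, no authorization, etc.)
      RETURN.
    ENDIF.

    LOOP AT lt_data INTO ls_data.
      " With only MATNR and MTART requested, the offsets are 0(18) and 18(4),
      " exactly as reported back in the FIELDS table of the response.
      lv_matnr = ls_data-wa+0(18).
      lv_mtart = ls_data-wa+18(4).
    ENDLOOP.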
    Give this a try and let me know how you succeeded.
    By the way, I could not find the scenario you referred to.  Can you post a link?
    Regards,
    Mike

  • Syncing UST10S and UST12 tables in SAP ECC 6.0

    Hi there,
    I am currently experiencing some discrepancies between the UST10S table, which contains the authorizations assigned to profiles, and the table UST12.
    I have heard that the UST tables in SAP have been known to get out of sync and become inaccurate, and that there is a way to sync them with the USR tables. Is there such a method for ECC 6.0?

    Start transaction SUIM_OLD and wait about 5 to 60 seconds. Then the tables are synced and you can use SUIM.
    Inconsistencies should, however, seldom occur anymore.
    Cheers,
    Julius

  • Obsolete Tables in SAP ECC 6.0

    Dear All,
    Please let me know the ways to find obsolete tables in the ECC 6.0 version.
    I checked the table RODIR, supplying the object type and setting the Obsolete field to 'X', but RODIR only gives me function module (object type 'FUNC') information.
    Kindly input your valuable suggestions.
    Thanks
    Senthilkumar L

    Hi,
    The obsolete objects (function modules and tables) can be traced using the table RODIR, in which the object type should be selected and the Obsolete field set to 'X'.
    Regards,
    Madhu

  • Error encountered when filling the setup tables in SAP ECC purchasing

    Enter rate  / USD rate type M for 00.00.0000 in the system settings
    Message no. SG105
    Diagnosis
    For the conversion of an amount into another currency, an entry is missing in the currency conversion table.
    Procedure
    Add the missing entry in the currency conversion table.
    Execute function
    You can then continue to process the commercial transaction

    Hi Karim,
    When posting a thread here, please use a proper headline instead of a general sentence.
    Issue: this looks like an exchange rate problem on the ECC side.
    Ask your FI or functional team to maintain the proper USD exchange rates for rate type M.
    After that you can try to fill the setup tables again.
    Once the FI team has maintained the exchange rates on the ECC side, also transfer the global settings to the BW side.
    Thanks

  • Tables in SAP

    Hi,
    does anybody know exactly how many tables there are in SAP ECC 6.0, and what their application is?
    kaustubh

    Hi Kaustubh,
    here are some facts about SAP AG (as per the 2003 annual report):
    3rd - SAP is the 3rd largest software company in the world
    30,000 - Total number of people employed by SAP
    5,400 - Number of programmers employed by SAP
    $7.024 billion - FY03 Revenue
    $1.077 million - FY03 Net Income
    12,000 - Number of companies using SAP
    79,800 - Number of SAP installations
    12,000,000 - Number of people using SAP
    120,000,000 - Total number of people in the 12,000 companies who are using SAP
    28 - Number of languages supported by SAP
    46 - Number of country-specific versions of SAP
    22 - Number of industry-specific versions of SAP
    1,000 - Number of pre-defined best practices contained in the SAP system
    10,000 - Number of tables requiring configuration in a full SAP implementation
    55,000 - Number of SAP experienced consultants worldwide
    28 - Number of years ago SAP was started
    5 - Number of people who started SAP

  • Mapping between the query/report and the role with technical names - BI Sec

    Hi,
    How can I find the mapping between a query/report and the role, with technical names?
    In R/3 we can find the mapping using table AGR_TCODES or through SUIM. However, in BI all reports have the tcode RRMX.
    So how can I find the role for a given query/report, e.g. a sales report, with technical names?

    I looked into this quite a while ago and cannot remember the exact details, but I think there were 3 tables needed, together with a structure to explore with a function module, and you need to use the program ID to link them.
    If you look in SQ01 then I am sure you will find one of the tables, and you can search onward from there.
    Alternatively, while searching you might find a nice report which does this for you.
    Hope that helps, and let us know whether you find it.
    Cheers,
    Julius

  • Using case when statement in the select query to create physical table

    Hello,
    I have a requirement where I have to execute a CASE WHEN statement with a session variable while creating a physical table using a select query. Let me explain with an example.
    I have a physical table based on a select table with one column.
    SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this table as the NAME_PARAMETER table.
    I also have a customer table.
    My dashboard has two pages. Page 1 contains a table based on the customer table, with column navigation to my second dashboard page.
    On my second dashboard page I created a report based on the NAME_PARAMETER table and a prompt based on the customer table that sets the NAME_PARAMETER request variable.
    EXECUTION
    When I click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
    Everything works as expected.
    Now I created another table called NAME_PARAMETER1 with a little modification to the earlier table; the query is as follows.
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
    FROM DUAL
    Now I pull in this table into the second dashboard page along with the NAME_PARAMETER table report.
    Surprisingly, the NAME_PARAMETER table report executes as is, but the other report, based on the NAME_PARAMETER1 table, fails with the following error.
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
    SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
    If anyone has any explanation to this error and how we can achieve the same, please help.
    Thanks.

    Hello,
    Updates :) sorry, the error was a stupid one (a missing quote in the CASE expression). I resolved it, but now I am stuck at my next step.
    I am creating a physical table using a select query, but I am trying to obtain the name of the table dynamically.
    Here is what I am trying to do; the select query of the physical table is as follows.
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
    The idea behind this is to obtain the data from the same table in different schemas, dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know whether this can be achieved by any other method in OBIEE.
    Thanks.

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract information about tablespace storage, both permanent and temporary. For that I use two different cursors that run exactly the same query, but against different tables (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, it bothers me that I have to use two cursors to execute the same query against two different tables. Is there no other way around this?
    Then I fetch the results of these cursors in two different loops, because I did not find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will do the same parsing on the results of both cursors.
    Thank you,

    Hi
    Check whether the query below is helpful:
    select fs.tablespace_name "Tablespace",
           fs.tempspace       "Temp MB",
           df.totalspace      "Total MB"
    from
      (select tablespace_name,
              round(sum(bytes) / 1048576) totalspace
       from dba_data_files
       group by tablespace_name) df,
      (select tablespace_name,
              round(sum(bytes) / 1048576) tempspace
       from dba_temp_files
       group by tablespace_name) fs
    where df.tablespace_name = fs.tablespace_name;
    Thanks

  • How can i display all the query items to a table?

    How can I display all the query items in a table in a JSP file?
    I always get an out-of-memory error.

    Anybody? Any idea?
    Is it possible through configuration, or do I have to have a program written by the ABAPer?
    Biswa

  • How to create an explain plan with rowsource statistics for a complex query that include multiple table joins ?

    1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
    When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the plan or statistics?
    2. Do row source statistics give some kind of insight into extended statistics?

    You can get row source statistics only *after* the SQL has been executed.  An explain plan by itself cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor.
    Hemant K Chitale

  • Select Query failing on a  table that has per sec heavy insertions.

    Hi
    Problem statement:
    1. We are using 11g as the database.
    2. We have a table that is range-partitioned on the date.
    3. The insertion rate is very high, i.e. several hundred records per second into the current partition.
    4. The data continuously goes into the current partition as and when the buffer is full or the per-second timer expires.
    5. We also have to run a select query on the same table, on the current partition, say for the latest 500 records.
    6. Efficient indexes have also been created on the table.
    Solutions tried:
    1. After analyzing with tkprof, it is observed that the select and execute phases work fine, but the fetch takes too much time to show the output, say 1 hour.
    2. Several baselines were created using the 11g SQL advisor and SPM, but their success rate is also observed to be too low.
    Please suggest any solution to this issue, e.g.:
    1. A redesign of the table.
    2. Any better way to write the query to fix the fetch issue.
    3. Any Oracle settings or parameter changes to fix the fetch issue.
    Thanks in advance.
    Regards
    Vishal Sharma

    I am uploading the latest stats; please let me know how I can improve this, as it is taking 25 minutes.
    ####TKPROF output#########
    SQL ID : 2j5w6bv437cak
    select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
      almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
      almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
      almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
      almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
      almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
      almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
      almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
      almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
    FROM
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.10       0.17          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       42   1348.25    1521.24       1956   39029545          0         601
    total       44   1348.35    1521.41       1956   39029545          0         601
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Rows     Row Source Operation
        601  PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
        601   TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
        601    INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
        601     FILTER  (cr=39027139 pr=0 pw=0 time=0 us)
    169965204      COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
    169965204       VIEW  (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
    169965204        PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
    169965204         TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
    169965204          INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      42        0.00          0.00
      SQL*Net message from client                    42       11.54        133.54
      db file sequential read                      1956        0.20         28.00
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
    SQL ID : 0ushr863b7z39
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
    FROM
    (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
      NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
      "PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
      "SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.01          1          3          0           1
    total        3      0.00       0.01          1          3          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 82     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
          0   TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.01          0.01
    SQL ID : bjkdb51at8dnb
    EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
      almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
      almevttbl.Severity, almevttbl.State, almevttbl.Category,
      almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
      almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
      almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
       almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
      almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
      almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
      almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM 
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.28       0.26          0          0          0           0
    Execute      1      0.01       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.29       0.27          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       13      0.71       0.96          3         10          0           0
    Execute     14      0.20       0.29          4        304         26          21
    Fetch       92   2402.17    2714.85       3819   70033708          0        1255
    total      119   2403.09    2716.10       3826   70034022         26        1276
    Misses in library cache during parse: 10
    Misses in library cache during execute: 6
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      49        0.00          0.00
      SQL*Net message from client                    48       29.88        163.43
      db file sequential read                      1966        0.20         28.10
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
      latch: session allocation                       1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      940      0.51       0.73          1          2         38           0
    Execute   3263      1.93       2.62          7       1998         43          23
    Fetch     6049      1.32       4.41        214      12858         36       13724
    total    10252      3.78       7.77        222      14858        117       13747
    Misses in library cache during parse: 172
    Misses in library cache during execute: 168
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        88        0.04          0.62
      latch: shared pool                              8        0.00          0.00
      latch: row cache objects                        2        0.00          0.00
      latch free                                      1        0.00          0.00
      latch: session allocation                       1        0.00          0.00
       34  user  SQL statements in session.
    3125  internal SQL statements in session.
    3159  SQL statements in session.
    Trace file: ora11g_ora_2064.trc
    Trace file compatibility: 11.01.00
    Sort options: default
           6  sessions in tracefile.
          98  user  SQL statements in trace file.
        9111  internal SQL statements in trace file.
        3159  SQL statements in trace file.
          89  unique SQL statements in trace file.
       30341  lines in trace file.
        6810  elapsed seconds in trace file.
    ###################################### AutoTrace Output#################  
    Statistics
           3901  recursive calls
              0  db block gets
       39030275  consistent gets
           1970  physical reads
            140  redo size
         148739  bytes sent via SQL*Net to client
            860  bytes received via SQL*Net from client
             42  SQL*Net roundtrips to/from client
             73  sorts (memory)
              0  sorts (disk)
            601  rows processed

  • Sling mapping issue with Query String

    For a button component, the URL behaves differently when a query string is present. Please let me know the Sling mapping example for a query string.
    Button component behavior across environments:
    Working scenario:
    - In Author edit mode, the URL points to "/content/<websitelink>"
    - In Author preview mode, the URL shows "<host>:<port>/content/<websitelink>.html"
    - In Publisher mode, the URL shows "<host>:<port>/<websitelink>"; we do not want "/content" before the website link, so this is fine.
    Failure scenario with a query string:
    - In Author edit mode, the URL points to "/content/<websitelink>?username=han"
    - In Author preview mode, the URL is "<host>:<port>/content/<websitelink>?username=han"
    - In Publisher mode, the URL is "<host>:<port>/content/<websitelink>?username=han" - here "/content" still appears, which is not acceptable in our scenario.
    Please let me know the mapping changes needed when we have a URL with a query string.

    1) I assume you configured this as per the rules mentioned in http://sling.apache.org/site/mappings-for-resource-resolution.html --> 'Mapping Entry Specification' under /etc/map. Based on the configuration in /etc/map and the website that we publish, it translates and shows all the applicable combinations in the Felix console at http://<host>:<port>/system/console/jcrresolver.
    2) How does Adobe CQ 5.4 resolve the right mapping if we have multiple map folders like the below in the /etc directory, with the same website but different rules?
    etc/map --> http --> <website> with property sling:match, sling:internalRedirect
    etc/map.publish --> http --> <website> with property sling:match, sling:internalRedirect
    etc/map.publish-stag --> http --> <website> with property sling:match, sling:internalRedirect
    Kindly clarify.

  • Query related to Internal Table

    Hi,
    I have a small query related to internal tables: can we dump millions of records into an internal table?
    The actual requirement is that I need to develop a report on the BI side where I have to dump records from PSA tables into an internal table, without filtering.
    Can we do so?
    Or do we have any other option for getting the data into internal tables?
    I need some tips on the same.
    Thanks ,
    VInay.

    Hello Vinay,
    I believe the following extract will give you a brief idea of the size limitations of an internal table:
    "Internal tables are dynamic data objects, since they can contain any number of lines of a particular type. The only restriction on the number of lines an internal table may contain are the limits of your system installation. The maximum memory that can be occupied by an internal table (including its internal administration) is 2 gigabytes. A more realistic figure is up to 500 megabytes. An additional restriction for hashed tables is that they may not contain more than 2 million entries."
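    If the data volume is too large for one internal table, a common alternative is to read the PSA table in packages rather than all at once. A minimal sketch, where the generated PSA table name /BIC/B0000123000 is only a placeholder:
    " Read the PSA table package by package instead of into one huge internal table.
    DATA lt_psa TYPE STANDARD TABLE OF /bic/b0000123000.   " placeholder PSA table name

    SELECT * FROM /bic/b0000123000
             INTO TABLE lt_psa
             PACKAGE SIZE 50000.
      " Process or aggregate this package here; the next loop pass replaces it.
    ENDSELECT.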
    Hope it proved useful
    Reward if helpful
    Regards
    Byju
