Performing DDLs in the middle of a transaction: use Autonomous Transactions?

We want to use Oracle Spatial to store some of our data, for example SDO_PC, a Spatial type for arbitrarily large point clouds. The problem is that the standard way to populate an SDO_PC instance (in a row) is via SDO_PC_PKG.CREATE_PC, which turns out to be a DDL operation, and which furthermore operates on an input table containing just the points to insert for that SDO_PC instance. So one also needs to create a temp table per point cloud (or reuse the same one, serializing access to it), which is yet another DDL.
But when the user hits save in our application, she may have 10, 20, or 100 point clouds to save, and I'd want the entire save to be a single large transaction (I asked about this in this forum a while back, and was told to make my transactions as large as they should "logically" be, which in this case is a "save" operation).
How does one perform the save in a single transaction, when the save itself requires calling SDO_PC_PKG.CREATE_PC (which is a DDL) multiple times?
Are "Autonomous Transactions" the solution? (I've read on Ask Tom that Autonomous Transactions are misused most of the time; would that be true here as well?)
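For concreteness, here is a minimal sketch of the kind of autonomous-transaction wrapper I have in mind (the procedure name, table name and column list are made up for illustration; I'm aware the DDL would then commit independently of the main save transaction, which may be exactly the objection):

CREATE OR REPLACE PROCEDURE create_staging_table (p_table_name IN VARCHAR2) AS
    -- Runs in its own transaction, so the DDL's implicit commit does not
    -- commit the caller's outstanding "save" work.
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    EXECUTE IMMEDIATE
        'CREATE TABLE ' || p_table_name ||
        ' (rid VARCHAR2(24), val_d1 NUMBER, val_d2 NUMBER, val_d3 NUMBER)';  -- hypothetical columns
    -- The DDL above issues an implicit commit, which ends this autonomous
    -- transaction; no explicit COMMIT is needed before returning.
END create_staging_table;
/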
Must the application open a separate connection to the same server and perform the DDLs in this other connection? If so, will the first "transactional save" connection be able to see the side effects (tables, etc.) of the DDLs from the "DDL" connection?
What other solution to this issue could be implemented?
In a related issue, in the Spatial thread "Performance of insert with spatial index" someone proposed dropping a Spatial index before an insert and recreating it afterward, to overcome a large overhead on the insert. That also implies DDLs in the middle of a large multi-table transaction, so would a solution to the SDO_PC_PKG.CREATE_PC issue above apply equally well in that case?
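Concretely, the drop/recreate in that thread amounts to something like this (index and table names are placeholders), which is what makes it DDL:

DROP INDEX my_cloud_geom_idx;

-- ... perform the large batch of inserts here ...

CREATE INDEX my_cloud_geom_idx ON my_clouds (geom)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;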
As everyone can tell, I'm fairly new to databases and Oracle, so I'd appreciate the help of more experienced DBAs / Application Database Architects on this. Thanks, --DD

Thanks Justin. I didn't think of this, even though I've looked at Workspace Manager a bit, and it's something we are considering using in the future.
But I'm worried about 1) the complexity that Workspace Manager adds, where every table is renamed and replaced by a view, and 2) the performance implications of all the INSTEAD OF triggers used to write to the underlying data tables.
Furthermore, I wonder how Workspace Manager would scale with hundreds of users, since having the save done in a separate branch more or less implies having a branch per user, and thus hundreds of branches. Plus Workspace Manager uses VPD I think, yet another complication (again, we'll probably have to bite the bullet eventually, but I'd rather that be later than sooner).
Thus I'm reluctant to go the Workspace Manager route at this point, because we already have so much going on with Oracle that I'd rather leave this layer out, at least for now.
What about the parallel connection or autonomous transaction options? Thanks, --DD

Similar Messages

  • Discussion Forum: Unable to perform the operation: Row was not found using

    Hi all,
    I deployed the Discussion Forum and it shows the Portlet with "Creating a product ..". Very fine.
    I can create a Product. Fine also.
    I can fill in a new Thread, but then, when I try to submit it, I get the error:
    Unable to perform the operation: Row was not found using request parameter: 000100000004313437380000012A000000F8FE832E94.
When I click Cancel, I see the Overview of the Products. I drill down to the Threads for my Product and the new thread appears on the list. As soon as I click on the Thread in order to read it, the following error appears:
    Unable to perform the operation: Row was not found using request parameter: 0001000000043134373800000134000000F8FE832E94.
    Any help is appreciated,
    Michael

    You need to set the Login Frequency (while registering the provider) to 'Once Per Session'. This will allow the portlet to track the user transaction state.
    Make sure that you do not bounce the OC4J container in between portlet operations, since this will leave the transaction in an inconsistent state.
    You have to ensure that whenever you bounce the OC4J container/provider mid-tier, you log out of Portal and log in again.
    This will solve the problem.

  • Performing DDL

    Hi all,
    Can anyone suggest an answer for this
    What's the package used in Reports (Developer 2000) to perform DDL (supposing, for example, that I want to create a table from Reports)?
    Thank you

    SRW.DO_SQL('create table ...')
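    A hedged sketch of how that could look inside a report trigger (the Before Report trigger, table name and message number are illustrative, not from the original question):
    FUNCTION BeforeReport RETURN BOOLEAN IS
    BEGIN
      SRW.DO_SQL('CREATE TABLE check_totals (emp_count NUMBER)');  -- hypothetical table
      RETURN TRUE;
    EXCEPTION
      WHEN SRW.DO_SQL_FAILURE THEN
        SRW.MESSAGE(100, 'CREATE TABLE failed.');
        RETURN FALSE;
    END;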

  • How can I perform this kind of range join query using DPL?

    How can I perform this kind of range join query using DPL?
    SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
    In this pdf : http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
    It shows how to perform "Two equality-conditions query on a single primary database" just like SELECT * FROM tab WHERE col1 = A AND col2 = B using entity join class, but it does not give a solution about the range join query.

    I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
    Your query is very simple to implement. You should perform the first part of the query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
    This is called "filtering" because you're iterating through entities that you obtain from one index, and selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
    An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
    If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smallest number of results. The fewer entities read, the faster the query.
    Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense -- filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
    --mark

  • ORA-38301: can not perform DDL/DML over objects in Recycle Bin

    Oracle 10.2.0.4:
    When performing DDL on a table I get the error "ORA-38301: can not perform DDL/DML over objects in Recycle Bin". I ran purge recyclebin but it didn't help. I then ran the following SQL (below) and it returned a whole bunch of rows for that user. Does that mean that purge recyclebin didn't work? What should I do?
    select r.obj#, r.original_name, u.username from recyclebin$ r, dba_users u where r.owner#=u.user_id

    "I ran purge recyclebin but it didn't help" - is this a RAC environment?
    Check the Metalink note "Performing DML/DDL operation over object in bin ORA-38301" (Doc ID 578075.1) and
    Bug 4760728 - ORA-38301 during DROP TABLE when already dropped from a different node (Note 4760728.8).
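    For reference, a hedged sketch of the kind of check and purge that can be run (object names are illustrative; the DBA-level purge needs the appropriate privilege):
    -- What is still in the recycle bin for the current user?
    SELECT original_name, object_name, type FROM user_recyclebin;
    -- Purge your own recycle bin, or (as a privileged user) the whole database's.
    PURGE RECYCLEBIN;
    PURGE DBA_RECYCLEBIN;
    -- Or purge a single dropped object by its BIN$ name.
    PURGE TABLE "BIN$abcdef123456$0";   -- hypothetical BIN$ name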

  • Perform multiplication, division n get remainder without using arithmetic o

    hello,
    perform multiplication, division and get the remainder without using arithmetic operators
    thanks in advance
    manasi

    ram.manasi wrote:
    i can program myself however i am new to programming and have no clue how to perform arithmetic operations without using arithmetic operators and would like to know how to go about it
    Well, we're not your private code-monkeys nor are we tutors. We are usually best at answering specific questions, but many of us get our hackles up when someone simply demands an answer. I suggest that you find out what your teacher is expecting of you here. I would guess that this involves some recursion, but it is up to you to find out. Do some work. Then if you have a specific question, please feel free to come back and ask for help. Nicely.

  • DDL Statement (design view) of Tables used in SAP

    Hi,
    I am installing SAP 4.7 on Oracle. Unfortunately some of the media is corrupt, and as a result there are some missing tables and views. One of the missing tables is DD02T.
    Can anyone tell me where I can get the DDL statements (table designs) for the tables used in SAP? In my example I am missing table DD02T.
    If I get this information, I can create those missing tables and complete my installation.
    Regards,
    Abhishek

    You are pretty correct. The commands for creating tables and views are present in SAPVIEW.STR. I am getting problems mainly in the installation of Data Dictionary tables like DD02T and DD02L.
    I tried to work around the problem by converting ERR to OK in one of the .tsk files. That kept the installation process going, but it then gives me an error during the SAP License Post Processing step, as I think it needs some table or view which is not present. So how do I get rid of this? Do you know which tables are affected by the SAP license step, so that I can create them manually?
    I am new to SAP and am installing it for learning purposes.
    Regards,
    Abhishek

  • Get GPS info using Autonomous and CellSite Mode parallelly using MultiThreading

    I tried to get the GPS latitude and longitude values using Autonomous and CellSite mode in parallel using two threads, but during execution only one thread is active and I get values only from it; the other thread doesn't return any values at all.
    Is it possible to retrieve the GPS information using multiple threads running in parallel, and can I display the latitude and longitude values from the threads on the screen, even with less accuracy among the values?
    Thanks in advance...

    Your thread may not get noticed as it is in General Support threads. You may post your thread in Java Development to get faster response.
    Ron

  • Cannot perform DDL DML inside a query

    Hi,
    I am using Tom Kyte's in_list function to create a select list inside a select query in one of the reports in my application, given as follows:
    SELECT col1, col2,...htmldb_item.select_list_from_query(1,NULL,
    '(SELECT column_value d,column_value r FROM the((SELECT CAST
    (mypackage.in_list(''||colon_delimited_list||'') AS
    myTableType) FROM dual) ))' )...FROM A, B WHERE ...
    But the following error appears:
    report error:
    ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
    ORA-06512: at "FLOWS_020200.WWV_FLOW", line 7002
    ORA-06512: at "FLOWS_020200.WWV_FLOW_UTILITIES", line 113
    ORA-06550: line 1, column 45:
    PLS-00103: Encountered the symbol "(" when expecting one of the following:
    begin case declare exit for goto if loop mod null pragma
    raise return select update while with
    If I replace the query passed to function select_list_from_query, with some simple dummy query, then it works fine.
    Please tell me, is there any limitation on its use in a case like this?
    What may be the possible best solution?
    Regards,
    Amir Aman

    I tried Scott's parenthesis solution first.
    It works, but only after making an additional change, i.e., passing '||''''||colon_delimited_list||''''||' instead of just ''||colon_delimited_list||''.
    Denes, I have also seen your examples, really a helpful resource.
    Thank you very much!
    Amir Aman
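    For anyone landing here, a minimal sketch of the in_list idiom being wrapped above (type and function names follow the common Tom Kyte example and are reproduced from memory, so treat them as an assumption rather than the exact code used in the thread):
    CREATE OR REPLACE TYPE myTableType AS TABLE OF VARCHAR2(4000);
    /
    CREATE OR REPLACE FUNCTION in_list (p_string IN VARCHAR2, p_sep IN VARCHAR2 DEFAULT ':')
      RETURN myTableType
    AS
      l_string VARCHAR2(32767) := p_string || p_sep;
      l_data   myTableType := myTableType();
      l_pos    NUMBER;
    BEGIN
      LOOP
        l_pos := INSTR(l_string, p_sep);
        EXIT WHEN NVL(l_pos, 0) = 0;
        l_data.EXTEND;
        l_data(l_data.COUNT) := SUBSTR(l_string, 1, l_pos - 1);  -- element before the separator
        l_string := SUBSTR(l_string, l_pos + 1);                 -- remainder after the separator
      END LOOP;
      RETURN l_data;
    END;
    /
    -- On reasonably recent versions the collection can then be queried directly:
    SELECT column_value FROM TABLE(in_list('a:b:c'));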

  • How do I write a Dll to perform a union in visual studio for use in LabVIEW?

    Hello,
    I'm new to writing dll(s). I'm attempting to write a Dll that will perform a union on a number from LabVIEW and then return the result to LabVIEW for my use. I've been struggling with this for a few days and I'm finally giving in to get some help.
    This screenshot is the only description that I have to help me with what I am trying to do. Basically, I'm trying to pass a value that will be different based on a measured partial discharge value. In the screen below, the value used is "1061111989". This number is my reading. I need to pass this value from LabVIEW into my DLL to perform a union on it.
    In my Dll, I need to write MeterReading.IntVal's value and get the MeterReading.FloatVal back.
    I have Visual Studio 2012 Full and LabVIEW 2012 Full, I'm LabVIEW CLAD and I've written 17 test floors in the US to date; however, I've never had any need of Visual Studio until now. Can anyone help me out with this?
    Thanks in advance, -Chris

    Sometimes, I try to solve simple issues in very complicated ways. What got me so confused here was that the solution from the engineer of the device I am reading from used a union to get the value. I'm not familiar with unions, so I thought it was some structure that was not creatable in LabVIEW. Anyway, a typecast to single-precision float gave me the proper value. Thank you.
    -Chris

  • Performance issues - which oracle xml technology to use?

    Our company has spent some time researching different Oracle XML technologies to obtain the fastest performance while keeping the flexibility of changing XML schemas across revs.
    Not registering schemas gives the flexibility of quickly changing schemas between revs and a simpler table structure, but hurts performance quite a bit compared to registering schemas.
    Flat non-XML tables seem the fastest, but seeing that everything is going XML, this doesn't seem like an option.
    Anyhow, let me know any input/experience anyone can offer.
    Here's what we have tested, all with simple 10,000-record tests, each of the form:
    insert into po_tab values (1,
    xmltype('<PurchaseOrder xmlns="http://www.oracle.com/PO.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.oracle.com/PO.xsd
    http://www.oracle.com/PO.xsd">
    <PONum>akkk</PONum>
    <Company>Oracle Corp</Company>
    <Item>
    <Part>9i Doc Set</Part>
    <Price>2550</Price>
    </Item>
    <Item>
    <Part>8i Doc Set</Part>
    <Price>350</Price>
    </Item>
    </PurchaseOrder>'));
    we have tried three scenarios
    -flat tables, non xml db
    -xml db with registering schemas
    -xml db but not registering schemas, just using XmlType
    and adding oracle text xml indexes with paths to speed up
    queries
    now for the results
    - flat tables, non-XML DB (we were thinking of using it like this to ease fetching of data for UI views in ad hoc situations, then have code to export the data into XML; is there any Oracle tool that will let me export the data into XML automatically via a schema?)
    create table po_tabSimple(
    id int constraint id_pk PRIMARY KEY,
    part varchar2(100),
    price number
    );
    insert into po_tabSimple values (i, 'test', 1.000);
    select part from po_tabSimple;
    2 seconds (Quickest)
    -xml db with registering schemas
    declare
    doc varchar2(1000) := '<schema
    targetNamespace="http://www.oracle.com/PO.xsd"
    xmlns:po="http://www.oracle.com/PO.xsd"
    xmlns="http://www.w3.org/2001/XMLSchema">
    <complexType name="PurchaseOrderType">
    <sequence>
    <element name="PONum" type="decimal"/>
    <element name="Company">
    <simpleType>
    <restriction base="string">
    <maxLength value="100"/>
    </restriction>
    </simpleType>
    </element>
    <element name="Item" maxOccurs="1000">
    <complexType>
    <sequence>
    <element name="Part">
    <simpleType>
    <restriction base="string">
    <maxLength value="1000"/>
    </restriction>
    </simpleType>
    </element>
    <element name="Price" type="float"/>
    </sequence>
    </complexType>
    </element>
    </sequence>
    </complexType>
    <element name="PurchaseOrder" type="po:PurchaseOrderType"/>
    </schema>';
    begin
    dbms_xmlschema.registerSchema('http://www.oracle.com/PO.xsd', doc);
    end;
    create table po_tab(
    id number,
    po sys.XMLType
    )
    xmltype column po
    XMLSCHEMA "http://www.oracle.com/PO.xsd"
    element "PurchaseOrder";
    select EXTRACT(po_tab.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tab;
    4 sec
    -xml db but not registering schemas, just using XmlType
    and adding oracle text xml indexes with paths to speed up
    queries
    create table po_tabOld(
    id number,
    po sys.XMLType
    );
    select EXTRACT(po_tabOld.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tabOld;
    41 seconds without indexes
    41 seconds with indexes
    here are the indexes used
    CREATE INDEX po_tabColOld_idx ON po_tabOld(po) indextype is ctxsys.ctxxpath;
    CREATE INDEX po_tabOld_idx ON po_tabOld X (X.po.extract('/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal());
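    (Side note: on a more recent release the same extraction can also be written with XMLTable instead of EXTRACT; a hedged sketch, assuming the document's default namespace is the PO.xsd one registered above:)
    SELECT x.part, x.price
    FROM   po_tab p,
           XMLTABLE(
             XMLNAMESPACES(DEFAULT 'http://www.oracle.com/PO.xsd'),
             '/PurchaseOrder/Item'
             PASSING p.po
             COLUMNS part  VARCHAR2(100) PATH 'Part',
                     price NUMBER        PATH 'Price') x;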


  • Performance issue in select when subselect is used

    We have a select statement that is using a where clause that has dates in. I've simplified the SQL below to demonstrate roughly what it does:
    select * from user_activity
    where starttime >= trunc(sysdate - 1) + 18/24
    and starttime < trunc(sysdate) + 18/24
    (this selects records from a table where a starttime field has values between 6pm yesterday and 6pm today).
    We are using this statement to create a materialized view which we refresh daily. Occasionally we have the need to refresh the data for a historic day, which means that we need to go in and change the where lines, e.g. to get data from 3 days ago instead of yesterday, the where clause becomes:
    select * from user_activity
    where starttime >= trunc(sysdate - 3) + 18/24
    and starttime < trunc(sysdate - 2) + 18/24
    Having to recreate the views like this is a nuisance, so we decided to be 'smart' and create a table with a setting in it that we could use to control the number of days the view looks back. So if our table is called days_ago with a field called days (a single record, set to 1), we now have
    select * from user_activity
    where starttime >= trunc(sysdate - (select days from days_ago)) + 18/24
    and starttime < trunc(sysdate - (select days from days_ago) + 1) + 18/24
    The original SQL takes a few seconds to run. However the 'improved' version takes 25 minutes.
    The days_ago table only has 1 record in it; run on its own, selecting directly from it is instantaneous.
    Does anything jump out as being daft in this approach? We cannot explain why the performance has suddenly dropped off for such a simple change.

    Hi,
    Do you really need a view to do this?
    Can't you define a bind variable and use it in your query?
    VARIABLE  days_ago   NUMBER
    EXEC      :days_ago := 3;
    SELECT    ...
    WHERE     starttime >= TRUNC (SYSDATE - :days_ago) + (18 / 24)
    AND       starttime <  TRUNC (SYSDATE - :days_ago) + (42 / 24)     -- 42 = 24 + 18
    If you really must have a view, then it might be faster to get the parameter from a SYS_CONTEXT variable rather than from a table.
    Unfortunately, SYS_CONTEXT always returns a string, so you have to be careful to encode the number as a string when you set the variable, and decode it when you use the variable:
    WHERE     starttime >= TRUNC (SYSDATE - TO_NUMBER (SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (18 / 24)
    AND       starttime <  TRUNC (SYSDATE - TO_NUMBER (SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (42 / 24)     -- 42 = 24 + 18
    For more about SYS_CONTEXT, look it up in the SQL Language Reference and follow the links there:
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions172.htm#sthref2268
    If performance is important enough, consider storing the "fiscal day" as a separate (indexed) column. Starting in Oracle 11.1, you can use a virtual column for this; in earlier versions, you'd have to use a trigger. By "fiscal day", I mean:
    TRUNC (starttime + (6/24))
    If starttime is between 18:00:00 on December 28 and 17:59:59 on December 29, this will return 00:00:00 on December 29. You could use it in a query (or view) like this:
    WHERE     fiscal_day   = TRUNC (SYSDATE) - 2
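    For completeness, a hedged sketch of how the context referenced above could be created and set (the namespace and package names are just the placeholders from the WHERE clause; adjust to your schema):
    -- One-time setup: a context can only be set through its trusted package.
    CREATE OR REPLACE CONTEXT my_view_namespace USING my_view_ctx_pkg;
    CREATE OR REPLACE PACKAGE my_view_ctx_pkg AS
      PROCEDURE set_days_ago (p_days IN NUMBER);
    END;
    /
    CREATE OR REPLACE PACKAGE BODY my_view_ctx_pkg AS
      PROCEDURE set_days_ago (p_days IN NUMBER) IS
      BEGIN
        -- SYS_CONTEXT only handles strings, so encode the number explicitly.
        DBMS_SESSION.SET_CONTEXT('MY_VIEW_NAMESPACE', 'DAYS_AGO', TO_CHAR(p_days));
      END;
    END;
    /
    -- Per session, before querying the view:
    EXEC my_view_ctx_pkg.set_days_ago(3);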

  • Problem in performing multiple Point-In-Time Database Recovery using RMAN

    Hello Experts,
    I am getting an error while performing database point in time recovery multiple times using RMAN. Details are as follows :-
    Environment:
    Oracle 11g, ASM,
    Database DiskGroups: DG_DATA (data files), DG_ARCH (archive logs), DG_REDO (redo logs, control file).
    Snapshot DiskGroups :
    Snapshot1 (taken at 9 am): SNAP1_DATA, SNAP1_ARCH, +SNAP1_REDO
    Snapshot2 (taken at 10 am): SNAP2_DATA, SNAP2_ARCH, +SNAP2_REDO
    Steps performed for point in time recovery:
    1. Restore control file from snapshot 2.
         RMAN> RESTORE CONTROLFILE from '+SNAP2_REDO/orcl/CONTROLFILE/Current.256.777398261';
    2. For 2nd recovery, reset incarnation of database to snapshot 2 incarnation (Say 2).
    3. Catalog data files from snapshot 1.
    4. Catalog archive logs from snapshot 2.
    5. Perform point in time recovery till given time.
         STARTUP MOUNT;
         RUN {
              SQL "ALTER SESSION SET NLS_DATE_FORMAT = ''dd-mon-yyyy hh24:mi:ss''";
              SET UNTIL TIME "06-mar-2013 09:30:00";
              RESTORE DATABASE;
              RECOVER DATABASE;
          ALTER DATABASE OPEN RESETLOGS;
          }
    Results:
    Recovery 1: At 10.30 am, I performed first point in time recovery till 9:30 am, it was successful. Database incarnation was raised from *2* to *3*.
    Recovery 2: At 11:10 am, I performed another point in time recovery till 9:45 am, while doing it I reset the incarnation of DB to *2*, it failed with following error :-
    Starting recover at 28-FEB-13
    using channel ORA_DISK_1
    starting media recovery
    media recovery failed
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/06/2013 11:10:57
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
    start until time 'MAR 06 2013 09:45:00'
    ORA-00283: recovery session canceled due to errors
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+DG_REDO/orcl/onlinelog/group_1.257.807150859'
    ORA-17503: ksfdopn:2 Failed to open file +DG_REDO/orcl/onlinelog/group_1.257.807150859
    ORA-15012: ASM file '+DG_REDO/orcl/onlinelog/group_1.257.807150859' does not exist
    Doubts:
    1. Why did the recovery fail the 2nd time but not the 1st time, and why is RMAN looking for online redo log group_1.257.807150859 in the 2nd recovery?
    3. I tried restoring control file from AutoBackup, in that case both 1st and 2nd recovery succeeded.
    However for this to work, I always need to keep the AutoBackup feature enabled.
    How reliable is control file AutoBackup ? Is there any alternative to using AutoBackup, can I restore control file from snapshot backup only ?
    4. If I restore control file from AutoBackup, then from what point of time/SCN does RMAN restores the control file ?
    Please help me out in this issue.
    Thanks.

    992748 wrote:
    Hello experts,
    I'm a bit of a newbie with RMAN recovery. Please help me with these doubts:
    1. If I have datafile, archive log and control file backups, but the current online REDO logs are lost, can I perform incomplete database recovery?
    Yes, if you have backups of everything else.
    2. Up to what maximum time/SCN can incomplete database recovery be performed?
    Assuming the only thing lost is the redo logs, you can recover to the last SCN in the last archivelog.
    3. What is the role of online REDO logs in incomplete database recovery?
    They provide the final redo changes - the ones that have not been written to archivelogs.
    Are they required for incomplete recovery?
    It depends on how much incomplete recovery you need to do.
    Think of all of your changes as a constant stream of redo information. As a redo log fills, it is copied to archive, then (eventually) reused. Over time, your redo stream is in archivelog_1, continuing into archivelog_2, then to 3, and eventually, when you get to the last archivelog, into the online redo. A recovery will start with the oldest necessary point in the redo stream and continue forward. Whether or not you need the online redo for a PIT recovery depends on how far forward you need to recover.
    But you should take every precaution to prevent loss of online redo logs, starting with having multiple members in each redo group and keeping those multiple members on physically separate disks.
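    To illustrate that last recommendation, a hedged sketch of adding a second member to each redo group (group numbers and ASM paths are made up; in practice the second member should sit on separate physical storage):
    ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO2/orcl/onlinelog/group_1b.log' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO2/orcl/onlinelog/group_2b.log' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO2/orcl/onlinelog/group_3b.log' TO GROUP 3;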

  • Perform setting OOP ALV for multiple reports using Field Symbols

    Hi Abapers ... I am trying to write a program which uses ONE OOP ALV but 2 internal tables with different structures. The selection screen has 2 radio buttons: the first is r_wbs and the second r_kpi. r_wbs should display a 4-column result and r_kpi a 10-column result with different column names. I implemented this successfully using FIELD-SYMBOLS, but I have failed to apply customized settings to the ALV (report title, column names, different layout, etc.) for the 2 different reports. This is the program. Please give your opinion; a simple example would be very helpful. Thank you very much.
    *&this report experimental how to print into ONE alv
    *&with 2 diffrent structure internal table
    REPORT  zfiroopalv.
    SELECTION-SCREEN BEGIN OF BLOCK mode WITH FRAME TITLE text-002.
    PARAMETERS r_wbs RADIOBUTTON GROUP mode DEFAULT 'X'.
    PARAMETERS r_kpi RADIOBUTTON GROUP mode.
    SELECTION-SCREEN END OF BLOCK mode.
    CLASS lcl_main DEFINITION.
    PUBLIC SECTION.
    CLASS-DATA: md_wbs TYPE c LENGTH 1.
    METHODS: process,
             write.
    DATA: mdo_data TYPE REF TO data.
    TYPES: BEGIN OF st_wbs,
    rsnum TYPE zmeime002a-rsnum,
    rspos TYPE zmeime002a-rspos,
    a TYPE zmmgitab01-menge,
    b TYPE zmeime002a-bdmng,
    c TYPE zmeime002a-bdmng,
    d TYPE zmeime002a-bdmng,
    e TYPE zmeime002a-bdmng,
    f TYPE zmmgitab01-menge,
    g TYPE zmmgitab01-menge,
    END OF st_wbs.
    TYPES: BEGIN OF st_kpi,
    regio TYPE zmeime002a-regio,
    gsber TYPE zmeime002a-gsber,
    gtext TYPE zmeime002a-gtext,
    x TYPE zmmgitab01-menge,
    y TYPE zmmgitab01-menge,
    z TYPE zmmgitab01-menge,
    END OF st_kpi.
    CLASS-DATA: it_wbs TYPE TABLE OF st_wbs,
                wa_wbs LIKE LINE OF it_wbs.
    CLASS-DATA: it_kpi TYPE TABLE OF st_kpi,
                wa_kpi LIKE LINE OF it_kpi.
    PRIVATE SECTION.
    DATA: set_display_setting TYPE REF TO cl_salv_table.
    DATA: display_settings TYPE REF TO cl_salv_display_settings.
    DATA: salv_table TYPE REF TO cl_salv_table.
    DATA: error TYPE REF TO cx_root.
    DATA: errtext TYPE string.
    ENDCLASS.
    CLASS lcl_kpi DEFINITION INHERITING FROM lcl_main.
    PUBLIC SECTION.
    METHODS: process_kpi.
    PRIVATE SECTION.
    ENDCLASS.
    * C.L.A.S.S lcl_main D.E.F.I.N.I.T.I.O.N
    CLASS lcl_wbs DEFINITION INHERITING FROM lcl_main.
    PUBLIC SECTION.
    METHODS: process_wbs.
    PRIVATE SECTION.
    ENDCLASS.
    * m.a.i.n. .p.r.o.g.r.a.m.
    START-OF-SELECTION.
      DATA: o_main TYPE REF TO lcl_main.
    DATA: p_wbs TYPE c.
    CREATE OBJECT o_main.
      CASE 'X'.
      WHEN r_wbs.
          o_main->md_wbs = 'X'.
      WHEN r_kpi.
          o_main->md_wbs = ' '.
      ENDCASE.
      o_main->process( ).
      o_main->write( ).
    CLASS lcl_main IMPLEMENTATION.
    *ENDMETHOD.
    METHOD process.  " NOTE: public method
    DATA: o_main TYPE REF TO lcl_main,
          o_wbs TYPE REF TO lcl_wbs,
          o_kpi TYPE REF TO lcl_kpi.
    CREATE OBJECT: o_wbs,o_kpi.
      IF ( me->md_wbs = 'X' ).
          CALL METHOD o_wbs->process_wbs( ).  " NOTE: private method
          GET REFERENCE OF me->it_wbs INTO me->mdo_data.
      ELSE.
          CALL METHOD o_kpi->process_kpi( ).  " NOTE: private method
          GET REFERENCE OF me->it_kpi INTO me->mdo_data.
      ENDIF.
    ENDMETHOD.
    METHOD write.
    FIELD-SYMBOLS:
      <lt_outtab>    TYPE table.
      ASSIGN me->mdo_data->* TO <lt_outtab>.
    cl_salv_table=>factory(
    EXPORTING
    list_display = if_salv_c_bool_sap=>false
    IMPORTING
    r_salv_table = salv_table
    CHANGING
    t_table = <lt_outtab> ).
    salv_table->display( ).
    ENDMETHOD.
    ENDCLASS.
    CLASS lcl_kpi IMPLEMENTATION.
    METHOD process_kpi.
    *********** run some select statement into it_kpi*******
    ENDMETHOD.
    ENDCLASS.
    CLASS lcl_wbs IMPLEMENTATION.
    METHOD process_wbs.
    *********** run some select statement into it_wbs*******
    ENDMETHOD.
    ENDCLASS.

    Hi,
    I had similar requirement wherein I was supposed to display different data using 2 different internal tables on a subscreen area.
    The screen consists of two parts: 1) selection-screen with few input fields and two buttons 2) Subscreen area where the report need to be displayed. This report is displayed based on the button that the user is selecting. For this I have done the following things:
    1. Capture the sy-ucomm when user is clicking on any of the two buttons in PAI. Then perform data fetch operation.
             MODULE USER_COMMAND_9003 INPUT.
                 CASE OK_CODE.
                     WHEN 'DETAIL'.
                       GV_RPT = OK_CODE.
                       PERFORM F_GET_DETAIL_DATA.
                     WHEN 'REPORT'.
                       GV_RPT = OK_CODE.
                       PERFORM F_GET_REPT_DATA.
                   ENDCASE.
             ENDMODULE.                 " USER_COMMAND_9003  INPUT
    2.  Declare two different ALV's with the fieldcat similar to 2 internal tables respectively. Use the above sy-ucomm PBO to call appropriate ALV.
             MODULE DISPLAY_ALV OUTPUT.
               IF GV_RPT EQ 'DETAIL'.
                 PERFORM F_FIELDCAT_DETAIL.
                 PERFORM F_LAYOUT_DETAIL.
                 PERFORM F_EXCLUDE_TOOLBAR_DETAIL.
                 PERFORM F_DISPLAY_ALV_DETAIL.
               ELSEIF GV_RPT EQ 'REPORT'.
                 PERFORM F_FIELDCAT_REPT.
                 PERFORM F_LAYOUT_REPT.
                 PERFORM F_EXCLUDE_TOOLBAR_REPT.
                 PERFORM F_DISPLAY_ALV_REPT.
               ENDIF.
             ENDMODULE.                 " DISPLAY_ALV  OUTPUT
    3. Before displaying ALV you need to free the container and ALV.
    FORM F_DISPLAY_ALV_DETAIL .
    IF GC_CONTAINER_ES IS NOT INITIAL.
        CALL METHOD GC_CONTAINER_ES->FREE
          EXCEPTIONS
            CNTL_ERROR        = 1
            CNTL_SYSTEM_ERROR = 2
            OTHERS            = 3.
        IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
      ENDIF.
      IF GC_ALV_GRID_ES IS NOT INITIAL.
        CALL METHOD GC_ALV_GRID_ES->FREE
          EXCEPTIONS
            CNTL_ERROR        = 1
            CNTL_SYSTEM_ERROR = 2
            OTHERS            = 3.
        IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
      ENDIF.
      IF GC_CONTAINER_TB IS NOT INITIAL.
        CALL METHOD GC_CONTAINER_TB->FREE
          EXCEPTIONS
            CNTL_ERROR        = 1
            CNTL_SYSTEM_ERROR = 2
            OTHERS            = 3.
        IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
      ENDIF.
      IF GC_ALV_GRID_TB IS NOT INITIAL.
        CALL METHOD GC_ALV_GRID_TB->FREE
          EXCEPTIONS
            CNTL_ERROR        = 1
            CNTL_SYSTEM_ERROR = 2
            OTHERS            = 3.
        IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
      ENDIF.
      CREATE OBJECT GC_CONTAINER_ES
        EXPORTING
          CONTAINER_NAME              = 'CC_9003'
        EXCEPTIONS
          CNTL_ERROR                  = 1
          CNTL_SYSTEM_ERROR           = 2
          CREATE_ERROR                = 3
          LIFETIME_ERROR              = 4
          LIFETIME_DYNPRO_DYNPRO_LINK = 5
          OTHERS                      = 6.
      CREATE OBJECT GC_ALV_GRID_ES
        EXPORTING
          I_PARENT          = GC_CONTAINER_ES
        EXCEPTIONS
          ERROR_CNTL_CREATE = 1
          ERROR_CNTL_INIT   = 2
          ERROR_CNTL_LINK   = 3
          ERROR_DP_CREATE   = 4
          OTHERS            = 5.
      CALL METHOD GC_ALV_GRID_ES->SET_TABLE_FOR_FIRST_DISPLAY
        EXPORTING
          IS_LAYOUT                     = GS_LAYOUT_ES
          IT_TOOLBAR_EXCLUDING          = GT_TOOLBAR_ES
        CHANGING
          IT_OUTTAB                     = GT_ES_REPT
          IT_FIELDCATALOG               = GT_FIELDCAT_ES
        EXCEPTIONS
          INVALID_PARAMETER_COMBINATION = 1
          PROGRAM_ERROR                 = 2
          TOO_MANY_LINES                = 3
          OTHERS                        = 4.
    ENDFORM.                    " F_DISPLAY_ALV_DETAIL
    Similarly define the FORM F_DISPLAY_ALV_REPT.     
    Hope this will be useful for you. If you have any more queries let me know.

  • Low performance issue with EPM Add-in SP16 using Office Excel 2010

    We are having low performance issues using the EPM Excel Add-in SP16 with Excel 2010 and Windows 7
    The same reports using Excel 2007 (Windows 7 or XP) run OK
    Anyone with the same issue?
    Thanks

    Solved. It's a known issue. The solution, as provided by SAP Support, is:
    Dear Customer,
    This problem may be related to a known issue which happens in the EPM Add-in with Excel 2010 when the zoom is NOT 100%.
    To verify this, could you please try setting the zoom to 100% for the EPM report in question and observe whether refresh performance recovers?
    Quite annoying, isn't it?
