Data Archiving - Use of AS_API_READ versus Select from Infostructure

Hello,
I am archive-enabling a custom program. Custom infostructures have been set up with the fields needed for the initial selection criteria. This was done so that infostructure tables can be joined in a SELECT statement to get a result set, and so that complex SELECT statements can be executed directly against the infostructure tables.
The documentation for AS_API_READ mentions that reading the infostructure tables (ZARIX* tables) directly is not desirable because of possible future changes.
If AS_API_READ is used with selection criteria for this program, the result set would have to be filtered with additional code to meet the complex selection requirements.
I'm looking for some feedback/more detail on the risk of using a direct select instead of AS_API_READ.
Thanks,
Sue
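For illustration, here is a SQL-style sketch of the kind of direct join against infostructure tables described in the question. The table and field names (ZARIXSD1, ZARIXSD2, VBELN, KUNNR, MATNR, ARCHIVEKEY, ARCHIVEOFS) are hypothetical placeholders, not taken from the actual custom infostructures; the equivalent AS_API_READ call would return one infostructure's rows, which would then need the extra filtering mentioned above.

-- Hypothetical custom infostructures for archived documents; all names are placeholders.
-- ARCHIVEKEY/ARCHIVEOFS stand for the fields that locate the data object in the archive file.
SELECT h.vbeln,
       h.archivekey,
       h.archiveofs
FROM   zarixsd1 h
       JOIN zarixsd2 i
         ON i.vbeln = h.vbeln
WHERE  h.kunnr = '0000100023'
       AND i.matnr LIKE 'FERT%';

The trade-off in the question is exactly this: the join above is concise and efficient, but SAP may change the ZARIX* table layout in the future, whereas AS_API_READ is the stable interface.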


Similar Messages

  • System hangs whilst using insert into table select * from table

    I have a peculiar problem.
    I am using the below statements:
    Query 1:
    insert into table ppms.erin_out@ppms_dblink select * from erin_out;
    Query 2:
    insert into table ppms.erin_out@ppms_dblink values(23,'dffgg',12',dfdfdgg,dfdfdg);
    I am in the 'interfaces' schema (testing server) and executing the above statements. We have a testing server and a development server; they are identical, i.e. one is a clone of the other.
    ppms_dblink is created in the interfaces schema. ppms_dblink points to a different database server which has two schemas, 'clarity' and 'ppms'. ppms_dblink is created using the authentication details of the clarity schema.
    The erin_out table is created in the ppms schema on the database server pointed to by ppms_dblink.
    The question is:
    TOAD hangs while running query 1.
    Query 2 works perfectly.
    I have a PL/SQL script which uses query 1, so I want to know why query 1 is causing a problem.
    If I use query 2 in my PL/SQL script it may create a performance issue, as I would then have to use a cursor.
    In the clarity schema, I have insert, update, select and modify rights on ppms.erin_out.
    I have tried the same queries from another database server.
    That is, I tried the queries from the 'interfaces' schema of the development server (a clone of the testing server). They work perfectly.

    Dhanchik:
    The table from which I select rows to insert into the table over the dblink has only one record. It may contain a maximum of 100 rows at a time because I am scheduling the procedure through a daemon process; in any case a transaction is never more than 100 records. I am trying with just 1 record for testing.
    So 1) the problem is not about cost; TOAD is hanging (to insert 1 record, cost does not mean much)
    2) there is no large amount of data, so there is no question of deteriorated performance
    Aron Tunzi:
    I think that should not be the problem, because I am able to insert a record through query 2.
    Warren Tolentino:
    I am testing with 1 record only. It's not a performance issue.

  • Is it possible to set up Time Capsule on one network and then access its stored data (movies) using Back to My Mac from another network that also has Apple TV to play them?

    I recently purchased a Time Capsule to store data at my office.  I have an Apple TV at my home that I would like to start using to play my movies.  With the storage of the new Time Capsule, I am curious if I can somehow store my iTunes movies on the Time Capsule at my office and then use Back to My Mac to access the TC from my home and then play them via Apple TV.  Does this sound possible?  We have an iMac at the office that remains on 24/7 connected to the TC.

    ATV cannot play movies from the TC, even if it is on your home network. ATV is a streamer, not a media player, and the TC is a dumb-as-a-board hard disk in a router, so it has nothing in it either.
    You cannot play movies on the ATV without iTunes, which streams to the ATV.
    If the iMac is running 24/7 then you should be able to connect to the iMac remotely and run iTunes on it, which could then play movies stored on the TC, with the following issues: if you use wireless on the iMac, forget it. The iMac has to be Ethernet-connected, as it has to pick up the movie from the library on the TC and then stream it over the network to your home.
    The next issue is the speed of the slowest connection.
    Your download speed at home may be great, but what is the upload speed from your office like?
    If you pay for uploads and downloads you will pay for the movie three times: first to download it, second to upload it (office to net), and third to download it again (net to home). Sorry, but it is better to play the movie directly from the online iTunes Store.
    If you plan to try to play non-iTunes movies over the connection, it won't work; as stated, ATV is tied to iTunes and cannot play arbitrary movie files.
    Overall this sounds like you need to think it out a bit more.

  • Write custom logic for data archiving using Open Text (SAP archiving)

    Hi,
    We are currently archiving sales documents and billing documents using Open Text. As part of this we need search functionality based on sales document, customer number, customer address, etc.
    Since the customer address doesn't exist in the sales order tables, we need to write logic so that the document gets archived and retrieved using the address.
    Can you please help me with how and where to write the logic for this? Can this be achieved?
    I observed an area on the Attribute Object for a Node which gives an option for an Event and a User exit. Can this be used? If so, how?
    Thanks,
    Anil

    Hi,
    Please provide more detailed information about what you really need.

  • How to use insert into...select * from...if source table is huge in size

    The source tables contain crores of rows and are still growing (tables with 4 crore, 17 crore rows). We want these tables to be copied frequently to our local database. Previously this was done by export-import through a Windows scheduled task, but now we are planning to do it as database jobs. We are fetching the data with the query
    insert into dest_table (select * from source_table@dblink); when we tried this it threw an exception saying there was not enough tablespace. It was also found that frequent commits have to be used while populating data from big tables, so we tried a cursor, but it was very slow and again we got an exception like 'unable to extend segment by 16 in undo tablespace UNDOTBS1'.
    After that we tried a group by. In this case we got an exception about being unable to extend the table, and also the index, in the tablespace. The solution for this is to add a datafile, so again we had to increase the tablespace. Now the procedure runs very slowly (taking much time; it might be because of the conditions used in the query).
    Is there any other option to copy the data from such big tables? Can we use the same sort of query?
    Friends please help me to sort it out.
    Thanks in Advance

    Hi,
    You have a lot of data, so DON'T use a cursor. Did you try using the COPY command?
    How frequently will you be doing the copying of the data?
    If you have any constraints you can disable them and re-enable them after all the records have been copied.
    Please look at this link, it should help.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
    Thanks
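    As a concrete illustration of the advice above (a minimal sketch only; dest_table, source_table@dblink and the constraint name are placeholders based on the question, not tested against the poster's system):
    -- Disable constraints first, as suggested above (hypothetical constraint name).
    ALTER TABLE dest_table DISABLE CONSTRAINT dest_table_pk;
    -- Direct-path insert: loads above the high-water mark and generates
    -- comparatively little undo for the table data itself.
    INSERT /*+ APPEND */ INTO dest_table
    SELECT * FROM source_table@dblink;
    COMMIT;  -- required before the table can be queried again after a direct-path insert
    -- Re-enable the constraints once all records have been copied.
    ALTER TABLE dest_table ENABLE CONSTRAINT dest_table_pk;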

  • No rows returned by spatial query wrapped in SELECT * FROM ...

    Hi,
    I'm getting some really weird behaviour when running a sub query with SDO_EQUAL. The SDO_EQUAL query on its own works fine, but if I wrap it in SELECT * FROM then I get no results. If I wrap SDO_ANYINTERACT in SELECT * FROM then I get the expected result.
    It looks like the spatial index is used when running the regular SDO_EQUAL query, but not when wrapped in SELECT * FROM. Weird. The spatial index is also not used when SDO_ANYINTERACT is wrapped in SELECT * FROM... so I'm not sure why that returns the right answer.
    I am getting this problem on 11.2.0.2 on Red Hat Linux 64bit and 11.2.0.1 on Windows XP 32bit (that's all the 11g versions I've tried). The query works as expected on 10.2.0.5 on Windows Server 2003 64bit.
    Any ideas?
    Confused in Dublin (John)
    Test case...
    SQL>
    SQL> -- Create a table and insert the same geometry twice
    SQL> DROP TABLE sdo_equal_query_test;
    Table dropped.
    SQL> CREATE TABLE sdo_equal_query_test (
      2  id NUMBER,
      3  geometry SDO_GEOMETRY);
    Table created.
    SQL>
    SQL> INSERT INTO sdo_equal_query_test VALUES (1,
      2  SDO_GEOMETRY(3003, 81989, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1),
      3  SDO_ORDINATE_ARRAY(1057.39, 1048.23, 4, 1057.53, 1046.04, 4, 1057.67, 1043.94, 4, 1061.17, 1044.60, 5, 1060.95, 1046.49, 5, 1060.81, 1047.78, 5, 1057.39, 1048.23, 4)));
    1 row created.
    SQL>
    SQL> INSERT INTO sdo_equal_query_test VALUES (2,
      2  SDO_GEOMETRY(3003, 81989, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1),
      3  SDO_ORDINATE_ARRAY(1057.39, 1048.23, 4, 1057.53, 1046.04, 4, 1057.67, 1043.94, 4, 1061.17, 1044.60, 5, 1060.95, 1046.49, 5, 1060.81, 1047.78, 5, 1057.39, 1048.23, 4)));
    1 row created.
    SQL>
    SQL> -- Setup metadata
    SQL> DELETE FROM user_sdo_geom_metadata WHERE table_name = 'SDO_EQUAL_QUERY_TEST';
    1 row deleted.
    SQL> INSERT INTO user_sdo_geom_metadata VALUES ('SDO_EQUAL_QUERY_TEST','GEOMETRY',
      2  SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 0, 100000, .0001), SDO_DIM_ELEMENT('Y', 0, 100000, .0001), SDO_DIM_ELEMENT('Z', -100, 4000, .0001))
      3  ,81989);
    1 row created.
    SQL>
    SQL> -- Create spatial index
    SQL> DROP INDEX sdo_equal_query_test_spind;
    DROP INDEX sdo_equal_query_test_spind
    ERROR at line 1:
    ORA-01418: specified index does not exist
    SQL> CREATE INDEX sdo_equal_query_test_spind ON sdo_equal_query_test(geometry) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    Index created.
    SQL>
    SQL> -- Ensure data is valid
    SQL> SELECT sdo_geom.validate_geometry_with_context(sdo_cs.make_2d(geometry), 0.0001) is_valid
      2  FROM sdo_equal_query_test;
    IS_VALID
    TRUE
    TRUE
    2 rows selected.
    SQL>
    SQL> -- Check query results using sdo_equal
    SQL> SELECT b.id
      2  FROM sdo_equal_query_test a, sdo_equal_query_test b
      3  WHERE a.id = 1
      4  AND b.id != a.id
      5  AND sdo_equal(a.geometry, b.geometry) = 'TRUE';
            ID
             2
    1 row selected.
    SQL>
    SQL> -- Check query results using sdo_equal wrapped in SELECT * FROM
    SQL> -- Results should be the same as above, but... no rows selected
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6       AND sdo_equal(a.geometry, b.geometry) = 'TRUE'
      7  );
    no rows selected
    SQL>
    SQL> -- So that didn't work.  Now try sdo_anyinteract... this works ok
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6       AND sdo_anyinteract(a.geometry, b.geometry) = 'TRUE'
      7  );
            ID
             2
    1 row selected.
    SQL>
    SQL> -- Now try a scalar query
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6  );
            ID
             2
    1 row selected.
    SQL> spool off
    Here's the explain plan for the query that works. Note that the spatial index is used.
    SQL> EXPLAIN PLAN FOR
      2  SELECT b.id
      3  FROM sdo_equal_query_test a, sdo_equal_query_test b
      4  WHERE a.id = 1
      5  AND b.id != a.id
      6  AND sdo_equal(a.geometry, b.geometry) = 'TRUE';
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3529470109
    | Id  | Operation                     | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                            |     1 |  7684 |     3   (0)| 00:00:01 |
    |   1 |  RESULT CACHE                 | f5p63r46pbzty4sr45td1uv5g8 |       |       |            |       |
    |   2 |   NESTED LOOPS                |                            |     1 |  7684 |     3   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL          | SDO_EQUAL_QUERY_TEST       |     1 |  3836 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS BY INDEX ROWID| SDO_EQUAL_QUERY_TEST       |     1 |  3848 |     3   (0)| 00:00:01 |
    |*  5 |     DOMAIN INDEX              | SDO_EQUAL_QUERY_TEST_SPIND |       |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("B"."ID"!=1)
       4 - filter("A"."ID"=1 AND "B"."ID"!="A"."ID")
       5 - access("MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    ..... other stuff .....
    Here's the explain plan for the query that does not work. Note that the spatial index is not used.
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM (
      3     SELECT b.id
      4     FROM sdo_equal_query_test a, sdo_equal_query_test b
      5     WHERE a.id = 1
      6     AND b.id != a.id
      7     AND sdo_equal(a.geometry, b.geometry) = 'TRUE'
      8  );
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1024466006
    | Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                            |     1 |  7684 |     6   (0)| 00:00:01 |
    |   1 |  RESULT CACHE       | 2sd35wrcw3jr411bcg3sz161f6 |       |       |            |          |
    |   2 |   NESTED LOOPS      |                            |     1 |  7684 |     6   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST       |     1 |  3836 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST       |     1 |  3848 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("B"."ID"!=1)
       4 - filter("A"."ID"=1 AND "B"."ID"!="A"."ID" AND
                  "MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    ..... other stuff .....               

    That looks like a bug to me. As a workaround, you can materialize the inline view by adding rownum>0. Please see the reproduction and workaround below.
    SCOTT@orcl_11gR2> SELECT *
      2  FROM   (SELECT b.id
      3            FROM   sdo_equal_query_test a, sdo_equal_query_test b
      4            WHERE  a.id = 1
      5            AND    b.id != a.id
      6            AND    sdo_equal (a.geometry, b.geometry) = 'TRUE')
      7  /
    no rows selected
    Execution Plan
    Plan hash value: 1024466006
    | Id  | Operation          | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |                      |     1 |  7676 |     6   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS      |                      |     1 |  7676 |     6   (0)| 00:00:01 |
    |*  2 |   TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST |     1 |  3832 |     3   (0)| 00:00:01 |
    |*  3 |   TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST |     1 |  3844 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("B"."ID"<>1)
       3 - filter("A"."ID"=1 AND "B"."ID"<>"A"."ID" AND
                  "MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2> SELECT *
      2  FROM   (SELECT b.id
      3            FROM   sdo_equal_query_test a, sdo_equal_query_test b
      4            WHERE  a.id = 1
      5            AND    b.id != a.id
      6            AND    sdo_equal (a.geometry, b.geometry) = 'TRUE'
      7            AND    ROWNUM > 0)
      8  /
            ID
             2
    1 row selected.
    Execution Plan
    Plan hash value: 2329953927
    | Id  | Operation                       | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |                            |     1 |    13 |     3   (0)| 00:00:01 |
    |   1 |  VIEW                           |                            |     1 |    13 |     3   (0)| 00:00:01 |
    |   2 |   COUNT                         |                            |       |       |            |          |
    |*  3 |    FILTER                       |                            |       |       |            |          |
    |   4 |     NESTED LOOPS                |                            |     1 |  7676 |     3   (0)| 00:00:01 |
    |*  5 |      TABLE ACCESS FULL          | SDO_EQUAL_QUERY_TEST       |     1 |  3832 |     3   (0)| 00:00:01 |
    |*  6 |      TABLE ACCESS BY INDEX ROWID| SDO_EQUAL_QUERY_TEST       |     1 |  3844 |     3   (0)| 00:00:01 |
    |*  7 |       DOMAIN INDEX              | SDO_EQUAL_QUERY_TEST_SPIND |       |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter(ROWNUM>0)
       5 - filter("B"."ID"<>1)
       6 - filter("A"."ID"=1 AND "B"."ID"<>"A"."ID")
       7 - access("MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2>

  • Set preset values that can be selected from a pull-down menu

    I am writing a VI in LabVIEW 7.1 to collect data. I have a set of variables that need to be set at the beginning. I used the Basic Information subVI to input these variables to be saved in the data file header, but every time I run the VI I have to type in pretty much the same thing. The variables I use can usually be selected from a list of numbers, text or Y/N values. I thought about using the Configure File VI, but I think that will only give me a fixed set, not a choice. I wonder if there is a way to preset a list of variables so that I can select one value from the list rather than type in everything each time.
    Thanks,
    Ron

    1. Write a VI that has all your variables, i.e. enum, ring, text controls, boolean controls etc.
    When this VI is called, it allows the user to configure all settings.
    There will be a "save" button. When clicked, it should prompt the user for a filename (i.e. *.ini) and the file will be saved to a pre-determined directory, i.e. the LabVIEW Default Directory.
    With the above, a new *.ini file is created for each respective set of variables.
    2. Knowing that all *.ini files are stored in the LabVIEW Default Directory, make use of the attached example VI (modify it to suit your needs) to get all *.ini filenames and update the Combo Box control.
    With the Combo Box control, you can now choose the desired configuration file (*.ini) to be loaded for the rest of the test's needs.
    3. The selected *.ini file path is then input to a VI that extracts all the variable settings from the selected *.ini file.
    The above is just one of many ways that you could have a pull-down menu for selecting a set of settings from a set of configuration files.
    Hope this makes sense to you
    Cheers!
    Ian F
    Since LabVIEW 5.1... 7.1.1... 2009, 2010
    依恩与LabVIEW
    LVVILIB.blogspot.com
    Attachments:
    IFK_CFIO_Get Filenames to Combo Box.vi (30 KB)

  • CS-MARS, data archiving issue.

    Hello all.
    I'm using a CS-MARS-20.
    After reimaging, restoring and upgrading to software version 6.1.8, I'm not able to start data archiving successfully.
    I have configured data archiving using a remote SFTP server and clicked Apply, Start and Activate: the data archiving status is running, the service is enabled and the remote server is available.
    But MARS doesn't archive anything!
    Using another SFTP client with the same account the appliance uses, I can create and remove folders and files.
    Any ideas?
    Thanks.
    Regards.
    Andrea

    Now we are moving to an NFS solution.
    Data archiving status is running, archiving service is enabled and remote server is available but events, incidents and configuration are not exported.
    I can use CLI (pnexp command) to export configuration successfully.
    Any ideas?
    Why data archiving doesn't work?
    Thanks.
    Andrea

  • SAP Data Archiving in R/3 ECC 6.0, MS-SQL 2008 Server

    We are using an MS-SQL 2008 database, and these days our production database is growing quite fast. I want to set up Data Archiving using transaction DB15. Can anyone give me the steps to set up Data Archiving using DB15?

    Archiving data is bound to legal requirements depending on the country and the company you work for. Setting up archiving is not just a matter of following a few steps; you'd need a system where the archived data will be stored.
    Markus

  • Which is faster? Select * from tableName or Select Column1, Column2 ... from tableName? And why?

    Which is faster: SELECT * FROM tableName or SELECT Column1, Column2, ... FROM tableName? And why?
    select * from Sales.[SalesOrderHeader]
    select SalesOrderNumber,RevisionNumber,rowguid from Sales.[SalesOrderHeader]
    As you can see, both queries have the same execution plan and subtree cost. So how does selecting particular columns optimize the query?

    Yes, selecting specific columns is always better than SELECT *.
    If you always need only a few columns in the result, then just use SELECT col1, col2 FROM YourTable. If you SELECT * FROM YourTable, that is useless extra overhead.
    If in future someone adds Image/BLOB/Text type columns to your table, using SELECT * will definitely worsen the performance.
    Let's say you have an SP and you use INSERT INTO DestTable SELECT * FROM Table, which runs fine; but again, if someone adds a few more columns then your SP will fail saying the provided columns don't match.
    -Vaibhav Chaudhari
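    To make the column-list point concrete, a minimal sketch (the index name is made up; the columns are the AdventureWorks Sales.SalesOrderHeader columns from the question): with a narrow covering index, the three-column query can be answered from that index alone, while SELECT * has to read the full clustered index.
    -- Hypothetical covering index for the three-column query.
    CREATE NONCLUSTERED INDEX IX_SOH_Number_Rev_Guid
        ON Sales.SalesOrderHeader (SalesOrderNumber, RevisionNumber, rowguid);
    -- Can be satisfied from the narrow index (fewer pages read).
    SELECT SalesOrderNumber, RevisionNumber, rowguid FROM Sales.SalesOrderHeader;
    -- Must return every column, so it scans the whole clustered index.
    SELECT * FROM Sales.SalesOrderHeader;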

  • Can we use 0INFOPROV as a selection in Load from Data Stream

    Hi,
    We have implemented BW-SEM BPS and BCS (SEM-BW - 602 and BI 7 ) in our company.
    We have two BPS cubes for Cost Center and Revenue Planning and we have Actuals Data staging cube, we use 0SEM_BCS_10 to load actuals.
    We created a MultiProvider on BPS cubes and Staging cube as a Source Data Basis for BCS.
    Issue:
    When loading plan data or actuals data into the BCS cube (0BCS_C11) using the Load from Data Stream method, we have a performance issue. We automated the load process in a process chain; sometimes it takes about 20 hrs for just the plan data load for 3 group currencies, followed by the elimination tasks.
    What I noticed is that, for example, when loading plan data the system is also reading the actuals cube, which is not required; there is no selection available in the Mapping or Selection tab where I can restrict the data load to a particular cube.
    I tried to add 0INFOPROV into the data basis but then it doesn't show up as a selection option in the data collection tasks.
    Is there a way I can restrict the data load into BCS using this load option and restrict which cube I will be reading data from?
    I know that there is a filter BAdI available, but I'm not sure how it works.
    Thanks !!
    Naveen Rao Kattela

    Thanks Eugene,
    We do have other characteristics like Value Type (10 = Actual and 20 = Plan) and Version (100 = USD Actual and 200 = USD Plan), but when I am loading data into BCS using the Load from Data Stream method, the request goes to all the underlying cubes, which in my case are the planning cubes and the actuals cube. I don't want the request to go to the actuals cube when I am running only the plan load; I think it's causing a performance issue.
    For this reason I am thinking of using 0INFOPROV, as we do in BEx queries, to filter the InfoProvider so that the data load performance will improve.
    I was able to bring 0INFOPROV into the data basis by adding 0INFOPROV to the characteristics folder used by the data basis.
    I am able to see this InfoObject in the Data Stream Fields tab. I check-marked it for use in the selection and regenerated the data basis.
    I was expecting that this field would now be available for selection in the data collection method, but it's not.
    So if it's confirmed that there is no way we can use 0INFOPROV as a selection, then I would suggest to my client a redesign of the data basis itself.
    Thanks,
    Naveen Rao Kattela

  • Data Corrupted when using CREATE AS SELECT * FROM over DB LINK

    Hi ,
    I wonder if anyone has had a similar issue to this:
    I have a DB link between a 10gR1 (base install, not patched) database on Windows 2003 and a 10gR2 (with the latest database patch set) on Sun Solaris. The 10.1 DB is the SOURCE, the 10.2 DB is the TARGET.
    When using a statement like the following on the target:
    CREATE TABLE a AS SELECT * FROM a@SOURCE
    The statement completes, however for some (not all) tables the data content is seriously corrupted. I noticed this when trying to apply a unique key on the newly created table on the TARGET. The data was completely messed up for around 600 out of 450000 rows; I could not easily tell which rows were messed up, as it was the columns that build the unique key that were affected :-(
    The same happens if I pre-create the table and use INSERT /*+ APPEND */ INTO ... SELECT, etc.
    I realise that using an unpatched 10.1 is not necessarily advisable, however currently it is difficult for us to patch or upgrade the DB...
    If anyone has seen this before and/or has any ideas what might be the cause, I would appreciate any help I can get.
    Cheers
    JAmes

    The problem manifests itself in that some fields are created with incorrect values, specifically, in the easy-to-identify cases, NULL where the original values were not NULL.
    These fields do not contain special characters like ä or ß, so it does not seem to indicate a character-set issue.
    JAmes
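    A hedged way to pin down which rows were corrupted (a sketch only; the table name a is from the post, key_col1 is a placeholder, and the MINUS approach assumes the table has only scalar, comparable columns) is to compare the copy against the source over the same DB link:
    -- Rows whose content differs between the fresh copy and the source.
    SELECT * FROM a
    MINUS
    SELECT * FROM a@SOURCE;
    -- Count unexpected NULLs in a key column on each side.
    SELECT COUNT(*) FROM a        WHERE key_col1 IS NULL;
    SELECT COUNT(*) FROM a@SOURCE WHERE key_col1 IS NULL;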

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression that I could issue a "select ... as of" statement to go back in time if I have the archived redo logs.
    I've been searching for a while and can't find an answer.
    My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default of 900 seconds as I've never changed it.
    I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
    When I issue the following query
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a "snapshot too old" ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would then query the archived redo logs, or have I got that totally wrong?!
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED, so there should be no space issues.
    Any help would be greatly appreciated!
    Thanks
    Robert

    If you want to go back 24 hours, you need the undo for those changes; flashback query reads undo data, not the archived redo logs.
    See e.g. the Application Developer's Guide - Fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search].
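    A hedged sketch of the kind of change that makes a 24-hour flashback query feasible (the retention value and tablespace name are illustrative, not taken from the thread):
    -- Ask Oracle to keep roughly 25 hours of undo (the value is in seconds).
    ALTER SYSTEM SET undo_retention = 90000;
    -- Optionally guarantee the retention; note that guaranteed retention can make
    -- DML fail if undo space runs out, so size the undo tablespace accordingly.
    ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;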

  • Sys tables from flashback data archive?

    Hi,
    I am prototyping flashback data archive and I have been following some tutorials.
    When I query
    user_flashback_archive_tables
    I can see the table for which I have configured FBDA. I can also see the name of the history table, like SYS_FBA_HIST_xxxxx (xxxxx being the object_id of the table).
    But I just cannot find that table in USER_TABLES or USER_TAB_PARTITIONS.
    The history seems to be getting stored: SQL Developer displays the history in the "Flashback" tab. But why aren't the SYS_FBA_HIST_xxxx tables getting created?
    (I am using 11gR2 running on EL5 Linux)

    The SYS_FBA_xxx tables belong to the SYS user. To be able to see them you must query DBA_TABLES and have the SELECT privilege on DBA_TABLES or the SELECT_CATALOG_ROLE role. You can also just connect as SYS:
    SQL> select table_name from dba_tables where table_name like 'SYS_FBA%';
    TABLE_NAME
    SYS_FBA_HIST_73563
    SYS_FBA_TCRV_73563
    SYS_FBA_DDL_COLMAP_73563
    SYS_FBA_DL
    SYS_FBA_USERS
    SYS_FBA_PARTITIONS
    SYS_FBA_TRACKEDTABLES
    SYS_FBA_BARRIERSCN
    SYS_FBA_TSFA
    SYS_FBA_FA
    10 rows selected.
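    For completeness, a sketch of the privileges mentioned above (SCOTT is just an example grantee):
    -- As SYS (or another suitably privileged user): either grant the catalog role ...
    GRANT SELECT_CATALOG_ROLE TO scott;
    -- ... or only the object privilege needed for this particular query.
    GRANT SELECT ON dba_tables TO scott;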

  • How to push a particular request from PSA to data target using DTP in BI

    Hello all.
      I have multiple requests in the PSA and want to move only particular requests (randomly, say the first request and the last request) to the data target using a DTP in BI 7.0. How can we do this? Is there any option in the DTP (BI 7.0) for selective loading of requests from the PSA to the data target?
      Thanks in advance..
    Cheers,
    sami.

    Hi,
      It is possible by using a "zero fetch" in the DTP,
    if you want only the recent request to be loaded:
    1. In the PSA, change the status of the recent request to red
    2. Do a zero fetch,
    that is, processing mode - No data transfer; delta status in source: fetched.
    With this processing mode you execute a delta without transferring data.
    3. Change the particular request status back to green
    4. Run a delta load
    Regards,
    Priya.D
