Impact of online reorg on source ECC Oracle tables

Hi,
We are planning to do an online reorg on a few Oracle tables of our ECC system. The same tables are replicated from SLT to HANA.
Could you please let me know if it affects the replication? Do I need to stop the replication in SLT while the online reorgs run on the source ECC tables?

Hi Sandeep,
In that case I do not think you have to stop the Master Job in SLT. Still, I would suggest testing this scenario in a Sandbox/Sandpit or Development system first before you execute it in Production.
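One thing worth checking before and after the reorg: SLT records changes through database triggers on the source tables, so it is worth verifying that they are still present and valid once the reorg is done. A sketch (the table name is only an example; SLT trigger naming varies by release):
-- list the triggers defined on a replicated source table
SELECT trigger_name, status
FROM   dba_triggers
WHERE  table_name = 'MARA';  -- example table, use your own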
Regards,
Joydeep.

Similar Messages

  • Problem with Date - Text File Source and Oracle Target

    Hi All,
I have source data (a text file) with a date column in the format 'MM/DD/YYYY'. My target is Oracle. I am using the LKM FILE TO SQL and IKM SQL Control Append. When I execute this interface, I get an error:
    7000 : null : com.sunopsis.jdbc.driver.file.b.i
    com.sunopsis.jdbc.driver.file.b.i
    at com.sunopsis.jdbc.driver.file.b.f.getColumnClassName(f.java)
    at the Load Data step.
How do I load date columns from a text file into Oracle tables? Please help me resolve this...
    Thanks in Advance,
    Ram Mohan T

The worst solution is to define the date in your text file as a string and then rebuild your date type in Oracle,
something like
TO_DATE(SUBSTR(myfield,1,2) || SUBSTR(myfield,4,2) || SUBSTR(myfield,7,4), 'MMDDYYYY')
This is maybe the worst solution, but it should work.
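A simpler variant, since the field follows a single format mask, is to let TO_DATE parse the string directly (a sketch; myfield and my_staging_table are placeholder names):
-- parse the MM/DD/YYYY string in one step
SELECT TO_DATE(myfield, 'MM/DD/YYYY') AS my_date
FROM   my_staging_table;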
    Regards
    Brice

  • Problems with special characters uploading data into oracle database

    Greetings
I have a problem uploading data into Oracle tables from a Sybase data source. The data that I want to upload is in Spanish; for example, when I have a varchar field with the value 'Consultoría', the data arrives in the Oracle table with question-mark symbols.
    I have my source and target datastores configured as follows.
    Any suggestion? Thank you for your time

Check section 10, Locales and Multi-byte Functionality, in the SAP Data Services Reference Guide for an exhaustive description of NLS settings.
As a summary, the following settings are required:
- the locales in the datastore definitions match the actual locales used in the associated databases,
- the locale of the DS engine allows for source-to-target translation,
- the character set in the target database supports storage of accented characters.
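On the target side, the relevant character sets can be verified directly in the database (a standard Oracle dictionary view; accented characters need a suitable character set such as AL32UTF8):
-- show the database and national character sets of the target
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');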

  • Map FLAT file to oracle table using 9.04 version - PLS HELP!!!!

    Hello all
I am having a problem with mapping a flat file to an Oracle table. The validation is successful. When I go to Project/Deployment Manager and try to deploy the mapping itself and the target table, it says successful, but the last step is another "Deploy", and this one fails, saying it could not locate the file (which is a flat file), even though it is there on the server.
I have read all the online help and followed what it shows me, but it still does not work.
    Any ideas? Please provide detail answer if you know it.
    Thank in advance

Hello,
just grant rights on the connector.
Variant 1:
1. connect as user SYS
2. GRANT READ, WRITE ON DIRECTORY <connector_name> TO <target_schema>;
or
Variant 2:
1. as user SYS or SYSTEM, grant CREATE ANY DIRECTORY to <target_schema>
2. manually run CREATE DIRECTORY <connector_name> AS '<full_path_to_directory>';
and enjoy :)
PS: you can take <connector_name> from the CREATE_TABLE script which was created in the Generation phase!
Kirill

  • Dynamically creating oracle table with csv file as source

    Hi,
We have a requirement to create a dynamic external table: whenever the data or the number of columns changes in the CSV file, the table should be recreated with the current data and the current number of columns. As we are not very experienced in Oracle, please give us a clear solution. We have already tried some code, but we are getting errors. The code is given below.
    thank you
We executed this code after changing only the schema name and table name; everything else is the same.
    Assume the following:
    - Oracle User and Schema name is ALLEXPERTS
    - Database name is EXPERTS
    - The directory object is file_dir
    - CSV file directory is /export/home/log
    - The csv file name is ALLEXPERTS_CSV.log
    - The table name is all_experts_tbl
1. Create a directory object in Oracle. The directory will point to the directory where the file is located.
    conn sys/{password}@EXPERTS as sysdba;
    CREATE OR REPLACE DIRECTORY file_dir AS '/export/home/log';
    2. Grant the directory privilege to the user
    GRANT READ ON DIRECTORY file_dir TO ALLEXPERTS;
    3. Create the table
    Connect as ALLEXPERTS user
create table ALLEXPERTS.all_experts_tbl
(txt_line varchar2(512))
organization external
(type ORACLE_LOADER
 default directory file_dir
 access parameters
 (records delimited by newline
  fields (txt_line char(512)))
 location ('ALLEXPERTS_CSV.log')
);
This will create a table that links the data to the file. Now you can treat the file as a regular table and use SELECT statements to retrieve the data.
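A quick sanity check at this point (assuming the DDL above succeeded) is to query the external table before writing any PL/SQL:
-- each line of the CSV file comes back as one txt_line value
SELECT txt_line
FROM   allexperts.all_experts_tbl
WHERE  ROWNUM <= 5;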
    PL/SQL to create the data (PSEUDO code)
CREATE OR REPLACE PROCEDURE new_proc IS
  -- Cursor over all rows of the external table
  CURSOR c_main IS
    SELECT txt_line FROM allexperts.all_experts_tbl;
  -- Cursor that fetches only the first row, used to derive the column count
  CURSOR c_first_row IS
    SELECT txt_line FROM allexperts.all_experts_tbl
    WHERE ROWNUM = 1;
  -- Declare variables
  l_column_count     NUMBER;
  l_temp_counter     NUMBER := 1;
  l_current_row      VARCHAR2(512);
  l_create_statement VARCHAR2(1000);
BEGIN
  -- Get the first row
  OPEN c_first_row;
  FETCH c_first_row INTO l_current_row;
  CLOSE c_first_row;
  -- number of columns = number of delimiters + 1 (comma assumed as delimiter)
  l_column_count := LENGTH(l_current_row)
                    - LENGTH(REPLACE(l_current_row, ',')) + 1;
  -- Create the table with the right number of columns
  l_create_statement := 'CREATE TABLE csv_table (';
  WHILE l_temp_counter <= l_column_count
  LOOP
    l_create_statement := l_create_statement || 'COL' || l_temp_counter || ' VARCHAR2(100)';
    l_temp_counter := l_temp_counter + 1;
    IF l_temp_counter <= l_column_count THEN
      l_create_statement := l_create_statement || ',';
    END IF;
  END LOOP;
  l_create_statement := l_create_statement || ')';
  EXECUTE IMMEDIATE l_create_statement;
  -- Loop through all the rows, parse them and insert into the table created above
  FOR rec IN c_main
  LOOP
    NULL;  -- split rec.txt_line on the delimiter and INSERT INTO csv_table
  END LOOP;
END new_proc;

The initial table is showing errors and the procedure is created with compilation errors.
After executing the CREATE TABLE I am getting the following errors:
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-00554: error encountered while parsing access parameters
    KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
    byteordermark, characterset, column, data, delimited, discardfile,
    disable_directory_link_check, exit, fields, fixed, load, logfile, language,
    nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string,
    skip, territory, varia"
    KUP-01008: the bad identifier was: deli
    KUP-01007: at line 1 column 9
    ORA-06512: at "SYS.ORACLE_LOADER", line 19

  • BALDAT long2lob conversion for online reorg

    Hi,
I am considering doing a long2lob conversion for table BALDAT
to be capable of online reorgs after archiving/delete runs for BALDAT.
I am a bit scared about the possible performance impact that note 835552
warns about.
Has anyone done this already for table BALDAT who can tell a bit about
the impacts that had to be dealt with?
    Thanks
    Volker

Yes, I tend to be tedious sometimes.
Now, the sandbox is my own toy (no other users on it).
BALDAT there has some 400M, and after a couple of counts right before,
after, and around, I am pretty confident that every action was already
completely in the buffer cache, with nearly no I/O involved for any action
(this is why I did the reorg right after the long2lob again, just to be sure).
Now, an 80-second delta does not seem to do harm, but the beast I need to apply
this to has quite a couple of TB; BALDAT there has 100G, and I have processing runs
that write to BALDAT with 120-200 processes finishing
all at the same time.
I do not want to find these serializing on a BLOB segment header later.
And as this step is only reversible with quite some effort, I want to be sure beforehand.
    Volker

  • Interactive Report with PL/SQL Function Source

Is it possible to create an interactive report with a PL/SQL function source returning a query? If not, has anyone done any work to simulate the interactive reporting feature for a normal report using the API?

    I haven't tried that before but you could:
    1. create a collection from your result set returned by a dynamic query,
    2. create a view on that collection,
    3. use the view in your interactive report.
The usability of this proposal depends on how "dynamic" your query is - does it always have the same number of columns or not?
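A minimal sketch of steps 1 and 2, assuming the standard APEX_COLLECTION API and a hypothetical collection name REPORT_DATA (APEX exposes collection members through the generic columns C001-C050 of the APEX_COLLECTIONS view):
BEGIN
  -- materialize the result of the dynamic query into a named collection
  APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
    p_collection_name => 'REPORT_DATA',
    p_query           => 'SELECT empno, ename, sal FROM emp');
END;
/
-- re-type the generic columns in a view the interactive report can use
CREATE OR REPLACE VIEW report_data_v AS
SELECT c001 AS empno, c002 AS ename, TO_NUMBER(c003) AS sal
FROM   apex_collections
WHERE  collection_name = 'REPORT_DATA';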
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1

  • Error while using Oracle Table as source file :- ODQ for ODI

    Hi All,
I am getting some errors while working on ODQ with Oracle tables as the source.
If I try with text files (*.txt) as source and output, it works fine.
Please let me know how we can connect to an Oracle table as my source.
In the exported project -> "settings" folder, in the file named eN_transfmr_pXX.stx, for
/CATEGORY/INPUT/PARAMETER/INPUT_SETTINGS/ARGUMENTS/ENTRY/DATA_FILE_NAME=
what do I need to give? (a URL? the source file path?)
    I tried with
    1. jdbc:oracle:thin:@xxx.xxx.x.xx:1521:ORCL
    2. jdbc:oracle:thin:UserName/Password@// xxx.xxx.x.xx:1521:ORCL
I am not sure; is there anything missing?
    (Note: for text file I am giving “D:\Sourcefolder\customer.txt”)
If I run the batch file directly from the CMD prompt, it displays the error message
"Cannot open file"
If I connect with ODI, it displays the error
    com.sunopsis.dwg.function.SnpsFunctionBaseException: OS command returned 3503. …………………….
    Thanks in advance…
    Rathish A M

Hi Rathish,
ODQ supports files as inputs, not Oracle tables. What you should do is:
- define an ODQ process that takes a file as an input,
- create an ODI process that dumps your Oracle table into a file that will be used by ODQ (an interface or an OdiSqlUnload step; see the sketch below),
- run the ODQ process in ODI (in a package),
- create an ODI interface that will load your ODQ output file into a DB.
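As an illustration of the dump step, here is a minimal PL/SQL sketch using UTL_FILE; the directory object DUMP_DIR, the CUSTOMER table and its columns are hypothetical names for illustration only:
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- DUMP_DIR must be an Oracle directory object the session may write to
  l_file := UTL_FILE.FOPEN('DUMP_DIR', 'customer.txt', 'w');
  -- write one delimited line per row of the source table
  FOR rec IN (SELECT cust_id, cust_name FROM customer) LOOP
    UTL_FILE.PUT_LINE(l_file, rec.cust_id || ';' || rec.cust_name);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/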
    You can profile Oracle tables directly using Oracle Data Profiling.
    Thanks,
    Julien

  • OIM 9.1.0.2 - Weblogic JDBC Multi Data Sources for Oracle RAC

Does OIM 9.1.0.2 BP07 support WebLogic JDBC Multi Data Sources (Services > JDBC > Multi Data Sources) for Oracle RAC, instead of inserting the "Oracle RAC JDBC URL" in the JDBC Data Sources for xlDS and xlXADS (Services > JDBC > Data Sources > xlDS|xlXADS > Connection Pool > URL)?
If yes, are there any other modifications that need to be made in OIM, or is it just a matter of changing the data sources?

Yes, it's supported. You install directly against one instance of the RAC server. Then you update the config.xml file and the JDBC resource in your WebLogic server with the full RAC address. It is documented for installation against RAC: http://docs.oracle.com/cd/E14049_01/doc.9101/e14047/database.htm#insertedID2
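For reference, the full RAC address in a WebLogic data source is typically a thin-driver URL with a connect descriptor like the following (hosts and service name are hypothetical):
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
  (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521))
  (LOAD_BALANCE=ON))
  (CONNECT_DATA=(SERVICE_NAME=oimdb)))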
    -Kevin

  • How to set JDBC Data Sources in Oracle MapViewer for Oracle database 12c Release 1 (12.1.0.1)

    How to set JDBC Data Sources in Oracle MapViewer for Oracle database 12c Release 1 (12.1.0.1)?
    The following is my configuration in the conf\mapViewerConfig.xml:
    <map_data_source name="mvdemo12"
    jdbc_host="127.0.0.1"
    jdbc_sid="orcl12c1"
    jdbc_port="1522"
    jdbc_user="mvdemo"
    jdbc_password="7OVl2rJ+hOYxG5T3vKJQb+hW4NPgy9EN"
    jdbc_mode="thin"
    number_of_mappers="3"
    allow_jdbc_theme_based_foi="true"
    editable="true"/>
    <!--  ****  -->
But it does not work.
Using "sqlplus mvdemo/<password>@<host>:1522/pdborcl" I can connect to the Oracle Database 12c.
Does anyone know what is wrong?
Thanks,

For 11.1.1.7.1, use the service-name syntax for jdbc_sid, i.e.
//mypdb1.foo.com, as described in the README:
    - MapViewer native (non-container) data sources can now use database service name in place of SID. To supply a db service name, you will use the same jdbc_sid attribute, but specify the service name with double slashes in front, such as follows:
      <map_data_source name="myds"
        jdbc_host="foo.com"
        jdbc_sid="//mypdb1.foo.com"
        jdbc_port="1522"
      />
For 11.1.1.7.0, use a container_ds instead,
i.e. instead of using
    <map_data_source name="my_12c_test"
                       jdbc_host="mydbinstance"
                       jdbc_sid="pdborcl12c"
                       jdbc_port="1522"
                       jdbc_user="mytestuser"
                       jdbc_password="m2E7T48U3LfRjKwR0YFETQcjNb4gCMLG8/X0KWjO00Q="
                       jdbc_mode="thin"
                       number_of_mappers="6"
                       allow_jdbc_theme_based_foi="false"
                       editable="false"
       />
    use
      <map_data_source name="my_12c_test"
                       container_ds="jdbc/db12c"
                       number_of_mappers="6"
                       allow_jdbc_theme_based_foi="false"
                       editable="false"
       />
In my case the GlassFish 3.1.2.2 JDBC connection pool definition had the property
url = jdbc:oracle:thin:@mydbinstance:1522/pdborcl12c.rest_of.service.name
Uncheck the Wrap JDBC Objects option in the Advanced panel, i.e. the Edit JDBC Connection Pool advanced properties page.
Add a JDBC resource for that newly created pool.
Use that in mapViewerConfig.xml as above.

  • Oracle Table Values are not displayed when tried to display with Essbase

    Hi,
    I was trying to create a report with Oracle RDB Table and Essbase by following the steps given in "Federating Essbase and Relational Data Sources in Oracle Business Intelligence Suite Enterprise Edition Plus" document at the location http://www.oracle.com/technology/obe/obe_bi/bi_ee_1013/fed_data/fed_data.html.
I am able to see the members from the Essbase cubes, and I can also see the Oracle table values if they are displayed individually (i.e. in different reports). But when I try to get one report with the Essbase values and the Oracle table records together, the Oracle table records are not displayed, and in the query log I don't see a query for the Oracle table, though I do see the MDX queries for Essbase.
    Please help.
    Regards,
    Paresh

Hi,
Smitha, you can definitely use a dynamic table in an interactive form. I had a similar problem and I achieved it as follows: basically, you have to bind the table.
If you want a fixed number of rows in the interactive form, then in the WDDOINIT method bind the internal table to your table node. For example, if you want 2 rows in the form, loop 2 times; by default, when you open the form, you will then get two rows in the table.
**************BIND THE ITAB ****************************
DO 2 TIMES.
  APPEND LW_LFBK TO LT_LFBK.
  CLEAR LW_LFBK.
ENDDO.
CALL METHOD lo_nd_t_lfbk->bind_table
  EXPORTING
    new_items = LT_LFBK.
If you want a dynamic table, then use a submit button in the form instead of a normal button; in ONACTIONSUBMIT, write a loop so that every time you click the submit button it adds a new row.
Use the above coding in ONACTIONSUBMIT instead of WDDOINIT.
That's it.
    Regards,
    Ravi

  • Staging area on source in Oracle

    Hi,
    My source and target, both are Oracle DBs.
    a) I created one interface with STAGING AREA ON TARGET. For this I used LKM SQL to SQL and IKM SQL Incremental Update.
It's working fine and all the work tables, flow tables etc. are getting created in the
Work Schema (i.e. ODI_TRG) that I gave when defining the physical schema of my target.
I want to try with the staging area on the source...
b) I checked the "Staging area different from Target" option and selected the logical schema of the source on the Definition tab of my interface.
In the corresponding physical schema, the Work Schema I gave is ODI_SRC.
Now I expected all the work tables, flow tables etc. to be created in ODI_SRC. But they are still getting created in ODI_TRG!!! Why???
But finally, to insert/update the target table, it tries to read the flow table from ODI_SRC, even though it created the flow table in ODI_TRG!!!
And as a result it fails with this error code...
    942 : 42000 : java.sql.SQLException: ORA-00942: table or view does not exist
    Please clarify.

I think you didn't change your KM.
Step 1: import the "IKM SQL to SQL Append" into your project.
Step 2: in your interface, on the Definition tab, select the "Staging Area Different From Target" checkbox, and select the schema of your source data in the drop-down list box.
Step 3: go to the Flow tab, select the target, and select the IKM SQL to SQL Append.
Step 4: execute.
Refer to this thread on which KM to change:
Optimizing the KM

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
is there any performance impact of using OR concatenation or IN-lists?
The function of both is the "same":
1) Concatenation (OR-processing)
SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
- similar to a query rewrite into 2 separate queries,
- which are then 'concatenated'
2) Inlist Iterator
SELECT * FROM dept WHERE d# in (10,20,30);
- iteration over the enumerated value list,
- every value executed separately,
- the same as a concatenation of 3 "OR-ed" values
So I want to know if there is any performance impact of using IN-lists instead of OR concatenation.
    Thanks and Regards
    Stefan

The note is very misleading and far from complete; but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose?)
The note, in effect, is just about a slightly more subtle version of "why isn't Oracle using my index". For "shorter" lists you might get an indexed iteration; for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
drop table t1;

create table t1 as
select
        rownum        id,
        rownum        n1,
        rpad('x',100) padding
from
        all_objects
where
        rownum <= 10000;

create index t1_i1 on t1(id);

execute dbms_stats.gather_table_stats(user,'t1')

set autotrace traceonly explain

select
        /*+ use_concat(t1) */
        n1
from
        t1
where
        id in (10,20,30,40,50,60,70,80,90,100);

set autotrace off

The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation and required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
  21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
   3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Universal adaptor for OBIEE with JDEdwards as OLTP source

    Hi All,
we are in the process of implementing OBIEE procurement and spend analytics with JDE as the OLTP source. As you know, this subject area does not come with an adapter for JD Edwards in the recent version.
Does anyone have an idea about creating a universal adapter for this, or any sample, or any clue how to proceed?
    Thanks

    Hi,
Do you have Oracle BI Applications? If you look carefully in the DAC there are two execution plans called "Procurement and Spend: Universal" and "Procurement and Spend Only: Universal"; it seems to me that they could be helpful for you. I'm also trying to figure out how to use the universal adaptors for HRMS.
    Regards,
    Edwin

  • Integrating Essbase cubes with Oracle Tables in BI Server

I'm trying to link together data from an aggregated Essbase cube with some static table data from our Oracle system. Both the Essbase and the Oracle data exist correctly in their own right at the physical, business and presentation levels. The aggregated data is client sales; the static data is client details.
Within the OBIEE Administration tool I've tried to drag the physical Oracle table for clients onto the clients Essbase section in the business area, and it seems to work OK, until I try to report on them together and get the following error:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 42043] An external aggregate is found in an outer query block. (HY000)
Can anyone advise on what I'm doing wrong?
    Thanks

Thanks Christian, I found some very useful articles (one or two by you) - I'll have to look harder on the net before posting.
One thing I found out with respect to vertical federation, which others may benefit from, is that it is much easier to start from the most detailed level and then attach the less detailed source, rather than starting with the less detailed one and adding the additional detail on.
