Exporting & Importing Physical & Logical Schemas, Data Servers, Agents

Hi,
I am using ODI 10g.
I want to export physical and logical schemas, data servers, and agents from my ODI test environment and import them into my ODI production environment.
My requirement is to do this export/import through scripts instead of doing it manually.
Please guide me on this.
Thanks,
Divya

Hi Divya,
Personally, I feel that rather than exporting individual components/objects, it is better to export the master repository as a whole.
You can use the ODI tool OdiExportMaster (under <your package> -> Tools -> Oracle Data Integrator Objects) for exporting, and the Master Repository Import wizard (All Programs -> Oracle -> Oracle Data Integrator -> Repository Management -> Master Repository Import) for importing.
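If you need to script this, ODI 10g's command-line wrapper (startcmd.sh / startcmd.bat in the oracledi/bin directory) can run ODI tools outside a package. A minimal sketch, assuming hypothetical paths, that the repository connection in odiparams is already configured, and parameter names as in the 10g Tools reference (verify them for your exact version):
% startcmd.sh OdiExportMaster "-TODIR=/odi/exports" "-ZIPFILE_NAME=master_export.zip"
The same OdiExportMaster call can also be placed in a package step and run as a scheduled scenario.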
Thanks,
Guru.

Similar Messages

  • Export / import with logical system

    Re: AC 5.3 - RAR - SP17
    I have DEV and QA connected to GRC DEV/QA and am defining my ruleset there. The ruleset is defined against a logical system.
    I have PRD connected to GRC PRD and want to import the ruleset that I developed in GRC DEV/QA. I exported the ruleset using Utilities > Export, but when I try to import it using Utilities > Import, I get an error message and cannot import the rules into the GRC PRD system. When exporting, I selected the ruleset against the logical system; when importing, I am selecting the physical system. Will that work? What am I missing?
    Edited by: Jan  Chan on Jul 8, 2011 5:02 PM

    I determined that there was some corrupt data in the ruleset connected to the logical system that I was exporting. I attempted to correct it by removing the entries and regenerating the rules under the logical system area. None of my attempts to correct the data have worked; even after removing the "Action" completely from the ruleset, the data still contains the corrupt entries associated with the logical system. I may need to use "manage deletion" to remove it completely, but my entire ruleset is defined for this logical system.
    I don't think there is a simple way to just change the "system" on each function's "action"; mass function maintenance does not appear to work for this.

  • Export / import in the SQL Developer Data Modeling tool

    I tried the SQL Developer Data Modeling tool,
    but I have a problem.
    I select File > Import > Data Dictionary,
    then created a connection to the DB, selected some tables, and got the ER diagram successfully.
    Then I go to File > Export > To Data Modeling Design and save it to an XML file.
    But when I give this file to another developer and he imports it via File > Import > Data Modeling Design,
    the diagram is not displayed.
    Is it a bug, or am I doing something wrong?

    OK, it was my fault.
    It's not the XML alone:
    there is a folder with the same name beside the XML file, and it should be included in the exchange.
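    For a hypothetical design named mydesign, the exchange would therefore contain both of these, side by side:
    mydesign.xml   (the file selected on import)
    mydesign/      (the folder with the same base name, created next to the XML)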

  • Export/Import utility: unusual schema and table names

    Hi, I am working on correct validation of TABLE, SCHEMA.TABLE and SCHEMA names.
    Unfortunately, I didn't find good examples in the documentation and am therefore turning to the forum.
    Example: I have a "My User" schema with a "My Table" table, and SCOTT with "scott table".
    Export:
    If I connect as My User (using client 10), I have to write
    TABLES=('"my table"', 'SCOTT."scott table"')
    If I connect as My User, (client 9 and older?) I have to write
    TABLES=('"My Table"', '"SCOTT.scott table"')
    Am I right? It seems to work.
    == The first question:
    Does the quoting method depend on the client version?
    I mean, '"SCOTT.scott table"' works on 9 but 'SCOTT."scott table"' works on 10.
    ==
    Further, if I connect as SCOTT (client 10), I write
    TABLES=('"My User"."My Table"', 'SCOTT."scott table"')
    and it seems to work.
    If I connect as SCOTT (client 9 and older?), neither
    TABLES=('"My User"."My Table"', 'SCOTT."scott table"')
    nor
    TABLES=('"My User.My Table"', '"SCOTT.scott table"')
    works!
    == The second question:
    How should I write "My User"."My Table" on the 9 client?
    Import:
    I don't know how to make the import go into "My User".
    FULL=Y
    TOUSER='"my user"'
    # and
    #TOUSER="my user"
    don't work. The same happens with USER-mode and TABLE-mode import.
    Thanks to everyone who can help.

    I'm not sure how you were able to create a user with the name 'my user', since spaces in the username are normally not allowed.
    Generally, Oracle isn't case sensitive, so the fact that 'my user' gets converted to 'MY USER' isn't a problem; in fact, Oracle will always present usernames in UPPER CASE unless you explicitly use the LOWER() function when selecting them from the dba_users view.
    So anyway, I think the steps you need are:
    (from SQL*Plus, logged in as a DBA user, e.g. sys or system)
    create user MY_USER
    identified by apassword;
    grant connect, resource to my_user;
    Then, from the operating system command line:
    imp my_user/apassword@connect_string file=dump_filename.dmp FROMUSER=scott TOUSER=my_user
    That should work.....
    R
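    As a side note on the quoting question: one way to keep the shell from eating the quotes around mixed-case names is a parameter file. A minimal sketch, assuming a 10g client and hypothetical file names; a parameter file exp.par containing:
    TABLES=('"My Table"', 'SCOTT."scott table"')
    FILE=mytables.dmp
    then run:
    % exp parfile=exp.par
    exp then prompts for the username/password, so the quoted names never have to pass through the shell.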

  • Export and import of a schema

    Hi all,
    I'm having some trouble doing a simple export and then import of a schema (data + structure: tables, sequences, triggers, indexes) from 10g XE to a server of the same version on another machine.
    I tried different approaches:
    1) Export from "SQL script", both as a file and as SQL commands. Trying to import, I get this error:
    Not found
    The requested URL /apex/f was not found on this server
    2) (translated from Italian, sorry if it is not exact in English) Load/Unload Data -> Download -> I tried both file and XML format. In this case I must select one table at a time... and only the structure is exported, no data... fortunately I have only 3 tables to export, but what if I had many tables?
    In short, what is the best practice for exporting a schema together with its data from one server to another of the same version?
    Thank you

    I get this message from the command line:
    C:\Documents and Settings\user>expdp unime
    Export: Release 10.2.0.1.0 - Production on Thursday, 26 February, 2009 12:48:08
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Password:
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-39145: directory object parameter must be specified and non-null
    and I get the same from SQL*Plus:
    C:\Documents and Settings\user>sqlplus
    SQL*Plus: Release 10.2.0.1.0 - Production on Thu Feb 26 12:41:48 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Enter user-name: unime
    Enter password:
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> CONN sys/**** as SYSDBA
    Connected.
    SQL> ALTER USER unime IDENTIFIED BY **** ACCOUNT UNLOCK;
    User altered.
    SQL> GRANT CREATE ANY DIRECTORY TO unime;
    Grant succeeded.
    SQL> CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
    Directory created.
    SQL> GRANT READ, WRITE ON DIRECTORY test_dir TO unime;
    Grant succeeded.
    SQL> host expdp unime
    Export: Release 10.2.0.1.0 - Production on Thursday, 26 February, 2009 12:44:40
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Password:
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-39145: directory object parameter must be specified and non-null
    SQL> quit
    Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Edited by: Silicio on 26-feb-2009 12.47
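    The ORA-39145 above is raised because expdp was started without a directory object. A minimal sketch of a working invocation, assuming the TEST_DIR directory created in the same session and hypothetical dump/log file names (note that on XE the directory path must exist on the database server itself, not on the client):
    expdp unime DIRECTORY=test_dir DUMPFILE=unime.dmp LOGFILE=unime.log SCHEMAS=unime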

  • Regarding logical schemas

    Hi Experts,
    I have one MASTER_REPOSITORY and three work repositories for dev, test, and prod. When I import from development, do I need physical schemas and logical schemas?
    And in the other case:
    I have two master repositories. When I import from master_repo1 to master_repo2, do I need physical schemas and logical schemas?
    Please help me.
    Regards,
    ksbabu

    Hi,
    "I have one MASTER_REPOSITORY and three work repositories for dev, test, and prod. When I import from development, do I need physical schemas and logical schemas? And in the other case: I have two master repositories; when I import from master_repo1 to master_repo2, do I need physical schemas and logical schemas?"
    Yes, in both cases you need the schemas.
    When you import the master repository, the physical and logical schemas are imported along with it (topology information is stored in the master repository), and the logical schema is very important for running the scenarios.
    Hope this helps.
    Thanks

  • Getting unique constraint error when creating a logical schema

    Hi All,
    I'm creating a logical schema and I'm getting the following error.
    To give you a clearer picture:
    we have an application called "DTA" in Planning, and I was able to set it up as a data server with the same name in both Essbase and Planning. We then had a naming issue, deleted the data server from Essbase, and when trying to create it back I get this issue.
    FYI, I was able to create data servers for the other application successfully in both Planning and Essbase without any issues.
    java.sql.SQLException: ORA-00001: unique constraint (HYPODIMD.AK_LSCHEMA) violated
    java.sql.SQLException: ORA-00001: unique constraint (HYPODIMD.AK_LSCHEMA) violated
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
         at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:633)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1086)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3057)
         at com.sunopsis.sql.SnpsQuery.executeUpdate(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.generated.GeneratedSnpLschema.insertAction(GeneratedSnpLschema.java)
         at com.sunopsis.dwg.DwgObject.insert(DwgObject.java)
         at com.sunopsis.dwg.DwgObject.insert(DwgObject.java)
         at com.sunopsis.graphical.frame.b.jt.cy(jt.java)
         at com.sunopsis.graphical.frame.bp.cB(bp.java)
         at com.sunopsis.graphical.frame.bp.bG(bp.java)
         at com.sunopsis.graphical.frame.b.jt.bG(jt.java)
         at com.sunopsis.graphical.frame.bo.q(bo.java)
         at com.sunopsis.graphical.frame.bo.bu(bo.java)
         at com.sunopsis.graphical.frame.bo.y(bo.java)
         at com.sunopsis.graphical.frame.bo.b(bo.java)
         at com.sunopsis.graphical.frame.w.actionPerformed(w.java)
         at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
         at javax.swing.AbstractButton$ForwardActionEvents.actionPerformed(Unknown Source)
         at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
         at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
         at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source)
         at java.awt.Component.processMouseEvent(Unknown Source)
         at java.awt.Component.processEvent(Unknown Source)
         at java.awt.Container.processEvent(Unknown Source)
         at java.awt.Component.dispatchEventImpl(Unknown Source)
         at java.awt.Container.dispatchEventImpl(Unknown Source)
         at java.awt.Component.dispatchEvent(Unknown Source)
         at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
         at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
         at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
         at java.awt.Container.dispatchEventImpl(Unknown Source)
         at java.awt.Window.dispatchEventImpl(Unknown Source)
         at java.awt.Component.dispatchEvent(Unknown Source)
         at java.awt.EventQueue.dispatchEvent(Unknown Source)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.run(Unknown Source)
    Please advise.
    Thanks in advance

    I don't have two logical schemas with one name.
    In general, I had one more Planning app called DFR. For this I created a physical/logical schema as planning.dfr for Planning,
    and for Essbase, essbase.dfr, and this is working well. When I try to create one for DTA, it gives the error I mentioned.
    Please let me know your thoughts.
    -K-
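    Since the trace shows the unique constraint AK_LSCHEMA in the HYPODIMD master repository schema, a logical schema with the exact same name very likely still exists there (possibly left behind by the deleted data server). A hedged diagnostic sketch, assuming the standard SNP_LSCHEMA repository table (column names can differ between ODI versions, so verify before relying on it):
    SQL> SELECT lschema_name FROM hypodimd.snp_lschema ORDER BY lschema_name;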

  • Using EXPORT, IMPORT, and SQL*Loader with TAPE (using a pipe)

    Product: ORACLE SERVER
    Date written: 2002-04-11
    Using EXPORT, IMPORT, and SQL*Loader with TAPE (using a pipe)
    ================================================
    Purpose
    Tape is sometimes used when backing up or processing large volumes of data. This note summarizes, case by case, how to use tape with EXPORT, IMPORT, and SQL*Loader.
    Explanation
    1. Exporting to a tape device
    % exp userid=system/manager full=y file=/dev/rmt/0m volsize=245M
    FILE is the name of the tape device and VOLSIZE is the amount of data to write to each tape. When the first tape reaches 245M, a message prompts you to insert the next tape.
    (Note) VOLSIZE should be < tape capacity
    2. Importing from a tape device
    % imp userid=system/manager full=y file=/dev/rmt/0m volsize=245M
    When the first tape reaches 245M, a prompt for the next tape appears.
    3. Exporting to tape using a pipe and dd
    % mknod /tmp/exp_pipe p # Make the pipe
    % dd if=/tmp/exp_pipe of=<tape device> & # Write from pipe to tape
    % exp file=/tmp/exp_pipe <other options> # Export to the pipe
    4. Importing from tape using a pipe and dd
    % mknod /tmp/imp_pipe p # Make the pipe
    % dd if=<tape device> of=/tmp/imp_pipe & # Write from tape to pipe
    % imp file=/tmp/imp_pipe <other options> # Import from the pipe
    5. Exporting to a tape device on a remote server using a pipe and dd
    % mknod /tmp/exp_pipe p
    % dd if=/tmp/exp_pipe | rsh <hostname> dd of=<file or device> &
    % exp file=/tmp/exp_pipe <other options>
    6. Importing from a tape device on a remote server using a pipe and dd
    % mknod /tmp/imp_pipe p
    % rsh <hostname> dd if=<file or device> | dd of=/tmp/imp_pipe &
    % imp file=/tmp/imp_pipe <other options>
    7. Loading a data file on tape with SQL*Loader
    % mknod /tmp/load_pipe p
    % dd if=<tape_device> of=/tmp/load_pipe &
    (Note) If the tape is in EBCDIC, convert it to ASCII with the following command instead:
    % dd if=<tape_device> conv=ascii of=/tmp/load_pipe &
    % sqlldr userid=user/pass control=control.ctl log=loader.log data='/tmp/load_pipe'
    * A PIPE is a virtual file in memory used for faster interaction between I/O operations. The pipe buffer is 5K on Sun Solaris, 8K on HP, and 10K on SGI. It follows FIFO order, and the command to create one is:
    % mknod filename p
    * DD is a command that performs a raw copy of data from one device to another.
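    The same pipe technique also works with a compressor in place of the tape device, which is a common way to shrink large dumps on disk. A minimal sketch, assuming gzip is available and hypothetical file names:
    % mknod /tmp/exp_pipe p                           # Make the pipe
    % gzip < /tmp/exp_pipe > /backup/full.dmp.gz &    # Compress from the pipe in the background
    % exp userid=system/manager full=y file=/tmp/exp_pipe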

    re 1)
    You need to explicitly state the length of the VARCHAR field; otherwise there is a limit (I think it's 2000 characters, but I'm not sure).
    So your .ctl file should look like this:
    INTO TABLE SCHEMA.TESTABSTRACT
    TRUNCATE
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    (
      PROP_ID,
      ABSTRACT  VARCHAR(4000)
    )

  • System copy using R3load ( Export / Import )

    Hi,
    We are testing System copy using R3load ( Export / Import ) using our production data.
    Environment : 46C / Oracle.
    while executing export, it takes more than 20 hours, for the data to get exported, we want to reduce the export time drastically. hence, we want to achieve it by scrutinizing the input parameters.
    during export, there is a parameter by name splitting the *.STR and *.EXT files for R3load.
    the default input for the above parameter is No, do not split STR and EXT files.
    My question 1 : If we input yes instead of No and split the *.STR and *.EXT files, will the export time get reduced ?
    My Question 2: If the answer is yes to Question 1, will the time reduced will be significant enough ? or how much percentage of time we can expect it to reduce, compare to a No Input.
    Best Regards
    L Raghunahth

    Hi,
    The time of the export depends on the size of your database (and the size of your biggest tables) and on your hardware capacity.
    My question 1: If we input Yes instead of No and split the *.STR and *.EXT files, will the export time be reduced?
    In case you have a few very large tables, and you have multiple CPUs and decent disk storage, then splitting might significantly reduce the export time.
    My question 2: If the answer to question 1 is yes, will the time saved be significant?
    As you did not tell us your database size and hardware details, there is no way to give you anything but very basic metrics. Did you specify a parallel degree at all? Was your hardware idling for 20 hours, or already fully loaded?
    20 hours for a 100 GB database is very slow; it is reasonable (rather fast, in my opinion) for a 2 TB database.
    Best regards, Michael

  • Grants for physical schema and data-servers

    Hi,
    I'd like to know:
    what grants are needed for the owners of each physical schema?
    For example: DDL grants (drop table, etc.) and DML grants (select / update / delete / insert).
    And what grants are needed for the users connecting through the data servers?
    Bovolini

    It depends on what technology you plan to use.
    If you plan to use Oracle: is the data server connection user different from the owner of the physical schemas?
    In addition to connect and resource for each of these users, you will also need to give the data server connection user privileges on the objects of the owner of the physical schemas.
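    As a hedged illustration of that split, assuming a hypothetical connection user ODI_CONN and a schema owner APP_OWNER with a CUSTOMERS table:
    SQL> GRANT CONNECT, RESOURCE TO odi_conn;
    SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON app_owner.customers TO odi_conn;
    For the schema used as ODI's work/staging area, the connection user typically also needs the corresponding DDL privileges (CREATE TABLE at minimum) so the C$/I$ temporary tables can be created and dropped.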

  • Dynamic Creation of Physical Data Server / Agent cache Refresh

    Scenario:
    I have a requirement to load data from an XML source into an Oracle DB. The XML source changes at run time, but the XSD of the XML remains the same (so I don't have to change the logical data server, models, mappings, interfaces, and scenarios; only the physical data server changes at runtime). I have created all the ODI artifacts using ODI Studio in my work repository, and I'm using the ODI SDK to create the physical data server for the changed XML data source and then invoking the agent programmatically.
    Problem:
    The data is loaded from the XML source into the Oracle DB the first time, but it does not work from the second time onwards. If I restart the agent, it again works for one more run. I think that on the first run the agent builds some sort of cache of the physical data server details, so whenever I change the data server something goes wrong, leading to the following exception. So I want to know whether there is any mechanism for handling dynamic data servers, or any way of clearing the agent cache, if one exists.
    Caused By: org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 41, in <module>
    AttributeError: 'NoneType' object has no attribute 'createStatement'
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1596)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$2.doAction(StartScenRequestProcessor.java:582)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor.doProcessStartScenTask(StartScenRequestProcessor.java:513)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$StartScenTask.doExecute(StartScenRequestProcessor.java:1070)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$1.run(DefaultAgentTaskExecutor.java:50)
         at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor.executeAgentTask(DefaultAgentTaskExecutor.java:41)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doExecuteAgentTask(TaskExecutorAgentRequestProcessor.java:93)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.process(TaskExecutorAgentRequestProcessor.java:83)
         at oracle.odi.runtime.agent.support.DefaultRuntimeAgent.execute(DefaultRuntimeAgent.java:68)
         at oracle.odi.runtime.agent.servlet.AgentServlet.processRequest(AgentServlet.java:445)
         at oracle.odi.runtime.agent.servlet.AgentServlet.doPost(AgentServlet.java:394)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:503)
         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
         at org.mortbay.jetty.Server.handle(Server.java:326)
         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
         at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)

    Hi,
    If you want to load multiple files (with the same structure) through one connection, then in Topology create M.XSD for an M.XML file.
    Create three directories:
    RAW -- holds each file under its original name.
    PRO -- processing area where files are moved one by one and renamed to M.XML.
    OUT -- once a file's data has been loaded into the tables, the file is moved from PRO to OUT.
    Go to odiexperts to create the loop.
    Use OdiFileMove (to move and rename/mask) to move A.XML from RAW to PRO and rename it to M.XML,
    then use OdiFileMove to move M.XML to the OUT folder and rename it back to A.XML (see the sketch after this post).
    Use variables to store the file names and refresh them.
    'NoneType' object has no attribute 'createStatement': it seems that the structure of your file is different and you are trying to load different files into the same schema. If the structure is the same, then use the procedure "SYNCHRONIZE ALL" after every load...
    Edited by: neeraj_singh on Feb 16, 2012 4:47 AM
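    A minimal sketch of the two move steps from that loop as package tool calls, assuming hypothetical /data/raw, /data/pro, and /data/out directories; check the OdiFileMove parameters against the Tools reference for your ODI version:
    OdiFileMove "-FILE=/data/raw/A.XML" "-TOFILE=/data/pro/M.XML"
    (run the load scenario against M.XML here)
    OdiFileMove "-FILE=/data/pro/M.XML" "-TOFILE=/data/out/A.XML"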

  • Context, physical schema and logical schema

    Hi,
    How are the context, physical schema, logical schema, and agent interrelated?
    Please explain.
    Thanks
    Jack

    Hi Jack,
    Context:
    A context is a set of resources allowing the operation or simulation of one or more data processing applications. Contexts allow the same jobs (reverse-engineering, data quality control, packages, etc.) to be executed against different databases and/or schemas.
    It is used to run the same object (process) against different databases.
    Physical schema:
    The physical schema is a decomposition of the data server, allowing the datastores (tables, files, etc.) to be classified. Objects stored in data servers with this mode of classification can be accessed by specifying the name of the schema attached to the object name.
    For example:
    Oracle classifies its tables by "schema" (or user). Each table is linked to a schema; thus SCOTT.EMP represents the table EMP in the schema SCOTT.
    Logical schema:
    A logical schema is an alias that allows a unique name to be given to all the physical schemas containing the same datastore structures.
    The aim of the logical schema is to ensure the portability of procedures and models across the different physical schemas. In this way, all development in ODI Designer is carried out exclusively on logical schemas.
    Thanks
    Madha

  • Will deleting a column at logical schema delete the same at physical level by DDL Sync?

    Will deleting a column at logical schema delete the same at physical level by DDL Sync?

    Hi David,
    First of all, thanks for your quick response and for logging the enhancement request.
    I am testing your suggestion, more or less, but I am not sure I understood exactly what you meant.
    1) I imported from the data dictionary into a new model, and in the options menu on the schema-selection screen I unchecked partitions and triggers.
    I assumed the import would then not fetch the partition information from the data dictionary, but the result is that the partitioned tables (by list, in this case) come out partitioned by range, without fields, in the physical model in SDDM.
    2) I select one of the tables, change it to the non-partitioned option, and propagate the option to the rest of the tables.
    3) I imported again from the data dictionary, but this time I included partitions in the options menu on the schema-selection screen.
    In the tabular view on the compare-models screen I can select all the tables with a different partitioning option; I can also change to "list partitions" and select only the partitions that I want to import.
    So I have a solution for my problem; thanks a lot for your suggestion.
    I'm not sure the second step is needed, or maybe I can avoid it with some configuration setting in one of the preferences screens.
    If not, I think the option to exclude partitions on the schema-selection screen is not very clear, at least to me.
    Please, could you confirm whether there is a way to avoid the second step, or whether I misunderstood this option?
    Thanks in advance

  • How to map 1 logical schema to many physical schemas?

    I have a database schema that is instantiated on many different servers. I set up a physical schema pointing to one of them, and a logical schema pointing to that physical schema. I imported the schema into a model, created interfaces for the tables, and created a package to execute them; all of that works for that one physical instance.
    1) How can I apply that same model, interfaces, and package to each of the physical instances?
    1a) Can I change the JDBC parameters at package run time to point to a different database? How?
    1b) Can I select a different physical schema for the logical schema at package run time, so that I only have to set up a different physical schema for each database? How?
    Thank you.

    "But if you have a lot of context (for example 1000 stores), you can define a generic physical schema, a logical one. The physical is based on variables (host, port,..). "
    Using contexts is working for me, but at least one of my schemas has more than 50 server instances, so this approach would be beneficial. Before I posted this question, I had tried to use variables for the host, port, and SID without success. I used a global variable and gave it default values, but it failed. Then I tried setting the value in a package and creating a scenario, but that too failed. What am I missing?
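    For reference, the variable-based data server that the quoted advice describes usually puts the variables straight into the JDBC URL in Topology, along the lines of this sketch (hypothetical global variable names):
    jdbc:oracle:thin:@#GLOBAL.P_HOST:#GLOBAL.P_PORT:#GLOBAL.P_SID
    A common reason this "fails" is that the variables are only set inside a session: they need valid default values (or a refresh step executed before the first connection is opened) so that the agent can resolve the URL at runtime, and the scenario has to run in a context where those variable values are available.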

  • Physical Schema and logical schema

    Hi,
    When creating a data server in Topology for the appropriate technology, we create a physical schema. But then why do we need to create a logical schema? Is it created for the execution of interfaces? And can multiple physical schemas be mapped to the same logical schema?

    Hi,
    A physical schema represents the actual connection to the data source or data target. A logical schema represents the logical name associated with that source or target.
    One logical schema can be associated with multiple physical schemas through contexts, i.e. one logical schema is associated with different physical schemas using different contexts.
    It can be understood with the following example:
    You have three environments (Dev, QA, Prod), each with a different database server (DB1, DB2, DB3, respectively). Similarly, you have three contexts corresponding to Dev, QA, and Prod. You create a logical schema with the name DB_source.
    Now you associate the physical DB servers with the logical schema (DB_source) for each context:
    DEV: DB1
    QA: DB2
    PROD: DB3
    Now, when you develop ODI interfaces, you use the DEV context, which associates DB_source with DB1. When specifying the context for execution, keep it as "Execution". This means that whatever context you choose during execution, the corresponding physical DBs will be used.
    Thus, if you change the execution context, the corresponding physical schema will be used during execution.
    Let me know if you have further questions!
