Work Table Prefix In ODI

Can we use the session number, or any other time-related value, as a work table prefix in the physical schema of ODI?

Hi,
The answer is yes, but there is a catch on Oracle.
It is possible, with Java variables, to change the prefix in the physical schema (I have this running in production). However, without touching a KM you need to watch the table name length (if you are on Oracle), because it will not be cut to 30 characters automatically.
If instead you change 3 or 4 steps of the KMs (less than an hour of work), you get the same result without limits. Just change how the table name is built in those steps...
Does it help you?
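To make the length concern concrete, here is a minimal sketch in plain Java. The class and method names below are my own invention, not ODI API (inside a real KM step you would build the name through odiRef substitution methods); it only shows the truncation a patched step has to do explicitly:

```java
// Sketch only: WorkTableNames and workTableName() are invented for
// illustration. Shows stamping a work table prefix with a session value
// and cutting the result to Oracle's classic identifier limit.
public class WorkTableNames {
    static final int ORACLE_MAX_IDENT = 30; // pre-12.2 Oracle limit

    static String workTableName(String prefix, String sessionStamp, String table) {
        String name = prefix + sessionStamp + "_" + table;
        // Oracle will NOT truncate this for you; the KM step must do it.
        return name.length() <= ORACLE_MAX_IDENT
                ? name
                : name.substring(0, ORACLE_MAX_IDENT);
    }

    public static void main(String[] args) {
        String n = workTableName("C$_", "20240101123000", "VERY_LONG_SOURCE_TABLE_NAME");
        System.out.println(n + " (" + n.length() + " chars)");
    }
}
```

Note that Oracle 12.2 and later raise the identifier limit to 128 bytes, so the truncation only matters on older targets.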

Similar Messages

  • Why do we need Work Table in ODI???

    Hello All,
    Please help me on this..
    Why do we need Work table in ODI???
Why is ODI creating C$_0table_name (the work table)?
    Thanks
    Ravikiran

    Hi,
this is the standard "Load Data" step from LKM SQL to SQL:
    <%for (int i=odiRef.getDataSetMin(); i <= odiRef.getDataSetMax(); i++){%>
    <%=odiRef.getDataSet(i, "Operator")%>
    select     <%=odiRef.getPop("DISTINCT_ROWS")%>
               <%=odiRef.getColList(i, "", "[EXPRESSION]\t[ALIAS_SEP] [CX_COL_NAME]", ",\n\t", "", "")%>
    from       <%=odiRef.getFrom(i)%>
    where      (1=1)
    <%=odiRef.getFilter(i)%>
    <%=odiRef.getJrnFilter(i)%>
    <%=odiRef.getJoin(i)%>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <%}%>
    As you can see, the C$_ table can contain already-filtered data. If your source table holds invoices (10 million rows), you can filter on the last_update column, obtain, say, 1,000 rows, and copy only that tiny subset.
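As a plain-Java analogy of why that matters (this is an illustration, not KM code; the invoice rows and the lastUpdate field are invented for the demo):

```java
import java.time.LocalDate;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Analogy for the C$ load: only rows passing the interface filter are
// copied into the work table, so a huge source shrinks to a tiny staging set.
public class StagingFilterDemo {
    static class Invoice {
        final int id;
        final LocalDate lastUpdate;
        Invoice(int id, LocalDate lastUpdate) { this.id = id; this.lastUpdate = lastUpdate; }
    }

    // Plays the role of the WHERE clause that odiRef.getFilter(i) injects.
    static List<Invoice> stage(List<Invoice> source, LocalDate since) {
        return source.stream()
                     .filter(r -> !r.lastUpdate.isBefore(since))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        LocalDate base = LocalDate.of(2024, 1, 1);
        List<Invoice> source = IntStream.range(0, 10_000)
                .mapToObj(i -> new Invoice(i, base.plusDays(i % 100)))
                .collect(Collectors.toList());
        // prints "staged 100 of 10000 rows"
        System.out.println("staged " + stage(source, base.plusDays(99)).size()
                + " of " + source.size() + " rows");
    }
}
```

In the generated SQL the same effect comes from the filter on last_update: only the matching rows ever travel to the C$ table.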

  • Error in creating work table

    Hello!
    Now I am fighting with a new issue with ODI 11.1.1.5. I executed an interface several times with no issues and during the next execution the interface produced the following error:
    ODI-1228: Task SrcSet0 (Loading) fails on the target SUNOPSIS_ENGINE connection MEMORY_ENGINE.
    Caused By: java.sql.SQLException: object name already exists: C$_0Accounts in statement [create table "C$_0Accounts"
         C1_PARENT     VARCHAR(80) NULL,
         C2_CHILD     VARCHAR(80) NULL,
         C3_DESCRIPTION     VARCHAR(80) NULL
         at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
         at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
         at org.hsqldb.jdbc.JDBCPreparedStatement.execute(Unknown Source)
         at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:537)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:338)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:272)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:263)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:822)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: org.hsqldb.HsqlException: object name already exists: C$_0Accounts
         at org.hsqldb.error.Error.error(Unknown Source)
         at org.hsqldb.SchemaObjectSet.checkAdd(Unknown Source)
         at org.hsqldb.SchemaManager.checkSchemaObjectNotExists(Unknown Source)
         at org.hsqldb.StatementSchema.setOrCheckObjectName(Unknown Source)
         at org.hsqldb.StatementSchema.getResult(Unknown Source)
         at org.hsqldb.StatementSchema.execute(Unknown Source)
         at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
         at org.hsqldb.Session.execute(Unknown Source)
         ... 19 more
    Any hints on how to resolve it will be very highly appreciated!

    Hi, John!
    Thank you so much for your hint! The issue happens with the following dependency: 1) I run an interface and it fails at some step with an error; 2) I run the same interface a second time and it fails at the very first step because the work table already exists.
    If I exit ODI Studio and relaunch it, the interface no longer hits the existing-work-table error. But if the interface fails at some other step and I run it a second time, the work-table-already-exists error comes back. So every time I want to get rid of this error, I have to exit and relaunch ODI Studio.
    Is there any solution that would not force me to relaunch ODI Studio every time I hit this error?
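When a session dies mid-way, the KM's cleanup step never runs, so the C$ table survives into the next run. One usual workaround (short of restarting Studio) is to make the drop idempotent: issue a guarded drop before re-running. A minimal sketch, assuming the HSQLDB-style trailing IF EXISTS clause (the stack trace shows MEMORY_ENGINE is HSQLDB; the helper below is invented for illustration, not ODI API):

```java
// Sketch: build a guarded drop for a leftover C$ table before re-running
// an interface against the in-memory staging engine. Assumes HSQLDB's
// "DROP TABLE <name> IF EXISTS" syntax; guardedDrop() is not ODI API.
public class WorkTableCleanup {
    static String guardedDrop(String workTable) {
        // Quoted because ODI created the table with a quoted mixed-case name.
        return "DROP TABLE \"" + workTable + "\" IF EXISTS";
    }

    public static void main(String[] args) {
        System.out.println(guardedDrop("C$_0Accounts"));
    }
}
```

The statement could be run from an ODI procedure step ahead of the interface, or the equivalent guard added to the LKM's "create work table" step.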

  • Work table empty

    I'm trying to match a temporary table (source) with an XML file (target) and I got the following message:
    Caused By: java.sql.SQLException: Try to insert null into a non-nullable column: column: E1FIKPFPK table: TMP_E1FIKPF in statement [insert into TMP_E1FIKPF (BUKRS) select        C1_BUKRS from TMP_C$_0E1FIKPF where     (1=1)]
         at com.sunopsis.jdbc.driver.xml.SnpsXmlPreparedStatementRedirector.executeUpdate(SnpsXmlPreparedStatementRedirector.java:629)
    The previous steps (create work table & load source data) succeed, but apparently my work table is empty... yet when I execute the SQL statement myself, I get all my data.
    I just can't understand why.

    Actually the BUKRS column needs to have content... that's why in the mapping I'm doing a match with a temporary work table on the BUKRS column. And in the temporary work table there is data!
    I suppose ODI creates a C$ table to transfer data from my temporary table to the XML branch E1FIKPF, but apparently it can't recover the data from the C$ table.
    That's why I'm getting the "null into a non-nullable column" error.
    Edited by: 844878 on 16 mars 2011 14:04

  • Fail to open report - Custom table prefix specified in the InfoStore error

    The version is BOXI3
    The open report method is as follows:
    IInfoObject  report = this.openManagedReportAsIInfoObject(reportName, parentFolderID, iStore, reportAppFactory);
    reportClientDocument = reportAppFactory.openDocument(report, OpenReportOptions._openAsReadOnly, java.util.Locale.US); // exception here
    Exception :
    com.crystaldecisions.sdk.occa.managedreports.ras.internal.ManagedRASException: Cannot open report document. --- Custom table prefix specified in the InfoStore does not exist.  This is a configuration problem. Please contact your system administrator.
    cause:com.crystaldecisions.sdk.occa.report.lib.ReportSDKServerException: Custom table prefix specified in the InfoStore does not exist.  This is a configuration problem. Please contact your system administrator.---- Error code:-2147467259 Error code name:failed
    detail: Custom table prefix specified in the InfoStore does not exist.  This is a configuration problem. Please contact your system administrator.
         at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.openDocument(Unknown Source)
         at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.openDocument(Unknown Source)
    Does anyone know why this happens and how to configure it properly?
    Thanks.
    Forest

    Hello Forest,
    Please try using the following code:
    ISessionMgr sessionMgr = CrystalEnterprise.getSessionMgr();
    IEnterpriseSession enterpriseSession = sessionMgr.logon("username", "password", "hostname:port", "secEnterprise");
    IInfoStore iStore = (IInfoStore) enterpriseSession.getService("InfoStore");
    IInfoObjects infoObjects = iStore.query("Select SI_ID From CI_INFOOBJECTS Where SI_NAME='World Sales Report' And SI_INSTANCE=0");
    IInfoObject infoObject = (IInfoObject) infoObjects.get(0);
    IReportAppFactory reportAppFactory = (IReportAppFactory) enterpriseSession.getService("RASReportFactory");
    ReportClientDocument rcd = reportAppFactory.openDocument(infoObject, 0, java.util.Locale.US);
    Thanks,
    Chinmay

  • Table prefix improving performance

    I read in the Oracle manual that when using joins it is good practice to use table prefixes, as it increases performance. How?

    f7218ad2-7d9f-4e71-ba26-0d6e4b38f87e wrote:
    I read in the Oracle manual that when using joins it is good practice to use table prefixes, as it increases performance. How?
    uh, maybe so the parser doesn't have as many hoops to jump through to resolve non-specific object references.
    You could cite your reference ...
    ============================================================================
    BTW, it would be really helpful if you would go to your profile and give yourself a recognizable name.  It doesn't have to be your real name, just something that looks like a real name.  Who says my name is really Ed Stevens?  But at least when people see that on a message they have a recognizable identity.  Unlike the system generated name of 'ed0f625b-6857-4956-9b66-da280b7cf3a2', which is no better than posting as "Anonymous".
    All you ed0f625b-6857-4956-9b66-da280b7cf3a2's look alike . . .
    ============================================================================

  • DBCC SHRINKFILE: Page 4:11283400 could not be moved because it is a work table page.

    Hi,
    I issued this command on tempdb but it does not shrink the file.
    dbcc shrinkfile (tempdev_3,1)
    go
    Messages:
    DBCC SHRINKFILE: Page 4:11283400 could not be moved because it is a work table page.
    I have checked that there are no user tables in tempdb. Any help is appreciated.
    Regards,
    Razi

    This is basically a rewrite of an existing stored procedure that shows
    temp tables in tempdb and their relative size. I also have a query to
    show the size of tempdb in MB that I used with this one; it is likewise a modification
    of an existing system stored proc that shows allocated and free space in MB.
    --10 june 2007 slane
    --shows temp tables in the tempdb
    use tempdb
    declare @id int
    declare @dt smalldatetime
    create table #spt_space_all
    (
        id int,
        name varchar(500),
        rows varchar(200) null,
        reserved varchar(200) null,
        data varchar(200) null,
        index_size varchar(200) null,
        unused varchar(200) null,
        create_date smalldatetime null
    )
    declare TMP_ITEMS CURSOR LOCAL FAST_FORWARD for
        select id from sysobjects
        where xtype = 'U'
    open TMP_ITEMS
    fetch next from TMP_ITEMS into @id
    declare @pages int
    WHILE @@FETCH_STATUS = 0
    begin
        create table #spt_space
        (
            id int,
            rows int null,
            reserved dec(15) null,
            data dec(15) null,
            indexp dec(15) null,
            unused dec(15) null,
            create_date smalldatetime null
        )
        set nocount on
        if @id is not null
            set @dt = (select crdate from sysobjects where id = @id)
        begin
            /*
            ** Now calculate the summary data.
            ** reserved: sum(reserved) where indid in (0, 1, 255)
            */
            insert into #spt_space (reserved)
            select sum(reserved)
            from sysindexes
            where indid in (0, 1, 255)
            and id = @id
            /*
            ** data: sum(dpages) where indid < 2
            **     + sum(used) where indid = 255 (text)
            */
            select @pages = sum(dpages)
            from sysindexes
            where indid < 2
            and id = @id
            select @pages = @pages + isnull(sum(used), 0)
            from sysindexes
            where indid = 255
            and id = @id
            update #spt_space
            set data = @pages
            /* index: sum(used) where indid in (0, 1, 255) - data */
            update #spt_space
            set indexp = (select sum(used)
                          from sysindexes
                          where indid in (0, 1, 255)
                          and id = @id)
                         - data
            /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
            update #spt_space
            set unused = reserved
                         - (select sum(used)
                            from sysindexes
                            where indid in (0, 1, 255)
                            and id = @id)
            update #spt_space
            set rows = i.rows
            from sysindexes i
            where i.indid < 2
            and i.id = @id
            update #spt_space set create_date = @dt
        end
        insert into #spt_space_all
        select id = @id,
               name = object_name(@id),
               rows = convert(char(11), rows),
               reserved = ltrim(str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
               data = ltrim(str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
               index_size = ltrim(str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
               unused = ltrim(str(unused * d.low / 1024., 15, 0) + ' ' + 'KB'),
               create_date
        from #spt_space, master.dbo.spt_values d
        where d.number = 1
        and d.type = 'E'
        drop table #spt_space
        fetch next from TMP_ITEMS into @id
    end
    CLOSE TMP_ITEMS
    DEALLOCATE TMP_ITEMS
    select * from #spt_space_all where [name] not like '%#spt_space_all%'
    drop table #spt_space_all
    GO

  • Optimization work table.

    Hi,
    I use some work tables in my PL/SQL code.
    To optimize loads, is it possible to remove the indexes when inserting data into these tables,
    and then, once they hold data, recreate the indexes to optimize queries against them?
    Do you think this is a good method?
    Can you give me an example of code to drop and create the indexes?
    Thanks,

    >
    I use some work tables in my PL/SQL code.
    To optimize loads, is it possible to remove the indexes when inserting data into these tables,
    and then, once they hold data, recreate the indexes to optimize queries against them?
    Do you think this is a good method?
    Can you give me an example of code to drop and create the indexes?
    >
    You don't need to drop and recreate the indexes. You can mark them unusable, do the load, and then rebuild the indexes.
    That technique can be effective, especially when large numbers of records are involved.
    There are examples in the doc. See Altering Indexes in the DBA guide
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/indexes004.htm#CIHJCEAJ
    >
    Making an Index Unusable
    When you make an index unusable, it is ignored by the optimizer and is not maintained by DML. When you make one partition of a partitioned index unusable, the other partitions of the index remain valid.
    You must rebuild or drop and re-create an unusable index or index partition before using it.
    The following procedure illustrates how to make an index and index partition unusable, and how to query the object status.
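Adapted to the original question, the cycle could look like the sketch below. It is plain Java that only assembles the statements in order; the index name is a placeholder, and actually executing them would need a JDBC connection to the target schema:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the mark-unusable / load / rebuild cycle the answer describes.
// The index name is a placeholder; run each statement over JDBC in order,
// with the bulk INSERT between the UNUSABLE and REBUILD steps.
public class BulkLoadIndexCycle {
    static List<String> ddlAround(String indexName) {
        return Arrays.asList(
            "ALTER SESSION SET skip_unusable_indexes = TRUE",
            "ALTER INDEX " + indexName + " UNUSABLE",
            // ... perform the bulk load here ...
            "ALTER INDEX " + indexName + " REBUILD");
    }

    public static void main(String[] args) {
        ddlAround("WORK_TAB_IX1").forEach(System.out::println);
    }
}
```

SKIP_UNUSABLE_INDEXES defaults to TRUE on Oracle 10g and later, so the explicit ALTER SESSION is belt-and-braces for older instances.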

  • Why does the work table have a 0 in the prefix? C$_0

    My physical schema specifies that the mask (by default) should be C$. Does anyone know where the 0 comes from?

    Hi Alastair,
    I'm using LKM File to Oracle (External Table) with the LKM option DELETE_TEMPORARY_OBJECTS set to False, and I want to be able to refer to the created temporary table, C$_#table_name, later in my package. From an application code standpoint, however, I need to guarantee the name of the external table, i.e. what that # will be. Whether I can select it with a call method or from the repository or something, I just need to be sure what that number will always be so that my downstream processes don't fail on the reference. So what did you end up doing?

  • NLS_LENGTH_SEMANTICS and work tables in KMs

    Hi everybody,
    I'm working on an interface that uses a 11g-based workarea. A quick query on v$nls_parameters returns:
    NLS_CHARACTERSET: AL32UTF8
    NLS_LENGTH_SEMANTICS: BYTE
    Because of this, every time a KM creates a "$" table, CHAR and VARCHAR2 column lengths are implicitly defined in bytes.
    Obviously, when a 30-character string (the source datastores are on DB2) exceeds the 30-byte target length, the execution fails.
    Is there any way to avoid this? I've tried adding an "ALTER SESSION ..." step to the "LKM SQL to Oracle" KM I'm using, but from what I understand every action it performs on the database lives in its own session.
    I'd prefer to avoid performing ALTER SYSTEM as the 11g instance has other apps/products installed.
    Sorry for my bad english and thanks in advance for any suggestion...

    Sutirtha, your post was sort of inspiring!
    I've found a partial solution to my problem by changing, in Topology Manager, the CHAR and VARCHAR2 datatype implementations for the Oracle technology:
    (screenshot: http://www.abload.de/img/odi-datatype-implementu55n.png)
    Now only a little annoyance remains, caused by a known bug (4485954) in the Oracle JDBC driver: when I reverse-engineer tables from a UTF8 physical schema, ALL_TAB_COLUMNS.COLUMN_LENGTH is used instead of ALL_TAB_COLUMNS.CHAR_LENGTH.
    I should try a newer JDBC .jar, but IIRC that bug is resolved only in the DB 11g drivers, which ship for JDK 1.5/1.6 and which ODI (JDK 1.4-based) cannot use...
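The byte-versus-character mismatch behind the original failure is easy to reproduce in plain Java: under AL32UTF8, accented characters take more than one byte each, so a string can fit a CHAR-semantics column and still overflow a BYTE-semantics one.

```java
import java.nio.charset.StandardCharsets;

// Shows why BYTE length semantics break with AL32UTF8: accented characters
// occupy two bytes each in UTF-8, so byte count exceeds character count.
public class ByteVsCharSemantics {
    public static void main(String[] args) {
        String s = "àèìòù"; // 5 characters, e.g. from a DB2 source
        int chars = s.length();
        int bytes = s.getBytes(StandardCharsets.UTF_8).length;
        // prints "5 chars, 10 bytes": a VARCHAR2(5 BYTE) column cannot hold it,
        // while a VARCHAR2(5 CHAR) column can.
        System.out.println(chars + " chars, " + bytes + " bytes");
    }
}
```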

  • Very simple, but not working Table to Flat File

    I'm new to ODI, but I am having too much difficulty performing a very basic task. I have data in a table and I want to load it into a flat file. I've been all over this board, all over the documentation, and all over Google, and I cannot get this to work. Here is a rundown of what I have done thus far:
    1. created a physical schema under FILE_GENERIC that points to my target directory
    2. created a logical schema that points to my new physical schema
    3. imported a test file as a model (very simple, two string columns, 50 chars each)
    4. set my parameters for my file (named columns, delimited, filename resource name, etc.)
    5. created a new interface
    6. dragged my new file model as the target
    7. dragged my source table as well
    8. mapped a couple of columns
    9. had to select two KMs: LKM SQL to SQL (for the source) & IKM SQL to File Append (for the target)
    10. execute
    Now, here is where I started hitting problems. This failed in the "load data" step with the following error:
    7000 : null : java.sql.SQLException: Column not found: C1_ERRCODE
    I found a note on the forum saying to change the "Staging Area Different From Target" setting on the Definition tab. Did that and selected the source table's schema in the dropdown, ran again. Second error:
    942 : 42000 : java.sql.SQLException: ORA-00942: table or view does not exist
    This occurred in the "Integration - insert new rows" step.
    The crazy thing is that a prior step ("Export - load data") succeeded and the data shows up in the output file!
    So why is ODI trying to export the data again? And why am I getting an ORA error when the target is a file?
    Any assistance is appreciated...
    A frustrated noob!
    Edited by: Lonnie Morgan (CALIBRE) on Aug 12, 2009 2:58 PM

    I found the answer, but I'm still not sure why this matters...
    Following this tutorial, I recreated my mapping:
    [http://www.oracle.com/technology/obe/fusion_middleware/ODI/ODIproject_table-to-flatfile/ODIproject_table-to-flatfile.htm]
    I was still getting the same error. I reimported my IKM and found that the "Multi-Connections" box had been unchecked before, and now it was checked. I may have unchecked it during my trial & error.
    Anyway, after running the mapping again with the box checked, the extract to a file worked.
    So I'm happy now. But I am also perturbed by the function of this checkbox and why it caused so much confusion within ODI.

  • Target and Source Table - Query from ODI Repository

    Hi folks,
    Can anybody help me? I am trying to query the following from an ODI 11g work repository:
    All tables, and for each table the tables listed in Designer as "filled by" (I don't know the exact translation as I am using the German ODI Designer); in other words, all tables and the tables they depend on. The reason is to run a CONNECT BY query on that.
    There is a solution published on ODIEXPERTS: http://odiexperts.com/interface-mapping-query, but it does not show me the expected results. Does anyone have an idea how to get a simple list like this:
    TARGET          SOURCE
    TAB1           TAB2
    TAB1           TAB3
    TAB2           TAB4
    TAB3           TAB5
    TAB3           TAB6
    TAB6           TAB7
    Using the Metadata Navigator is not an option as we don't have WebLogic installed, and I need the data for further processing.

    If memory serves, you still have an SNP_POP table in that release; join it to the model tables (the join columns are obvious, if I recall) to get the datastore names and you're more or less there.
    I don't have that table in 11.1.1.5, and we moved over a while back, so I can't really take a look anytime soon.

  • URGENT: how to get the error table populated in ODI

    Hello all,
    I am working on an ODI interface where the mapping is from an XML file to database tables. I want any bad data (length greater than the target table's column length, or a datatype mismatch) to be rejected or separated.
    I am using LKM, CKM, and IKM knowledge modules, but the erroneous record is not populated even in the error table.
    When I try to insert, say, 4 records and the 2nd record is erroneous, only the 1st record is inserted. I want the erroneous record to be inserted into the error table.
    Please suggest if anybody has any related information on getting this done. That would be really helpful.
    Thanks in advance!

    Hello Phil,
    Thanks for your update.
    The option you mentioned, having the column as CHAR(4000), I cannot use, as my target tables are seeded tables.
    I have also tried to get the data at least into the C$ tables with their length as VARCHAR(4000), but it is not allowing the insert into the C$ tables either.
    In my case I am getting data such as date_of_birth in VARCHAR datatype from the source XML, and I have converted it to DATE while mapping it to a target column with DATE as the datatype.
    What if the DATE in the XML source is invalid? It gives me the error "Invalid datetime format" during execution. How would I identify which record this error is for? If I could get the erroneous record at least into the C$ table, I could find a workaround for the correction.
    I hope you have a solution for this problem :)
    Thanks.

  • Can't get Sequencing working (table or view does not exist...)

    Hello,
    I'm running JDeveloper 10.1.2 on a Oracle 9i database.
    All my mappings are correct; I mean, I can query the database using the TopLink API and list all the objects in the database. What is not working is inserting new objects, due to this sequencing problem.
    This is what i'm doing:
    myClass newObject = (myClass) uow.newInstance(myClass.class);
    uow.assignSequenceNumber(newObject);
    But this is generating this Exception:
    Local Exception Stack: Exception [TOPLINK-4002] (OracleAS TopLink - 10g (9.0.4.5) (Build 040930)): oracle.toplink.exceptions.DatabaseException
    Exception Description: java.sql.SQLException: ORA-00942: table or view does not exist
    Internal Exception: java.sql.SQLException: ORA-00942: table or view does not exist
    Error Code: 942
    I figured this could be a problem with the sequencing configuration and all that stuff (I don't have a sequence table), so I checked my configuration...
    'TopLink Mappings' is set up to 'Use Native Sequencing', as per the notes on http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/web.904/b10313/dataaccs.htm. This page also states:
    "When using native sequencing with Oracle, specify the name of the sequence object (the object that generates the sequence numbers) for each descriptor. The sequence preallocation size must also match the increment on the sequence object. (...) Use preallocation and native sequencing for Oracle databases."
    And this is exactly what I'm doing. In the myClass mappings, the 'Use Sequencing' checkbox is checked; the sequence name is typed correctly (it is the same sequence I've been using for years prior to TopLink); the selected table is the correct one, as is the id field on the table.
    Another thing: the 'Preallocation Size' in TopLink Mappings matches the increment value of the database sequence.
    I must be missing something...
    Can anybody shed some light on this?
    Thanks a lot to all.

    Hello
    The method accessing check box tells TopLink to use the get/set methods on your object to set its attributes; if it is unselected, TopLink will set the attribute directly. Great if you have business logic in your get/set methods that you don't want TopLink to hit each time it creates an object after a read from the database (or registers one, etc.).
    The problem you are encountering seems like you have sequencing set to "Use Default Sequencing Table". This option is on the "TopLink Mappings" page on the "Database info" tab, just below where you would have set which connection to use. You will need to ensure "Use Native Sequencing" is selected. If you set this information through code instead, be sure you do it on the database login before you log in to the session.
    If this doesn't help, try turning on TopLink logging to see the SQL statement that causes the problem. Knowing the table name and the SQL being used might help figure out where it is being set.
    Best Regards,
    Chris

  • How to handle errors for a DB table to DB table transform in ODI

    Hi,
    I have created two tables in two different schemas, source and target, where there is a field (e.g. PLACE) whose datatype is VARCHAR2 and whose inserted data is a string.
    In the ODI Designer model I have set the type of PLACE to NUMBER in both source and target and done the mapping accordingly.
    When it is executed it should give an error, but it completed, and no data was inserted, neither into the target table nor into the error table in the target schema (E$_TARGET_TEST, which is created automatically).
    Why is the error not raised, and how can such errors be handled?
    Please help.
    The codes for source and target tables are as follows:
    source table code:
    CREATE TABLE "DEF"."SOURCE_TEST"
    (
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    )
    inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', 'kol')
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', 'jad')
    target table code:
    CREATE TABLE "ABC"."TARGET_TEST"
    (
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    )

    Hi,
    I have used the following KMs in my transformation with the following options:
    IKM SQL Incremental Update
    INSERT    <Default>:true
    UPDATE    <Default>:true
    COMMIT    <Default>:true
    SYNC_JRN_DELETE    <Default>:true
    FLOW_CONTROL    <Default>:true
    RECYCLE_ERRORS    <Default>:false
    STATIC_CONTROL    <Default>:false
    TRUNCATE    <Default>:false
    DELETE_ALL    <Default>:false
    CREATE_TARG_TABLE    <Default>:false
    DELETE_TEMPORARY_OBJECTS     <Default>:true
    LKM SQL to SQL
    DELETE_TEMPORARY_OBJECTS    <Default>:true
    CKM Oracle
    DROP_ERROR_TABLE    <Default>:false
    DROP_CHECK_TABLE    <Default>:false
    CREATE_ERROR_INDEX    <Default>:true
    COMPATIBLE    <Default>:9
    VALIDATE    <Default>:false
    ENABLE_EDITION_SUPPORT    <Default>:false
    UPGRADE_ERROR_TABLE    true
