Sunopsis memory engine

Hi all,
I've got a doubt about using the Sunopsis Memory Engine as a staging area. All the examples and blogs explain things with Sunopsis. Can it survive any amount of data load, or do we have to use some other means for bulk data loads? Can we create a customised staging area, such as a dedicated schema or database?
thanks

Sutirtha Roy wrote:
Hi ,
The Sunopsis Memory Engine should only be used when you do not have any relational schema to work with as a staging area.
If your data volume increases, you are bound to hit performance issues with the Sunopsis Memory Engine.
The Sunopsis Memory Engine is mainly provided for small demo purposes.
Thanks,
Sutirtha

I have to disagree. I don't believe it should only be used when there is no relational schema; there are occasions when it is very useful.
For instance, when loading to Planning or Essbase from a flat file there is no need to load into a relational staging area, and using the memory engine can outperform one and works perfectly well.
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Issue while using SUNOPSIS MEMORY ENGINE (High Priority)

    Hi Gurus,
    While using the SUNOPSIS MEMORY ENGINE to generate a .csv file, with a database table as the source, it throws the following error in the Operator:
    ODI-1228: Task SrcSet0 (Loading) fails on the target SUNOPSIS ENGINE connection SUNOPSIS MEMORY ENGINE.
    Caused By: java.sql.SQLException: unknown token
    (LKM used : LKM Sql to Sql.
    IKM used : IKM Sql to File Append.)
    Can you please help me with this ASAP, as it has become a showstopper preventing me from proceeding further?
    Any help will be greatly appreciated.
    Many Thanks,
    Pavan
    Edited by: Pavan. on Jul 11, 2012 10:22 AM

    Hi All,
    The issue was resolved successfully.
    The solution is:
    change the E$_, I$_, J$_, ... work table prefixes to E_, I_, J_, ... (i.e. remove the '$' symbol) in the PHYSICAL SCHEMA of the SUNOPSIS MEMORY ENGINE, as per the information given below.
    When running interfaces and using a XML or Complex File schema as the staging area, the "Unknown Token" error appears. This error is caused by the updated HSQL version (2.0). This new version of HSQL requires that table names containing a dollar sign ($) are surrounded by quotes. Temporary tables (Loading, Integration, and so forth) that are created by the Knowledge Modules do not meet this requirement on Complex Files and HSQL technologies.
    As a workaround, edit the Physical Schema definitions to remove the dollar sign ($) from all the Work Tables Prefixes. Existing scenarios must be regenerated with these new settings.
    It worked fine for me.
    Thanks ,
    Pavan Kumar
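The quoting rule behind the "Unknown Token" fix can be illustrated with any SQL engine that follows standard identifier rules. This is a minimal sketch using Python's built-in sqlite3 as a stand-in for HSQLDB 2.0; the table names are borrowed from the error messages in this thread, not ODI's actual generated DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A work-table name containing '$' must be quoted to be a portable,
# standard-conforming identifier.
conn.execute('CREATE TABLE "C$_0App_AppData" (id INTEGER)')

# After removing the '$' from the work-table prefix (the documented
# workaround), the name is a plain identifier and needs no quoting.
conn.execute("CREATE TABLE C_0App_AppData (id INTEGER)")

# Both tables now exist.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(names)  # ['C$_0App_AppData', 'C_0App_AppData']
```

Because the Knowledge Modules emit the temporary-table names unquoted, dropping the `$` from the prefixes (rather than patching every KM to add quotes) is the simpler fix, which is exactly what the workaround above does.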

  • Sunopsis Memory Engine Issue

    Hi,
    I recently started getting the following error while executing an interface which uses the Sunopsis Memory Engine as the staging area:
    ODI-1228: Task SrcSet0 (Loading) fails on the target SUNOPSIS_ENGINE connection SUNOPSIS_MEMORY_ENGINE.
    Caused By: java.sql.SQLException: statement is not in batch mode
    The interface had previously been executing fine for a number of days. After some debugging I narrowed it down to the fact that when the source file was loaded to the staging area in the interface, if it had more than 210 records it would generate this error. I'm struggling to work out why this should be the case as I'm sure this isn't a restriction I have ever encountered before. Any ideas would be appreciated!!

    No, nothing strange in that row. I can remove any record in the file or add any record at row 211 and the result is always the same. There are only 2 fields on each line, so it's a pretty basic recordset.

  • Memory Engine

    All,
    I have a few questions and am hoping someone can help:
    1. Can we replace the Sunopsis Memory Engine with an Oracle schema?
    2. What exactly does this engine do, and where can we see its contents?
    3. In which table is the $ stuff saved?
    4. Can we have an Oracle schema as the work schema that we have to assign in the data server?
    5. We have a few interfaces, and when we run them all at the same time we get the following error. How do we get around this?
    The package XYZ_Load with session ID 46158 encountered an error and returned with return code -21 and error message -21 : S0001 : java.sql.SQLException: Table already exists: C$_0App_AppData in statement [create table "C$_0App_AppData"]
    java.sql.SQLException: Table already exists: C$_0App_AppData in statement [create table "C$_0App_AppData "]
    6. One more error I'm running into:
    The package XYZ_Load with session ID 48158 encountered an error and returned with return code 0 and error message 0 : null : java.sql.SQLException: Driver must be specified
    java.sql.SQLException: Driver must be specified
         at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
         at com.sunopsis.sql.SnpsConnection.u(SnpsConnection.java)
         at com.sunopsis.sql.SnpsConnection.connect(SnpsConnection.java)
         at com.sunopsis.sql.SnpsQuery.updateExecStatement(SnpsQuery.java)
         at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.getTargetOrderMetaData(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandScenario.treatCommand(DwgCommandScenario.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.k(e.java)
         at com.sunopsis.dwg.cmd.g.A(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Thanks
    K

    1. You can use any Oracle schema as the work schema when you are loading to Oracle tables using interfaces. However, this may impact load performance. Standard practice is to take two schemas in a single database and, inside Topology, mark one as the work schema and the other as the physical (data) schema.
    4. In Topology, no. Why would you want to do that? In your system design you may select an Oracle schema as your temp or staging area, but it is not necessary to tell ODI the same thing. For example, in the case of Hyperion, you may load an Oracle schema from input sources and then load from staging to Planning or other cubes; ODI does not need to know this. If the data volume is high, there is really no point in using the memory engine.
    5. You are probably running asynchronous executions. Do you have a case where more than one interface (say INT1 and INT2) updates a single table? In such cases, if you fire them together, INT1 is fired first and, before it finishes, INT2 is started. Because the target table is the same, the C$ table name is the same, so the create statement for INT2 fails while INT1 is still using the C$ table. Asynchronous processing needs to be planned very carefully, after evaluating system capacity, the capacity of the source systems, network load and performance requirements.
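The collision in question 5 is easy to reproduce outside ODI. A minimal sketch, again using sqlite3 as a stand-in and the work-table name from the error message above (two "interfaces" sharing one staging database both try to create the same C$ table):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Session 1 (INT1) creates the shared work table and starts loading.
db.execute('CREATE TABLE "C$_0App_AppData" (id INTEGER)')

# Session 2 (INT2), fired before INT1 finishes, tries to create the
# same work table and fails -- the same failure mode as the
# "Table already exists: C$_0App_AppData" error quoted above.
try:
    db.execute('CREATE TABLE "C$_0App_AppData" (id INTEGER)')
except sqlite3.OperationalError as exc:
    collision = str(exc)

print(collision)
```

The usual ODI-side fixes are to serialize the two interfaces in the package, or to give each interface a distinct work-table prefix so the generated C$ names no longer clash.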

  • ODI 11.1.1.5 Issue with In-Memory Engine!

    Hi All,
    I was in the process of testing an interface (Flat File -> Planning) using the Memory Engine as the Optimization Engine, and was getting the following error:
    ODIPlanningException: The source result set contains column [C1_CHILD] which has no corresponding column on the planning side.
    After some research, I found on John's blog
    http://john-goodwin.blogspot.com/2011/09/odi-series-issues-with-11115-and.html
    that this is a known issue in V11.1.1.5 and changing the Memory Engine to say MSSQL (or as John suggested to Oracle) had my Interface run successfully. However, I do have a few questions on this;
    1. The first step, Drop work table, failed after changing the Optimization Engine, with the error: Cannot drop the table "abc" because it does not exist or you do not have permission.
    I don't understand this message completely. Does it mean it is not able to drop the table because it doesn't exist? Well, I ran the same interface a second time (so the table exists from the previous run) and I still got the same message.
    Any comments?
    2. What is the impact of changing the Optimization Engine from Memory to MSSQL (essentially my staging area)? Is there anything else I need to be aware of by doing so?
    3. Finally, is there a patch for this now so In-Memory Engine could be used?
    Appreciate your comments.
    Thanks

    It is fixed in 11.1.1.6, or for 11.1.1.5 have a look at patch 12905298.
    Cheers
    John
    http://john-goodwin.blogspot.com/
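On question 1 above: the failing Drop work table step is usually benign. It just clears out any leftover work table before loading, and it errors when there is nothing to drop (the interface's own cleanup typically removes the table at the end of a run, so the message can appear on every run). A minimal sketch of the pattern, using sqlite3 as a stand-in and the work-table name from elsewhere in this thread:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# An unconditional DROP fails when the work table is absent, which is
# exactly what the "Drop work table" step reports.
try:
    db.execute('DROP TABLE "C$_0Account"')
except sqlite3.OperationalError as exc:
    first_run_error = str(exc)

# A guarded drop (DROP TABLE IF EXISTS, or marking the KM step
# "ignore errors") makes the cleanup safe whether or not the
# table is there.
db.execute('DROP TABLE IF EXISTS "C$_0Account"')
print(first_run_error)
```

Whether a given KM already marks that step as "ignore errors" depends on the KM; many do, in which case the message shows as a warning rather than a failure.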

  • Loading Flat File and Temporary Table question

    I am new to ODI. Is there a method to load a flat file directly into the target table without having ODI create a temporary table? I am asking this to find out how efficient ODI is at bulk loading a very large set of data.

    You can always modify the KM to skip the loading of the temporary table and load the target table directly. Another option would be to use the SUNOPSIS MEMORY ENGINE as your staging database. But, this option has some drawbacks, as it relies on available memory for the JVM and is not suitable for large datasets (which you mention you have).
    http://odiexperts.com/11g-oracle-data-integrator-%E2%80%93-part-711g-%E2%80%93-sunopsis-memory-engine/
    Regards,
    Michael Rainey
    Edited by: Michael R on Oct 15, 2012 2:44 PM
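The "relies on available memory for the JVM" drawback is the key constraint: an in-memory engine keeps every staged row in process memory and discards everything when the process ends. A minimal illustration, using Python's in-memory sqlite3 as a stand-in for the memory engine (row counts and names are illustrative):

```python
import sqlite3

# In-memory staging: every row lives in the agent process's memory.
stage = sqlite3.connect(":memory:")
stage.execute("CREATE TABLE stage_rows (id INTEGER, payload TEXT)")
stage.executemany(
    "INSERT INTO stage_rows VALUES (?, ?)",
    ((i, "x" * 100) for i in range(10_000)),
)
count = stage.execute("SELECT COUNT(*) FROM stage_rows").fetchone()[0]
print(count)  # 10000

# Closing the connection discards the whole staging area -- nothing is
# persisted, and capacity is bounded by the process heap, which is why
# large datasets belong in a relational staging schema instead.
stage.close()
```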

  • URGENT: How to map an aggregated result to a target flat file?

    All,
    I am trying to map the result of the following query from 'source1' to my target which is a flat file.
    The query result set has 'one row per account' which is unique and record looks like :
    12300,9675,2008,6,11250,$-2095.48
    Questions>>>
    1. What is the proper context (at what level: project, global?) in which to create the $v_year and $v_period variables?
    I need the values of these 2 variables to be recognised at the end, once I 'package' the mapping,
    so the user can pass in the values at runtime.
    2. Can this query somehow be plugged in as a 'filter' on top of 'source1' in the interface, with the result then mapped to the target?
    What is the alternative if not? (Dump it to another temp table in the SUNOPSIS memory engine?)
    SELECT
    PS_.product,
    PS_.operating_unit,
    $V_YEAR,
    $V_PERIOD,
    PS_.account,
    SUM(PS_.posted_total_amt) TOT_AMT
    FROM psoftdb.ps_ledger PS_
    WHERE
    ((PS_.ledger='ACTUALS')
    AND (PS_.accounting_period <=$V_PERIOD)
    AND (PS_.fiscal_year=$V_YEAR)
    AND (SUBSTR(PS_.account,1,1)='1' OR
    SUBSTR(PS_.account,1,1)='2' OR
    SUBSTR(PS_.account,1,1)='3'))
    GROUP BY
    PS_.product,
    PS_.operating_unit,
    PS_.account

    In order to do this you will need to declare two variables at project level, with non-persistent type. You will need to have reverse-engineered the source table into a data model. You should also define your flat file as a data model; if this does not already exist, you will need to define it manually.
    Create a package in which the first steps are to declare the variables.
    then insert an interface which uses your source table as a source and your flat file as a target.
    In the interface select "Staging area different from target" and select the schema of your source from the drop-down.
    In your source, drag all the columns on which you want to filter (including accounting_period and fiscal_year) on to the grey background and fill in the filter details. Use the expression editor to set the filter, selecting the variable from the list, which ensures the correct syntax is used.
    Fill in all the other mapping details.
    On the flow tab select the IKM SQL to File Append as your IKM. This should allow you to create your output in one step.
    Build a scenario from your package, and when you execute it pass in the variables:
    e.g. startscen MYSCEN MYCONTEXT 001 "-MYPROJ.V_YEAR=1999" "-MYPROJ.V_PERIOD=07"
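The filter the expression editor produces can be sketched as follows. This is a sketch only: MYPROJ matches the project code assumed in the startscen example, and in ODI mappings and filters project variables are referenced with a # prefix (rather than the $ used in the original query text):

```sql
-- Filter expression on the PS_ source datastore:
PS_.ledger = 'ACTUALS'
AND PS_.fiscal_year = #MYPROJ.V_YEAR
AND PS_.accounting_period <= #MYPROJ.V_PERIOD
AND SUBSTR(PS_.account, 1, 1) IN ('1', '2', '3')
```

At execution time ODI substitutes the variable values passed on the startscen command line before the statement is sent to the source database.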

  • How to load data from oracle table to XML using ODI ?

    Hello Team
    I am trying to load data from an Oracle table to an XML file. I have configured the physical and logical schemas for it. I have used LKM SQL to SQL and IKM SQL to SQL Append, with the Sunopsis Memory Engine as the staging area.
    After this, when I execute the interface, it runs without any errors, but to my surprise I do not find the relevant data in the XML file. What could be the reason?
    Please look into this and let me know how to solve it.
    Thanks.

    908458 wrote:
    Hello Phanikanth
    Even IKM SQL Control Append is not solving the problem. There are many "order" columns in the target XML table. I have not mapped them or specified any order. Does the problem persist because of this?
    Thanks
    Manoj
    Oracle to XML file loading issue in ODI 10.1.3.5

  • ODI Error when Loading data from a .csv file to Planning

    Hello,
    I am trying to load data from a csv file to Planning using ODI 10.1.3.6 and I am facing this particular error. I am using the Sunopsis Memory Engine as the staging area.
    7000 : null : java.sql.SQLException: Invalid COL ALIAS "DEFAULT C12_ALIAS__DEFAULT" for column "ALIAS:"
    java.sql.SQLException: Invalid COL ALIAS "DEFAULT C12_ALIAS__DEFAULT" for column "ALIAS:"
         at com.sunopsis.jdbc.driver.file.bb.b(bb.java)
         at com.sunopsis.jdbc.driver.file.bb.a(bb.java)
         at com.sunopsis.jdbc.driver.file.w.b(w.java)
         at com.sunopsis.jdbc.driver.file.w.executeQuery(w.java)
         at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.k(e.java)
         at com.sunopsis.dwg.cmd.g.A(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Code from Operator:
    select     Account     C1_ACCOUNT,
         Parent     C2_PARENT,
         Alias: Default     C12_ALIAS__DEFAULT,
         Data Storage     C3_DATA_STORAGE,
         Two Pass Calculation     C9_TWO_PASS_CALCULATION,
         Account Type     C6_ACCOUNT_TYPE,
         Time Balance     C14_TIME_BALANCE,
         Data Type     C5_DATA_TYPE,
         Variance Reporting     C10_VARIANCE_REPORTING,
         Source Plan Type     C13_SOURCE_PLAN_TYPE,
         Plan Type (FinStmt)     C7_PLAN_TYPE__FINSTMT_,
         Aggregation (FinStmt)     C8_AGGREGATION__FINSTMT_,
         Plan Type (WFP)     C15_PLAN_TYPE__WFP_,
         Aggregation (WFP)     C4_AGGREGATION__WFP_,
         Formula     C11_FORMULA
    from      TABLE
    /*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=Account.csvSNP$CRLOAD_FILE=Y:/1 Metadata/Account//Account.csvSNP$CRFILE_FORMAT=DSNP$CRFILE_SEP_FIELD=2CSNP$CRFILE_SEP_LINE=0D0ASNP$CRFILE_FIRST_ROW=1SNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=AccountSNP$CRTYPE_NAME=STRINGSNP$CRORDER=1SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=ParentSNP$CRTYPE_NAME=STRINGSNP$CRORDER=2SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Alias: DefaultSNP$CRTYPE_NAME=STRINGSNP$CRORDER=3SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Data StorageSNP$CRTYPE_NAME=STRINGSNP$CRORDER=4SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Two Pass CalculationSNP$CRTYPE_NAME=STRINGSNP$CRORDER=5SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Account TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=6SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Time BalanceSNP$CRTYPE_NAME=STRINGSNP$CRORDER=7SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Data TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=8SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Variance ReportingSNP$CRTYPE_NAME=STRINGSNP$CRORDER=9SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Source Plan TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=10SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Plan Type (FinStmt)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=11SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Aggregation (FinStmt)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=12SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Plan Type (WFP)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=13SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Aggregation 
(WFP)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=14SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=FormulaSNP$CRTYPE_NAME=STRINGSNP$CRORDER=15SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CR$$SNPS_END_KEY*/
    insert into "C$_0Account"
         C1_ACCOUNT,
         C2_PARENT,
         C12_ALIAS__DEFAULT,
         C3_DATA_STORAGE,
         C9_TWO_PASS_CALCULATION,
         C6_ACCOUNT_TYPE,
         C14_TIME_BALANCE,
         C5_DATA_TYPE,
         C10_VARIANCE_REPORTING,
         C13_SOURCE_PLAN_TYPE,
         C7_PLAN_TYPE__FINSTMT_,
         C8_AGGREGATION__FINSTMT_,
         C15_PLAN_TYPE__WFP_,
         C4_AGGREGATION__WFP_,
         C11_FORMULA
    values
         :C1_ACCOUNT,
         :C2_PARENT,
         :C12_ALIAS__DEFAULT,
         :C3_DATA_STORAGE,
         :C9_TWO_PASS_CALCULATION,
         :C6_ACCOUNT_TYPE,
         :C14_TIME_BALANCE,
         :C5_DATA_TYPE,
         :C10_VARIANCE_REPORTING,
         :C13_SOURCE_PLAN_TYPE,
         :C7_PLAN_TYPE__FINSTMT_,
         :C8_AGGREGATION__FINSTMT_,
         :C15_PLAN_TYPE__WFP_,
         :C4_AGGREGATION__WFP_,
         :C11_FORMULA
    Thanks in advance!

    Right-clicking "Data" on the model tab, can you see the data?
    In your code it says:
    P$CRLOAD_FILE=Y:/1 Metadata/Account//Account.csv
    Is the double slash before the file name correct?

  • NOT DISPLAYING IN LKM DROP DOWN BOX

    I want to load an XML file from a flat file. I want to use LKM File to SQL; it is available in my project.
    But I don't see LKM File to SQL in the LKM selector drop-down on the flow tab of the interface.

    Use LKM SQL to SQL.
    Make sure the staging area lies on an RDBMS or the Sunopsis Memory Engine, as your target is a file.
    Bhabani
    http://dwteam.in

  • Loading Data from Excel to Essbase  through ODI 11G

    Hi All,
    I am getting the error below when I click Execute in ODI. In the Excel data store I clicked reverse-engineer and it is working fine.
    -3100 : 37000 : java.sql.SQLException: [Microsoft][ODBC Excel Driver] Syntax error (missing operator) in query expression '[Account]   C1_ACCOUNT''
    java.sql.SQLException: [Microsoft][ODBC Excel Driver] Syntax error (missing operator) in query expression '[Account]    C1_ACCOUNT''
    at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6957)
    at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7114)
    oracle.odi.core.datasource.dwgobject.support.OnConnectOnDisconnectDataSourceAdapter$OnDisconnectCommandExecutionHandler.invoke(OnConnectOnDisconnectDataSourceAdapter.java:200)
    at $Proxy2.prepareStatement(Unknown Source)
    Thanks

    Hi,
    After fixing the error with the PRE_LOAD_MAXL_SCRIPT parameter, I right away ran into another problem.
    On data load I get the following exception: java.sql.SQLException: Unexpected token: ACCOUNT in statement [select   C1_ACCOUNT    ""Account] ...
    The SQL source and the Essbase target data store have the same columns, I have mapped them accordingly. As Staging area I use the Sunopsis Memory Engine. The exception is raised in Step '6 - Integration - SQL_TO_ESSBASE_DATA - Load data into essbase'.
    I have also tried a File source instead of the SQL source - same result.
    Any ideas?
    Many thanks and best regards,
    Peter

  • Error during dimension load in planning using ODI adapter

    I have created a Planning app in Classic and used the ODI Planning adapter to load dimensions from a csv file. I tried this with the Account and Entity dimensions. Both fail at the same step. This is the error I get:
    Error:
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Entity in statement
    java.sql.SQLException: Table not found: C$_0Entity in statement
    at org.hsqldb.jdbc.jdbcUtil.throwError(Unknown Source)
    at or
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Account in statement
    java.sql.SQLException: Table not found: C$_0Account in statement
    I am following John Goodwin's blog and am not sure if I am missing a step. I would appreciate any help with this.
    Thanks

    Thank you John for your response. Here are the details:
    Step 1 - Loading - SS_0 - Drop work table - It fails at this step with the following message:
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Account in statement [drop table "C$_0Account"]
    java.sql.SQLException: Table not found: C$_0Account in statement [drop table "C$_0Account"]
    Step 2, 3, 5 and 6 are successful
    Step 7 - Integration - Interface name - Report Statistics - This fails with the following error:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 2, in ?
    Planning Writer Load Summary:
         Number of rows successfully processed: 0
         Number of rows rejected: 8
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(Unknown Source)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(Unknown Source)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(Unknown Source)
         at com.sunopsis.dwg.cmd.e.i(Unknown Source)
         at com.sunopsis.dwg.cmd.h.y(Unknown Source)
         at com.sunopsis.dwg.cmd.e.run(Unknown Source)
         at java.lang.Thread.run(Thread.java:595)
    I have checked the "Staging Area different from Target" checkbox and selected "Sunopsis Memory Engine".
    I would appreciate your help - Thanks

  • Sunopsis_memory_engine_default data server missing

    Hi,
    We are setting up an ODI installation to load data into a Planning application. We have followed most of the steps in John Goodwin's blog (thank you, John) but we are missing the configuration for the default data server under Technologies -> Sunopsis Engine in the Topology Manager.
    We have installed the Essbase and Planning adaptors, and have looked through the TECH_sunopsis_engine XML files, but we see no difference in version from the backups taken before installation.
    Could anyone please help us / point us in the right direction regarding the configuration of sunopsis_memory_engine_default: the JDBC settings, user, password, etc.?
    I'm sorry if this is a basic question; we are total newbies with ODI.
    Thanks, and regards

    Hi,
    The Sunopsis Memory Engine data server should be installed by default; I am not sure what has gone wrong with your installation.
    The data server is called :- SUNOPSIS_MEMORY_ENGINE
    User :- sa
    Password :- no password, leave blank
    JDBC Driver :- org.hsqldb.jdbcDriver
    JDBC Url :- jdbc:hsqldb:.
    You will also need to insert a physical schema, but you shouldn't have to change anything, though you will have to add a context / logical schema.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • 7000 : null : java.sql.SQLException: Invalid format description

    I was running John Goodwin's ODI first steps, but in version 10.1.3.5 (almost the same), and checked everything twice (it is already associated to ODI_DEMO_CONTEXT too).
    But when I run the "ODI Series 4" sample, I receive that error.
    I guess it's related to the Sunopsis Memory Engine setup, but the sample is straightforward, so any help will be very useful.
    Thanks in advance


  • ODI keeps on running at the load data step

    Hi ,
    I am loading data from a flat file to an Oracle DB.
    When I check the Operator, it is always running at the "Load data" step and never moves on to the next step (Insert new records). But when I check the work table, all the data has been loaded into the work table (C$ table). Is there any way to check the log to see at which step my scenario is running?
    thanks ,

    Could you give some more information on:
    1. Staging area: is it different from the target? The standard Sunopsis Memory Engine / in-memory engine is too slow for this.
    2. What are the mappings (any lookups/joins that are performed) and where are they performed: source, staging or target? (High-volume lookups or joins can determine the time taken, depending on where they are implemented.)
    3. LKM: try using bulk loading or technology-specific LKMs instead of the generic ones.
    Regards,
    NJ
