Using a data rule at the table level

Hi
We want to implement a data rule that compares the grand total in the target table with the grand total in the source table.
In case of a difference, we only want one record in the error table.
That does not seem possible out of the box. We could implement a procedure in the workflow, but we would rather use the built-in features (if they exist).
Has anyone solved this problem?
Best Regards
Klaus

Hi Klaus,
as far as I know, data rules only examine the incoming data. If you want to compare the loaded grand total with the source grand total, I see two different approaches:
1. Use the post-mapping process operator to call a custom procedure that calculates the totals, compares them, and in case of a difference writes directly into the error table, as sketched below.
2. Create a second mapping to do these checks and call it directly after the original mapping.
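For what it's worth, a minimal sketch of the procedure from approach 1 (all table, column, and error-table names are placeholders, and the error table here is a plain logging table; a real OWB error table has additional audit columns):
CREATE OR REPLACE PROCEDURE check_grand_total IS
  v_src NUMBER;
  v_tgt NUMBER;
BEGIN
  SELECT SUM(amount) INTO v_src FROM src_table;
  SELECT SUM(amount) INTO v_tgt FROM tgt_table;
  IF NVL(v_src, 0) <> NVL(v_tgt, 0) THEN
    -- exactly one record per run, as requested
    INSERT INTO total_check_err (check_time, src_total, tgt_total)
    VALUES (SYSDATE, v_src, v_tgt);
    COMMIT;
  END IF;
END;
/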
Regards,
Carsten.

Similar Messages

  • How do we use Data rules/error table for source validation?

    We are using OWB repository 10.2.0.3.0 and OWB client 10.2.0.3.33. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    I reviewed the posting
    Re: Using Data Rules
    Thanks for this forum.
    I want to apply data rules to a source table/view, and rows that violate a rule should go to the defined error table. Here is an example.
    Table ProjectA
    Pro_ID Number(10)
    Project_name Varchar(50)
    Pro_date Date
    As per the above posting, I created the table in the object editor and created the data rule
    NAME_NOT_NULL (i.e. project name not null). I specified the shadow table name as ProjectA_ERR.
    In the mapping editor, I have ProjectA as the source, but I cannot find the error table name or the defined data rules in the table properties, and the ERR group is not showing up in the source table.
    How do we bring the defined data rules and error table into the mapping?
    Are there any additional steps/processes?
    Any ideas?
    Thanks in advance.
    RI

    Hi,
    Thanks for your reply/pointer. I reviewed the blog. It is interesting.
    What is the version of OWB used in this blog?
    After defining the data rule/shadow table, I deployed the table via the Control Center. It created an error table with all the source columns in alphabetical order. So if the primary key is the first column in my source (and does not start with 'A'), it will appear in the middle of the columns in the error table.
    How do we prevent/work around this?
    If my source (a view) is in schema A, how do we create the error table in schema B for that source?
    Is it feasible?
    I brought the error table details in mapping. Configured the data rules/error tables.
    If I pick the 'MOVE TO ERROR' option, I get "VLD-2802: Missing delete matching criteria in table. The condition is needed because the operator contains at least one data rule with a MOVE TO ERROR action".
    Under conditional loading, I have 'All constraints' as the matching criteria.
    I changed it to 'No constraints' and still get the above error.
    If I choose the 'REPORT' option instead of 'MOVE TO ERROR', the error goes away.
    Any idea?
    Thanks in advance.
    RI

  • Creating XML file using data from database table

    I have to create an XML file using data from multiple tables. The problem I am facing is that the data is huge (millions of rows), so I was wondering whether there is an effective way of creating such an XML file.
    It would be great if you can suggest some approach to achieve my requirement.
    Thanks,
    -Vinod

    An example from the forum: Re: How to generate xml file from database table
    Edited by: Marco Gralike on Oct 18, 2012 9:41 PM
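    One route, for what it's worth, is to let the database build the XML with the SQL/XML functions and stream it out row by row, rather than aggregating millions of rows into one giant document in memory. A minimal sketch (table and column names are made up):
    -- one XML fragment per row; fetch/spool these in batches and wrap them
    -- with a root element in the consuming process, instead of building the
    -- whole document in one pass with XMLAGG
    SELECT XMLELEMENT("row",
             XMLFOREST(t.id AS "id",
                       t.name AS "name")).getClobVal() AS xml_frag
    FROM   big_table t;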

  • Complicated Question (see pdf): Use data from one table to find the same data in a second table and take other data from that table and place it in a third table. :)

    I don't even know if this is possible.
    I'm using iWork '09.
    View PDF

    I hope I can clarify:
    For our purposes here:
    Table 1 = "Step 2 - Product Sizes"
    Table 2 = "Option id Master"
    Table 3 = "Export - Product Info"
    Table 1:
    The user would enter values for "productcode," "Horz," and "Vert"
    "Size" would auto fill based on values in Horiz and Vert (I have this taken care of already).
    Table 2: This is a completely static table that I want to search against. - Data from other tables in the doc does not alter or change the data in this doc.
    We just want to look at table 2. Find the existing value in "table 2 : size" column that matches the "table 1 : size" column  and then pull the "optionids" and "productprice" from that row.
    Can the value from "Table 1 : Size" be used as a search term in "Table 2 : Size?"
    Table 3: The user does not enter any values on this table. 
    "productcode" is pulled from table 1 - "Table 1 :: A5" = "Table 3 :: A5"
    "optionids" and "productprice" are pulled from Table 2 columns "D" and "E" - however we do not know which Table 2 row it is pulling from until we enter data in Table 1.
    As I'm writing this I'm realizing that
    A. this is probably really confusing to you.
    B. this may be impossible inside of numbers.
    If you have some other method that would facilitate the same outcome but be structured differently, please let me know.
    Maybe it will help you understand further what I am doing if I describe my current workflow:
    I record the size of a piece of art.
    Then I manually go to my "Option id Master" and find the same size.
    I then copy the corresponding "optionids" and "productprice" cells. (these options control the prices displayed on my website)
    I go to my "Export - Product Info" table and paste the values in the corresponding cells.
    I was hoping to automate this as it takes a long time when you have hundreds of products.
    Thanks for the help!
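    (For what it's worth, a lookup along these lines might do what Table 3 needs in Numbers '09. This is only a sketch: it assumes the sizes sit in C2:C100 and the optionids in D2:D100 of "Option id Master", and that the size for the row is in cell D5 of "Step 2 - Product Sizes".)
    =INDEX(Option id Master :: D2:D100, MATCH(Step 2 - Product Sizes :: D5, Option id Master :: C2:C100, 0))
    The same formula with E2:E100 in place of D2:D100 would pull "productprice".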

  • Report using Data from different tables

    Hello,
    I am trying to convert a COBOL batch program to an Oracle Reports 6i tabular report.
    The data is fetched from many different tables, and there is a lot of processing (i.e., based on the value of a column from one table, additional processing from different tables is needed) to generate the desired columns in the final report.
    I would like to know the best strategy to follow in Oracle Reports 6i. I heard that CREATE GLOBAL TEMPORARY TABLE is an option (or a REF CURSOR?), but I do not know much about its usage. Can somebody guide me about this or any other better way to achieve the result?
    Thank you in advance
    Priya

    Hello,
    There are many, many options available to you, each of which has advantages and disadvantages. This is why it is difficult to answer "what is best?" without a lot more detail about your specific circumstances.
    In general, you're going to be writing PL/SQL to do any conditional logic that cannot be expressed as pure SQL. It can be executed in the database, or it can be executed within Reports itself. And most report developers do some of both.
    As a general rule, you want to send only the data you need from the database to the report. This means you want to do as much filtering and aggregating of the data as is readily possible within the database. If this cannot be expressed as plain SQL queries, then you'll want to create stored procedures to help do this work.
    Generally, the PL/SQL you create for executing within the report should be focused on control of the formatting, such as controlling whether a field is visible, or controlling display attributes for conditional formatting.
    But these are not hard and fast rules. In some cases, it is difficult to get all the stored procedures you might like installed into the database. Perhaps the dba is reluctant to let you install that many stored procedures. Perhaps there are restrictions when and how often updates can be made to stored procedures in a production database, which makes it difficult to incrementally adjust your reports based on user feedback. Or perhaps there are restrictions for how long queries are allowed to run.
    So, Reports offers lots of options and features to let you do data manipulation operations from within the report data model.
    In any case, Oracle does offer temporary table capabilities. You can populate a temp table by running stored procedures that do queries, calculations and aggregations. And you can define and initiate a dynamic query statement within the database and pass a handle to this query off to the report to execute (ref cursor).
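    As an illustration of the ref cursor route, a sketch using the classic EMP demo table (all names here are examples only):
    CREATE OR REPLACE PACKAGE report_data AS
      TYPE emp_rec IS RECORD (empno NUMBER, ename VARCHAR2(30), sal NUMBER);
      TYPE emp_cur IS REF CURSOR RETURN emp_rec;
      FUNCTION get_emps(p_deptno IN NUMBER) RETURN emp_cur;
    END report_data;
    /
    CREATE OR REPLACE PACKAGE BODY report_data AS
      FUNCTION get_emps(p_deptno IN NUMBER) RETURN emp_cur IS
        c emp_cur;
      BEGIN
        -- filter in the database so only the needed rows reach the report
        OPEN c FOR SELECT empno, ename, sal FROM emp WHERE deptno = p_deptno;
        RETURN c;
      END get_emps;
    END report_data;
    /
    The ref cursor query in the Reports data model would then just open and return this cursor.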
    From the reports side, you can have as many queries as you want in the data model, arranged in any hierarchy via links. You can parameterize and change the queries dynamically using bind variables and lexicals. And you can add calculations, aggregations, and filters.
    Again, most people do data manipulation both in the database and in Reports, using the database for what it excels at, and Reports for what it excels at.
    Hope this helps.
    Regards,
    The Oracle Reports Team --skw

  • How to split files when using DATA UNLOAD (External Table, 10g)?

    Hi,
    I am running 10gR2 and need to export partitions by using the data pump driver
    via dmp files (External Tables).
    Now, as a requirement, the created files cannot exceed the 2GB mark.
    Some of the partitions are larger than that.
    How could I split the partition so that I can create files smaller than 2GB? Is there a parameter I am not aware of, or do I need to do SELECT COUNT(*) FROM source_table PARTITION(partiton_01);
    and then work with ROWNUM?
    This example works fine for all partitions smaller than 2GB:
    CREATE TABLE partiton_01_tbl
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY def_dir1
      LOCATION ('inv_xt1.dmp')
    )
    PARALLEL 3
    AS SELECT * FROM source_table PARTITION(partiton_01);

    You could specify multiple destination files in the LOCATION parameter (the number of files should match the degree of parallelism specified). I am not aware of an option that would let the external table automatically add new data files as the partition size increases, so you'd likely have to do some sort of computation about the expected size of the dump file in order to figure out how many files to create.
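    For illustration, the statement from the question spread across three files (the file names are only examples; you would size the file count from the expected dump size):
    CREATE TABLE partiton_01_tbl
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY def_dir1
      LOCATION ('inv_xt1.dmp', 'inv_xt2.dmp', 'inv_xt3.dmp')
    )
    PARALLEL 3
    AS SELECT * FROM source_table PARTITION(partiton_01);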
    Justin

  • Using data from one table in another

    Can anyone help me? I'm new to this.
    I have a table called Mileage; Column A = Location, B = Number of miles
    I want to use the data in my Business Accounts table so that when I enter a location it will automatically enter the number of miles in the cell next to it from the Mileage table.

    Brilliant! Thank you.
    If the Mileage table does not have a location in it yet (because I've not entered it into the table yet), how can I get the result to show an error or an "oops" rather than the number for the nearest location?
    I've looked at the function descriptions but none of it is written in plain English.
    Thanks
    Paula
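    (A formula along these lines in Numbers '09 would show "oops" instead of the nearest match; this is only a sketch, assuming the location is typed into A2 of the Business Accounts table and that Mileage has locations in A2:A100 and miles in B2:B100. The final FALSE forces an exact match, and IFERROR catches the "not found" error.)
    =IFERROR(VLOOKUP(A2, Mileage :: A2:B100, 2, FALSE), "oops")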

  • Using data from multiple tables in COOIS

    We are in the process of upgrading from 4.6C to ECC 6.0. In 4.6C, we implemented the modification described in Note 434123 to add custom fields to the COOIS report.
    Our custom list contains data from both the order header and the order operations. Therefore, data was drawn from tables IOOPER_TAB and IOHEADER_TAB, which were both present in the old solution. However, the BADI solution used in ECC 6.0 only contains data in the structure belonging to the list type selected on the selection screen. The list types are defined in domain PPIO_LISTTYPE. The structure that is filled with data is set in program PPIO_ENTRY_FILL_TCOA, form 'fill_tcoa'. Unfortunately, there is no list type available that activates both IOOPER and IOHEADER.
    I also cannot find a method in the BADI to influence the content of TCOA. However, I don't want to create lengthy code to retrieve the data myself.
    The most tempting solution (a modification) is to:
    1. extend the value range of PPIO_LISTTYPE with the custom list
    2. add an entry to the CASE statement in form FILL_TCOA to place an 'X' in both 'cs_tcoa-oper_sel' and 'cs_tcoa-header_sel'.
    When using list type PPIOO00 (operations) and manually setting 'cs_tcoa-header_sel' to 'X' in the debugger, all data needed to create the desired output is present. Is it truly necessary to implement this as a modification, or is there a better way (without a modification) to activate data retrieval from multiple tables of logical database IOC?

    Thanks very much for your quick response Anup.
    1. If I create a view in R/3, please let me know how I should bring this information into BW. Should I create a table in BW, or should I create a generic DataSource on top of this view?
    Also, please let me know the logic (not the ABAP code) to create a view on these 4 tables. I need to incorporate this logic into my spec.
    2. Do I write a function module in BW?
    Thanks!!

  • Query to obtain tables in SH schema using data-dictionary views/tables.

    Hi,
    I have just installed the Oracle 10g database. I logged on to SQL*Plus as scott (and changed the password as prompted).
    I want to see the tables in the SH schema (which comes by default with the Oracle 10g database).
    The problem, I am facing is:
    I can see the schema name: SH in the result of the below query:
    SELECT * FROM ALL_USERS
    but, nothing is returned in the result of the below query:
    SQL> SELECT OWNER FROM ALL_TABLES WHERE OWNER='SH';
    Result: no rows selected.
    Why is this happening?
    What will be the correct query to obtain tables present in SH schema ?
    Thanks in Advance & Regards,
    Deeba

    conn /as sysdba
    SQL> select table_name from dba_tables where owner='SH';
    I think the user SCOTT has no privileges on the SH schema tables, so they do not appear through the ALL_TABLES view.
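    For example (connected as SYSDBA; SALES is one of the SH sample tables — ALL_TABLES only lists tables the current user can access):
    conn / as sysdba
    GRANT SELECT ON sh.sales TO scott;
    -- now, queried as SCOTT, ALL_TABLES will list SH.SALES:
    SELECT owner, table_name FROM all_tables WHERE owner = 'SH';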
    Thanks

  • Encrypt Credit card data - table level

    Hi Team,
    We want to encrypt the credit card data; please let me know how to do this.
    We want to encrypt the data at the table level, so that the specific column cannot be viewed by others, and also encrypt the column at the OS level.
    11i Version:
    Database: 10.2.0.5.0
    Apps: 11.5.10.2
    Thanks,

    Hi;
    1. Check what Shree has posted.
    2. If those notes do not help, you can try scrambling / data masking; see
    Re: How to prevent DBA from Seeing salary data
    3. If even that does not help, then raise an SR ;)
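    For completeness, if you are licensed for the Advanced Security Option, Transparent Data Encryption can encrypt the column on disk (10.2 supports column-level TDE). A sketch only; table and column names are made up, and note that TDE does not stop users who already have SELECT privilege from seeing the data — that is what the masking/VPD suggestion above is for:
    -- create the master key once (wallet location set in sqlnet.ora)
    ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";
    -- encrypt the credit card column at rest
    ALTER TABLE ap.payments MODIFY (card_number ENCRYPT USING 'AES256');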
    PS: Please don't forget to change the thread status to answered, if possible, when you believe your thread has been answered; otherwise other forum users lose time going through open questions that are already answered. Thanks for understanding.
    Regards
    Helios

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
    SQL>alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    IMPLICIT YES NO
    My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I run the commands below as well?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns
    4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
    is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database:
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature. If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA:
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0).
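    For example, with the DDL replication feature enabled, the schema-level command follows the same GGSCI pattern as the session shown in the question (the schema name is a placeholder):
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    GGSCI (db1) 2> ADD SCHEMATRANDATA your_schema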

  • Date rules management

    Hi gurus,
    I customized a date rule "Inicio + 7 días" (start date + 7 days) and modified the XML. When I execute it in test mode (date class: SRV_CUST_BEG; factory calendar: CO; time zone: UTC-5), the result is OK, but when I try it from my operation in CRMD_ORDER (SLFN) the rule adds 7 days counting weekends and holidays as well. Please help me.
    Regards

    Hi Mathew,
    I had the same problem, but I solved it using a duration.
    Is it mandatory for you to use a date rule? I guess not, so you can use a duration instead to fulfill your requirement.
    Regards
    Sidd

  • How to tune data loading time in BSO using 14 rules files?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MAXL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times by SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on the Oracle homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
    In an older thread, John said:
    As it is version 11 why not use parallel sql loading, you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for ASO Option only.
    Can I use this in my MAXL for BSO as well? Is there a sample?
    What else can be done to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column 1 --- column 2 --- column 3 --- column 4 --- ... --- column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Following 3 out of 14 (load-) rules files to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table layout above what GlennS mentioned as a "Data" column concept, which only allows a single numeric data value?
    In this case I can't tag two or more columns as "Data fields"; I can only tag one column as a "Data field". The other data fields I have to tag as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or may I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass? (See the sketch after the table.)
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
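    A sketch of such a view, assuming the existing view is called V_SOURCE with columns REGION, ID, PRODUCT and the 14 data columns:
    CREATE OR REPLACE VIEW v_source_stacked AS
    SELECT 'sales' AS account, region, id, product, sales AS data FROM v_source
    UNION ALL
    SELECT 'cogs', region, id, product, cogs FROM v_source
    UNION ALL
    -- ... one branch per remaining data column ...
    SELECT 'amount', region, id, product, amount FROM v_source;
    A single rules file against this view could then load all 14 accounts in one pass.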
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for awesome help,
    Zeljko

  • Error in mapping generation when using a data rule

    I hope someone can point me in the right direction on this one. I've searched all over but can't find a similar problem anywhere.
    This is OWB Client 10.2.0.1.31 and repository 10.2.0.1.0 in a 10g SE database. I have a table on which I have defined a data rule, which is deployed in the database along with its corresponding error table. The rule I have used is the built-in IS_DATE, and it is applied to a VARCHAR2 field that will store a date in a specific format. I have defined the table as an operator in a mapping and set the IS_DATE data rule action to MOVE TO ERROR. When I try to deploy the mapping I get the following errors in the log:
    Warning ORA-06550: line 244, column 4:
    PL/SQL: ORA-00907: missing right parenthesis
    Warning ORA-06550: line 221, column 3:
    PL/SQL: SQL Statement ignored
    Warning ORA-06550: line 3112, column 132:
    PLS-00103: Encountered the symbol "DD" when expecting one of the following:
    . ( ) , * @ % & | = - + < / > at in is mod remainder not
    range rem => .. <an exponent (**)> <> or != or ~= >= <= <>
    and or like LIKE2_ LIKE4_ LIKEC_ between || multiset member
    SUBMULTISET_
    The symbol "." was substituted for "DD" to continue.
    When I look at the generated code, there are lines like this one:
    not (wb_to_date("table"."column", Month dd, RRRR, Mon dd, RRRR, MM-DD-RRRR, MM/DD/RRRR, YYYY-MM-DD, YYYY/MM/DD) is not null
    Why is OWB not putting single quotes around the data formats from the built-in data rule? Is there something fundamental I'm missing in my warehouse model?

    Hi
    This is bug 5195315, which looks to be fixed in a 10.2 patch. If you can pick up the latest 10.2 patch, there are about three years' worth of fixes in it.
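    For comparison, the predicate from the snippet above should presumably quote the format masks, along these lines (reconstructed by hand from the generated line, not actual patched output):
    not (wb_to_date("table"."column", 'Month dd, RRRR', 'Mon dd, RRRR', 'MM-DD-RRRR', 'MM/DD/RRRR', 'YYYY-MM-DD', 'YYYY/MM/DD') is not null)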
    Cheers
    David

  • Using FDM to load data from oracle table (Integration Import Script)

    Hi,
    I am using an Integration Import Script to load data from an Oracle table into the work tables in FDM.
    I am getting the following error while running the script:
    Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
    Attaching the full error report
    ERROR:
    Code............................................. -2147217887
    Description...................................... Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
    At line: 22
    Procedure........................................ clsImpProcessMgr.fLoadAndProcessFile
    Component........................................ upsWObjectsDM
    Version.......................................... 1112
    Thread........................................... 6260
    IDENTIFICATION:
    User............................................. ******
    Computer Name.................................... *******
    App Name......................................... FDMAPP
    Client App....................................... WebClient
    CONNECTION:
    Provider......................................... ORAOLEDB.ORACLE
    Data Server......................................
    Database Name.................................... DBNAME
    Trusted Connect.................................. False
    Connect Status.. Connection Open
    GLOBALS:
    Location......................................... SCRTEST
    Location ID...................................... 750
    Location Seg..................................... 4
    Category......................................... FDM ACTUAL
    Category ID...................................... 13
    Period........................................... Jun - 2011
    Period ID........................................ 6/30/2011
    POV Local........................................ True
    Language......................................... 1033
    User Level....................................... 1
    All Partitions................................... True
    Is Auditor....................................... False
    I am using the following script
    Function ImpScrTest(strLoc, lngCatKey, dblPerKey, strWorkTableName)
    'Oracle Hyperion FDM Integration Import Script:
    'Created By:     Dhananjay
    'Date Created:     1/17/2012 10:29:53 AM
    'Purpose:A test script to import data from Oracle EBS tables
    Dim cnSS 'ADODB.Connection
    Dim strSQL 'SQL string
    Dim rs 'Recordset
    Dim rsAppend 'tTB table append rs object
    'Initialize objects
    Set cnSS = CreateObject("ADODB.Connection")
    Set rs = CreateObject("ADODB.Recordset")
    Set rsAppend = DW.DataAccess.farsTable(strWorkTableName)
    'Connect to the Oracle database
    cnss.open "Provider=OraOLEDB.Oracle.1;Data Source= +server+;Initial Catalog= +catalog+;User ID= +uid+;Password= +pass+"
    'Create query string
    strSQL = "Select AMOUNT,DESCRIPTION,ACCOUNT,ENTITY FROM +catalog+.TEST_TMP"
    'Get data
    rs.Open strSQL, cnSS
    'Check for data
    If rs.bof And rs.eof Then
    RES.PlngActionType = 2
    RES.PstrActionValue = "No Records to load!"
    Exit Function
    End If
    'Loop through records and append to the tTB table in the location's DB
    If Not rs.bof And Not rs.eof Then
    Do While Not rs.eof
    rsAppend.AddNew
    rsAppend.Fields("PartitionKey") = RES.PlngLocKey
    rsAppend.Fields("CatKey") = RES.PlngCatKey
    rsAppend.Fields("PeriodKey") = RES.PdtePerKey
    rsAppend.Fields("DataView") = "YTD"
    rsAppend.Fields("CalcAcctType") = 9
    rsAppend.Fields("Amount") = rs.fields("Amount").Value
    rsAppend.Fields("Desc1") = rs.fields("Description").Value
    rsAppend.Fields("Account") = rs.fields("Account").Value
    rsAppend.Fields("Entity") = rs.fields("Entity").Value
    rsAppend.Update
    rs.movenext
    Loop
    End If
    'Records loaded
    RES.PlngActionType = 6
    RES.PstrActionValue = "Import successful!"
    'Assign return value (this must match the function name)
    ImpScrTest = True
    End Function
    Please help me on this
    Thanks,
    Dhananjay
    Edited by: DBS on Feb 9, 2012 10:21 PM

    Hi,
    I found the problem. It was the connection string; the format is different for Oracle tables.
    PFB the format
    *cnss.open"Provider=OraOLEDB.Oracle.1;Data Source= servername:port/SID;Database= DB;User Id=aaaa;Password=aaaa;"*
    And thanks SH for the quick response.
    So, closing the thread.
    Thanks,
    Dhananjay
