Data file load to Planning using FDMEE

Hi All,
Hyperion version : 11.1.2.3.0.26
We have a multi-currency Planning application and the dimensions are Account, Business, Entity, Currency, Version, Scenario, Period and Year.
My data file (semicolon-delimited) contains:
Account;business;entity;version;data
AC_1001;International;US_Region;working;10000
AC_1002;International;US_Region;working;10000
When I try loading data to this application using FDMEE I get three gold fish, so I thought the load was successful, but when I retrieve the data from Smart View I find that the data has not been loaded.
POV: Jan 15, Actual
In Smart View, retrieving from Essbase (two data columns, Local and USD):

             HSP_InputValue   HSP_InputValue
             Jan              Jan
             FY15             FY15
             Actual           Actual
             Working          Working
             Local            USD
             International    International
             US_Region        US_Region
AC_1001      #Missing         #Missing
AC_1002      #Missing         #Missing
In Smart View from Planning: the ad hoc grid cannot be opened as there are no valid rows of data.
Not sure why this is happening. Could you please help me with this? Thanks in advance!
Regards,
Keny Alex

And this is the log:
2015-01-29 02:33:35,503 INFO  [AIF]: FDMEE Process Start, Process ID: 621
2015-01-29 02:33:35,503 INFO  [AIF]: FDMEE Logging Level: 4
2015-01-29 02:33:35,504 INFO  [AIF]: FDMEE Log File: D:\demos\FDMEE\outbox\logs\RPDPLN_621.log
2015-01-29 02:33:35,504 INFO  [AIF]: User:admin
2015-01-29 02:33:35,505 INFO  [AIF]: Location:RPDLOC (Partitionkey:53)
2015-01-29 02:33:35,505 INFO  [AIF]: Period Name:Jan 15 (Period Key:1/1/15 12:00 AM)
2015-01-29 02:33:35,506 INFO  [AIF]: Category Name:Actual (Category key:1)
2015-01-29 02:33:35,506 INFO  [AIF]: Rule Name:RPD (Rule ID:78)
2015-01-29 02:33:37,162 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
[Oracle JRockit(R) (Oracle Corporation)]
2015-01-29 02:33:37,162 INFO  [AIF]: Java Platform: java1.6.0_37
2015-01-29 02:33:39,399 INFO  [AIF]: -------START IMPORT STEP-------
2015-01-29 02:33:44,360 INFO  [AIF]: File Name: Datafile.txt
2015-01-29 02:33:44,736 INFO  [AIF]: ERPI-105011:EPMERPI- Log File Name :D:\demos\FDMEE\outbox\logs\RPDPLN_621.log
2015-01-29 02:33:44,738 INFO  [AIF]: ERPI-105011:EPMERPI- LOADID:PARTKEY:CATKEY:RULEID:CURRENCYKEY:FILEPATH::621;53:1:78:Local:D:\demos\FDMEE/
2015-01-29 02:33:44,738 INFO  [AIF]: ERPI-105011:EPMERPI- ImportTextData - Start
2015-01-29 02:33:44,920 INFO  [AIF]: ERPI-105011:EPMERPI- Log File Name :D:\demos\FDMEE\outbox\logs\RPDPLN_621.log
2015-01-29 02:33:44,924 INFO  [AIF]: ERPI-105011:EPMERPI- File Name Datafile.txt
periodKey2015-01-01
2015-01-29 02:33:44,927 INFO  [AIF]: ERPI-105011:EPMERPI-  PROCESS ID: 621
PARTITIONKEY: 53
IMPORT GROUP: RPDVersion11
FILE TYPE: DELIMITED
DELIMITER: ;
SOURCE FILE: Datafile.txt
PROCESSING CODES:
BLANK............. Line is blank or empty.
NN................ Non-Numeric, Amount field contains non numeric characters.
TC................ Type Conversion, Amount field could not be converted to a number.
ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
SKIP FIELD.............. SKIP field value was found
NULL ACCOUNT VALUE.............. Account Field is null
SKIP FROM SCRIPT.............. Skipped through Script
Rows Loaded: 2
Rows Rejected: 0
2015-01-29 02:33:44,929 INFO  [AIF]: ERPI-105011:EPMERPI- ARCHIVE MODE: null
2015-01-29 02:33:44,930 INFO  [AIF]: ERPI-105011:EPMERPI- Start archiving file:
2015-01-29 02:33:44,930 INFO  [AIF]: ERPI-105011:EPMERPI- Archive file name: 62120150101.txt
2015-01-29 02:33:44,931 INFO  [AIF]: ERPI-105011:EPMERPI- Deleting the source file: Datafile.txt
2015-01-29 02:33:44,931 INFO  [AIF]: ERPI-105011:EPMERPI- File not deleted: D:\demos\FDMEE\Datafile.txt
2015-01-29 02:33:44,938 INFO  [AIF]: ERPI-105011:EPMERPI- ImportTextData - End
2015-01-29 02:33:44,938 INFO  [AIF]: ERPI-105011:EPMERPI- Total time taken for the import in ms = 200
2015-01-29 02:33:45,069 INFO  [AIF]:
Import Data from Source for Period 'Jan 15'
2015-01-29 02:33:45,085 INFO  [AIF]: Generic Data Rows Imported from Source: 2
2015-01-29 02:33:45,089 INFO  [AIF]: Total Data Rows Imported from Source: 2
2015-01-29 02:33:45,783 INFO  [AIF]:
Map Data for Period 'Jan 15'
2015-01-29 02:33:45,794 INFO  [AIF]:
Processing Mappings for Column 'ACCOUNT'
2015-01-29 02:33:45,796 INFO  [AIF]: Data Rows Updated by Rule Mapping '121' (LIKE): 2
2015-01-29 02:33:45,796 INFO  [AIF]:
Processing Mappings for Column 'ENTITY'
2015-01-29 02:33:45,797 INFO  [AIF]: Data Rows Updated by Rule Mapping '121' (LIKE): 2
2015-01-29 02:33:45,797 INFO  [AIF]:
Processing Mappings for Column 'UD1'
2015-01-29 02:33:45,798 INFO  [AIF]: Data Rows Updated by Rule Mapping '121' (LIKE): 2
2015-01-29 02:33:45,798 INFO  [AIF]:
Processing Mappings for Column 'UD2'
2015-01-29 02:33:45,799 INFO  [AIF]: Data Rows Updated by Rule Mapping '121' (LIKE): 2
2015-01-29 02:33:45,836 INFO  [AIF]:
Stage Data for Period 'Jan 15'
2015-01-29 02:33:45,838 INFO  [AIF]: Number of Rows deleted from TDATAMAPSEG: 4
2015-01-29 02:33:45,848 INFO  [AIF]: Number of Rows inserted into TDATAMAPSEG: 4
2015-01-29 02:33:45,850 INFO  [AIF]: Number of Rows deleted from TDATAMAP_T: 4
2015-01-29 02:33:45,851 INFO  [AIF]: Number of Rows deleted from TDATASEG: 2
2015-01-29 02:33:45,859 INFO  [AIF]: Number of Rows inserted into TDATASEG: 2
2015-01-29 02:33:45,860 INFO  [AIF]: Number of Rows deleted from TDATASEG_T: 2
2015-01-29 02:33:45,919 INFO  [AIF]: -------END IMPORT STEP-------
2015-01-29 02:33:45,946 INFO  [AIF]: -------START VALIDATE STEP-------
2015-01-29 02:33:45,993 INFO  [AIF]:
Validate Data Mappings for Period 'Jan 15'
2015-01-29 02:33:46,001 INFO  [AIF]: Total Data Rows available for Export to Target: 2
2015-01-29 02:33:46,001 INFO  [AIF]:
Validate Data Members for Period 'Jan 15'
2015-01-29 02:33:46,002 INFO  [AIF]: Total Data Rows available for Export to Target: 2
2015-01-29 02:33:46,026 INFO  [AIF]: -------END VALIDATE STEP-------
2015-01-29 02:33:46,089 INFO  [AIF]: -------START EXPORT STEP-------
2015-01-29 02:33:49,084 INFO  [AIF]: [HPLService] Info: Cube Name: RPDFN
2015-01-29 02:33:49,084 INFO  [AIF]: [HPLService] Info: Export Mode: STORE_DATA
2015-01-29 02:33:49,084 INFO  [AIF]: [HPLService] Info: updateMultiCurrencyProperties - BEGIN
2015-01-29 02:33:49,532 INFO  [AIF]: [HPLService] Info: Currency Properties Exist for Planning Application: RPDPLN
2015-01-29 02:33:49,534 INFO  [AIF]: [HPLService] Info: Number of existing multi-currency property rows deleted: 7
2015-01-29 02:33:49,537 INFO  [AIF]: [HPLService] Info: Base Currency for Application 'RPDPLN': USD
2015-01-29 02:33:49,542 INFO  [AIF]: [HPLService] Info: Number of multi-currency property rows inserted: 7
2015-01-29 02:33:49,542 INFO  [AIF]: [HPLService] Info: updateMultiCurrencyProperties - END
2015-01-29 02:33:49,543 INFO  [AIF]: Updated Multi-Curency Information for application:RPDPLN
2015-01-29 02:33:49,543 INFO  [AIF]: Connecting to essbase using service user:admin
2015-01-29 02:33:49,572 INFO  [AIF]: Obtained connection to essbase provider:Embedded
2015-01-29 02:33:49,576 INFO  [AIF]: Obtained connection to essbase cube RPDFN
2015-01-29 02:33:49,593 INFO  [AIF]: Locking rules file AIF0078
2015-01-29 02:33:49,595 INFO  [AIF]: Successfully locked rules file AIF0078
2015-01-29 02:33:49,595 INFO  [AIF]: Copying rules file AIF0078 for data load as AIF0078
2015-01-29 02:33:49,609 INFO  [AIF]: Unlocking rules file AIF0078
2015-01-29 02:33:49,611 INFO  [AIF]: Successfully unlocked rules file AIF0078
2015-01-29 02:33:49,611 INFO  [AIF]: The data rules file has been created successfully.
2015-01-29 02:33:49,617 INFO  [AIF]: Locking rules file AIF0078
2015-01-29 02:33:49,619 INFO  [AIF]: Successfully locked rules file AIF0078
2015-01-29 02:33:49,625 INFO  [AIF]: Load data into the cube by launching rules file...
2015-01-29 02:33:50,526 INFO  [AIF]: The data has been loaded by the rules file.
2015-01-29 02:33:50,530 INFO  [AIF]: Unlocking rules file AIF0078
2015-01-29 02:33:50,532 INFO  [AIF]: Successfully unlocked rules file AIF0078
2015-01-29 02:33:50,532 INFO  [AIF]: Executed rule file
2015-01-29 02:33:50,572 INFO  [AIF]: [HPLService] Info: Creating Drill Through Region for Process Id: 621
2015-01-29 02:33:51,075 INFO  [AIF]: [HPLService] Info: Drill Through Region created for Process Id: 621
2015-01-29 02:33:51,076 INFO  [AIF]: [HPLService] Info: [loadData:621] END (true)
2015-01-29 02:33:51,117 INFO  [AIF]: -------END EXPORT STEP-------
2015-01-29 02:33:51,214 INFO  [AIF]: [HPLService] Info: [consolidateData:621,Jan 15] END (true)
2015-01-29 02:33:51,264 INFO  [AIF]: -------START CHECK STEP-------
2015-01-29 02:33:51,316 INFO  [AIF]: -------END CHECK STEP-------
2015-01-29 02:33:51,413 INFO  [AIF]: FDMEE Process End, Process ID: 621
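
One way to confirm which target intersections FDMEE actually staged and exported for this run is to query the TDATASEG table referenced in the log for process 621. This is only a sketch: the table and column names (LOADID, ACCOUNTX, ENTITYX, UD1X, UD2X, AMOUNTX) are assumed from the standard FDMEE repository schema and may differ in your environment.

    -- Hypothetical diagnostic query against the FDMEE repository schema;
    -- LOADID is assumed to hold the FDMEE process ID (621 in the log above).
    SELECT accountx, entityx, ud1x, ud2x, amountx
      FROM tdataseg
     WHERE loadid = 621;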

Similar Messages

  • Do We Need to Validate Data Before Loading Into Planning?

    We are debating whether to load data from GL to Planning using ODI or FDM. If we need some form of validity check on the data, we will have to use FDM; otherwise I believe ODI is good enough.
    My question is: for financials planning, what determines whether we need validity checks or not? How do we decide that?

    FDM helps with validation and data load audit options, but validation can be as simple as comparing totals by GL account between the source and Planning. You should be able to use ODI, FDM or load rules to load data into Hyperion and complete the validation outside the tool using any of the reporting options.
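
    If you do want that totals check, it can be expressed as a simple SQL comparison. This is only a sketch: SRC_GL_BALANCES and PLN_DATA_EXTRACT are hypothetical tables standing in for the GL extract and for data pulled back out of Planning for reconciliation.

        -- List accounts where the source and Planning totals disagree
        -- (or are missing in Planning). Table names are assumptions.
        SELECT s.account,
               s.src_total,
               p.pln_total,
               s.src_total - NVL(p.pln_total, 0) AS variance
          FROM (SELECT account, SUM(amount) AS src_total
                  FROM src_gl_balances
                 GROUP BY account) s
          LEFT JOIN (SELECT account, SUM(amount) AS pln_total
                       FROM pln_data_extract
                      GROUP BY account) p
            ON p.account = s.account
         WHERE p.pln_total IS NULL
            OR s.src_total <> p.pln_total;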

  • FDMEE- Can the subledger data be brought into HFM using FDMEE?

    Hi Experts,
    Is it possible to bring sub-ledger data from PeopleSoft into HFM using FDMEE? One of the client's requirements is to bring transaction-level data directly into HFM.
    Please suggest whether it is possible to bring sub-ledger data into HFM using FDMEE.
    Any insightful response would be helpful, as it would give us more clarity and understanding.
    Thanks in advance.

    Hi Tony,
    I think I was not clear enough in my earlier post; sorry about that. I didn't mention multiple currencies for a single entity. I understand your point "Given that, FDM will not load to "multiple" currencies for a single entity", but that was not my question.
    Let's say I have the data file below containing only 3 data rows. The first row has a USD value as entity 1001 is a USD entity, the second row has a CAD value as entity 1002 is a Canadian entity, and the third row has a UK value as entity 1003 is a UK entity. My question was: can I load the file below into HFM using a single location in FDM, or do I have to create 3 separate locations in FDM to load it? Please note, one entity can have only one currency.
    Year,Period,Entity,Account,Value
    2010,August,1001,400010,145.65
    2010,August,1002,400010,35.05
    2010,August,1003,400010,10.05
    Here is the hierarchy in HFM; I created the location in FDM based on the "GSLUSD" node:
    GSLUSD (USD)
    ---1001 (USD)
    ---1002USD (USD) -> Currency Translation happens here
    ---1002 (CAD)
    ---1003USD (USD) -> Currency Translation happens here
    ---1003 (UK)

  • Best way to do an Excel data file load

    Hi
    I need to load an Excel file's data into an Oracle table on a frequent basis. While loading it, I need to validate each column's contents (using PL/SQL code). Are there any packages/procedures/APIs provided by Oracle for this kind of activity? What would be the best way to do an Excel file load entirely within Oracle? FYI, I currently have Visual Basic code that reads the data from the Excel file and loads it into a temporary Oracle table, and I then validate the data in this temporary table using PL/SQL in a stored procedure. I am trying to avoid the VB "front end" part of this effort and want to do the whole thing within Oracle itself. Please let me know if you have any ideas.
    Your help is greatly appreciated!!
    Thanks in advance,
    Ram

    If you are running on Windows, you could try COM Automation which means moving your VB process into a stored procedure. I've never tried this myself, having been quite satisfied with Heterogeneous Connectivity.
    Tak
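
    Another option that keeps everything inside Oracle, if the workbook can be saved as a CSV, is an external table plus a SQL/PL-SQL validation pass. A minimal sketch, assuming a directory object DATA_DIR pointing at the folder holding sample.csv, a three-column layout and a header row; all names here are illustrative only, not part of the original post.

        -- Directory object, file name and column layout are assumptions for this sketch;
        -- SKIP 1 drops the header row.
        CREATE TABLE excel_stage_ext (
          col_a VARCHAR2(100),
          col_b VARCHAR2(100),
          col_c VARCHAR2(100)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY data_dir
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            SKIP 1
            FIELDS TERMINATED BY ','
            MISSING FIELD VALUES ARE NULL
          )
          LOCATION ('sample.csv')
        )
        REJECT LIMIT UNLIMITED;

        -- Validation and load can then be plain SQL or PL/SQL, for example:
        INSERT INTO target_table (a, b, c)
        SELECT col_a, col_b, TO_NUMBER(col_c)
          FROM excel_stage_ext
         WHERE col_a IS NOT NULL;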

  • Data file load

    hi,
    If we try to load data and the data cells have "," as a separator, for example 34,6788,666.7, will the data load fine or should we remove the separator?
    Some cells have "-" where the data is missing; should I change it to #Missing or 0.00 in my data file?
    Thanks!

    Are you sure a "-" gets loaded as a zero in Essbase through a data load rule?
    I know that a "-" will get sent to Essbase from Excel if the formatting in Excel turns 0's to -'s, but that's because there are real zeros behind the -'s.
    I have to say I never tried to load a "-" through a data load rule as I've always specified that missing data be #Missing.
    If the data file can't be changed at the source, you can use the data load rule file to replace the "-" with #Missing. I prefer to do as few manipulations within the rule file as possible, as it is a pain to maintain.
    Regards,
    Cameron Lackpour

  • Error during dimension load in planning using ODI adapter

    I have created a Planning app in Classic and used the ODI Planning adapter to load dimensions from a CSV file. I tried this with the Account and Entity dimensions. Both fail at the same step. This is the error I get.
    Error:
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Entity in statement
    java.sql.SQLException: Table not found: C$_0Entity in statement
    at org.hsqldb.jdbc.jdbcUtil.throwError(Unknown Source)
    at or
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Account in statement
    java.sql.SQLException: Table not found: C$_0Account in statement
    I am following John Goodwin's blog and am not sure if I am missing a step. I would appreciate any help with this.
    Thanks

    Thank you John for your response. Here are the details:
    Step 1 - Loading -SS_0 - Drop work table - It is failing in this step with the following message:
    -22 : S0002 : java.sql.SQLException: Table not found: C$_0Account in statement [drop table "C$_0Account"]
    java.sql.SQLException: Table not found: C$_0Account in statement [drop table "C$_0Account"]
    Step 2, 3, 5 and 6 are successful
    Step 7 - Integration - Interface name - Report Statistics - This fails with the following error:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 2, in ?
    Planning Writer Load Summary:
         Number of rows successfully processed: 0
         Number of rows rejected: 8
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(Unknown Source)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(Unknown Source)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(Unknown Source)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(Unknown Source)
         at com.sunopsis.dwg.cmd.e.i(Unknown Source)
         at com.sunopsis.dwg.cmd.h.y(Unknown Source)
         at com.sunopsis.dwg.cmd.e.run(Unknown Source)
         at java.lang.Thread.run(Thread.java:595)
    I have checked the check box "Staging Area different from Target" and selected "Sunopsis Memory Engine".
    I would appreciate your help - Thanks

  • Spool using .dat file

    Hi,
    I am trying to take a spool of multiple tables using a .dat file. The script used in the .dat file is given below. The problem is that I am able to log into Oracle using this script, but the environment settings and the spool do not happen. Please help.
    sqlplus demo_user/sms123@orcl
    echo set lin 10000
    echo set pages 50000
    echo set trimspool on
    echo set colsep ','
    echo set feedback off
    echo set heading off
    echo set arraysize 5000
    echo SPOOL c:\abc.txt
    echo SELECT * FROM XMLTEST11;
    echo SPOOL OFF
    echo exit

    put this part:
    echo set lin 10000
    echo set pages 50000
    echo set trimspool on
    echo set colsep ','
    echo set feedback off
    echo set heading off
    echo set arraysize 5000
    echo SPOOL c:\abc.txt
    echo SELECT * FROM XMLTEST11;
    echo SPOOL OFF ... in a file of its own, e.g. myscript.sql, and then call it on the SQL*Plus command line...
    sqlplus demo_user/sms123@orcl @myscript.sql
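
    For clarity, myscript.sql would then contain just the SQL*Plus commands themselves, without the echo prefixes, roughly as below (table name and spool path taken from the original post):

        set lin 10000
        set pages 50000
        set trimspool on
        set colsep ','
        set feedback off
        set heading off
        set arraysize 5000
        spool c:\abc.txt
        SELECT * FROM XMLTEST11;
        spool off
        exit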

  • Issue with import master data from BW info object using DM

    Hi All,
    We have master data loads on a weekly basis.
    One of them, product master data, is loaded from 0MATERIAL using a DM package.
    While loading the attributes we exclude a couple of products based on an attribute (say product sub-family).
    For example, we have a conversion file which excludes product sub-family ML, so all the products under that sub-family are excluded from the load.
    Hierarchy load: how can we skip the products which were excluded during the attribute run?
    For the first hierarchy load, we copied all the product IDs which are part of product sub-family ML and entered them in the conversion file under "NODENAME" and "PARENT" to skip.
    But during subsequent runs, whenever we have a new product under family ML, the hierarchy job fails (error attached). We then manually enter that product to be excluded in the hierarchy load and the job runs fine.
    Is there a way to automate this process?
    Version: BPC 10 SP17, .NET 3.5
    Appreciate any thoughts on this.
    Thanks in advance.
    Raghu

    Thanks for your response.
    We are currently using the first option you mentioned. But the problem is that when a new product comes into BW (one that is part of the attribute we skip while loading attributes from BW to BPC), we need to manually add that product to be skipped in the hierarchy conversion file as well.
    As far as I can see, the selection option to skip is available only while loading the attributes, not for the hierarchy load.
    Please correct me if I am wrong.
    Regards,
    Raghu

  • Parsing raw DAT files from WVC210 IP cam

    Hi,
    Is there any information available about the raw DAT file format that Cisco uses for the WVC210 IP cameras?
    A surveillance system based on 2 WVC210 IP cameras was vandalised and we could salvage only the DAT files; the rest is trashed.
    I've loaded the software onto a new system and put the DAT files in the correct location, but nothing works on these files.
    I can see structure in the raw data, so there is something there.
    The validate tools and the DBTools don't have any effect, so maybe the files are mangled.
    But as a programmer I could parse the files and possibly pad stuff to get them working again.
    It's really important that we make a full effort to retrieve some evidence; any help is appreciated.
    I've read most of the posts and I realise the product is EOL, but all I need is the file structure. Are any tech docs available?
    Regards,
    Shaun

    Hi Alan,
    We have more .DAT files but no .rcd or other files.
    Here is the full file list, showing bytes and filenames.
    These are all the files that were recovered from the vandalised harddrive.
    But I'll double check this just to be sure.
    I'm guessing the naming convention is:
    C00000/1 (camera number)
    S00A ??
    20120215 (year-month-day)
    151302 (hr-min-sec)
    37178 (??)
    .dat (file extension)
    15-02-2012  16:02           717.034 C00000S00A20120215130237178.dat
    15-02-2012  16:40           819.238 C00000S00A20120215134037698.dat
    15-02-2012  18:03           718.943 C00000S00A20120215150349680.dat
    15-02-2012  19:06           935.351 C00000S00A20120215160626831.dat
    15-02-2012  19:08           323.334 C00000S00A20120215160844811.dat
    15-02-2012  14:37           971.580 C00001S00A20120215113659726.dat
    15-02-2012  15:09           752.687 C00001S00A20120215120925565.dat
    15-02-2012  16:00           563.281 C00001S00A20120215130006543.dat
    15-02-2012  17:34           615.714 C00001S00A20120215143439943.dat
    15-02-2012  17:34        22.921.555 C00001S00A20120215143441035.dat
    15-02-2012  19:08           858.214 C00001S00A20120215160839377.dat
    15-02-2012  19:13       132.448.256 C00001S00A20120215161340485.dat
    15-02-2012  19:19       350.180.000 C00001S00A20120215161947209.dat
    These are all the files we have and I've uploaded all to the google docs folder:
    https://docs.google.com/open?id=0B8oCN_ZRP1rARTBFWW4tNHhRUEdxTmM0SV9SSTVsQQ

  • Combining data files

    Hi,
    I took data from hardware and saved it as a series of small files so I would not be stuck with huge files. Now I want to put some of these files together. I have the files as data_001.lvm, data_002.lvm, data_003.lvm, etc. I have tried using the merge files option but it puts the data side by side. I want this data to be output in the same columns, so that I have a column with all the time data and a column with all the sampled amplitudes.
    I have attached 2 examples of the data files. I am using LabVIEW 7.0.
    Thanks, Magreen
    Attachments:
    data_001.txt ‏525 KB
    data_002.txt ‏534 KB

    As I said, the file IO has changed quite a bit since LabVIEW 7.0. Since my VI is in LabVIEW 8.5, you won't be able to open it.
    Still you should be able to create it from scratch.
    Create a while loop.
    Define the folder containing your files (e.g. using the file dialog or a diagram constant).
    Open a new file for the output.
    In the loop, create the file names according to the format pattern.
    If the file does not exist (e.g. "file info" generates an error), do nothing and stop the loop.
    If the file exists, read it as a plain text string and append it to the new file.
    Repeat until you run out of matching files.
    Close the new file.
    (You could also use "list files" with e.g. "data_*.txt" as pattern and simply autoindex over the file names in a FOR loop.)
    See how far you get. If you think you are close, attach your work and we find out what else is needed.

  • Peoplesoft Integration broker - Inbound File Loader Utility

    hi,
    I have a question on the PeopleSoft Integration Broker Inbound File Loader utility. When we use this utility to load a third-party file into PeopleSoft tables, is there any business logic or validation executed? If not, for what purpose is this functionality (the Inbound File Loader utility) used?
    thank you for your help

    I am using PeopleTools > Integration Broker > File Utilities > Inbound File Processing. Is this a deprecated or outdated process?
    I have configured the process you mention the same way as the one I previously used, and I am receiving a similar error: "Class Record: assumed property AUDIT_ACTN is not a valid field name". If it is like the other code, it is failing on the "Record(2)" method when updating the message. I changed this to PSCAMA in the other PeopleCode to get it to work.

  • Data File Cache / Data Cache

    I have a few questions regarding the data file cache and the data cache, based on the size of the application.
    All the settings are changed using a MaxL script.
    1. Data file cache: 50 MB, data cache: 100 MB.
    I am using buffered I/O; will memory still be allocated to the data file cache?
    2. The DBAG says that the data cache & index cache should be set as small as
    possible for both buffered & direct I/O. The size of one of my applications is
    around 11 GB: data file 11 GB, index file 450 MB.
    I have set my index cache to 450 MB and my data cache to 700 MB.
    Is that OK, or a. what should my data cache size be?
    b. How do I calculate the optimal data cache and index cache?
    3. The memory size of our AIX server is 6 GB. If I use direct I/O, can the sum of
    all my caches be 4 GB?
    4. If I use buffered I/O, according to (2), what should my cache sizes be?
    Thanks
    Amarnath

    The DBAG states that the data file cache is not used with buffered I/O, so the answer to 1) should be NO.
    For 2) there is a hint in the DBAG that you should check the hit ratio of the caches to verify sizing; the only calculated sizing advice is given for the calculator cache :-( For 2b), look at the hit ratio: if it stays around 1.0, try decreasing the cache until the ratio drops slightly. Inspect the ratios from time to time.
    3) Don't know; on 64-bit it should be no problem. But why would you do this anyway?
    Example from our settings: pag total ~20 GB, ind ~2 GB.
    The outline has 11 dimensions with a block size of ~340 KB, largest dense ~400 members, largest sparse ~4000 members, existing blocks ~2.7 million.
    The data cache is set to 256 MB and the index cache to 64 MB; our hit ratios are 1.0 for the index cache and 0.77 for the data cache. So our data cache could be larger, but retrieval performance is around 3.0 seconds, which is fine for our users.
    4) Check your hit ratios and try to increase or decrease the caches in small steps (first I'd do the index cache, then if that's fine I'd tune the data cache).
    Hope it helps a bit.

  • Data file got corrupted

    If the data file gets corrupted while being created, what should I do?

    Hi
    During creation of a data file, if the data file gets corrupted, you can delete the data file and recreate it. The condition is that the particular data file has not been used after creation.
    But if the data file has been created and used, then you have to restore the data file from a backup (use BRTOOLS to restore the data file).
    Please use the BRTOOLS utility for the same.
    Thanks

  • Loading FLat file data using FDMEE having 1 to many mapping

    Hi All,
    I need to load data from a flat file to a Hyperion Planning application using FDMEE with a one-to-many mapping.
    For example, the data file has 2 records:
    Acc Actual Version1 Scene1 1000
    Acc Actual Version1 Scene2 2000
    Now the target application has 5 dimensions and the data needs to be loaded as:
    acc Actual Version1 entity1 Prod2 1000
    Acc Actual Version1 Entity2 Prod2 2000
    Please suggest
    Regards
    Anubhav

    From your example I don't see the one-to-many mapping requirement. You have one source data line that maps to a single target intersection. Where is the one-to-many mapping requirement in your example?

  • Loading the data from a text file to a table using pl/sql

    Hi Experts,
    I want to load the data from a text file (sample1.txt) into a table using PL/SQL.
    I have used the below PL/SQL code:
    declare
      f utl_file.file_type;
      s varchar2(200);
      c number := 0;
    begin
      f := utl_file.fopen('TRY', 'sample1.txt', 'R');
      loop
        utl_file.get_line(f, s);  -- reads one line per iteration
        insert into sampletable (a, b, c) values (s, s, s);
        c := c + 1;
      end loop;
    exception
      when NO_DATA_FOUND then  -- raised by get_line at end of file
        utl_file.fclose(f);
        dbms_output.put_line('No. of rows inserted : ' || c);
    end;
    and my sample1.txt file looks like
    1
    2
    3
    The data is getting inserted in the following manner:
    select * from sampletable;
    A     B     C
    1     1     1
    2     2     2
    3     3     3
    I want the data to get inserted as
    A     B     C
    1     2     3
    The text file I have contains three lines, and the value on each line should go into its own column.
    Please help...
    Thanks

    declare
      f  utl_file.file_type;
      s1 varchar2(200);
      s2 varchar2(200);
      s3 varchar2(200);
      c  number := 0;
    begin
      f := utl_file.fopen('TRY', 'sample1.txt', 'R');
      -- read the three lines, then insert them as a single row
      utl_file.get_line(f, s1);
      utl_file.get_line(f, s2);
      utl_file.get_line(f, s3);
      insert into sampletable (a, b, c) values (s1, s2, s3);
      c := c + 1;
      utl_file.fclose(f);
    exception
      when NO_DATA_FOUND then
        if utl_file.is_open(f) then utl_file.fclose(f); end if;
        dbms_output.put_line('No. of rows inserted : ' || c);
    end;
    SY.
