Essbase Full Export generating erroneously large data files

On 9.2.1, using BSO.
I have a 1 GB page file that is routinely exported for backups; the export normally generates one text file of about 1.2 GB. The database uses buffered I/O and RLE compression.
Recently, the backup ran as normal but generated 6 text files totaling 10 GB.
I did a MaxL force restructure and the .ind and .pag files remained about the same size as they were prior to the restructure.
The restructure did seem to take much longer than normal, though.
Any ideas?
Thanks
Jeff

Is this an automated export, or do you go through the user prompts in EAS to create the backup? The size you describe (1.2 GB off a 1 GB page file) sounds like a level-0 export. Exports are always going to come out much larger than the page files, because the page files are binary and compressed.
This "feels" like it may have just been an extra or different option checked. The difference between Level0/Input/All Data exports, or between native and columnar format, can easily cause jumps in storage space like that; changing to columnar ("Export in column format") in particular can cause a big bump in export sizes. If your member names are long, the impact on the size of a columnar export file is even bigger.
Also, I usually encourage level-0 exports, which are smaller and faster than "All Data". The default, however, is "All Data", which also grows the resulting files, especially with many upper-level blocks.
On a different note, I would test your choice of RLE compression. I've personally found very few cases where bitmap wasn't far and away the best choice.
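For reference, a scripted backup of this kind usually boils down to a single MaxL export statement; a rough sketch (the application/database name and file names are placeholders, not from this thread):

    export database Sample.Basic level0 data to data_file 'lev0_backup.txt';
    export database Sample.Basic all data to data_file 'full_backup.txt';

Comparing the statement your automation actually runs against the one that used to produce the 1.2 GB file should show quickly whether the level or format option changed.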

Similar Messages

  • Large Data file problem in Oracle 8.1.7 and RedHat 6.2EE

    I've installed RedHat 6.2EE (Enterprise Edition optimized for Oracle8i) and Oracle EE 8.1.7. I am able to create very large files (> 2 GB) using standard commands such as 'cat', 'dd', etc. However, when I create a large data file in Oracle, I get the following error messages:
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    extent management local autoallocate;
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    ERROR at line 1:
    ORA-19502: write error on file "/data/u1/db1/data1.dbf", blockno 231425
    (blocksize=8192)
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    Additional information: 231425
    Additional information: 64
    Additional information: 231425
    Does anyone know what's wrong?
    Thanks
    david

    I've finally solved it!
    I downloaded the following jre from blackdown:
    jre118_v3-glibc-2.1.3-DYNMOTIF.tar.bz2
    It's the only one that seems to work (and god, have I tried them all!)
    I've no idea what the DYNMOTIF means (apart from being something to do with Motif, but you don't have to be a Linux guru to work that out ;)) - but, hell, it works.
    And after sitting in front of this machine for 3 days trying to deal with Oracle's frankly pathetic install, which is so full of holes and bugs, that's all I care about.
    The one bundled with Oracle 8.1.7 doesn't work with Red Hat Linux 6.2EE.
    Doesn't Oracle test its software?
    Anyway I'm happy now, and I'm leaving this in case anybody else has the same problem.
    Thanks for everyone's help.

  • Essbase Data Export not Overwriting existing data file

    We have an ODI interface in our environment which is used to export data from Essbase apps to text files using Data Export calc scripts; we then load those text files into a relational database. Lately we are seeing an issue where the Data Export calc script is not overwriting the file and is just appending the new data to the existing file.
    The OverWriteFile option is set to ON.
    SET DATAEXPORTOPTIONS {
         DataExportLevel "Level0";
         DataExportOverWriteFile ON;
         DataExportDimHeader ON;
         DataExportColHeader "Period";
         DataExportDynamicCalc ON;
    };
    The "Scenario" variable is a substitution variable which is set during the runtime. We are trying to extract "Budget" but the calc script is not clearing the "Actual" scenario from the text file which was the scenario that was extracted earlier. Its like after the execution of the calc script, the file contains both "Actual" and "Budget" data. We are not able to find the root cause as in why this might be happening and why OVERWRITEFILE command is not being taken into account by the data export calc script.
    We have also deleted the text data file to make sure there are no temporary files on the server or anything. But when we ran the data export directly from Essbase again, then again the file contained both "Actual" as well as "Budget" data which really strange. We have never encountered an issue like this before.
    Any suggestions regarding this issue?

    Did some more testing and pretty much zeroed in on the issue. Our Scenario members are actually named something like "Q1FCST-Budget", "Q2FCST-Budget", etc.
    This is why we need to use a member function: the calc script reads &ODI_SCENARIO (which is set to Q2FCST-Budget) as a number and gives an error. To convert this value to a member name we are using the @MEMBER function, and this seems to be the root cause of the issue. The ODI_SCENARIO variable is set to "Q2FCST-Budget", but when we run the script with the function @MEMBER("&ODI_SCENARIO"), the data file brings back the values for "Q1FCST-Budget" out of nowhere, in addition to the "Q2FCST-Budget" data which we are trying to extract.
    Successful Test Case 1:
    1) Put the scenario hard-coded in the script
    e.g. "Q2FCST-Budget"
    2) Ran the script
    3) Result OK. The script overwrote the file with Q2FCST-Budget data
    Successful Test Case 2:
    1) Put the scenario in the @MEMBER function
    e.g. @MEMBER("Q2FCST-Budget")
    2) Results again OK
    Failed Case:
    1) Deleted the file
    2) Put the scenario in a substitution variable and used the member function @MEMBER("&ODI_SCENARIO"), then ran the script. ODI_SCENARIO is set to Q2FCST-Budget in Essbase variables.
    e.g. @MEMBER("&ODI_SCENARIO")
    3) Result: the text file contained both "Q1FCST-Budget" and "Q2FCST-Budget" data values.
    We are still not close to the root cause of why this issue is happening. Putting the sub var inside the member function changes the complete picture and gives us inaccurate results.
    Any clues anyone?
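    For reference, combined with the option block quoted above, the export pattern being debated boils down to something like the following sketch (the file path and delimiter here are placeholders of mine, not taken from the actual script):
    FIX (@MEMBER("&ODI_SCENARIO"))
         DATAEXPORT "File" "," "/tmp/odi_scenario_export.txt";
    ENDFIX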

  • 9i full export for a large database

    Hi Dear All,
    My organization is going to make one database read-only, so I have been requested to take a full export backup of it; the database has grown to about 600 GB. I want to split the export into 2 GB files. What parameters should I use?
    Prepared the parfile like:
    FILE=/exp/prod/exp01.dmp,/exp/prod/exp02.dmp,/exp/prod/exp03.dmp
    FILESIZE=2G
    Please suggest whether I can proceed with the above parameters, and what the parameter file for importing the same should be.
    Thanks in Advance
    Phani Kumar

    Hello,
    > can I proceed with above parameters and what should be the parameter file for importing the same?
    With the original Export/Import you may use the FILE and FILESIZE parameters to split the dump into several dump files, and also to import from several dump files:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch02.htm#1005411
    For instance, you may use something like this:
    FILE=<dumpfile1>.dmp,<dumpfile2>.dmp,<dumpfile3>.dmp
    FILESIZE=2G
    In this example the size of each dump file is limited to 2 GB.
    NB: Use the same setting for FILE and FILESIZE for the Export and the Import.
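    As a rough sketch, the matching pair of parameter files could look like this (the FULL and LOG values and the log file names are placeholders of mine, not from the thread):
    # export parameter file
    FULL=Y
    FILE=/exp/prod/exp01.dmp,/exp/prod/exp02.dmp,/exp/prod/exp03.dmp
    FILESIZE=2G
    LOG=/exp/prod/exp_full.log
    # import parameter file
    FULL=Y
    FILE=/exp/prod/exp01.dmp,/exp/prod/exp02.dmp,/exp/prod/exp03.dmp
    FILESIZE=2G
    LOG=/exp/prod/imp_full.log
    Note that at FILESIZE=2G a 600 GB database may need on the order of a few hundred dump file names, so it is worth generating the FILE list with a script rather than typing it by hand.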
    > how can I use the mknod option on an AIX box with Oracle 9i for taking a 600 GB database export?
    About mknod you may read the following note on My Oracle Support:
    Large File Issues (2Gb+) when Using Export (EXP-2 EXP-15), Import (IMP-2 IMP-21), or SQL*Loader [ID 30528.1]
    See the chapter *3. Export and Import with a compressed dump file* and the chapter *4. Export and Import to multiple files*.
    Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Jun 2, 2011 10:06 AM

  • Reading Large data files from client machine ( urgent)

    Hi,
    I want to know the best way to read a large data file of about 75 MB from the client machine and insert it into the database.
    Can anybody provide sample code for it?
    Loading the file should be done on the client machine and inserting into the database should be done on the server side.
    How should I load the file?
    How should I transfer this file or data to the server?
    How should I insert it into the database?
    Thanks in advance.
    regards
    Kalyan

    > Like I said before you should be using your application server to serve files from the server off the filesystem. The database should not store files this big and should instead just have a reference to this file.
    I think you have not understood the problem correctly.
    I will make it clear.
    The requirement is as follows.
    This is a J2EE-based application.
    The application server is Oracle Application Server.
    The database is Oracle 9i.
    It is a thick client (Swing-based application).
    The user enters a data source like c:\turkey.data.
    This turkey.data file contains data such as:
    1@1@20050131@1@4306286000113@D00@32000002005511069941@@P@10@0@1@0@0@0@DK@70059420@4330654016574@1@51881100@51881100@@99@D@40235@0@0@1@430441800000000@@11@D@42389@20050201@28483@15@@@[email protected]@@20050208@20050307@0@@@@@@@@@0@@0@0@0@430443400000800@0@0@@0@@@29@0@@@EUR
    Likewise, we may have more than 3 lakh (300,000) rows in it.
    We need to read this file and transfer it to the application server, where the EJBs are.
    There we read the file; each row in the file is one row for a table in the database.
    So we need to insert 3 lakh records into the database.
    We can use JDBC to insert the data, which is not a problem.
    The only problem is how to transfer this data to the server.
    I can do it one way; this is only an example:
    I can read all the data into a StringBuffer and pass it to the server.
    There again I get the data from the StringBuffer and insert it into the database using JDBC.
    If you do it this way, it is a performance issue and takes a long time to insert into the database. It may even throw an OutOfMemoryError.
    I am just looking for a better way of doing this that gives good performance.
    Hope you have understood the problem.

  • About the exporting and importing xml data file in the pdf form

    Hi all,
    I need help with importing an XML data file into a PDF form. I write something in a text field with a fill color and typeface (font) and export the XML data using an email button. When I import that XML file into the same PDF file that was used for exporting the data, all the data is shown in the controls, but the fill color and font typeface are not. Does anyone have a suggestion for solving this problem?
    Thank you
    Saroj Neupane

    Can you post a sample form and data to [email protected] and I will have a look.

  • Oracle 11g unreasonable large data file that can't be shrunk

    I have Oracle 11g installed on my Windows 7 machine just for running my local application. In Enterprise Manager the "USERS" tablespace shows over 12 GB allocated and about 1 GB of space used (also see the table snippet below), so only about 8% of the space is used. However, when I tried to shrink the data file even to half of its current size, both from the command line (RESIZE) and from Enterprise Manager, I got the error "Failed to commit: ORA-03297: file contains used data beyond requested RESIZE value". I was able to resize my TEMP tablespace successfully with the same command.
    Any insight on this? thanks a lot. I'm about to run out my hard drive space.
    Name / Allocated Size (MB) / Space Used (MB) / Allocated Space Used (%) / Auto Extend / Allocated Free Space (MB) / Status / Datafiles / Type / Extent Management / Segment Management
    USERS / 12,288.0 / 1,026.7 / 8.4 / YES / 11,261.3 / / 1 / PERMANENT / LOCAL / AUTO

    Jonathan Lewis wrote:
    user1340127 wrote:
    However, when I tried to shrink the data file/space even to half of its current size, both from command line(resize) and Enterprise Manager, i got the error that says "Failed to commit: ORA-03297: file contains used data beyond requested RESIZE value ". I was able to resize my TEMP tablespace successfully with the same command.
    Any insight on this? thanks a lot. I'm about to run out my hard drive space.
    You have an object stored at the end of the datafile; you will need to move it down the file (or to another tablespace) before you can shrink the file. I thought OEM had a "tablespace map" feature to help you see which objects are located where, but if not, see:
    In EM (or the 10g dbconsole, at any rate), you select a tablespace, then the drop-down menu Show Tablespace Contents; there is a tablespace map icon to expand the map. It can be slow. Then you can scroll down to the bottom, scroll back up, and look for where the last non-green extents are. You can also find the beginning of each datafile (if you have multiple data files in the tablespace) by watching for purple header blocks. Hovering over the segments gives information, and clicking on them (or selecting segments in the contents list up above) shows where all their extents are in yellow. I haven't tried the reorganize link...
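    If you prefer SQL to the EM map, a sketch along these lines shows which segment sits at the end of the file (the file_id, segment names and path below are placeholders; adjust them to your own USERS datafile):
    SELECT owner, segment_name, segment_type,
           block_id + blocks - 1 AS last_block
      FROM dba_extents
     WHERE file_id = 4   -- file_id of the USERS datafile, from dba_data_files
     ORDER BY block_id + blocks DESC;
    -- then relocate the offending segment and retry the resize, for example:
    ALTER TABLE some_user.some_table MOVE;   -- indexes on a moved table go UNUSABLE and must be rebuilt
    ALTER DATABASE DATAFILE 'C:\ORACLE\ORADATA\ORCL\USERS01.DBF' RESIZE 6G;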

  • Allocating enough memory to open a large data file

    Hello,
    I am currently running an experiment which requires me to collect data at a very high rate, creating enormous data files. At first, I ran into the problem of LabVIEW not having enough memory to save all of the data to the hard disk at once (it seemed to have no problem with a huge array during data collection, only when it went to save it to a file), so I programmed LabVIEW to only save 100000 samples at a time. This seemed to work fine, and I was able to collect a 550 MB data file this way. However, now that I would like to analyze that data, LabVIEW can't read the data from the file into an array, giving me the same insufficient-memory errors as before. My system has 3 GB of memory, so in theory LabVIEW should be able to get enough to open it; however, Windows doesn't seem to want to allocate what it needs. Is there any way that I can override Windows and allocate enough memory to LabVIEW for it to be able to open this file and work on the data?

    BrockB wrote:
    The data is all in a 2xN tab-delimited array, and I'm using the 1d array output of the "read from spreadsheet" vi, as I only need the first of the two columns. What I meant to say earlier was that I would still have to read all of the data from the file first if I wanted to break it up into pieces to use later. Labview seems to get stuck on reading the data from the file, not actually on opening it. It also seems like breaking the data up would be a much bigger hassle than just allowing labview to use more of the 3GB that I have available (most of which is sitting unused anyway).
    First of all, any ASCII-formatted file is an extreme waste of space and comparatively expensive to write and read. It is only recommended for small files intended to be later read by humans.
    "Read from Spreadsheet File" must first internally read the entire file as a string, then chop it up looking for tab and linefeed characters, then scan the parts into DBL. If you want the first column (instead of the first row), it then also needs to transpose the entire 2D array, requiring another copy in memory. As you can see, you'll easily end up with quite a few data copies plugging up your RAM. ("Read/Write from/to Spreadsheet File" are plain VIs. You can open their diagram to see what's in there.)
    For datasets this size, you MUST use binary files. Try it and your problems will go away.
    Message Edited by altenbach on 11-11-2008 12:06 PM
    LabVIEW Champion . Do more with less code and in less time .

  • The command,"Dataloadred" is not working for a very large data file

    I have a file whose size is 1.8 GB, and it has 3 data channels including the time channel. I tried reduced loading of the file using the command "DataLoadRed" at an interval of 5. Below is the script.
    'start script----------------------------
    call Filenameget("Data","fileread")
    call DataLoadHdFile(Filedlgfile)
    call DataLoadRed("Filedlgfile","2-3",1,0,"First interval value","Start/Width/Number",10,453028984
    ,5,90605794,1)
    'end script-------------------------------------
    The following error message was displayed on the screen, resulting from the command "DataLoadRed()":
    "loading file (filename): Insufficient channels are available with the required channel length [3/90605794].
    To resolve this problem, I tried to allocate a channel length of 200M using the command "ChnAlloc()", but this also resulted in the same kind of error as above.
    How can I resolve this problem and load my data with reduction? Your reply would be appreciated.
    Regards,
    Sky

    Hi,
    Please try this:
    1. Start DIAdem
    2. Open the "settings" menu
    3. Choose "Memory management..."
    4. Click the "Data matrix..." button
    5. In the dialog, set the "No. of channels" and "Channel length" to meet your requirements
    6. Click "Close"
    7. DIAdem will now restart and set up the data channels at the length you have selected
    Depending on how large you set the data matrix and on your computer equipment, starting DIAdem may take a few minutes.
    An alternative to loading and reducing data sets may be the "Register File" function in the DATA window. Once you have clicked on the DATA icon, select the "File" menu and choose the "Register File..." option. Registering files will not actually load the data set, and thus speeds up the data access part of DIAdem. To learn more about this function, go to the help system and search for "Registering a file in DIAdem DATA".
    I hope this will help you. If you have any additional questions, please let me know.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • How to process large data files in XI  ?  100 MB files ?

    Hi All
    At present we have the following scenario: it is File to IDoc, and the problem is the size of the file.
    We need to transfer a 100 MB file to the SAP R/3 system. How do we process such huge data?
    Adv thanx and regards
    Rakesh

    Hi,
    In general, an extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient except in the case of large messages (>1MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
    Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator)
    Allocate 4 kB per 1kB of message size in the asynchronous case or 9 kB per 1kB message size in the synchronous case
    Example: asynchronous concurrent processing of 10 messages with a size of 1MB requires 70 MB of memory
    (3 MB + 4 * 1 MB) * 10 = 70 MB
    With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32Bit operating system, there is an upper boundary of approximately 1.5 to 2 GByte per process, limiting the respective largest message size.
    Please check these links:
    /community [original link is broken]:///people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Input Flat File Size Determination
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    data packet size  - load from flat file
    How to upload a file of very huge size on to server.
    Please let me know whether your problem is solved or not.
    Regards
    Chilla..

  • Exporting and Importing Large data set of approx 300,000 rows

    Hi,
    I have a table in DB 1 (approx. 10 columns) and want to copy all the rows (approx. 300,000) from this table, with a simple WHERE clause, to the same table in DB 2. Both databases are on Unix.
    I am executing this from a laptop with windows XP remotely connected to my office network.
    Could someone let me know what is the best way to do this.
    Thanks

    331991 wrote:
    Thanks for the detailed instructions. I however have some limitations:
    > 1. Both schemas (from and to) are on the same database server. I am the schema owner for both the schemas.
    A logical impossibility in Oracle. A schema is defined by its owner, therefore you cannot have two schemas owned by the same owner. SCHEMA1 is, by definition, owned by user SCHEMA1. And SCHEMA2 is, by definition, owned by user SCHEMA2.
    > but don't have rights to create dirs on the server.
    Then it is of no value to create a directory object in the database. The database's directory object is nothing but an alias referring to an actual, existing directory on the host server.
    > 2. I don't want to copy all data from db1 to db2. I want to copy data where id > 5000 in the id column. Also, the table in db1 has 10 columns and the table in db2 has 15 columns: the 10 from db1 with the same data types, plus 5 more.
    > How can I export the data to my laptop from db1 and then import into db2 in this scenario?
    You don't. Create a dblink and copy the data directly from the source to the target:
    INSERT INTO TARGET_TABLE
      SELECT COLA, COLB, COLC, NULL, NULL, NULL, SYSDATE
        FROM SOURCE_TABLE@SOURCE_LINK
       WHERE ID > 5000;
    Also I am using Oracle 10G.
    Thanks
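    A minimal sketch of the link itself, assuming a TNS alias for DB 1 already exists on the DB 2 side (the link name, credentials and alias are placeholders):
    CREATE DATABASE LINK source_link
      CONNECT TO db1_schema IDENTIFIED BY db1_password
      USING 'DB1_TNS_ALIAS';
    -- quick sanity check before running the INSERT above
    SELECT COUNT(*) FROM source_table@source_link WHERE id > 5000;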

  • Need to rebuild Outlook 2011 for mac identity for exporting data file.

    I was exporting an Outlook for Mac data file (to import it into Apple Mail) when the Outlook email client suddenly crashed; when I tried again it said that I need to rebuild the Outlook for Mac database using the rebuild utility. I got help from some professional technicians but it was a dead end: they said that my Outlook for Mac database was corrupt and they could not help me any further. I have years of emails in Outlook for Mac that I need to move to Apple Mail, as I am fed up with these frequently occurring Outlook 2011 errors. If anyone could help me with this it would be very much appreciated.
    Thanks

    Outlook for Mac archives its data in .olm format and stores its individual mails in .olk14 format. If everything else has failed, you can seek the help of a tool that can recover these .olk14 messages. One tool I know of that is capable of doing so is http://www.outlookmacdatabaserecovery.com/.
    Look for the trial version first and see if it works and recovers your emails.

  • Problem generating XML data file through XSD

    Hello there,
    I'm trying to generate an XML data file as per the below steps:
    1. Make a normal schema in the query transform.
    2. Map the desired source columns into the query schema.
    3. Right-click the query transform and generate an XML schema (XSD) on my local system.
    4. In the object library, add the XSD as an XML file format, with the schema as the root element.
    5. Create a target instance from the object library XML file.
    6. In the target file, specify the file creation path on my local system.
    After performing the above steps and executing the job, I get the error below.
    Any help on this will be much appreciated.
    Regards,
    Jagari

    I found some code while looking at a similar issue. I don't know how to recover the other data (in, I guess, the rest of the HTML in the code). I need to find a better reference with details and examples.
    I fixed it by changing:
        ssDebug.trace(moreStuff.xliff.file.body.trans-unit); // - error
    to:
       ssDebug.trace(moreStuff.file.body['trans-unit']); //- no error
    with expected output (no error):
    <trans-unit id="001" resname="IDS_ZXP7_JAM_01">
      <source>If, while you are printing, your printer stops, ...</source>
    </trans-unit>
    <trans-unit id="002" resname="IDS_ZXP7_JAM_02">
      <source>look at the Operator Control Panel (OCP) for the fault description.</source>
    </trans-unit>
    <trans-unit id="003" resname="IDS_ZXP7_JAM_03">
      <source>If the fault is a card jam, open and close the Print Cover (or Options Cover).</source>
    </trans-unit>
    <trans-unit id="004" resname="IDS_ZXP7_JAM_04">
      <source>The printer will initialize and move the jammed card to the Reject Bin.</source>
    </trans-unit>
    <trans-unit id="005" resname="IDS_ZXP7_JAM_TITLE">
      <source>Card Jam</source>
    </trans-unit>

  • Macro for exporting data to a BPC Data File

    Hi,
    My requirement is to be able, in an input schedule, to write a macro behind a button click that takes the data from the input schedule cell range and exports it to a data file at the BPC data file path. Please note that while exporting, the data is to be overwritten and not appended, i.e. every time I export, the data file should be replaced with new data.
    Does anyone have sample code for writing such a macro ?
    Environment: SAP BPC 7.5 NW
    Cheers,
    Nitin

    Hi Vladim,
    The requirement is to allow the user to enter Account Type and Rate Type property values for certain nodes in the Dimension.
    Since these values are not coming from BW, we need to maintain them in conversion files while uploading the master data.
    However, we need to do away with the manual maintenance of the conversion files and find a way to update these properties directly.
    One way I thought of was to create an input schedule (IS): the user enters the account number and rate/account type and clicks a button. This button:
    1. Exports this data to a Data File, and then a
    2. Data manager package is called to import the data file.
    Point 2 is doable, but I'm trying to figure out step 1.
    Since we do not have an FTP location, we won't be able to utilise your method.
    Cheers,
    Nitin

  • Mass-exporting Sports Tracker dat files

    Is there a way of exporting the Sports Tracker dat files that are saved on my memory card to a more compatible format (e.g. GPX or KML)? I have GPSBabel but the input format does not have the option of Nokia dat files...
    Does anybody have any tips?
    Thanks!
    P.S. And yes, I know that the Sports Tracker application itself can export single files to GPX and KML, but I find this very tedious when working with many files!

    johnjensen wrote:
    Yes, it's years of wasted data. Cannot import to the new Sports Tracker or a Windows Phone 7 device.
    It would probably have been better if you had started a new thread, which would make it clearer what you need help with.
    There doesn't seem to be an answer on the Sports Tracker support pages at http://support.sports-tracker.com/, but following the link to their community pages and doing a quick search, I found this thread which, whilst it doesn't refer to WP7, might be of help: https://getsatisfaction.com/sportstracking/topics/export_workouts_from_symbian_to_android There might be better results, as I haven't looked too closely.
    If it is of no use, it might be worth asking the question over there anyway.
    N8-00 pc059C9F6 Belle
    808 PureView pc059P6W5
