MaxL performance

I have a performance problem. Updating the filters and security on my Essbase server can take up to 24 hours, which is too long. In some cases I have to update:
- 10 to 15 filters per database
- 1,500 users
- more than 200 databases
The update is run from a MaxL script. Is there a way to improve the performance of the security update? If so, how? Thank you for your assistance.

We had similar performance problems and found a solution in the QUICKLOGIN essbase.cfg setting. This setting allows Essbase to cache the security (.sec) file, allowing higher concurrency and quicker security changes. Note that there is a typo in the documentation: when you enable this option in essbase.cfg, the correct syntax is QUICKLOGIN ON (the docs omit the 'ON').
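For clarity, a minimal essbase.cfg sketch (note that .cfg changes only take effect after the Essbase server is restarted):

; essbase.cfg
; Cache the security file to speed up logins and security updates
QUICKLOGIN ON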

Similar Messages

  • Tutorial required for performing Dimension Build using Maxl

    Hi all,
    Can you please suggest a tutorial for performing dimension builds using MaxL?
    Best Regards

    From the Tech Ref samples:
    Example:
    import database sample.basic dimensions
    from data_file '/data/calcdat.txt'
    using rules_file '/data/rulesfile.rul'
    on error append to '/logs/dimbuild.log';
    Deferred-Restructure Examples
    For Data File Sources:
    import database sample.basic dimensions
    from server text data_file 'genref' using server rules_file 'genref' suppress verification,
    from server text data_file 'level' using server rules_file 'level' suppress verification,
    from server text data_file 'time' using server rules_file 'time'
    preserve input data on error append to 'C:\Hyperion\products\eas\client\dataload.err';
    For SQL Sources:
    import database sample.basic dimensions
    connect as 'usrname1' identified by 'password1' using server rules_file 'genref',
    connect as 'usrname2' identified by 'password2' using server rules_file 'level',
    connect as 'usrname3' identified by 'password3' using server rules_file 'time'
    on error append to 'C:\Hyperion\products\eas\client\dataload.err';
    For Data and SQL Sources:
    import database sample.basic dimensions
    from server text data_file 'genref' using server rules_file 'genref',
    from server text data_file 'level' using server rules_file 'level',
    connect as 'usrname1' identified by 'password1' using server rules_file 'genref',
    connect as 'usrname2' identified by 'password2' using server rules_file 'genref'
    on error append to 'C:\Hyperion\products\eas\client\dataload.err';
    Try it and post back if you have errors
    Regards
    Celvin
    http://www.orahyplabs.com

  • Maxl scripts to perform backup of the following

    Help me with these (MaxL scripts to do these backups):
    - How to take a backup of filters (use a MaxL script to get the filter information of all native cubes).
    - Since we use @XREF calcs a lot, we need to back up the location alias information.
    - We have partitioning, so we need the partition information backed up.

    Hi,
    Filter information is stored in the essbase.sec file, so it is a good start to make sure that file is being backed up.
    As for extracting filter information via MaxL, you can do it with something like:
    login admin password on localhost;
    spool on to 'c:\temp\filters.txt';
    display filter row all;
    spool off;
    logout;
    or if you want to narrow the filter down to a database use
    display filter row app.db;
    You will also need to change the column width to fit the whole filter in, e.g.
    set column_width 50;
    You can also dump the whole security file to a text file if you want, which includes all the filter information (from 9.3.1 onwards):
    export security_file to data_file 'C:\temp\sec_file.txt';
    I take it your partitions don't change very often, so you can easily just export the partitions to XML from EAS, depending on what version you are on.
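    To cover the location alias and partition items from the question, these display statements can be spooled the same way (a sketch; verify the exact grammar against the Tech Ref for your release):
    display location alias on database app.db;
    display partition on database app.db advanced;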
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Multiple DimBuilds w/Only One Restructure in MaxL?

    I am doing a series of dimension builds via load rules (v7.1.5). In this case I am building the entire dimension from scratch every time, but I want to preserve data, because there is forecast and plan data in this cube, not just actuals. My problem is this: in order to build completely from scratch, my first dimbuild load rule has "Remove Unspecified" turned ON. But that dimbuild does not include all the level 0 members I will end up adding to this dimension by the end of the process, and I cannot find a way to delay Essbase from performing the restructure until the last dimbuild. I have tried using the "suppress verification" option in MaxL's import dimension command, but it doesn't accomplish this. I cannot find anything in the MaxL docs that refers to this, and no one I work with has an answer. There has to be a way to do this, doesn't there? Otherwise I will have to abandon this "build from scratch" methodology and just leave old, dead members lying around in this dimension until they are removed manually.
    Thanks,
    James

    James:
    What's important here is that ALL of the dimension builds happen in the same IMPORT statement, as follows:
    import database sample.basic dimensions
    from server text data_file 'genref' using server rules_file 'genref' suppress verification,
    from server text data_file 'level' using server rules_file 'level' suppress verification,
    from server text data_file 'time' using server rules_file 'time' suppress verification
    preserve all data on error append to 'C:\Hyperion\EAS\eas\client\dataload.err';
    This is the only way that the suppression works.

  • Error While doing aggregate operaion in ASO cube using MaxL

    Hi,
    When I try to run a MaxL script against an ASO application to aggregate the cube, I end up with the error message shown below:
    ERROR - 1270102 - Merge and view build operations cannot be performed while data load buffers exist.
    The MaxL I used: execute aggregate process on database app_name.db_name stopping when total_size exceeds 1.5;
    Please guide me on what caused this issue.
    Thanks
    Sathish

    Is it working now?
    If not, maybe you have other buffers that still exist; you can also destroy buffers, e.g. alter database ASOSamp.Sample destroy load_buffer with buffer_id 1;
    Also restarting the database should clear the buffers, try restarting and then aggregating to see if it was a buffer issue.
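    Before destroying anything, it may be worth listing which buffers actually exist; a minimal sketch against the Tech Ref sample app (standard MaxL grammar):
    query database ASOSamp.Sample list load_buffers;
    alter database ASOSamp.Sample destroy load_buffer with buffer_id 1;
    execute aggregate process on database ASOSamp.Sample stopping when total_size exceeds 1.5;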
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to create Base Dimensions with MaxL and Text File?

    Hi,
    Doing a scratch rebuild of a cube every month. I don't want to keep a 'dummy' outline with the base dimensions to copy over for every build; instead I want to build from a text file somehow. My plan is to:
    1) Delete the existing app/db
    2) Create a new blank app/db
    3) Create the base dimensions in the outline via a text file
    4) Build the entire outline via a text file
    I'm stuck on #3, how to get the 'base dimensions' built via a text file. I need:
    ACCOUNTS
    PERIOD
    VALUE
    VIEWS
    SCENARIO
    CUSTOM4
    YEAR
    CUSTOM3
    CUSTOM2
    ENTITY
    CUSTOM1
    I see this MaxL, but it uses a 'rules file', and I have never built a rules file to create base dims, so I'm not sure whether it's possible...
    import database sample.basic dimensions
    from data_file '/data/calcdat.txt'
    using rules_file '/data/rulesfile.rul'
    on error append to '/logs/dimbuild.log';

    We rebuild our Departments and Organization from an enterprise hierarchy master each week.
    The way we implemented (what you call #3) was to not do #1 and #2, but to have a "destructive" load rule for each of these dimensions using a text file. (in the "Dimension Build Settings" for the load rule, select "Remove Unspecified" to make it destructive)
    The text file just has the dimension name (parent) and any children we needed defined in a parent/child relationship. For instance
    "Sales Departments" "0100-All Departments"
    This essentially works the same as deleting the app because the destructive load rules will drop all the blocks of data that were unspecified.
    Then we run our SQL load rule to build the rest of the dimensions from the Location Master.
    We perform a level-0 export prior to this process, then reload the level-0 data and execute all the consolidation scripts to get the data back (now in the current enterprise-defined hierarchy).
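    A hedged MaxL sketch of such a destructive parent/child build (the file and rules-file names 'dims.txt' and 'pcdim.rul' are hypothetical; 'preserve all data' keeps the non-actuals data through the restructure):
    import database sample.basic dimensions
    from data_file 'dims.txt'
    using rules_file 'pcdim.rul'
    preserve all data
    on error append to 'dimbuild.err';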

  • Performance Issues after an Upgrade

    Hello!
    We are experiencing performance issues after we upgraded to a new version of Hyperion (11.1.2.1). At this point I am not too sure about the actual causes of this degradation, but I am trying to narrow them down and need your input. Please help me with your ideas.
    1) What could be the causes/factors for the performance to degrade after an upgrade?
    2) Does the performance of a script depend on the user credentials i.e. who is launching the script? Whether it’s the super admin of the application/application owner or an application specific admin?
    3) Does the performance of the scripts depend on the place you are launching it from? For example - will the performance differ if it’s launched from MaxL Vs EAS?
    Please let me know your thoughts on this.
    Thanks,
    - Krrish

    There are a number of bugs (12600557, 12675485, 12669814, 12698488) logged for 11.1.2.1: if you use Internet Explorer 8 and have data forms designed to use a large number of sparse dimension members in rows or columns, you may experience performance degradation when opening the data forms.
    This has been fixed in Oracle Hyperion Planning, Fusion Edition Release 11.1.2.1 Patch Set Update (PSU): 11.1.2.1.101 which is available on My Oracle Support as Patch 12666861.
    HTH-
    Jasmine.

  • Is it possible to call a CMD or VBS within MAXL? Need script help

    Hello,
    I have a CMD script that uses MAXL to execute Essbase backups, the details of which are located in a txt file. That works fine, and I have the logs being sent to a folder.
    What I am trying to accomplish is AFTER it is finished running the backup, it calls a CMD script to parse the log file for errors, then either send a successful or failure notification through SMTP.
    I have all the scripts to perform the operations and they all function properly, but when I run it, the email is sent before the Essbase backup has completed.
    Is there a better way to do this, like possibly calling the other CMD/VBS directly from within the MaxL shell? This is my current CMD file:
    Echo Calls Maxl shell with reference to EssbaseBackup.txt for variables
    call \\<server>\HyperionPlanning\App\Backups\MaxlBackup.cmd
    Echo Search Essbase Backup Logs for Errors
    findstr /c:"ERROR" \\<server>\HyperionPlanning\App\Backups\Logs\HyperionSetEssbaseForBackuplog.txt
    if %ERRORLEVEL% NEQ 0 goto NO_ERROR
    goto ERROR
    Echo Sends backup success mail
    :NO_ERROR
    \\<server>\HyperionPlanning\App\Backups\mail_send_success.vbs
    EXIT 0
    Echo Sends backup failure notification
    :ERROR
    \\<server>\HyperionPlanning\App\Backups\backup_failed.vbs
    EXIT 1

    MaxlBackup.cmd:
    "C:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient-32\bin\startmaxl.cmd" "\\<server>\HyperionPlanning\App\Backups\EssbaseBackup.txt"

    EssbaseBackup.txt:
    spool on to '\\<server>\HyperionPlanning\App\Backups\logs\HyperionSetEssbaseForBackuplog.txt';
    set timestamp on;set timestamp off;
    login admin identified by <password> on <server>;
    alter system logout session on application App force;
    alter application App disable connects;
    alter database App.main force archive to file 'F:\Backups\App\Appmain.arc';
    alter database App.cap force archive to file 'F:\Backups\App\Appcap.arc';
    alter application App enable connects;
    set timestamp on;set timestamp off;
    logout;
    spool off;

    Try taking the code that is in MaxlBackup.cmd and sticking it into the root script, just to remove that area of complexity. If that works, you might also try removing CALL from the line, although I thought the point of CALL was to run another script and then return control to the originating script.
    Here's some old (System 9.3.1) code that does what you're doing -- the pathing is wrong for 11x:
    REM Write filters to disc
    %hyperion_home%\products\Essbase\EssbaseClient\bin\essmsh.exe -D write_filters_to_disc.mshs %7,%8
    REM If error, go to end, else write
    IF ERRORLEVEL == 1 (SET errormsg=Error! - Read of filters from Essbase failed & GOTO ERROR)
    The -D and .mshs are to handle an encrypted MaxL script.
    Regards,
    Cameron Lackpour
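    One more option: have MaxL itself stamp a failure marker into the spool file, so the existing findstr check is guaranteed to catch it. A minimal sketch using the MaxL Shell's iferror/define label (paths and placeholders reused from the scripts above; a sketch, not a drop-in replacement):
    spool on to '\\<server>\HyperionPlanning\App\Backups\Logs\HyperionSetEssbaseForBackuplog.txt';
    login admin identified by <password> on <server>;
    iferror 'backupFailed';
    alter database App.main force archive to file 'F:\Backups\App\Appmain.arc';
    iferror 'backupFailed';
    logout;
    spool off;
    exit;
    define label 'backupFailed';
    echo 'ERROR: backup step failed';
    spool off;
    exit;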

  • Essbase ASO Cube query performance from OBI EE

    Hi all
    I have serious performance problems when I query an ASO cube from OBI EE. The problem arose when I implemented a filter on some dimensions of the model in the Business Model and Mapping layer. This filter is at level 0 of the dimension, and the values are obtained from a session variable in OBI EE. The objective is to apply filters depending on the user. For the session variable I have a table in a relational database with the relation between user and "access"; my dimensions (not all) have the users' "access" values as level-0 (duplicate) members.
    The session variable in OBI EE is filled with the row-wise option, so it has all the "access" values that correspond to the user (the :USER system variable).
    When I query only one of these filtered dimensions, the response is very fast. When I query one of these filtered dimensions and a metric, the response is fast (10 seconds). But when I query two of these filtered dimensions and a metric, the response takes 25 minutes. I checked the Essbase app log and found this:
    [Mon Nov 15 19:56:01 2010]Local/TestSec5/TestSec5/admin/Info(1013091)
    Received Command [MdxReport] from user [admin]
    [Mon Nov 15 20:28:28 2010]Local/TestSec5/TestSec5/admin/Info(1260039)
    MaxL DML Execution Elapsed Time : [1947.18] seconds
    When I look at the MDX query generated by OBI EE, I see that the aggregation over the filtered members of the crossjoin of the two dimensions is being done on the fly:
    With
    set [CATALOGO_INSTITUCIONAL2] as '[CATALOGO_INSTITUCIONAL].Generations(2).members'
    set [CATALOGO_PRESUPUESTARIO2] as '[CATALOGO_PRESUPUESTARIO].Generations(2).members'
    member [METRICAS_PRESUPUESTARIAS].[MS1] as 'AGGREGATE(filter(crossjoin (Descendants([CATALOGO_INSTITUCIONAL].currentmember,[CATALOGO_INSTITUCIONAL].Generations(7)),Descendants([CATALOGO_PRESUPUESTARIO].currentmember,[CATALOGO_PRESUPUESTARIO].Generations(7))),(([CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_ALIAS = "01.01" OR [CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_Name = "01.01")) AND (([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "G" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "G") OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "I0101" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "I0101") OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "S01" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "S01"))),METRICAS_PRESUPUESTARIAS.[Compromiso])', SOLVE_ORDER = 100
    select
    { [METRICAS_PRESUPUESTARIAS].[MS1]
    } on columns,
    NON EMPTY {crossjoin ({[CATALOGO_INSTITUCIONAL2]},{[CATALOGO_PRESUPUESTARIO2]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
    from [TestSec5.TestSec5]
    Can somebody tell me if it is possible to change the way OBI EE builds the query, or if it is possible to use previously materialized Essbase aggregations?
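    On the materialized-aggregations point, aggregate views are built with MaxL; a hedged sketch using query tracking so the view selection reflects the slow reports (app/db names taken from the log above):
    alter database TestSec5.TestSec5 enable query_tracking;
    /* run the representative OBI EE reports, then: */
    execute aggregate process on database TestSec5.TestSec5 based on query_data;
    Whether the MDX that OBI EE generates can take advantage of those views depends on the query shape, so this may not help with the filtered crossjoin above.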

    Hi Amol,
    1. On what basis did you estimate your cube at around 400GB to 600GB?
    2. If ASO is an option, its huge advantage lies in space; it does not take as much space as BSO.
    3. I have seen cubes whose size was around 300-400GB in BSO; when the same cube was made into ASO, it consumed 40GB-45GB.
    Hope this helps
    Sandeep Reddy Enti
    HCC
    http://hyperionconsutlancy.com/

  • Disconnect users in MaxL

    I get requests from my client in the form of a spreadsheet asking me to disconnect users that have been logged in for more than 1 hour. I know there are MaxL statements to disconnect all users, but I am looking at 10-20 users at a time.
    Is there a statement I could put in Excel and automate, instead of going through the process in EAS? It takes a lot of time and hurts my fingers.
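    For reference, a minimal MaxL sketch (the user name is a placeholder; display session shows login times, so the over-an-hour sessions can be picked out):
    display session all;
    alter system logout session by user 'jsmith';
    One 'logout session by user' line per name can then be generated from the spreadsheet with a simple Excel concatenation.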

    Hi rtk & 833738,
    Cameron and Glenn both make good points, and I think you should consider the KISS (Keep It Simple) route to an effective solution.
    To me it sounds like your cubes are getting excessively fragmented (typical during a heavy planning cycle). Are you doing nightly maintenance, such as an export-clear-load of your data, to help control this? Additionally, a good review of your tuning, optimization, and settings in Essbase may reveal some areas that can be improved. Finally, you may need to review your hardware: faster disks, more memory, etc., can have a significant positive impact on performance and stability at relatively low cost and with a short implementation time.
    If you still feel you need a utility to suit your unique requirements, this can certainly be created in fairly short order. Contact me at my email address and I can help you out with this. Also, consider Accelatis, a tool that can help manage this and more.
    Robb Salzmann

  • Routing logs to individual log file in multi rules_file MaxL

    Hi Gurus,
    I have been away from this forum for a long time. I have a situation here, and I am trying to find the best approach for operational benefits.
    We have an ASO cube (historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling last-24-months basis. The cube size is around 18.5 GB and the input-level data size is around 13 GB. For the monthly refresh, the current process rebuilds the cube from scratch, deleting the oldest of the 24 snapshots as it adds last month's snapshot. The entire process takes 13 hours because the server doesn't have enough CPUs to support parallel operations.
    Since we recently moved to 11.1.2.3 and have ample CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and go for incremental loads. Since the outline build is EPMA-driven, I'd first rebuild the dimensions with all data (essentially restructuring the DB with data after the metadata refresh) so that my history stays intact, and only then load the last month's data after clearing out the oldest snapshot.
    My MaxL script looks like below:
    /* Set up logs */
    set timestamp on;
    spool on to $(mxlLog).log;
    /* Connect to Essbase */
    login $key $essUser $key $essPwd on $essServer;
    alter application "$essApp" load database "$essDB";
    /* Disable User Access to DB */
    alter application "$essApp" disable connects;
    /* Unlock all objects */
    alter database "$essApp"."$essDB" unlock all objects;
    /* Clear all data for previous month*/
    alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
    /* Load SQL Data */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    /* Selects and build an aggregation that permits the database to grow by no more than 300% */
    execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
    /* build query tracking views */
    execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
    /* Enable Query Tracking */
    alter database "$essApp"."$essDB" enable query_tracking;
    /* Enable User Access to DB */
    alter application "$essApp" enable connects;
    logout;
    exit;
    I am able to achieve better performance, but it is not yet satisfactory, so I have a couple of queries:
    1. Can the statements above (the clear and the import) be tuned further? My major problem is clearing only one month's snapshot, where I need to clear one scenario and the designated first month.
    2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of one? My previous process wrote an error log per load rule in a separate file and consolidated them at the end of the batch run into a single file for the whole execution.
    Appreciate any help in this regard.
    Thanks,
    DD
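    On the second query, a hedged sketch of one way to get a separate error file per rules file: split the combined import into per-rules-file imports against an explicit load buffer, then commit the buffer once (statement shapes from the ASO load-buffer grammar; variables reused from the script above):
    alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 1;
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADDATA' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADJNLS' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADJNLS.err";
    /* ...one import per rules file... */
    import database "$essApp"."$essDB" data from load_buffer with buffer_id 1;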

    Thanks Celvin. I'd rather route the MaxL logs into one log file and consolidate into the batch logs instead of using multiple log files.
    Regarding the partial clear:
    My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clears was only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the Scenario->ACTUAL and Period->&CLEAR_PERIOD (SubVar) members used in the MDX clear script belong to dynamic-hierarchy dimensions.
    Is there a way I can rewrite the clear-data MDX so that it clears faster than this:
    <<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
    Does this clear MDX have any effect on the dynamic/stored hierarchy nature of the dimension? If not, what would be the optimized way to write this MDX?
    Thanks,
    DD
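    For what it's worth, a CrossJoin of two single-member sets reduces to a single tuple, so the region can at least be written more simply (a sketch; functionally equivalent, and any speed difference is environment-dependent):
    alter database "$essApp"."$essDB" clear data in region '{([ACTUAL],[&CLEAR_PERIOD])}' physical;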

  • Solution to Errno:7 in MaxL for Essbase Studio redeployment scripts

    This is an answer to an archived forum post that I found via a Google search.  The forum post no longer accepts replies.  The forum post contained an unanswered question about how to resolve "Errno:7" in MaxL in release 11.1.2.3 when trying to redeploy an outline through Essbase Studio.
    I encountered the same error in a client environment and could not find any resolution or references to this in the Knowledge Base, patches, release notes, etc.
    I wanted to post the resolution that worked for me, in case someone else encounters Errno:7 in MaxL.
    In this particular instance, the MaxL "deploy outline" script worked fine until we upgraded 11.1.2.1 to 11.1.2.3.  After upgrading, we could still connect to the Essbase Agent via MaxL, but the "deploy outline" script fails with "Not able to connect to BPM Server. Errno:7" and "BPM Connect Status: Error".  No evidence of the error could be found in the logs for the Essbase Agent, the Essbase application in question, and Essbase Studio.  Using the Essbase Studio Console to redeploy the outline worked fine.
    The root cause was learned when we ran the EPM Registry Editor to view a registry report.  The report showed that while Essbase Studio was correctly listening on port 5300, the web port was still on the old 11.1.2.1 port (9080) instead of the port 11.1.2.3 was expecting (12080).  This port number is not exposed to us if we go back into the EPM Configuration tool and drill down to Essbase Studio.  The config tool only allows us to change the relational database connection and the "datafiles" folder location.  Running "netstat -na" from the command prompt confirmed that the server was listening to 9080 instead of 12080.
    Using the EPM Registry Editor command-line tool to update the port's property value and then restarting the Essbase Studio service fixed the issue for us.  If you read the Essbase Studio documentation as I did, you may have had the impression that editing a file on the server would do the trick.  But the Readme for Essbase Studio 11.1.2.3 provides the real answer:
    "Starting in Release 11.1.2.3, the following Essbase Studio server properties are stored in the
    Oracle Hyperion Shared Services Registry database.
    The 11.1.2.3 Oracle Essbase Studio User's Guide describes all server properties as being in the
    server.properties file. To view or modify the settings in Shared Services Registry, use the
    epmsys_registry utility, described in the Oracle Enterprise Performance Management System
    Deployment Options Guide."
    I hope this helps and good luck!

    Hi Santy,
    Here's the original forum post: Essbase Studio Cube deployment via MaxL error
    In that thread, someone had questioned if an 11.1.2.2 MaxL client could still connect and bypass the error.  I happened to have a laptop handy with the 11.1.2.2 MaxL client installed on it and was able to test that.  The 11.1.2.2 MaxL client got the error as well.
    In my 11.1.2.3 environment I tried both the 32-bit and 64-bit MaxL runtimes and verified both were on the latest available Essbase patch set for 11.1.2.3.  Again, they still got the Errno:7 message.  The problem was only fixed after updating the "server.httpPort" property value via the epmsys_registry tool.
    Regards,
    - Dave

  • Essbase - Shared Services - Maxl - User creation

    Hi,
    I have an issue similar to [Automating User/Group creation & Assigning filters in Shared Services|http://forums.oracle.com/forums/thread.jspa?threadID=1009127]
    When trying to add internal groups to an external MSAD user, I get the following messages.
    When adding a group to an external user:
    alter user 'x29027' add 'GR_GROUP';
    MaxL returns:
    Statement executed with warnings.
    User x29027 does not exist
    => the system does not recognize the user.
    When trying to create this user first as an internal user (based on the settings of another external user):
    create or replace user 'x29027' identified by 'password' as 'i09740';
    MaxL returns:
    Statement executed with warnings.
    A user/group with the same name (x29027) exists at Shared Services
    => the system does recognize the user in MSAD!
    ===> the two statements seem contradictory!
    Other remarks/thoughts:
    - we have two MSAD links (to two different domains); does this matter?
    - it makes no difference when addressing users as x29027@MSAD_FIB (a syntax similar to the HSS security report output)
    - is there any possibility of creating a user internally first (using the 'as' option to copy settings from another user) and then making it external? (like alter user 'Test_EDR4' set type external;)
    Thanks in advance
    Erik
    Environment: Essbase 9.3.1.3 with Shared Services

    Hi Erik,
    When you create a user in Essbase, the user is created both in Essbase and in Shared Services,
    whereas when you create a user in Shared Services, the user is not created in Essbase until you perform a refresh.
    In your case you can create the external user in Essbase by using: create user 'x29027' type external;
    This creates the user in Essbase, so the user is recognised there.
    Now you can add him to any group.
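    Putting the two steps together, a short sketch (the 'add to group' phrasing follows the MaxL alter user grammar; names taken from the question):
    create user 'x29027' type external;
    alter user 'x29027' add to group 'GR_GROUP';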
    - Krish

  • Multiple Dimension Builds using MAXL

    We currently use ESSCMD to perform multiple dimension builds and data loads. We have been using INCBUILDDIM in ESSCMD to update the outline with multiple files without restructuring until the last file. Is there an equivalent command in MaxL?

    Yes, you can mimic the BEGININCBUILDDIM command from ESSCMD in version 6.5. I've appended a sample script below to show that functionality. Until you upgrade to v6.5 you could shell to ESSCMD and do your dim builds there.
    login 'xxx' 'PASSWORD' on 'LOCALHOST';
    spool on to 'd:\ESSBASE\CLIENT\OUTPUT.LOG';
    alter system load application 'SAMPTEST';
    alter application 'SAMPTEST' load database 'BASIC';
    import database 'SAMPTEST'.'BASIC' dimensions
    from server text data_file 'ACCTBLD1.TXT' using server rules_file 'ACTBLD.RUL',
    from server text data_file 'ACCTBLD2.TXT' using server rules_file 'ACTBLD.RUL'
    preserve all data on error append to 'ERR.OUT';
    spool off;
    logout;
    exit;

  • Maxl statement create or replace database freezes essbaseserver

    We run MaxL scripts on a daily basis to copy a database from one application to another,
    with statements like:
    alter system logout session on database 'Devel_H'.'HRM';
    create or replace database 'INform'.'HRM' as 'Devel_H'.'HRM';
    We just migrated (from Essbase 7.1) to Essbase 9.3.1 on a Windows 2003 64-bit server.
    Overall performance looks better than before (calcs and retrievals, for example),
    but this process of creating the database takes far longer and even freezes everything else on the server.
    This happens with both large (20 GB) and small (3 GB) databases; only the length of the freeze differs.
    EAS sessions and WA sessions are also frozen for the whole period of the create.
    What could cause this behavior?

    Since I suspect this is an OS matter and not an Essbase one, another question that might lead to a solution:
    what happens at the OS level during a "create or replace ..." command?
    It is surely not a simple xcopy or copy, since the files are filled step by step (terribly slowly) rather than just copied.
