Routing logs to individual log files in a multi rules_file MaxL import

Hi Gurus,
I have been away from this forum for quite a long time. I have a situation here and am trying to find the best approach from an operational standpoint.
We have an ASO cube (Historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling 24-month basis. The cube size is around 18.5 GB and the input-level data is around 13 GB. The current monthly refresh rebuilds the cube from scratch, deleting the oldest of the 24 snapshots before adding the latest month's snapshot. The entire process takes 13 hours because the server doesn't have enough CPUs to support parallel operations.
Since we recently moved to 11.1.2.3 and now have ample CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and go for an incremental load. Before that, since the outline build is EPMA driven, I'd like to rebuild the dimensions with "all data" (essentially restructuring the database with its data after the metadata refresh) so that my history stays intact, and only then clear out the oldest snapshot and load the last month's data.
My MaxL script looks like this:
/* Set up logs */
set timestamp on;
spool on to $(mxlLog).log;
/* Connect to Essbase */
login $key $essUser $key $essPwd on $essServer;
alter application "$essApp" load database "$essDB";
/* Disable User Access to DB */
alter application "$essApp" disable connects;
/* Unlock all objects */
alter database "$essApp"."$essDB" unlock all objects;
/* Clear all data for previous month*/
alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
/* Load SQL Data */
import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
/* Select and build an aggregation that permits the database to grow by no more than 300% */
execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
/* Build aggregate views from the saved view selection file */
execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
/* Enable Query Tracking */
alter database "$essApp"."$essDB" enable query_tracking;
/* Enable User Access to DB */
alter application "$essApp" enable connects;
logout;
exit;
I am able to achieve a performance gain, but it is not satisfactory, so I have a couple of questions:
1. Can the highlighted statements above be tuned further? My major problem is clearing only one month's snapshot, where I need to clear a single scenario and the designated first month.
2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of a single one? My previous process wrote an error log for each load rule to its own file and consolidated them into one file for the whole batch at the end of the run. A rough idea of what I mean is sketched below.
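For reference, here is a rough, untested sketch (my own assumption, not something I have run yet) of what I mean by per-rule error files: splitting the single multi-rule import into one import statement per rules_file, all writing into the same load buffer and each with its own "on error write to" target, followed by a final commit of the buffer. I assume this would give up the parallelism of the multiple rules_file form, since the imports would run one after another.
/* Untested sketch: one import per rules_file, each with its own error file */
alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 1;
import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using rules_file 'LOADDATA' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using rules_file 'LOADJNLS' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADJNLS.err";
/* ...repeat for LOADFX, LOAD_J1, LOAD_J2, LOAD_J3 and LOADDELQ... */
import database "$essApp"."$essDB" data from load_buffer with buffer_id 1;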
Appreciate any help in this regard.
Thanks,
DD

Thanks Celvin. I'd rather route the MaxL logs into one log file and consolidate it into the batch logs instead of using multiple log files.
Regarding the partial clear:
My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clear is only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the Scenario->ACTUAL and Period->&CLEAR_PERIOD (SubVar) members referenced in the MDX clear script belong to dynamic hierarchies.
Is there a way I can rewrite the clear-data MDX in a better way so that it clears faster than this:
<<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
Does this clear MDX behave any differently depending on whether the hierarchies involved are dynamic or stored, and if not, what would be the optimized way to write it?
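For instance, since the region is a single Scenario/Period intersection, would a one-tuple set (untested on my side, just an assumption that it describes the same region) behave any differently from the CrossJoin?
alter database "$essApp"."$essDB" clear data in region '{([ACTUAL],[&CLEAR_PERIOD])}' physical;
My understanding is that both forms resolve to the same slice, so the clear time should be dominated by the physical removal of cells rather than by the MDX itself, but please correct me if that is wrong.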
Thanks,
DD

Similar Messages

  • You are unable to log in to the File Vault user account "myaccount "...

I know there are various posts already out there on remedies for recovering your data stored on a FileVault account when you receive the following message at the login screen; *"You are unable to log in to the File Vault user account "myaccount " at this time"*, but this genuinely worked for me despite AppleCare providing absolutely no assistance whatsoever. In fact, if I had followed their advice I'd be inconsolable right now, having wiped my MacBook Pro and contemplating the prospect of rewriting my two essays due in 3 days' time!
I, in a moment of sheer stupidity, decided to move the sparsebundle file in my one and only account to the trash. Thinking nothing of my foolish actions, I shut down for the evening without a care in the world. The next day I started up my computer as usual, and as usual I was prompted at the login screen for my password. I entered the correct password, but was alarmed to see the message above flash before my eyes. I won't bore you all with what I did over the weekend while waiting for AppleCare to open again on Monday morning. Anyway, this post is specifically for people who have put the sparsebundle of their FileVault-enabled account in the trash (NOT anything else!) without emptying it, of course! The other prerequisite is that you must REMEMBER YOUR FILEVAULT ACCOUNT PASSWORD!
    1. Firstly, you must insert *Disc 1 of the Mac OS X Install* discs.
    2. Restart your computer holding down the letter S (make sure you are holding this down BEFORE the start up noise sounds)
    3. Select the appropriate language and continue to the next screen (DO NOT go past the next screen, the WELCOME screen)
    4. At the grey bar at the top, under Utilities, select *Reset Password*
    5. Select the Administrator/Root account and proceed to change the password of this account to test
    6. Confirm the password by reentering it and click Save
    7. Restart your computer and at the login screen you should now be able to select an account named Other
    8. The username for this account is root and the password is test (the password you entered earlier)
    9. Using Finder, locate the Terminal utility, which can be found in *Applications --> Utilities*
    10. Enter the following, ignoring the bold of course (pay attention to lower cases AND spaces!): *defaults write com.apple.finder AppleShowAllFiles TRUE*
    11. Hit Enter
    12. On the next line, enter: *killall Finder*
    13. Hit Enter again
    14. Type: exit on the next line and close Terminal
    15. This has enabled the hidden files on your computer to be visible
    16. You then need to locate the sparsebundle file in the trash of your usual account folder (it could be 501, so search for that too) whilst logged in to the administrator account
    17. Once you have found it, click *Go to Folder* under Go in the grey bar and type /Users/
    18. Create a *new folder* at this location with a new username
    19. Move the sparsebundle from its present location to the folder you have just created
    20. Click Get Info on the new folder, and at the very bottom click *Apply to enclosed items*
    21. In *System Preferences --> Accounts*, create a new user with EXACTLY the same name as the folder you created (eg. Folder name = burtreynolds, new user = burtreynolds)
22. A window should appear if you have done the above correctly stating *A folder in Users folder already has same name, would you like to use it?* Click OK
    23. Click *Show All* at the top of the Accounts window
    24. Restart your system and log in to the new account you have created
    25. The sparsebundle should now be visible
    26. Double-click on the sparsebundle, it will prompt you to enter a password
    27. Enter the password of your former account (if you have genuinely forgotten this password, I honestly can't help any further at this point)
    28. If the password is correct, the sparsebundle will automatically mount and you will have access to all the files
    29. NEVER EVER USE FILEVAULT AGAIN AND BACK UP ALL DATA YOU DON'T WANT TO LOSE!!!
    The above worked for me, and to say I'm mildly annoyed with AppleCare is, well, putting it mildly really!


  • OSB logging - Process writes logs in both osb_server1.out and log file ..?

    Hello,
I have a few OSB proxy services and we have configured a few log operations for logging, but while testing I noticed that logs are getting written to both the osb_server1.out and osb_server1.log files. I don't want logs written to the osb_server1.out file.
I am running my WebLogic server in development mode.
Could someone please advise me on what I am doing wrong here?


  • Concurrently logging into the same file from 2 VMs

Suppose two processes are running on the same machine.
There is only one file name specified in logging.properties, e.g., test.log.
VM 1 is logging to file test.log while VM 2 does the same thing to the same file.
Based on the javadoc of FileHandler, the second VM will create another file,
like test.0.log, to log its data.
Is that true?
If true, then when VM1 stops logging and VM2 starts logging again, will the new logging messages from VM2 go to test.log?
    Thanks in advance.

I have tested it out.
When two applications in two processes log to the same disk file (e.g. test.log) specified by logging.properties at the same time, the two processes actually log to two different files: test.log.0 and test.log.1.
When two applications in two processes log to the same disk file specified by logging.properties as test.log at different times, e.g., the first process exits and then the second process executes, there are still two different files created, one for each process: test.log.0 and test.log.1.

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation for file structure and settings in Data Guard environment. But I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a data guard environment with 4 primary databases and 4 physical standby databases. Based on Oracle documents, there are 3 redo logs. They are: online redo logs, archived redo logs and standby redo logs. The basic settings are:
1. Online redo logs --- This redo log must be on the primary database and the logical standby database, but it is not strictly necessary on the physical standby because the physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can the standby operate after a failover makes it the new primary? In my standby databases, online redo logs have been set up.
2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need this log file set up. The primary uses it to archive log files and ship them to the standby; the standby uses it to receive the archived redo and apply it to the database.
3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems this standby redo log should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that the standby redo log directory and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts. What is the best setting or structure on the primary and standby databases?

    FZheng:
Thanks for your input. It is clear that we need all 3 types of redo logs on both databases; you answered my question.
But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
This was set up by someone I don't know. Is this setting OK, or should I change the standby redo log on the standby DB to 512M to exactly match the redo log size on the primary?

  • DAQmx Logging New Features - Split files

    Hi All,
    With respect to this link - http://forums.ni.com/t5/Multifunction-DAQ/DAQmx-Logging-New-Features-Split-files-non-buffered-loggin... 
My requirement is to log each one minute of data in a single TDMS file (irrespective of the sampling rate).
    For this,
    I do want to understand the "Logging - Samples per File" property node value.
The value which I am passing needs to be aligned to some "X" number.
    For Example,
    I am requesting "120000000 Samples per File"
    But the corrected value is "120171520 Samples per File"
What is the relation between the two numbers, and how can I arrive at this number programmatically?
    Can you please help me on this?
    Thank you,
    Yogesh Redemptor

    Hello Mr.Yogesh!
This error can be resolved by wiring the Logging.FileWriteSize input of the DAQmx Read property node to the required number of samples (120,000,000).
    The corrected number in this case (120171520) is related to the size of the volume sector in bytes.
    Regards,
    Raghu

  • GC Logs do not log to the specified file

    Hi,
I am facing a weird problem while logging GC logs to a log file.
The command line I use is this:
-Xloggc:D:\gc_logs\gc_logs-%date:~4,2%%date:~7,2%%date:~10,4%-%time:~0,2%-%time:~3,2%-%time:~6,2%.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -verbose:gc
Whenever my server stops, upon restart I see that the GC logs go directly to the console but fail to be written to the expected log file.
    I have verified the D:\gc_logs folder exists.
    I am not sure if there is anything I am missing here that is causing this problem.
    I use JDK 1.6.0_10 and JBoss 4.2.3.GA Server.

Check the permissions of the folder; hopefully it's not a read-only folder. I have faced the same problem on a Linux box.

  • Log me in pro file transfer

Just installed a new computer. I log on to LogMeIn Pro with no problem, but I can't transfer files from one computer to another... Any suggestions? Thanks, Marc B.

Are you going in through Safari, or using the app from the store? If you are using the app and getting that error, you should drop the developer a note. The whole point of an app is that it should have all the code needed to do what you are trying to do.

  • Cannot find acsbackup_instance.log in the backup file

    hi,
I have an ACS 5.2 (Evaluation License) setup installed on VMware with patch 11. When I try to restore an earlier backup of ACS, it gives me the error "Cannot find acsbackup_instance.log in the backup file".
I am using a FileZilla FTP server for the backup transfer.
    suggest... Thanks in advance..
    Regards,
    Avi.

That is strange!! Just make sure you are using the TFTP repository and not the DISK repository. It looks to me like you are trying to restore from the DISK and not the TFTP repository, and it could not find the backup file in the DISK repository.
If you double-check the repository and are 100% sure that the TFTP repository is being used, then I would recommend opening a TAC case, and please give us feedback on what the issue was if TAC discovers the root cause.
    btw, what you see if you apply this command on CLI:
    show backup history
    Thanks.
    Amjad
    Rating useful replies is more useful than saying "Thank you"

  • Ttcwerrors.log and ttcwmesg.log file locations

    Is there an option we can set in the ttcrsagent.options file to change the location of the ttcwerrors.log and ttcwmesg.log files?
    We can change the location of the daemon user and error log files by setting the -supportlog and -userlog options in ttendaemon.options, so we were hoping we could do the same in ttcrsagent.options.
    Thanks,
    -Jules

    Unfortunately, no. The ttcw log file location is not configurable
    Simon

  • SSIS log provider for Text files - Clean logs

    I have SQL Server 2012 with package deployment model.
I'm wondering what the best practice for logging is.
Does SSIS logging create new log files each day, or does it always log to the same file forever?
How do I keep the size of the files in the logging folder under control? Is it a manual process to clean (remove) logs, or is there an automated way to remove logs older than 30 days, etc.?
    Kenny_I

In all SSIS versions the logging to file is an append operation, or create-then-append if the log file does not exist. To remedy the file growth one needs to create a scheduled job.
Another option is to create a new file in the package (http://goo.gl/4c1O3n); this is helpful if you want to get rid of the files older than x days. IMO the easiest way to remove these is through
    Using Robocopy to delete old files from folder
    Arthur My Blog

  • Routing and Remote Access Logs (Windows Server 2008 R2)

    Hi,
    I have a Windows 2008 R2 server running Routing and Remote access and users are using PPTP VPN's to connect to our network.
    I have been asked to find logs for the following for connections in to our server
    Username used for connection
    Computer Name
    IP Address used by computer connecting
    Start/End time of VPN session
    Date
    Encryption used
    I found an article stating to enable RRAS logs you need to run the following command
It said to enable RAS logs, run the command "netsh ras set tracing * enabled". I did so and found a series of logs created in C:\Windows\tracing.
None of them appear to contain the information I am looking for, so I was wondering whether I am doing this correctly and, if not, how I am meant to extract this information.
    If you require any more details just let me know.
    Kind Regards
    David

    Hi,
I can't be sure which article you have read, but for 2008 R2 the KB describes enabling the RAS log and the debug log as follows; I recommend you try the method mentioned in the KB.
    To configure RRAS to enable logging
    1. Start Server Manager. Click Start, click Administrative Tools, and then click Server Manager.
    2. In the navigation tree, expand Roles, and then expand Network Policy and Access Services.
    3. Right-click Routing and Remote Access, and then click Properties.
    4. On the Logging tab, select Log errors only, Log errors and warnings, or Log all events, depending on how much information you want to capture.
    5. Click OK to save your changes.
    The related KB:
    RRAS: Logging should be enabled on the RRAS server
    http://technet.microsoft.com/zh-cn/library/ee922651(v=ws.10).aspx
    Hope this helps.

  • Since Prelude doesn't work with Red footage, how can I add logging info to a file in Premiere Pro

    Since Prelude doesn't work with Red footage, how can I add logging info to a file in Premiere Pro CS6?

    RED footage has a restriction on metadata.  We are looking into what it would take to get full metadata support into our system, but you cannot tag red footage at this time.

  • Log job output to file - information missing? dbcc checkdb

    Hello
    Not sure where to put this question.. feel free to move it if necessary.
    I have a job which runs DBCC CHECKDB WITH PHYSICAL_ONLY on every database on the instance which is read_write. Problem is that I want to get the output of the result to make sure that every database is actually performing this command. When viewing normal
    history by right clicking the job and select "view history" I get cut off information due to the lack of space allowed (1000 chars default I think).
    So I tried to log it to table and view it by msdb.dbo.sp_help_jobsteplog but the information here does not cover all databases as well.. So then I tried to log it to a file. But I get the same information there, and not all databases are logged.
    So I start to wonder if the dbcc checkdb job does not get executed on the other databases?? If I look in the Current SQL Server Logs I only see that DBCC CHECKDB WITH PHYSICAL_ONLY executed on the same databases that is listed in my output file.
What can I do? The instance contains over 400 databases, but only approximately 70 are logged as having run DBCC CHECKDB.
    This is my command:
    SET NOCOUNT ON
    EXEC sp_MSforeachdb @command1='
    IF NOT(SELECT DATABASEPROPERTYEX(''?'',''Updateability''))=''READ_ONLY''
    BEGIN
    DBCC CHECKDB (?) WITH PHYSICAL_ONLY
    END'

    There is a known issue with sp_MSforeachdb where under heavy load the procedure can actually miss databases with no errors. That can be the case in your environment. Aaron Bertrand wrote about the issue and solutions for the problems in the article:
    Making a more reliable and flexible sp_MSforeachdb.
    Ana Mihalj

  • Acfs.log.0 oracleoks log file

    A trivial question about acfs.log.N files
    (i.e. acfs.log.0, acfs.log.1, acfs.log.2, acfs.log.3, etc, 1 GB size each),
    they can be found inside directory:
       CRS_HOME/log/<hostname>/acfs/kernel
    together with a small file: file.order
that lists the temporal order by which to consider them.
    Is it safe to delete them (if I don't need anymore) using rm -f acfs.log.*?
    According to lsof no process is using them at the moment.
    Also: there is a way to limit the number of files created?
Sorry to bother you, but I'm not able to find information about them in the Oracle web sites, in the docs, or by googling around.
It looks like they are log files of oracleoks (Oracle Kernel Services, a closed-source Linux module loaded into the kernel after the Grid installation).
    It's a 11.2.0.4 CRS installation, on one node I have a few acfs.log.N files, each filled with records like:
    ofs_aio_writev: OfsFindNonContigSpaceFromGBM failed with status 0xc00000007f
    thanks
    Oscar

    Hi Oscar,
Regarding those GBM messages:
do you see any kind of hang while trying to stop the ACFS filesystem with srvctl?
I feel it would be worth opening a service request with Oracle to investigate what exactly is causing these messages, rather than just removing the files.
If you are going to open a service request, please gather the related information below and share it in the service request.
Refer to: What diagnostic information to collect for ADVM/ACFS related issues (Doc ID 885363.1)
While gathering information, you can also use the TFA tool, which is installed by default and makes gathering the information easier.
Refer to: TFA Collector - Tool for Enhanced Diagnostic Gathering (Doc ID 1513912.1)
    Regards,
    Aritra
