Log file in RSADRLSM02: regional structure

Hello,
can somebody tell me what the structure of the "log. file name log" is? It is passed as a mandatory parameter to the program RSADRLSM02, which fills the regional structure.
Thank you,
Tom.

Puneet
I have a question related to address validation/verification. We want to understand the extent of the checks SAP offers via RSADRLSM02.
Some have suggested that we go with external software for address checks as opposed to using SAP regional structures.
We are looking for some feedback from customers who have used SAP regional structures before we proceed one way or the other.
Would you mind sharing your experience or pointing me to a contact?
Thanks
Sudhakar

Similar Messages

  • How to create a log file for bapi return structure

    Hi ppl,
         I am using BAPI_PO_CHANGE in a classic report to mark the delivery of POs as complete after many validations. Now my concern is that I have been asked to create a log file detailing the errors for the POs, which come back in the BAPI return structure.
       I don't know how to do this; can anyone help at the earliest?
    Regards,
    Bharathy.

    hi
    pls see this thread...
    it may help you...
    /people/kamalkumar.ramakrishnan/blog/2007/01/10/a-primer-on-using-and-creating-sap-application-log
    thx
    pavan
    *pls mark for helpful answers
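    For what it's worth, a minimal ABAP sketch of the application-log approach from that blog (names are hypothetical; it assumes the BAPI messages were collected in an internal table LT_RETURN of type BAPIRET2):
    DATA: LS_LOG     TYPE BAL_S_LOG,
          LV_HANDLE  TYPE BALLOGHNDL,
          LT_HANDLES TYPE BAL_T_LOGH,
          LS_MSG     TYPE BAL_S_MSG,
          LS_RETURN  TYPE BAPIRET2.
    " Create a log header (ZPO_LOG is a hypothetical log object)
    LS_LOG-OBJECT = 'ZPO_LOG'.
    CALL FUNCTION 'BAL_LOG_CREATE'
      EXPORTING
        I_S_LOG      = LS_LOG
      IMPORTING
        E_LOG_HANDLE = LV_HANDLE.
    " Map each message from the BAPI return table into the log
    LOOP AT LT_RETURN INTO LS_RETURN.
      CLEAR LS_MSG.
      LS_MSG-MSGTY = LS_RETURN-TYPE.
      LS_MSG-MSGID = LS_RETURN-ID.
      LS_MSG-MSGNO = LS_RETURN-NUMBER.
      LS_MSG-MSGV1 = LS_RETURN-MESSAGE_V1.
      LS_MSG-MSGV2 = LS_RETURN-MESSAGE_V2.
      LS_MSG-MSGV3 = LS_RETURN-MESSAGE_V3.
      LS_MSG-MSGV4 = LS_RETURN-MESSAGE_V4.
      CALL FUNCTION 'BAL_LOG_MSG_ADD'
        EXPORTING
          I_LOG_HANDLE = LV_HANDLE
          I_S_MSG      = LS_MSG.
    ENDLOOP.
    " Save the log to the database; it can then be viewed in transaction SLG1
    APPEND LV_HANDLE TO LT_HANDLES.
    CALL FUNCTION 'BAL_DB_SAVE'
      EXPORTING
        I_T_LOG_HANDLE = LT_HANDLES.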

  • Multiple log files using Log4j

    Hello,
    I want to generate log files based on package structure, e.g. com.temp.test logs to test.log, and I also have an application-level log file, app.log.
    My requirement is that whatever is logged to test.log should not also be logged to app.log. This is my log4j.properties file:
    # Log4j configuration file.
    # Available levels are DEBUG, INFO, WARN, ERROR, FATAL
    # Default logger
    log4j.rootLogger=DEBUG, PFILE
    log4j.logger.com.temp.test=DEBUG,TEST
    # PFILE is the primary log file
    log4j.appender.PFILE=org.apache.log4j.RollingFileAppender
    log4j.appender.PFILE.File=./App.log
    log4j.appender.PFILE.MaxFileSize=5120KB
    log4j.appender.PFILE.MaxBackupIndex=10
    #log4j.appender.PFILE.Threshold=DEBUG
    log4j.appender.PFILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.PFILE.layout.ConversionPattern=%p %d[%l][%C] %m%n
    #log4j.appender.PFILE.layout.ConversionPattern=%p %d %m%n
    log4j.appender.TEST=org.apache.log4j.RollingFileAppender
    log4j.appender.TEST.File=./test.log
    log4j.appender.TEST.MaxFileSize=5120KB
    log4j.appender.TEST.MaxBackupIndex=10
    log4j.appender.TEST.layout=org.apache.log4j.PatternLayout
    log4j.appender.TEST.layout.ConversionPattern=%p %d[%l][%C] %m%n
    Can you help me?

    You have to configure the temp logger so that it does not send its info on to the root logger.
    For this, you can use the additivity flag.
    # Default logger
    log4j.rootLogger=DEBUG, PFILE
    log4j.additivity.com.temp.test=false
    log4j.logger.com.temp.test=DEBUG,TEST
    The rest of the file remains the same.

  • Standby Redo Log Files and Directory Structure in Standby Site

    Hi Gurus,
    I just want to confirm: I know that if the directory structure is different, I need to set these two parameters in the pfile.
    On the primary site:
    DB_FILE_NAME_CONVERT='standby','primary'
    LOG_FILE_NAME_CONVERT='standby','primary'
    On the standby site:
    DB_FILE_NAME_CONVERT='primary','standby'
    LOG_FILE_NAME_CONVERT='primary','standby'
    But I want to confirm whether I need to give the complete directory path in both of the above parameters, like:
    DB_FILE_NAME_CONVERT='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    LOG_FILE_NAME_CONVERT='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    Second confusion:
    If standby redo log files created on the primary are transferred to the standby under the above-mentioned directory structure, will restoring the backup of the primary db along with the standby control file impact the physical standby redo logs placed in the above-mentioned location?
    Thanks in advance for your help

    Hello,
    Regarding your 1st question, you need to provide the complete path and not just the directory name.
    On the standby:
    db_file_name_convert='<Full path of the datafiles on primary server>','<full path of the datafiles to be stored on the standby server>';
    log_file_name_convert='<Full path of the redo logfiles on primary server>','<full path of the redo logfiles on the standby server>';
    Regarding your second confusion:
    "will restoring the backup of the primary db along with the standby control file impact the physical standby redo logs placed in the above-mentioned location?"
    How are you creating the standby database ? Using RMAN duplicate or through the restore/recovery options ?
    You can create the standby redo logs later.
    Regards,
    Shivananda

  • Access Connections log file, no clear structure, behavior cryptic

    I am troubleshooting an issue with Access Connections ver. 4.42 running on a Lenovo ThinkPad T61.
    I enabled the log, using the logging tab for the Diagnostic option in the Tools menu.
    The generated log is in HTML format, but the contents are very difficult to understand.
    I think the developers put this in for debugging, and they (I think) know what every entry means.
    Does someone know the meaning of the entries in the log file AcconAdvanced.html?
    Thanks in advance.
    filmancco

    Hmm..  that is a bit strange...  The 3945 card in the T60 and the 4965 card use the same driver, so you'd think the problem should be showing on both systems.
    When you switch to another profile and you get the pop-up window showing progress, at what point of it do you get a red X?
    Also, if you click "help me fix this", what error is listed?
    I presume this is with v4.42 of AC? You might want to make sure you also have the latest hotkey and power management drivers if you don't have them already...
    Hotkeys:
    http://www-307.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-68000
    Power Management driver:
    http://www-307.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-4GXPEG
    Don't use the driver from Intel's site, Access Connections needs to use the Lenovo driver in order to function properly.
    But, you could uninstall Access Connections and the Lenovo version of the driver and download/install the Intel driver with their PROSet application to see if it has the same behavior. That would help determine whether it's an AC problem or a driver issue.
    As for the disconnecting problem, can try changing the Power Management and Roaming Aggressiveness of the card.
    Edit each profile under Access Connections and on the Wireless Settings page, go into the Settings for Advanced Configuration.  Set the Power Save Mode to Low (best network performance).
    Then open Device Manager and open the properties of the Intel wireless card.  Go to the Advanced tab and change these two settings.
         Power Management: Uncheck Use default value and move the slider all the way to the right to Highest.
         Roaming Aggressiveness: Uncheck Use default value and move the slider all the way to the left to Lowest.  Click Ok and exit.  Then reboot the system.
    You may need to tweak the Roaming Aggressiveness setting to a left-of-middle position if you find yourself not roaming soon enough.
    Try that and see if it helps....

  • CRM regional structure

    Hi friends,
    I want to know how to upload data into the CRM regional structure.
    Your suggestions / comments are welcome.
    Clive

    Read in reference data
    Copy postal reference data into the regional structure tables.
    Use the program RSADRLSM02.
    Read the program documentation.
    See note 132948.
    Fill the regional structure with postal codes, cities, streets, ...
    Title
    Read in postal codes, cities, districts, streets, PO Boxes
    Purpose
    This program puts data into the regional structure, i.e. it reads postal codes, cities, districts, streets, PO Boxes and their objects from external media into the R/3 System, independently of the format and structure of the external data.
    Prerequisites
    This program processes files of a particular internal format. Such files can be created from external source data with the LSM Workbench, using various pre-defined transfer objects. The LSM Workbench is used as a mapping tool which maps the data structures of various external media at the interface of this program. This makes the program independent of the format of the external data.
    The LSM Workbench (Release 1.0 or higher) must be installed.
    The procedure is described in note 132948.
    Features
    1. Data entry
    The data to be processed is read directly from the application server. The logical name of the data to be processed must be specified. The file names are specified per object (city, street, ...), i.e. it must be made known in which file the streets are passed, in which the cities, and so on. If several or all objects are passed in one file, the same logical file name must be specified several times.
    The logical file name definition and the physical file name and path assignment can be performed with the LSM Workbench.
    2. Log/statistics
    A log is created during processing. It is written to an application server file, together with statistics of the objects read and created, for further processing. An existing logical file name must be specified.
    The log file is line-oriented, i.e. it can be read record by record; see also the documentation of the ABAP command OPEN DATASET dsn IN TEXT MODE, and the sketch at the end of this reply.
    3. Default creation language/country key
    The default creation language and the country for which the data are to be created must be specified.
    4. Storage restriction
    To avoid a storage overflow, you can specify an approximate value in kilobytes for how big an internal table in ABAP memory may become without endangering the program. The specified default value should only be changed if absolutely necessary.
    5. Internal and external number assignment
    This program uses external number assignment, but in rare cases certain objects cannot be created consistently with external numbers. If the 'internal number assignment' flag is set, the object concerned is created with another number, which is logged. Otherwise such objects are not created.
    6. Restart after program cancellation
    If the program is cancelled, you can restart processing later where it was cancelled. The restart position can either be determined automatically or specified directly. A manually entered position is not reprocessed; processing restarts with the next record.
    Notes
    o The program should run in the background with variants, because reading the data can take a long time.
    o The steps required for reading data into the regional structure are described in detail in note 132948.
    o Note 132948 also describes how the data read in can be updated later.
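    Since the log file is line-oriented, it can be read back record by record with a plain dataset loop. A minimal sketch (the physical path is hypothetical; in a real program you would resolve it from the logical file name, e.g. with FILE_GET_NAME; the ENCODING addition applies to Unicode releases, while the documentation's plain OPEN DATASET dsn IN TEXT MODE is the older form):
    DATA: LV_FILE TYPE STRING VALUE '/usr/sap/trans/data/regstruct.log', " hypothetical path
          LV_LINE TYPE STRING.
    OPEN DATASET LV_FILE FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF SY-SUBRC <> 0.
      WRITE: / 'Could not open log file'.
      RETURN.
    ENDIF.
    DO.
      READ DATASET LV_FILE INTO LV_LINE.
      IF SY-SUBRC <> 0.
        EXIT. " end of file
      ENDIF.
      WRITE: / LV_LINE. " one log record per line
    ENDDO.
    CLOSE DATASET LV_FILE.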
    Hope this helps                                                                  
    Julius

  • Automatic import of mass data into the regional structure - Program RSADRLSM01

    Hello,
    regarding the automatic import of mass data into the regional structure via program RSADRLSM02: we are working on replacing our third-party provider.
    That is why we need to delete all the data imported from the city file, and the references, before importing the new provider's data.
    We have checked the SAP procedure defined in SAP note 132948 and the mentioned program RSADRLSM01, but we need to confirm whether the regional structure information recorded in old documents saved in the system could be impacted if the program RSADRLSM01 is executed.
    Any experience with this kind of process?
    Thanks in advance.
    Juan Carlos

    Since no one has replied - why not just try this in your test system and see what happens?

  • Save Log file from Disk Utility App running from Recovery HD?

    Background:
    I had an issue with QT and screen recording; it was coming out solid green. I read on Apple Support that I should restart holding down Command-R. I did that, booted into Recovery HD, then opened Disk Utility. I repaired permissions, then decided to repair my HDD for good measure. I noticed that Disk Utility did make a repair to my HDD. Something like "found 57" instead of "56" files in the directory on the HDD; directory needs to be updated. Not sure, I just remember those numbers… "56" is what the system knew of but there were really "57" on the HDD. I sound ignorant… I know.
    After I rebooted, QuickTime worked correctly (maybe merely repairing permissions was the solution); however, another issue I had been struggling with for years was also resolved!
    I have been having an issue for years with an Icy Dock external enclosure in RAID 1 mode with my older 2009 iMac. The issue remained after upgrading from Snow Leopard to Lion to Mountain Lion, and after resetting PRAM, repairing permissions, repairing its HDD, etc. many times over. After powering up the enclosure a message would appear: "The disk you inserted was not readable by this computer." "Initialize…" "Ignore" "Eject". Then the volume would mount on the Desktop behind the warning and work perfectly. I would then click "Ignore". I had modified the old iMac with an internal SSD in place of the optical drive… at one point.
    I recently sold that iMac and bought a 2012 iMac.
    Regardless, I continued to have the same issue with the enclosure and my new iMac. However, after repairing the internal HDD today, the external Icy Dock RAID now mounts without giving me a warning to "Initialize…" "Ignore" "Eject". This warning was appearing just before I repaired permissions and repaired the internal HDD on the iMac using the Recovery HD. In fact, the reason I was trying to use QT screen capture was to record the terminal window while running a "diskutil activity" command during power-on of my external enclosure, to send to Icy Dock to troubleshoot this cross-computer and cross-OS incompatibility with their enclosure.
    My external RAID enclosure wasn't even powered on during the repair of the internal HDD?! So how could a brand new iMac's HDD directory be affected in such a way by an external RAID enclosure as to make changes to its directory? And how could an issue with mounting an external RAID enclosure be corrected by repairing the directory structure on the internal HDD of the iMac? Could it be a conflict with USB IDs? Are those written to the directory? The internal and external sharing the same ID, and the external having to be unmounted and remounted with a new ID each time? How could it have been repaired? Lots of questions…
    Main Question:
    Anyway, I want to copy the Disk Utility log from my Recovery HD session so I can see exactly what was repaired and report it to Icy Dock. But I am not sure whether the log files are saved when rebooting from the Recovery HD, and I am not sure how to find the log file at this point. Are log files saved when running Disk Utility from the Recovery HD?
    Sorry for the length.

    Oh crap. The enclosure was working properly, now it's exhibiting the same problem as before and I haven't even rebooted the computer yet. So, what's up?

  • Why multiple log files are created while using transactions in Berkeley DB

    We are using the Berkeley DB Java Edition DB base API. We have already read/written a CDR file of 9 lakh (900,000) rows, with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
    with transactions: the size of the database environment is 1.63 GB, due to the number of log files created, each of 10 MB.
    without transactions: the size of the database environment is 588 MB, and only one log file is created, of 10 MB. We want to know the concrete reason for this.
    How are log files created? What does using or not using transactions in a DB environment mean? And what are these db files (__db.001, __db.002, __db.003, __db.004, __db.005) and log files like log.0000000001? Please reply soon.

    "We are using the Berkeley DB Java Edition DB base API."
    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    "with transactions ... without transactions ..."
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    "without transactions: the size of the database environment is 588 MB, and only one log file is created, of 10 MB."
    There should be no logs created when transactions are not used. That single log file has likely remained there from a previous transactional run.
    "How are log files created? ... And what are these db files and log files?"
    Have you reviewed the basic documentation references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide, in particular the sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If so, you would have had the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transactions, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
    --Andrei

  • Exception handling in File adapter when directory structure is incorrect

    hi,
    How can the exception handling part be done in the File adapter in cases where the directory structure is incorrect, or the directory we are referring to is not present?
    thanks
    Yatan

    If you are polling, then there will be an error message in the log files, but I don't think we can do exception handling in such cases.
    Cheers

  • Type of error in the log file while using call transaction mode 'E'

    Hi Gurus,
    Please answer this question urgently:
    What type of error exactly will you see in the log file while using call transaction mode 'E'?
    Thanks,
    Radha.

    Hi,
    Can you be more specific?
    In CALL TRANSACTION, no error logs are created; you have to handle the errors explicitly using the structure BDCMSGCOLL.
    Whenever you use mode 'E', if the transaction encounters any error, i.e. a data type mismatch or invalid values etc., it will stop at that screen.
    You can handle the errors in CALL TRANSACTION in the following way:
    create an internal table using the structure BDCMSGCOLL, then:
    LOOP AT ...... " loop over your data records
      CALL TRANSACTION 'XK01' USING I_BDCDATA MODE 'N' UPDATE 'S' MESSAGES INTO I_MESGTAB.
    ENDLOOP.
    SORT I_MESGTAB BY MSGID MSGV1 ASCENDING.
    DELETE ADJACENT DUPLICATES FROM I_MESGTAB.
    LOOP AT I_MESGTAB.
      " Turn each raw message into readable text
      CALL FUNCTION 'FORMAT_MESSAGE'
        EXPORTING
          ID   = I_MESGTAB-MSGID
          LANG = I_MESGTAB-MSGSPRA
          NO   = I_MESGTAB-MSGNR
          V1   = I_MESGTAB-MSGV1
          V2   = I_MESGTAB-MSGV2
          V3   = I_MESGTAB-MSGV3
          V4   = I_MESGTAB-MSGV4
        IMPORTING
          MSG  = MESG1.
      IF I_MESGTAB-MSGTYP = 'S'.
        WA_SUCCMESG-MESG = MESG1.
        APPEND WA_SUCCMESG TO I_SUCCMESG.
      ELSEIF I_MESGTAB-MSGTYP = 'E'.
        WA_ERRMESG-MESG = MESG1.
        APPEND WA_ERRMESG TO I_ERRMESG.
      ENDIF.
    ENDLOOP.
    Hope this is clear.
    Thanks and Regards.

  • Is the disk equal to log files and other questions?

    On the web page http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/introduction.html#dplfeatures, there is a statement: "The checkpointer is responsible for flushing database data to *disk* that was written to cache as the result of a transaction commit".
    I wonder if the disk here means the log files under the JE home directory.
    From my understanding of these documents and other web resources, the checkpointer writes records from the cache to the log files (disk), and the cleaner then reorganizes and removes unused log files. Records are brought from disk into the cache by querying the index, which is organized as a B-tree structure, and the IN compressor deletes empty internal nodes of the B-tree.
    I wonder if the above correctly describes the relations among these components: checkpointer, cleaner, B-tree and IN compressor.
    Thanks for your help!
    Best,
    Jiangfan

    Jiangfan Shi wrote:
    "I wonder if the disk here means the log files under the JE home directory."
    Yes.
    "I wonder if the above correctly describes the relations among these components: checkpointer, cleaner, B-tree and IN compressor."
    Yes.

  • Online redo log files being removed physically

    Grid Infra version: 11.2.0.4
    RDBMS Version: 11.2.0.4
    Although this is a RAC DB, this is not a RAC-specific question. Hence posting it here.
    A few months back, I remember issuing a command similar to the one below (DROP LOGFILE GROUP ...), and the redo log files were still physically present in the diskgroup.
    If I remember correctly, the file is not deleted physically so that we can use the REUSE functionality (ALTER DATABASE ADD LOGFILE MEMBER '+REDO/orcl/onlinelog/redo1b.log' REUSE TO GROUP 11;), i.e. you can use the REUSE clause to add a logfile of the same name, which is physically present in the OS filesystem/diskgroup, to a redo log group.
    But today, after I issued the below command, I checked the diskgroup location from ASMCMD
    SQL> alter database drop logfile group 31;
    Database altered.
    From ASMCMD, I can see that the file has disappeared physically. Is this a new feature with 11.2.0.4, or am I missing something here?
    ASMCMD> ls +DATA/msblprd/onlinelog/group_31.548.833154995
    ASMCMD-8002: entry 'group_31.548.833154995' does not exist in directory '+DATA/msblprd/onlinelog/'

    Just to add to what Aman has said.
    It is bad practice not to let OMF decide the placement of online redo logs, especially when you use ASM, because of this issue.
    Executing an rm command in Linux/Unix is easy, but dropping ASM aliases in the disk group can be a hassle.
    This is documented.
    "When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, ensure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file."
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo.htm#ADMIN11324
    BTW, you don't even need to set db_create_online_log_dest_n to enable OMF for ORLs.
    SQL> show parameter log_dest
    NAME                                 TYPE        VALUE
    db_create_online_log_dest_1          string
    db_create_online_log_dest_2          string
    db_create_online_log_dest_3          string
    db_create_online_log_dest_4          string
    db_create_online_log_dest_5          string
    SQL> show parameter db_create_file_dest
    NAME                                 TYPE        VALUE
    db_create_file_dest                  string      +MBL_DATA
    alter database add logfile thread 4
    group 31 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 32 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 33 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 34 ('+MBL_DATA','+MBL_FRA') size 4096M ;
    Database altered.
    And redo logs will be neatly placed as shown below
       INST     GROUP# MEMBER                                             STATUS           ARC
             4         31 +MBL_DATA/bsblprd/onlinelog/group_31.276.832605441 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_31.297.832605445  UNUSED           YES
                       32 +MBL_DATA/bsblprd/onlinelog/group_32.547.832605451 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_32.372.832605457  UNUSED           YES
                       33 +MBL_DATA/bsblprd/onlinelog/group_33.548.832605463 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_33.284.832605469  UNUSED           YES
                       34 +MBL_DATA/bsblprd/onlinelog/group_34.549.832605475 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_34.359.832605481  UNUSED           YES

  • Routing logs to individual log files with multiple rules_file in MaxL

    Hi Gurus,
    I have come back to this forum after a long time. I have a situation here and am trying to find the best way forward for operational benefits.
    We have an ASO cube (historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling 24-month basis. The cube size is around 18.5 GB. The input-level data size is around 13 GB. For the monthly refresh, the current process rebuilds the cube from scratch, deleting the oldest of the 24 snapshots before adding last month's snapshot. The entire process takes 13 hours of processing time because the server doesn't have enough CPUs to support parallel operations.
    Since we recently moved to 11.1.2.3 and have ample CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and will go for an incremental load. Prior to that, since the outline build is EPMA-driven, I'd like to rebuild only the dimensions with all data after the metadata refresh (essentially restructuring the DB with data), so that I can keep my history intact, and only proceed to load last month's data after clearing out the first snapshot.
    My MaxL script looks like below:
    /* Set up logs */
    set timestamp on;
    spool on to $(mxlLog).log;
    /* Connect to Essbase */
    login $key $essUser $key $essPwd on $essServer;
    alter application "$essApp" load database "$essDB";
    /* Disable User Access to DB */
    alter application "$essApp" disable connects;
    /* Unlock all objects */
    alter database "$essApp"."$essDB" unlock all objects;
    /* Clear all data for previous month*/
    alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
    /* Load SQL Data */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    /* Selects and build an aggregation that permits the database to grow by no more than 300% */
    execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
    /* build query tracking views */
    execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
    /* Enable Query Tracking */
    alter database "$essApp"."$essDB" enable query_tracking;
    /* Enable User Access to DB */
    alter application "$essApp" enable connects;
    logout;
    exit;
    I am able to achieve some performance improvement, but it is not satisfactory. So I have a couple of queries:
    1. Can the highlighted statements (the clear and the import) be tuned further? My major problem is clearing only one month's snapshot, where I need to clear one scenario and the designated first month.
    2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of one? My previous process wrote an error log for each load rule to a separate file and consolidated them into a single file for the whole batch execution at the end of the batch run.
    Appreciate any help in this regard.
    Thanks,
    DD

    Thanks Celvin. I'd rather route the MaxL logs into one log file and consolidate that into the batch logs, instead of using multiple log files.
    Regarding the partial clear:
    My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clear is only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the MDX clear script that uses Scenario->ACTUAL and Period->&CLEAR_PERIOD (SubVar) is of dynamic hierarchy type.
    Is there a way I can rewrite the clear-data MDX script so that it clears faster than this:
    <<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
    Does this clear MDX have any effect on the dynamic/stored hierarchy nature of the dimension? If not, what would be the optimized way to write this MDX?
    Thanks,
    DD

  • Contents of a redo log file

    Hi,
    I just want to know about the internals of the redo log file. My objective is to find some dump of the redo log file to understand how the data is stored in it and what is stored in it. Does the redo log file have a header? If yes, what is stored in it? What is the structure of the redo log file?
    The objective of this thread is to learn the internals of the redo log file using some sort of dump. I hope a guru can help me.
    thanks
    Nick

    "learn the internals of the redo log file using some sort of dump"
    Hey Nick,
    Yes, you can. Basically it's a binary file, so you cannot read it directly.
    Some of the contents of the redo logs are:
    1) Every DML and DDL operation is logged here (redo means replay: whatever you run in the database is logged here, except SQL SELECT statements).
    2) It also contains the timestamps of the statements and the SCN numbers.
    3) It also contains COMMIT and ROLLBACK commands.
    You have to use a utility called LogMiner that will "mine" the redo logs; then you can view the contents in ASCII format using the V$LOGMNR_CONTENTS view.
    All the best.
