Planning application daily backup process

Hi gurus,
I have to back up a Hyperion Planning application that has two databases in Essbase. I need to back up the application every day so that if anything goes wrong we still have the data.
What process do you recommend?
One thing I can do is go to the application folder in Essbase (for example c:\hyperion....\essbaseserver\..app ) and copy the whole folder to a backup folder. My question is: if I copy the application folder this way, is the Planning application that I created in Workspace backed up as well, or do I need to do something else to back up the Planning application?
The other thing I can do is use the "database export" option in Essbase, but with that approach, while the Essbase database is backed up, the Planning application may not be.
Can someone suggest the right way to back up both the Planning application and the Essbase database cube?
thanks

First of all it is worth having a read of the backup guide http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_11121/launch.html
The Planning metadata is stored in the relational database, so you should back that up.
For Essbase you can stop the application and copy the file structure, export the data, or use LCM.
Cheers
John
http://john-goodwin.blogspot.com/
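To make the file-system option concrete, a nightly job could look roughly like the sketch below. This is only an illustration: the paths, application name, schema name, and credentials are all placeholders, and the MaxL/expdp steps are shown as comments because they depend on your environment.

```shell
#!/bin/sh
# Sketch of a nightly Essbase file-system backup (paths/names are placeholders).
# Stop or unload the application first -- e.g. via essmsh with the MaxL
# statement `alter system unload application PlanApp;` -- so the .pag/.ind/.otl
# files are consistent on disk before copying.
backup_app() {
  app_dir=$1        # the application folder under .../EssbaseServer/app
  backup_root=$2    # where dated backup copies accumulate
  stamp=$(date +%Y%m%d)
  mkdir -p "$backup_root/$stamp" && cp -r "$app_dir" "$backup_root/$stamp/"
}

# Demo against throwaway directories so the sketch runs anywhere:
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/PlanApp.otl"
backup_app "$src" "$dst" && echo "backup ok"

# The Planning application itself lives in the relational repository, so dump
# that schema too (Oracle example; schema name is a placeholder):
#   expdp planning/password schemas=HYPPLAN dumpfile=hypplan.dmp
```

Restoring is the reverse: copy the app folder back, import the repository dump, then verify the application in Workspace.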

Similar Messages

  • Migrating Hyperion Planning applications from 11.1.2.2 (old server) to 11.1.2.3 (development server)

    Hi All,
    I have one PSPB, one PFP, and one OPEX Planning application on an old 11.1.2.2 server, and we are planning to move these apps to a new development server with Hyperion 11.1.2.3 installed. I have gone through a few documents but I am still unsure: can we simply export the apps from the old server with an LCM backup and place it in the file system area of the new development server as we usually do, or do we first need to upgrade the old 11.1.2.2 server with the maintenance release and only then take an LCM backup of the upgraded applications to migrate to the new 11.1.2.3 server?
    Please guide
    Thanks

    A) Create a Planning shell application with the same name as the 11.1.2.2 Planning application
    B) Back up the 11.1.2.2 Planning schema
    C) Restore the old 11.1.2.2 schema to the new 11.1.2.3 environment
    D) Update the SID, but only if the user ID (default: admin) is different in the 11.1.2.3 version
    E) Restart the Planning services
    Or make use of the Planning Upgrade Wizard:
    A) Log on to Workspace (11.1.2.2) under Administer -> Classic Application Administration -> Planning Administration
    B) Go to Classic Application Wizard -> Upgrade Wizard
    Or try the LCM route.
    Thanks,
    Sreekumar Hariharan

  • Cannot open Planning application : An error occurred while processing page

    Hi,
    I'm using EPM 11.1.1.3 on a Linux server (Red Hat). Recently I had to stop and start all services, after which I am unable to open any Planning application. It gives me the following error:
    An error occurred while processing this page. Check the log for details.Please close the current tab and open application again
    An error occurred while processing this page. Check the log for details.
    I cannot open it through Workspace either. There aren't any logs under $HYPERION_HOME/logs/Planning that mention this error. I'd appreciate it if someone could suggest a solution.
    Cheers,
    Sahil

    Thanks for sharing your expert opinions, guys. I have checked the suggested log HYPERION_HOME/deployments/Tomcat5/HyperionPlanning/logs/catalina.out.
    It seems to be a JDBC connection error. This brings me to the point that I also reconfigured Planning over the existing configuration (port 8300). Could this be the cause? The other components (Web Analysis, Essbase, Shared Services, etc.) are working fine, and I did not reconfigure those.
    Kindly have a look at the log below from the aforementioned path and suggest a solution. I greatly appreciate any help.
    Jul 17, 2011 9:45:16 AM org.apache.coyote.http11.Http11BaseProtocol init
    INFO: Initializing Coyote HTTP/1.1 on http-8300
    Jul 17, 2011 9:45:16 AM org.apache.catalina.startup.Catalina load
    INFO: Initialization processed in 441 ms
    Jul 17, 2011 9:45:16 AM org.apache.catalina.core.StandardService start
    INFO: Starting service Catalina
    Jul 17, 2011 9:45:16 AM org.apache.catalina.core.StandardEngine start
    INFO: Starting Servlet Engine: Hyperion Embedded Java Container/1.0.0
    Jul 17, 2011 9:45:16 AM org.apache.catalina.core.StandardHost start
    INFO: XML validation disabled
    [INFO] HyperionPlanning] - Starting Hyperion Planning...
    [INFO] RegistryLogger - REGISTRY LOG INITIALIZED
    [INFO] RegistryLogger - REGISTRY LOG INITIALIZED
    /oracle/hyp/app/common/config/9.5.0.0/product/planning/9.5.0.0/planning_1.xml
    displayName = Planning
    componentTypes =
    priority = 50
    version = 9.5.0.0
    build = 1
    location = /oracle/hyp/app/products/Planning
    taskSequence =
    task =
    *******/oracle/hyp/app/common/config/9.5.0.0/registry.properties
    Creating rebind thread to RMI
    [INFO] HyperionPlanning] - Hyperion Planning started in 3 seconds.
    Jul 17, 2011 9:45:20 AM org.apache.coyote.http11.Http11BaseProtocol start
    INFO: Starting Coyote HTTP/1.1 on http-8300
    Jul 17, 2011 9:45:20 AM org.apache.jk.common.ChannelSocket init
    INFO: JK: ajp13 listening on /0.0.0.0:8302
    Jul 17, 2011 9:45:20 AM org.apache.jk.server.JkMain start
    INFO: Jk running ID=0 time=0/17 config=null
    Jul 17, 2011 9:45:20 AM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 4257 ms
    Jul 17, 2011 9:45:36 AM org.apache.coyote.http11.Http11BaseProtocol init
    SEVERE: Error initializing endpoint
    java.net.BindException: Address already in use:8300
         at org.apache.tomcat.util.net.PoolTcpEndpoint.initEndpoint(PoolTcpEndpoint.java:297)
         at org.apache.coyote.http11.Http11BaseProtocol.init(Http11BaseProtocol.java:138)
         at org.apache.catalina.connector.Connector.initialize(Connector.java:1016)
         at org.apache.catalina.core.StandardService.initialize(StandardService.java:580)
         at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:791)
         at org.apache.catalina.startup.Catalina.load(Catalina.java:503)
         at org.apache.catalina.startup.Catalina.load(Catalina.java:523)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:266)
         at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:431)
    Jul 17, 2011 9:45:36 AM org.apache.catalina.startup.Catalina load
    SEVERE: Catalina.start
    LifecycleException: Protocol handler initialization failed: java.net.BindException: Address already in use:8300
         at org.apache.catalina.connector.Connector.initialize(Connector.java:1018)
         at org.apache.catalina.core.StandardService.initialize(StandardService.java:580)
         at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:791)
         at org.apache.catalina.startup.Catalina.load(Catalina.java:503)
         at org.apache.catalina.startup.Catalina.load(Catalina.java:523)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:266)
         at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:431)
    Jul 17, 2011 9:45:36 AM org.apache.catalina.startup.Catalina load
    INFO: Initialization processed in 407 ms
    Jul 17, 2011 9:45:36 AM org.apache.catalina.core.StandardService start
    INFO: Starting service Catalina
    Jul 17, 2011 9:45:36 AM org.apache.catalina.core.StandardEngine start
    INFO: Starting Servlet Engine: Hyperion Embedded Java Container/1.0.0
    Jul 17, 2011 9:45:36 AM org.apache.catalina.core.StandardHost start
    INFO: XML validation disabled
    [INFO] HyperionPlanning] - Starting Hyperion Planning...
    [INFO] RegistryLogger - REGISTRY LOG INITIALIZED
    [INFO] RegistryLogger - REGISTRY LOG INITIALIZED
    /oracle/hyp/app/common/config/9.5.0.0/product/planning/9.5.0.0/planning_1.xml
    displayName = Planning
    componentTypes =
    priority = 50
    version = 9.5.0.0
    build = 1
    location = /oracle/hyp/app/products/Planning
    taskSequence =
    task =
    *******/oracle/hyp/app/common/config/9.5.0.0/registry.properties
    Creating rebind thread to RMI
    [INFO] HyperionPlanning] - Hyperion Planning started in 2 seconds.
    Jul 17, 2011 9:45:39 AM org.apache.coyote.http11.Http11BaseProtocol start
    SEVERE: Error starting endpoint
    java.net.BindException: Address already in use:8300
    Jul 17, 2011 9:45:39 AM org.apache.catalina.startup.Catalina start
    SEVERE: Catalina.start:
    LifecycleException: service.getName(): "Catalina"; Protocol handler start failed: java.net.BindException: Address already in use:8300
    Jul 17, 2011 9:45:39 AM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 3083 ms
    Jul 17, 2011 9:45:39 AM org.apache.catalina.core.StandardServer await
    SEVERE: StandardServer.await: create[8301]:
    java.net.BindException: Address already in use
    Jul 17, 2011 9:45:39 AM org.apache.coyote.http11.Http11BaseProtocol pause
    INFO: Pausing Coyote HTTP/1.1 on http-8300
    Jul 17, 2011 9:45:39 AM org.apache.catalina.connector.Connector pause
    SEVERE: Protocol handler pause failed
    java.lang.NullPointerException
         at org.apache.jk.server.JkMain.pause(JkMain.java:677)
         at org.apache.jk.server.JkCoyoteHandler.pause(JkCoyoteHandler.java:162)
         at org.apache.catalina.connector.Connector.pause(Connector.java:1031)
         at org.apache.catalina.core.StandardService.stop(StandardService.java:491)
         at org.apache.catalina.core.StandardServer.stop(StandardServer.java:743)
         at org.apache.catalina.startup.Catalina.stop(Catalina.java:601)
         at org.apache.catalina.startup.Catalina$CatalinaShutdownHook.run(Catalina.java:644)
    [INFO] HyperionPlanning] - Shutting down Hyperion Planning applications...
    [INFO] HyperionPlanning] - Hyperion Planning was stopped
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Can not get JDBC connection.
    java.lang.Exception: No object were successfully created. This can be caused by any of the following: The OLAP Server is not running, The DBMS is not running, the DBMS is running on a different machine that the one specified, the name and password provided were incorrect.
         at java.lang.Thread.run(Unknown Source)
    java.lang.RuntimeException: Error loading objects from data source: java.lang.NullPointerException: JDBCCacheLoader.loadObjects(): jdbc connection was null.
    java.lang.NullPointerException
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Can not get JDBC connection.
    java.lang.Exception: No object were successfully created. This can be caused by any of the following: The OLAP Server is not running, The DBMS is not running, the DBMS is running on a different machine that the one specified, the name and password provided were incorrect.
         Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Attempted to release a null connection
    java.lang.NullPointerException: JDBCCacheLoader.loadObjects(): jdbc connection was null.
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Can not get JDBC connection.
    java.lang.Exception: No object were successfully created. This can be caused by any of the following: The OLAP Server is not running, The DBMS is not running, the DBMS is running on a different machine that the one specified, the name and password provided were incorrect.
    Attempted to release a null connection
    java.lang.NullPointerException: JDBCCacheLoader.loadObjects(): jdbc connection was null.
    java.lang.RuntimeException: Error loading objects from data source: java.lang.NullPointerException: JDBCCacheLoader.loadObjects(): jdbc connection was null.
    java.lang.NullPointerException
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
    Unable to create JDBC connection. java.sql.SQLException: [Hyperion][Oracle JDBC Driver]Internal error: Net8 protocol error.
    Unable to set Planning's Oracle connection numeric character to '.'. java.lang.NullPointerException
    Can not set database catalog name, skipping set of catalog name: hypdb
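    Separately from the JDBC errors, the `java.net.BindException: Address already in use:8300` entries in the log mean an earlier Planning/Tomcat process was still holding port 8300 when the service restarted. Before chasing the reconfiguration, it is worth checking for a stale listener with standard Linux tools (8300 is simply the port from this log):

```shell
# Show the PID of whatever is already bound to the port from the BindException:
PORT=8300
netstat -tlnp 2>/dev/null | grep ":$PORT " || echo "nothing listening on $PORT"
# Alternative if lsof is installed:
#   lsof -i TCP:$PORT -sTCP:LISTEN
# Kill the stale process (or the old Planning instance), then restart Planning.
```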

  • Java Processes for Planning Applications

    Hi There,
    We've experienced performance issues where we have reached the memory limit of the Java process for the Hyperion Planning WAS element. My question: is it possible to create multiple Java processes for Hyperion Planning? For example, if we have 7 Planning applications, could we have 7 Java processes running, one for each application?
    Thanks
    Mark

    Couple of thoughts on scaling:
    1. Change the -Xmx setting to 1200m -- this is about the most you can give a single instance on 32-bit
    2. Use a 64-bit platform, where you can set the memory settings much higher (AIX/Solaris)
    3. Run multiple servers with Planning on them behind a load balancer
    Regards,
    John A. Booth
    http://www.metavero.com

  • How to hot backup 11.1.1.3 Planning application

    I read the Backup and Recovery Guide, but I still have no idea how to do a hot backup of a Planning application.
    Is there any material that talks about this, or is it impossible in the real world? Thanks!

    From version 11.1 on, the "hot" backup method is:
    - (1) archive the database to a file (the database is read-only while this runs) every night or weekend
    - (2) enable transaction logging
    - (3) back up these files (archive and logs) regularly
    If there's an issue:
    - (1) restore from the archive (this gives you some downtime)
    - (2) replay the transaction logs up to just before the issue occurred
    The database remains available for reads 24x7; writing is disabled only during backups.
    Backup and restore speed depends on disk and CPU; a normal server will do about 20 MB/sec.
    OK?
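    The archive/replay cycle above maps onto MaxL roughly as follows. This is a sketch based on the 11.x MaxL reference: the application, database, server, file names, credentials, and timestamp are all placeholders, and transaction logging itself is enabled in essbase.cfg with TRANSACTIONLOGLOCATION rather than in MaxL.

```shell
# Write the two MaxL scripts; you would run them with:  essmsh <script>.mxl
cat > nightly_archive.mxl <<'EOF'
login admin identified by password on essbaseserver;
/* step (1): archive the database to a file (db is read-only while this runs) */
alter database PlanApp.Plan1 archive to file '/backup/PlanApp_Plan1.arc';
logout;
EOF

cat > replay.mxl <<'EOF'
login admin identified by password on essbaseserver;
/* recovery step (2): replay logged transactions up to just before the issue */
alter database PlanApp.Plan1 replay transactions after '07_17_2011:09:00:00';
logout;
EOF
echo "wrote MaxL scripts"
```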

  • Scheduled daily backups of DMM fail - Backup/Restore process stops running

    Hi, I have been testing the scheduled backups within the DMM AAI and have found their behaviour to be unreliable.
    Specifically, I am performing FTP backups to a remote network location. Manual one-off backups complete successfully without any issues, and scheduled backups occur on the same day; however, successive (recurring) daily backups don't happen and the Backup/Restore process seems to stop running. I have checked the backup logs, but they don't provide any insight into why the backup process stops.
    Has anyone else encountered this? I am running DMM v5.2.0.25.
    Mike

    I am having a similar problem. Is there a method to restart the backup service without having to reboot the entire DMM server?
    Note that there is no method via the AAI menu system, and the Web Admin console services section is not clear as to whether it will reboot the entire server (which is NOT an easy thing for me to get away with here).
    Restarting the 5.2.3 DMM Server (not JUST the service??)
    I have now learned (per a Cisco engineer) that the 'Scheduled backup services' entry in the screencap above is normally in the 'stopped' status "by design", and that it will not show as started in this UI unless the DMM server is actually in a backup.
    My backups now work correctly. I think the original problem was my NTP settings, which are now fixed.

  • Backup and Recovery of Planning Application

    Hi,
    We have created a Planning application and built the metadata successfully.
    Now there is a problem with the Essbase server. We want to reinstall only the Essbase server, not all the products.
    My concern is to retain the metadata.
    Please guide me through the procedure to back up and restore the Planning application.
    I have one more doubt:
    if I uninstall only the Essbase server, reinstall it, and configure it with Shared Services, Planning configuration, and so on, will that create any problem? Or do I need to install all the products in order?
    How do I restore the existing environment after installing the Essbase server?
    Your suggestions are highly appreciated...
    Thanks,
    naveen

    Hi,
    I wouldn't say you need to reinstall Essbase because of that error; you will probably just get it again after you have reinstalled.
    It usually relates to two instances of the Essbase runtime in your ARBORPATH.
    Update your ARBORPATH environment variable to remove C:\Hyperion\common\EssbaseRTC\9.3.1 and keep just the C:\Hyperion\AnalyticServices one; also check the PATH variable to see whether it points to both instances.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • What is the best approach to take daily backup of application from CQ5 Server ?

    Hello,
    How do we maintain a daily backup of the data from the server?
    What is the best approach?
    Regards,
    Satish Sapate.

    The link Ryan shared should give enough information.
    If you are backing up a large repository, note that the datastore holds large binaries, each stored only once. To reduce the backup time, remove the datastore from the backup by following [1] (a CQ 5.3 example).
    [1] In order to remove the datastore from the backup you will need to do the following.
    Assuming your repository is under /website/crx/repository and you want to move your datastore to /website/crx/datastore:
        stop the crx instance
        mv /website/crx/repository/shared/repository/datastore /website/crx/
        then modify repository.xml by adding the new path configuration to the DataStore element.
    Before:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="path" value="/website/crx/datastore"/>
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After doing this you can safely run separate backups of the datastore while the system is running, without affecting performance very much.
    Following our example, you could use rsync to back up the datastore:
    rsync -av --ignore-existing /website/crx/datastore /website/backup/datastore
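    To tie this back to the daily-backup question, the datastore rsync can then be scheduled from cron. A sketch (the paths are the example paths above, and the schedule is an assumption; `--ignore-existing` is safe here because datastore files are immutable once written):

```shell
# crontab entry: rsync the datastore to the backup location at 02:00 daily
0 2 * * * rsync -av --ignore-existing /website/crx/datastore /website/backup/datastore
```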

  • Planning Application - Supporting Details

    Hi,
    We deleted around 4000 members from our Planning application and later realised that a few of those members have budget values tagged to them, and supporting detail exists for some of them as well. Now we would like to bring those members back into the application. Could you please suggest the best method to bring back those members and the associated supporting detail? We do not want to restore the entire application. Is there a way to restore only that particular dimension, its members, and the supporting detail associated with them? If yes, could anyone let me know the process? Our relational DB is SQL Server 2005.
    Requesting your help to close this out.
    Thanks,

    Assuming you have a backup of your SQL database somewhere (this would be the Planning repository), the following code will extract Supporting Detail:
    Supporting Detail
    SELECT
         "Plan Type" = P.TYPE_NAME,
         "Scenario" = O1.OBJECT_NAME,
         "Account" = O2.OBJECT_NAME,
         "Entity" = O3.OBJECT_NAME,
         "Month" = O4.OBJECT_NAME,
         "Version" = O5.OBJECT_NAME,
         "Year" = O6.OBJECT_NAME,
         "Activities" = O7.OBJECT_NAME,
         "Employee" = O8.OBJECT_NAME,
         "Value" = I.VALUE,
         "Position" = I.POSITION,
         "Generation" = I.GENERATION,
         "Operator" = I.OPERATOR,
         "SD" = I.LABEL
    FROM HSP_COLUMN_DETAIL D
         INNER JOIN HSP_PLAN_TYPE P
              ON D.PLAN_TYPE = P.PLAN_TYPE
         INNER JOIN HSP_COLUMN_DETAIL_ITEM I
              ON D.DETAIL_ID = I.DETAIL_ID
         INNER JOIN HSP_OBJECT O1
              ON D.DIM1 = O1.OBJECT_ID
         INNER JOIN HSP_OBJECT O2
              ON D.DIM2 = O2.OBJECT_ID
         INNER JOIN HSP_OBJECT O3
              ON D.DIM3 = O3.OBJECT_ID
         INNER JOIN HSP_OBJECT O4
              ON D.DIM4 = O4.OBJECT_ID
         INNER JOIN HSP_OBJECT O5
              ON D.DIM5 = O5.OBJECT_ID
         INNER JOIN HSP_OBJECT O6
              ON D.DIM6 = O6.OBJECT_ID
         LEFT OUTER JOIN HSP_OBJECT O7
              ON D.DIM7 = O7.OBJECT_ID
         LEFT OUTER JOIN HSP_OBJECT O8
               ON D.DIM8 = O8.OBJECT_ID
    You'll want to modify it a bit to get it to work with your dimensions -- this is from an old project of mine.
    I have never tried to write Supporting Detail back in -- I know it has been done, just not by me. I would tread very carefully and only do it on a test database and be really, really, really sure you've got everything working just so. You could easily blow up your data.
    Another approach might be to restore that Planning app backup (making sure you made a backup of the current app), and then use LCM to export out Supporting detail, and then suck it back into the app that no longer has it.
    ^^^I like that idea a lot more even if it is a bit more tedious.
    With either approach, you are going to have to sync the Essbase data to the SD data -- hopefully you have a good Essbase backup as well.
    Regards,
    Cameron Lackpour

  • Windows XP - Howto Disable the broken backup process

    I finally found a source explaining how to disable the backup process during the iPhone sync. Here's the link. I tried it myself and it works great. I plan on removing this XML edit once Apple distributes a proper fix:
    http://www.eidac.de/?p=60
    From the article:"
    Disabling the slow iTunes backup on Windows is a little more tricky, but it works. First of all, close iTunes and then follow these steps:
    1. Locate your iTunesPrefs.xml file. It’s usually located in C:\Documents and Settings\username\Application Data\Apple Computer\iTunes or C:\Documents and Settings\username\Local Settings\Application Data\Apple Computer\iTunes.
    Hint: If the folder Application Data does not show up, make sure that hidden files are visible in the Windows Explorer
    2. Backup your iTunesPrefs.xml file
    3. Open iTunesPrefs.xml using a capable text-editor (e.g. Notepad++, Ultraedit, but not MS Notepad)
    4. Search for a section called User Preferences and paste the following snippet into the User Preferences section after the first <dict>:
    <key>DeviceBackupsDisabled</key>
    <data>
    dHJ1ZQ==
    </data>
    After you’ve done that it should exactly look like the screenshot on the left.
    5. Save the file and restart iTunes. Backups should now be disabled. To enable backups again delete the XML Snippet from iTunesPrefs.xml file."
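    Incidentally, the `<data>` payload in that snippet is just base64: `dHJ1ZQ==` decodes to the string `true`, which is how the plist stores the boolean. You can confirm this with a one-liner:

```shell
# Decode the plist <data> payload to see what it actually sets:
printf 'dHJ1ZQ==' | base64 -d   # prints: true
```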

    Hi Nick,
    There isn't an easy way to do it via Mac OS X or System Preferences. But I did find this article that looks like it has some pretty good resources.
    http://www.macosxhints.com/article.php?story=20071121141206367

  • Urgent help needed in creation of planning application

    Hi,
    I am new to Planning. I installed Planning and configured it (created the Planning instance and data source). When I log in to Workspace and try to create a new Planning application, the wizard opens correctly. After giving the application name, calendar, currency type, plan type, etc., when I click Finish to create the new Planning application, it says *"An error occured while processing this page. Check the log for detail".* But I observed one thing: the application was created in Essbase without databases or an outline. Can anyone explain the exact problem and where I can see the log details? Please help.
    Thanks in advance.

    Hi,
    Databases will not be created until you refresh to Essbase from within Planning, and you are not up to that stage yet. Only an Essbase application will be created.
    What I suggest you do is delete the Essbase application, then run Planning from the Start menu rather than as a service; that way any error messages will be displayed in the window.
    Then try the application creation again and see what errors are produced.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Issue with LCM while migrating planning application in the cluster Env.

    Hi,
    We are having issues with LCM while migrating the Planning application in a clustered environment. In LCM we get the error below even though the application is up and running. Please let me know if anyone else has faced this issue in a cluster environment. We have done the migration using LCM on a single server and it works fine; it is just the cluster environment that is the problem.
    Error on Shared Service screen:
    Post execution failed for - WebPlugin.importArtifacts.doImport. Unable to connect to "ApplicationName", ensure that the application is up and running.
    Error on network:
    “java.net.SocketTimeoutException: Read timed out”
    ERROR - Zip error. The exception is -
    java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)

    Hi,
    First of all, if your source and target environments are the same, you will already have all the users and groups in Shared Services; in that case you just have to provision the users for the new application so that your security gets migrated when you migrate from the source application. If the environments are different, you have to migrate the users and groups first and provision them before importing the security using LCM.
    Coming back to the process of importing the artifacts into the target application using LCM: place the exported file in the admin native directory of the LCM file system under Oracle/Middleware/epmsystem1.
    Open Shared Services Console -> File System and you will see your file name under it.
    Select the file and you will see all your exported artifacts. Select all of them if you want to do a complete migration to the target.
    Follow the steps, select the target application to which you want to migrate, and execute the migration.
    Open the application and you will see all your artifacts migrated to the target.
    If you face any errors during migration, they will be shown in the migration report.
    Thanks,
    Sourabh

  • Unable to load data to Hyperion planning application using odi

    Hi All,
    When I try to load data into Planning using ODI, the ODI process completes successfully with the following status in the Operator ReportStatistics (shown below), but the data doesn't appear in the Planning data form or in Essbase.
    Can anyone please help?
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 2, in <module>
    Planning Writer Load Summary:
         Number of rows successfully processed: 20
         Number of rows rejected: 0
    source is oracle database
    target account dimension
    LKM SQL TO SQL
    IKM SQL TO HYPERION PLANNING is used
    In Target the following columns were mapped
    Account(load dimension)
    Data load cube name
    driverdimensionmetadata
    Point of view
    LOG FILE
    2012-08-27 09:46:43,214 INFO [SimpleAsyncTaskExecutor-3]: Oracle Data Integrator Adapter for Hyperion Planning
    2012-08-27 09:46:43,214 INFO [SimpleAsyncTaskExecutor-3]: Connecting to planning application [OPAPP] on [mcg-b055]:[11333] using username [admin].
    2012-08-27 09:46:43,277 INFO [SimpleAsyncTaskExecutor-3]: Successfully connected to the planning application.
    2012-08-27 09:46:43,277 INFO [SimpleAsyncTaskExecutor-3]: The load options for the planning load are
         Dimension Name: Account Sort Parent Child : false
         Load Order By Input : false
         Refresh Database : false
    2012-08-27 09:46:43,339 INFO [SimpleAsyncTaskExecutor-3]: Begining the load process.
    2012-08-27 09:46:43,355 DEBUG [SimpleAsyncTaskExecutor-3]: Number of columns in the source result set does not match the number of planning target columns.
    2012-08-27 09:46:43,371 INFO [SimpleAsyncTaskExecutor-3]: Load type is [Load dimension member].
    2012-08-27 09:46:43,996 INFO [SimpleAsyncTaskExecutor-3]: Load process completed.

    Do any members exist in the Account dimension before the load? If not, can you try adding one member manually and then running the load again?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Time Machine backup grows too large during backup process

    I have been using Time Machine without a problem for several months, backing up my iMac (500 GB drive with 350 GB used). Recently TM failed because the backups had finally filled the external drive (500 GB, USB). Since I did not need the older backups, I reformatted the external drive to start from scratch. Now TM tries to do an initial full backup, but the size keeps growing as it backs up, eventually becoming too large for the external drive, and TM fails. It will report, say, 200 GB to back up, then it reaches that point and the "Backing up XXX GB of XXX GB" counter just keeps getting larger. I have tried excluding more than 100 GB of files to get the backup set very small, but it still grows during the backup process. I have deleted plist and cache files as some discussions have suggested, but the same issue occurs each time. What is going on?

    Michael Birtel wrote:
    Here is the log for the last failure. As you see, it indicates there is enough room - 345GB needed, 464GB available - but then it fails. I can watch the backup progress: it reaches 345GB and then keeps growing until it gives an out-of-disk-space error. I don't know what "Event store UUIDs don't match for volume: Macintosh HD" implies; maybe this is a clue?
    No. It's sort of a warning, indicating that TM isn't sure what's changed on your internal HD since the previous backup, usually as a result of an abnormal shutdown. But since you just erased your TM disk, it's perfectly normal.
    Starting standard backup
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    Ownership is disabled on the backup destination volume. Enabling.
    2009-07-08 19:37:53.659 FindSystemFiles[254:713] Querying receipt database for system packages
    2009-07-08 19:37:55.582 FindSystemFiles[254:713] Using system path cache.
    Event store UUIDs don't match for volume: Macintosh HD
    Backup content size: 309.5 GB excluded items size: 22.3 GB for volume Macintosh HD
    No pre-backup thinning needed: 345.01 GB requested (including padding), 464.53 GB available
    This is a completely normal start to a backup. Just after that last message is when the actual copying begins. Apparently whatever's happening, no messages are being sent to the log, so this may not be an easy one to figure out.
    First, let's use Disk Utility to confirm that the disk really is set up properly.
    Select the second line for your internal HD (usually named "Macintosh HD"). Towards the bottom, the Format should be Mac OS Extended (Journaled), although it might be Mac OS Extended (Case-sensitive, Journaled).
    Next, select the line for your TM partition (indented, with the name). Towards the bottom, the Format must be the same as your internal HD (above). If it isn't, you must erase the partition (not necessarily the whole drive) and reformat it with Disk Utility.
    Sometimes when TM formats a drive for you automatically, it sets it to Mac OS Extended (Case-sensitive, Journaled). Do not use this unless your internal HD is also case-sensitive. All drives being backed up, and your TM volume, should use the same format. TM may do backups this way, but you could be in for major problems trying to restore to a mismatched drive.
    Last, select the top line of the TM drive (with the make and size). Towards the bottom, the Partition Map Scheme should be GUID (preferred) or Apple Partition Map for an Intel Mac. It must be Apple Partition Map for a PPC Mac.
    If any of this is incorrect, that's likely the source of the problem. See item #5 of the Frequently Asked Questions post at the top of this forum for instructions, then try again.
    If it's all correct, perhaps there's something else in your logs.
    Use the Console app (in your Applications/Utilities folder).
    When it starts, click Show Log List in the toolbar, then navigate in the sidebar that opens up to your system.log and select it. Navigate to the "Starting standard backup" message that you noted above, then see what follows that might indicate some sort of error, failure, termination, or exit (many of the messages there are info for developers, etc.). If in doubt, post (a reasonable amount of) the log here.
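    If you prefer Terminal to Console, the same messages can be pulled out of system.log with grep. The sketch below runs against a small inline sample so the filtering pattern is clear; on your Mac, point grep at /var/log/system.log instead (the assumed location on Leopard-era systems, where backupd writes there):

```shell
# Write a tiny sample log so the command is demonstrable anywhere;
# on the real machine, skip this step and grep /var/log/system.log directly.
cat > /tmp/sample_system.log <<'EOF'
Jul  8 19:37:50 imac /System/Library/CoreServices/backupd[254]: Starting standard backup
Jul  8 19:37:53 imac kernel[0]: unrelated driver message
Jul  8 19:38:01 imac /System/Library/CoreServices/backupd[254]: Backup content size: 309.5 GB
EOF

# Keep only the Time Machine (backupd) lines.
grep 'backupd' /tmp/sample_system.log
```

    On a real log, `grep -A 20 'Starting standard backup' /var/log/system.log` shows the 20 lines after the backup starts, which is usually where any failure message appears.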

  • Error When Deploying a planning application

    I've prepared a Planning application in Hyperion 9.3.
    Validation is successful, but when deploying the application, deployment fails with the message "Action aborted. Please check the Job Status for results."
    Job status shows:
    ===============================================================================
    Started Time : Sunday, March 08, 2009 6:39:27 PM
    Submitted Time : Sunday, March 08, 2009 6:39:27 PM
    Last Updated Time : Sunday, March 08, 2009 6:39:29 PM
    User Name : amjad
    Process Name :
    Thread : 0
    Server : DimServer
    Detail : Initiating Product Action...
    Inspecting Deployment History...
    Generating Headers and Callback Information...
    Generating Application Data...
    Preparing Product Request...
    Posting Product Request...
    Product Response:550...
    Action aborted.
    ================================================================================
    any suggestion?

    Hi,
    If you are going to be using EPMA the first thing to do is to make sure you have installed all the patches for EPMA and Planning.
    They are available from metalink3 (https://metalink3.oracle.com/od/faces/index.jspx); Planning is at patch 9.3.1.1.10 and EPMA has around 5 patches.
    Cheers
    John
    http://john-goodwin.blogspot.com/

Maybe you are looking for

  • Ipod Nano 4GB Keeps connecting and disconnecting every few seconds

    When I plug the nano into a USB port, it shows up everywhere, then disconnects, then reconnects, and keeps doing this, so I can't update my iPod. Also, the iPod works fine on the XP PC next door, so I have to assume it's the USB on my PC.

  • How to specify PACKAGE SIZE for to RFC_READ_TABLE from PyRFC?

    I'm trying to use PyRFC to extract large tables via RFC_READ_TABLE (due to an uncooperative/unsupportive basis team). I know that RFC_READ_TABLE supports calling it with PACKAGE SIZE since ERPConnect does it by default. In Python for Basis (Part 1),
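    RFC_READ_TABLE does accept paging parameters (ROWCOUNT and ROWSKIPS), so a PyRFC caller can pull a large table in chunks rather than in one oversized call. Below is a minimal sketch, assuming the pyrfc package and an already-opened Connection (names like `conn` are placeholders); the paging loop is kept separate from the connection so it can be exercised with a stub:

```python
def read_table_paged(call, table, fields=(), where=(), page_size=1000):
    """Yield raw DATA rows from RFC_READ_TABLE, at most page_size rows per call.

    `call` is any callable with pyrfc Connection.call's signature,
    e.g. conn.call where conn = pyrfc.Connection(ashost=..., sysnr=..., ...).
    """
    skip = 0
    while True:
        result = call(
            'RFC_READ_TABLE',
            QUERY_TABLE=table,
            FIELDS=[{'FIELDNAME': f} for f in fields],
            OPTIONS=[{'TEXT': w} for w in where],
            ROWCOUNT=page_size,   # rows returned per round trip
            ROWSKIPS=skip,        # offset into the full result set
        )
        rows = result['DATA']
        for row in rows:
            yield row['WA']
        if len(rows) < page_size:  # short page means we hit the end
            break
        skip += page_size
```

    Against a live system this becomes something like `for wa in read_table_paged(conn.call, 'MARA', fields=('MATNR',), page_size=5000): ...` - each WA value is still one fixed-width (or delimiter-joined) row string that needs splitting according to the FIELDS metadata.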

  • All printers stopped working on network

    Have had a home network set up for some time. All of a sudden, both printers stopped printing. Printers are HP color laserjet 2550 and OfficeJet 6110. Hooked up on an airport extreme/airport express network. The rest of the network functions appear t

  • How to solve Invocation error: ALC-DSC-003-000.

    Hi, I upgraded the "Reader Extensions 7.2" using "ReaderExtensions ES" for the watch folder concept. It worked well for many days but now it throws the below-mentioned error log. How can I rectify the following error? Please advise. Failure Time----F

  • Quartz composer 4.0_Object Detection by Color

    Hi, I am looking to develop a Quartz composition with color tracking. I can't finish my composition. I think I have a good composition, but my "object tracking" is not visible. Can you help me find a good solution? Thank you my composi