Essbase file system

Hi,
We ran out of space on the existing Essbase file system (Analytic Services 9.3.0.1), so we added a new file system on a different volume. Now when we open the Analytic Administration Services console and check the Properties/Disk Space tab, we can see the name of the new file system, but all of its values are 0, including total space, free space, and used space. We can still see the old file system's values, just not the new one's. I have verified via a PuTTY session that 1 TB of space is available on the new file system.
Any idea how we can make it visible in the AAS console?
Thanks
AA

Have you extended the current file system or created a new file system mount?
Have you tried starting and stopping Essbase?
Brian Chow
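If it turns out to be a brand-new mount rather than an extension of the existing one, the Essbase agent usually has to be bounced before the Disk Space tab reports the new volume. A minimal sketch of that cycle on Unix; the mount point, password, and install path are placeholders rather than values from this thread, and ARBORPATH is assumed to point at the 9.3 installation:
df -k /newessvol                 # the OS already sees the mount, per the original post
# stop the Essbase agent (from the AAS console, or with the MaxL statement: alter system shutdown;)
cd $ARBORPATH/bin && ./ESSBASE password -b    # restart the agent in the background
# then re-open Properties > Disk Space in the AAS console and check the new volume again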

Similar Messages

  • Migrating Essbase cube across versions via file system

    A large BSO cube has been taking much longer to complete a 'calc all' in Essbase 11.1.2.2 than on Essbase 9.3.1, despite all Essbase.cfg, app, and db settings being the same (https://forums.oracle.com/thread/2599658).
    As a last resort, I've tried the following-
    1. Calc the cube on the 9.3.1 server.
    2. Use EAS Migration Wizard to migrate the cube from the 9.3.1 server to the 11.1.2.2 server.
    3. File system transfer of all ess*.ind and ess*.pag from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server does not yet return any data).
    4. File system transfer of the dbname.esm file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server returns an "unable to load database dbname" error and an "Invalid transaction status for block -- Please use the IBH Locate/Fix utilities to find/fix the problem" error).
    5. File system transfer of the dbname.tct file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (and voila! Essbase returns data from the 11.1.2.2 server and the numbers match the 9.3.1 server).
    This almost seems too good to be true. Can anyone think of any dangers of migrating apps this way? Has nothing changed in the file formats between Essbase 9.x and 11.x? Won't skipping the dbname.ind and dbname.db files cause issues down the road? Thankfully we are soon moving to ASO for this large BSO cube, so this isn't a long-term worry.

    Freshly install Essbase 11.1.2.2 on Windows Server 2008 R2 with the recommended hardware specification. After installation, configure 11.1.2.2 against its database/schema.
    Take a backup of all data in the Essbase applications, either with a scripted export or by exporting directly from the cube.
    Use the EAS Migration Wizard to migrate the Essbase applications.
    After the applications have migrated successfully, reload all the data into the cubes.
    Regarding your 4th point:
    The IBH error is generally caused by a mismatch between the index and .pag files, typically surfacing while a calculation script executes. Possible solutions are available.
    The recommended procedure is (a scripted version is sketched below):
    a) Disable all logins.
    alter application sample disable connects;
    b) Forcibly log off all users.
    alter system logout session on database sample.basic;
    c) Run the MaxL statement to get invalid block header information.
    alter database sample.basic validate data to local logfile 'invalid_blocks';
    d) Repair invalid block headers.
    alter database sample.basic repair invalid_block_headers;
    Thanks,
    Sreekumar Hariharan
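    If you want to run those four steps unattended, the same sequence can be scripted and handed to the MaxL shell. This is only a sketch: the user, password, and host are placeholders, the target is Sample/Basic as in the steps above, and it assumes essmsh is on the PATH.
    # write the repair steps (a-d above, bracketed by disable/enable connects) to a MaxL script and run it
    printf '%s\n' \
      'login admin password on localhost;' \
      'alter application sample disable connects;' \
      'alter system logout session on database sample.basic;' \
      "alter database sample.basic validate data to local logfile 'invalid_blocks';" \
      'alter database sample.basic repair invalid_block_headers;' \
      'alter application sample enable connects;' \
      'logout;' > fix_ibh.msh
    essmsh fix_ibh.msh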

  • Essbase unix file system best practice

    Is there such a thing in Essbase as storing files on different file systems to avoid I/O contention? For example, in Oracle it is best practice to store index files and data files in different locations to avoid I/O contention. If everything on the Essbase server is stored under one directory structure, as it is now, the Unix team is afraid we may run into performance issues. Can you please share your thoughts?
    Thanks

    In an environment with many users (200+), or with Planning apps where users can run large, long-running rules, I would recommend you separate the applications onto separate volume groups if possible, each volume group having multiple spindles available (see the sketch after this reply for one way to split index and page files across mounts).
    The alternative to planning for load up front would be to analyze the load during peak times -- although I've had mixed results in getting the server/disk SMEs to assist in these kinds of efforts.
    A more advanced thing to watch for is journaling file systems that share a common cache across all disks within a VG.
    Regards,
    -John
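    To make the I/O-separation idea concrete: BSO databases have a per-database disk volumes setting (EAS database properties > Storage, or MaxL) that can point index files and page files at different file systems. A hedged sketch only - the mount points /essidx and /essdata are invented, Sample/Basic stands in for your app, and the exact alter database disk volume grammar should be checked against the Tech Ref for your release:
    # write the volume assignments to a MaxL script and run it with the MaxL shell
    printf '%s\n' \
      'login admin password on localhost;' \
      "alter database sample.basic add disk volume '/essidx';" \
      "alter database sample.basic set disk volume '/essidx' file_type index;" \
      "alter database sample.basic add disk volume '/essdata';" \
      "alter database sample.basic set disk volume '/essdata' file_type data;" \
      'logout;' > set_volumes.msh
    essmsh set_volumes.msh    # new index/page files are then created on the separate mounts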

  • Back up - file system

    This question is about backups.
    I regularly take a file-system backup of the app folder (\\Hyperion\AnalyticServices) from my Essbase server.
    Is this the right way to back up the file system? If any database gets corrupted, can I just replace the app folder with the backed-up app folder? Does that work?

    This is one way to do the backup. If an application became corrupted, you would just have to restore that application's directory under the app folder. That is, unless you have stored files on another drive: if you have gone into the Storage tab of the database properties and changed where the page and index files reside, you would also have to back up that drive (same directory starting point) at the same time as your other drive backup.
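    A hedged sketch of that file-system copy on a Unix server (the paths and application name are placeholders; the key point is to quiesce the application first so the .pag/.ind/.esm/.tct files are consistent, either by stopping the app or by putting the database in archive mode with MaxL begin archive / end archive):
    # stop the application (or run: alter database sample.basic begin archive to file 'arch.txt';)
    tar -cvf /backup/essbase_app_$(date +%Y%m%d).tar /Hyperion/AnalyticServices/app
    # restart the application (or run: alter database sample.basic end archive;)
    # if the Storage tab points index/page files at another drive, tar that location too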

  • Maxl - load rules from file system

    I am writing a MaxL script and I can get it to run "using server rules_file". When I created the rules file, I saved it to the file system and browse to it using EAS. Does anyone know the syntax to run a rules file from the file system within a MaxL script?
    Thank you in advance
    Russ

    Hi guys, back to revisiting the calc script from maxl, but this time a new twist:
    One of the suggestions was to copy the calc script from the local drive to the essbase server (this would require FTP since essbase is running on Unix), the client isn't sold on that idea. The second idea would be to save the calc scripts in a separate database on the server and reference them from there.
    Just as a reminder, I am storing the load rules and calc scripts in a different location because it is an HPCM application and each time I deploy, the calc scripts get deleted automatically.
    I have a database and app as follows CostC.CostC and have created another db and app CostRef.CostRef. My script is C_Prof_A.csc and it is saved in CostRef.CostRef. Is it possible to execute a calc script saved in another database/app on a different database/app?
    If I save the script in CostC.CostC, I can execute it in MaxL as follows: execute calculation 'CostC'.'CostC'.'C_Prof_A';
    There is a snippet on the Rittman Mead Consulting blog that references a local Script_file
    execute calculation on database DemoASO.BasicASO with
    local script_file "/u01/app/...../DemoASO/BasicASO/BudVar.csc"
    When I try something similar, it barks at me when it sees "with"; I have tried "using" etc. but can't get anything to work.
    If this doesn't work, then I will be reduced to the FTP process. If that is the case, and anyone has an example of the FTP commands, I would appreciate it.
    Russ
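    Coming back to Russ's original question about pointing MaxL at a rules file sitting on the file system rather than one stored on the server: the import statement also accepts a local rules_file with a full path. A hedged sketch only - the app/db names, paths, and credentials are placeholders, and the exact import grammar varies a little between releases, so check the MaxL reference for yours:
    # write a one-statement MaxL load script and run it with the MaxL shell
    printf '%s\n' \
      'login admin password on localhost;' \
      "import database CostC.CostC data from local text data_file '/data/loads/costs.txt' using local rules_file '/data/loads/costs.rul' on error write to '/data/loads/costs.err';" \
      'logout;' > load_costs.msh
    essmsh load_costs.msh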

  • How to fix file system error 56635 in windows 8.1

    Hello, please help me fix this problem; I can't install anything on my laptop.
    File system error 56635 shows up when I install my Kaspersky 2015.

    Hello Jay,
    The current forum is for developers. I'd suggest asking non-programming questions on the
    Office 2013 and Office 365 ProPlus - IT Pro General Discussions  forum instead.

  • ISE 1.2 VM file system recommendation

    Hi,
    According to http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html#wp1056074, table 4-1 discusses the storage requirements for ISE 1.2 in a VM environment. It recommends VMFS.
    What are the implications of using NFS instead? Is this just a recommendation or an actual requirement?
    At the moment, we use a NetApp array that serves NFS for all vApps. It will be difficult to justify creating an additional FC HBA just for this one vApp. Please explain.
    TIA,
    Byung

    If you refer to
    http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html
    It says :
    Storage
    •File System—VMFS
    We recommend that you use VMFS for storage. Other storage protocols are not tested and might result in some file system errors.
    •Internal Storage—SCSI/SAS
    •External Storage—iSCSI/SAN
    We do not recommend the use of NFS storage.

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    A SAP GoingLive verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been flagged as having too high an average read time per block:
    File name: /oracle/PMA/sapdata5/sr3700_10/sr3700.data10
    Blocks read: 67534 - Avg. read time: 23 ms - Total read time for this datafile: 1553282 ms
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    We have a BW load that generates "Checkpoint not complete" messages every night.
    I've read in sap note 79341 that :
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have trouble understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
    The recommended ("standard") values are published at the end of sapnote #322896.
    23 ms really does seem a little high to me - for example, we get around 4 to 6 ms on our production system (with SAN storage).
    >> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
    Correct.
    >> But how is it that frequent chekpoints should decrease the time necessary for recovery ?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile header
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, so dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). If your instance crashes, you then need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN -> ergo the recovery is faster.
    But this concept does not fully reflect reality, because Oracle implements algorithms (incremental checkpointing) to reduce the workload on the DBWR at checkpoint time.
    There are also several parameters (depending on the Oracle version) that ensure a required recovery time is met (for example FAST_START_MTTR_TARGET).
    Regards
    Stefan
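    To make that concrete, a hedged SQL*Plus sketch (group numbers, file paths, sizes, and the MTTR value are illustrative only): check the current log sizes and switch frequency, add larger online redo log groups, and let Oracle bound instance recovery with incremental checkpointing instead of relying on log switches alone.
    # write the checks/changes to a SQL script and run it as sysdba
    printf '%s\n' \
      'select group#, bytes/1024/1024 as mb, status from v$log;' \
      'select trunc(first_time) as log_day, count(*) as switches from v$log_history group by trunc(first_time) order by 1;' \
      "alter database add logfile group 5 ('/oracle/PMA/origlogA/log_g5m1.dbf') size 512m;" \
      "alter database add logfile group 6 ('/oracle/PMA/origlogB/log_g6m1.dbf') size 512m;" \
      'alter system set fast_start_mttr_target=300 scope=both;' \
      'exit' > redo_check.sql
    sqlplus / as sysdba @redo_check.sql
    # drop the old 54 MB groups once v$log shows them as INACTIVE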

  • Error message "Live file system repair is not supported."

    The system won't boot. I was directed to Disk Utility to repair it, but I get the error message "Live file system repair is not supported." I appreciate all help.
    Thanks.
    John

    I recently ran into a similar issue with my Time Machine backup disk. After about 6 days of no backups (I had swapped the disk out for my photo library for a media project), I reattached the Time Machine disk and attempted a backup.
    Time Machine could not back up to the disk. Running Disk Utility and attempting to Repair the disk ended up returning the "Live file system repair is not supported" message.
    After much experimentation with disk analysis software, I came to the realization that the issue might be that the USB disk dock wasn't connected directly to the MacBook Pro - it was daisy-chained through a USB hub.
    Connecting the USB disk dock directly to the MBP and running Disk Utility appears to have resolved the issue. DU ran for about 6 hours and successfully repaired the disk. Consequently, I have been able to use that Time Machine disk for subsequent backups.
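    For anyone else who hits the same message: it usually just means Disk Utility will not repair a volume while it is mounted (live). A hedged command-line equivalent, assuming the backup disk shows up as disk2 with the data volume at disk2s1 (identifiers are examples; check diskutil list first):
    diskutil list                        # find the identifier of the Time Machine volume
    diskutil unmountDisk /dev/disk2      # unmount it so it is no longer a live file system
    diskutil repairVolume /dev/disk2s1   # roughly what Disk Utility's repair/First Aid does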

  • How to get access to the local file system when running with Web Start

    I'm trying to create a JavaFX app that reads and writes image files to the local file system. Unfortunately, when I run it using the JNLP file that NetBeans generates, I get access permission errors when I try to create an Image object from a .png file.
    Is there any way to make this work in Netbeans? I assume I need to sign the jar or something? I tried turning "Enable Web Start" on in the application settings, and "self-sign by generated key", but that made it so the app wouldn't launch at all using the JNLP file.

    Same as with any other Web Start app: sign the app or modify the policies of the local JRE. Better to sign the app with a temporary certificate.
    As for the 2nd error (signed app does not launch), I have no idea as I haven't tried using JWS with FX 2.0 yet. Try activating the console and logging in Java's Control Panel options (in W7, JWS logs are in c:\users\<userid>\appdata\LocalLow\Sun\Java\Deployment\log) and see if anything appears there.
    Anyway, JWS errors are notoriously hard to figure out and the whole technology is temperamental. Find the tool named JaNeLA on the web; it will help you analyze syntax errors in your JNLP (though it is not aware of the new syntax introduced for FX 2.0 and may produce lots of errors on those). Also head to the JWS forum (Java Web Start & JNLP); Andrew Thompson, who dwells over there, is the author of JaNeLA.
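    A hedged sketch of the temporary-certificate route (the alias, keystore, passwords, and jar name are placeholders; the JNLP also needs to request all-permissions in its security element before local file access is granted):
    # create a self-signed key pair for development/testing only
    keytool -genkeypair -alias fxdemo -keyalg RSA -keysize 2048 -validity 180 \
            -keystore devkeys.jks -storepass changeit -keypass changeit \
            -dname "CN=Dev Test, OU=Dev, O=Example, C=US"
    # sign the application jar that the NetBeans build produces
    jarsigner -keystore devkeys.jks -storepass changeit dist/MyFxApp.jar fxdemo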

  • Zone install file system failed?

    On the global zone, my /opt file system is like this:
    /dev/dsk/c1t1d0s3 70547482 28931156 40910852 42% /opt
    I am trying to mount it in zone NMSZone1 with this config:
    fs:
    dir: /opt
    special: /dev/dsk/c1t1d0s3
    raw: /dev/rdsk/c1t1d0s3
    type: ufs
    options: [nodevices,logging]
    But failed like this:
    bash-2.05b# zoneadm -z NMSZone1 boot
    zoneadm: zone 'NMSZone1': fsck of '/dev/rdsk/c1t1d0s3' failed with exit status 3
    3; run fsck manually
    zoneadm: zone 'NMSZone1': unable to get zoneid: Invalid argument
    zoneadm: zone 'NMSZone1': unable to destroy zone
    zoneadm: zone 'NMSZone1': call to zoneadmd failed
    Please help me. Thanks.

    It appears that the c1t1d0s3 device is already in use as /opt in the
    global zone. Is that indeed the case? If so, you need to unmount
    it from there (and remove or comment out its entry in the global
    zone's /etc/vfstab file), then try booting the zone again.
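    A hedged sketch of that sequence from the global zone (the device names come from the question; the vfstab edit is left as a comment because the exact line varies):
    umount /opt                          # release the slice in the global zone
    # comment out the /opt line for c1t1d0s3 in the global zone's /etc/vfstab
    fsck -F ufs /dev/rdsk/c1t1d0s3       # clear the state that made zoneadm's fsck exit with status 3
    zoneadm -z NMSZone1 boot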

  • Verification file system failed to make partition

    I can't make a partition; the file system verification failed.

    Hello,
    In Disk Utility, try Repair Disk first; you need another boot drive or the Install Disc if you are trying this on your boot drive.
    "Try Disk Utility
    1. Insert the Mac OS X Install disc, then restart the computer while holding the C key.
    2. When your computer finishes starting up from the disc, choose Disk Utility from the Installer menu at top of the screen. (In Mac OS X 10.4 or later, you must select your language first.)
    *Important: Do not click Continue in the first screen of the Installer. If you do, you must restart from the disc again to access Disk Utility.*
    3. Click the First Aid tab.
    4. Select your Mac OS X volume.
    5. Click Repair Disk, (not Repair Permissions). Disk Utility checks and repairs the disk."
    http://docs.info.apple.com/article.html?artnum=106214
    Then try a Safe Boot, (holding Shift key down at bootup), run Disk Utility in Applications>Utilities, then highlight your drive, click on Repair Permissions, reboot when it completes.
    (Safe boot may stay on the gray radian for a long time, let it go, it's trying to repair the Hard Drive.)
    If perchance you can't find your install Disc, at least try it from the Safe Boot part onward.

  • How do I use a NAS file system attached to my router to store iTunes purchases?

    We have four Windows devices networked in our house. They all run iTunes with the same Apple ID, so when any one of them has iTunes running, we can see that computer on our Apple TV. Two run Windows XP, one runs Vista Business, and the newest runs Windows 7. Upgrading all of them to the same Windows software is out of the question. Our NAS hangs off an "off warranty" Linksys E3000 router, which communicates via USB with a 1 TB NTFS-formatted Western Digital hard drive. Our plan when we set up this little network, not cheaply, in our 1939 balloon-construction tank of a house, was to build our library of digital images, music, and video primarily on that device. It supports a variant of Windows streaming support, but poorly. The best streaming support for our environment seems to be our Ethernet-connected Apple TV, which hangs off the big TV in our family room and has access to the surround-sound "home theater" in that room. We've incrementally built up collections of photos, digitized music, and most recently educational materials, podcasts, etc., which threaten the storage limits on a couple of default C: drives on these Windows systems. iTunes, before the latest upgrades, gave us some feedback on the file system connections it established without going to the properties of the individual files, but the latest one has yet to be figured out. MP3s and photos behave fairly well, including the recently available connection between Adobe software and iTunes which magically appeared, allowing JPEG files indexed by Adobe Photoshop Elements running on the two fastest computers to show up, under control, on the Apple TV-attached 56" Samsung screen!
    Problems arose when we started trying to set things up so that purchases downloaded from the iTunes Store ended up directly on the NAS, and when things downloaded to a specific iTunes library on one of the Windows boxes caused storage "issues" on that box. A bigger problem looming in the immediate future is the "housecleaning" effort that is part of my set of New Year's resolutions. How do I get control of all of my collections and merge them on the NAS without duplicate files, so that when we have, for example, .AAC and .MP3 versions, only the "best" option for that piece of music becomes a candidate for streaming?
    I envision this consolidation effort as a "once in a lifetime" effort. I'm 70, my wife is 68 and not as "technical" as I am, so documented procedures will be required.
    I plan to keep this thread updated with progress and questions as this project proceeds. Links to well-documented "how to" experiences may be appreciated by those who follow it. I plan to post progress reports and detailed issues going forward. Please help?

    Step 1 - by trial and error...
    So far, I have been able to create physical files containing MP3 and JPG on the NAS, using the Windows XP systems to copy from shared locations on the Vista and Win7 boxes. This process has been aided by the use of a 600 GB SATA 2-capable hard drive enclosure. I first attach it to the Win7 or Vista box and reboot to see the local drive spaces formatted on the portable device. Then I copy files from the user's private directories to the public drive space. When the portable drive is wired to an XP box, I can use Windows to move the files from the portable device to the NAS without any of the more advanced file attributes being copied to the NAS. Once the files are on the NAS, I can add the new folder(s) to iTunes on any of the computers and voila, the data becomes sharable via iTunes. So far, this works for anything that I have purchased outright, or for MP3s I made from the AAC files created when I purchased albums via iTunes.
    I have three huge boxes full of vinyl records I've accumulated. The ones that I've successfully digitized, via a turntable attached to the sound card on one of my computers and third-party software, have found their way to the NAS after being imported into iTunes and using it to bring down available album artwork. In general I've been reasonably well pleased with the sound quality of digital MP3 files created this way, but the software I've been using sometimes has serious problems automatically separating individual songs from the album tracks, and re-converting "one at a time" isn't very efficient.

  • Solaris 10 - read-only file system after installation

    Dear All,
    I have installed Solaris 10 on my x86 system without any problem. The installation completed successfully. I followed the installation instructions in the following link.
    http://docs.sun.com/app/docs/doc/817-0544/6mgbagb19?a=view
    I am now facing a different problem: I am not able to create even a single file or sub-directory in any of the existing directories. It always says the file system is READ ONLY and it cannot create the file/directory.
    Please help me resolve this issue.
    Thanks in advance.
    Regards,
    Srinivas G

    What do you get for 'svcs -xv' output?
    Darren
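    While gathering that output, a hedged diagnostic sketch (run as root; the device name is only an example): confirm which file systems are actually mounted read-only and whether the root file system needs an fsck from single-user mode.
    svcs -xv                     # the output Darren asked about: services in maintenance often point at the cause
    mount -v | grep ' / '        # check the mount options on / (look for read only vs read/write)
    df -k                        # confirm the affected file systems and their devices
    # if / itself is read-only, boot single-user from the install media and run, for example:
    #   fsck -F ufs /dev/rdsk/c0d0s0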

  • Crystal Reports XI String [255] limit with the File System Data driver...

    I was trying to create a Crystal Reports XI report to return security permissions of files and folders.  I have been able to successfully connect and return data using the File System Data driver as the Data Source; however the String limit on the ACL NT Security Field is 255 characters.  The full string of data to be returned can be much longer than the 255 limit and I cannot find how to manipulate that parameter. 
    I am currently on Crystal XI and Crystal XI R2 and have applied the latest service packs but still see the issue. My Crystal Reports database DLL for File System data (crdb_FileSystem.dll) is at product version 11.5.10.1263.
    Is it possible to change string limits when using the File System Data driver as the Data Source?  If so, how can that be accomplished.  If not, is there another method to retrieve information with the Windows File System Data being the Data Source?  Meaning, could I reach my end game objective of reporting on the Windows ACL's with Crystal through another method?

    Hello,
    This is a known issue. In early versions you could not create folder structures longer than 255 characters. With updates to the various OSs this is now possible, but CR did not allocate the extra space required.
    It's been tracked as an enhancement - ADAPT01174519 - but is set for a future release.
    There are likely other ways of getting the info and then putting it into an Excel file format and using that as the data source.
    I did a Google search and found this option: http://www.tomshardware.com/forum/16772-45-display-explorer-folders-tree-structure-export-excel
    There are tools out there to do this kind of thing....
    Thank you
    Don
    Note the reference to msls.exe appears to be a trojan: http://www.greatis.com/appdata/d/m/msls.exe.htm so don't install it.
    Edited by: Don Williams on Mar 19, 2010 8:45 AM
