Data File Cache Significance

Hi everyone, I have been away from Essbase for a little while, and in the meantime our client made the move to Essbase v6. Could somebody please explain the significance, and the correct way to set, the Data File Cache? Also, have the rules for setting the Data Cache changed as a result? Any advice would be appreciated - I understood Essbase v5 quite well but have been out of the loop as far as Essbase v6 goes.

The data file cache has significance only if your client is using Direct I/O. Direct I/O was introduced in v6 to let DBAs explicitly manage data file caching on a database-by-database basis rather than leaving it to the operating system. The previous I/O scheme (Buffered I/O) was still supported, but Direct I/O was the default. Unfortunately, in installations with multiple applications per server, optimizing the data file cache for each database proved to be a headache, so Hyperion reverted to Buffered I/O as the default beginning with v6.2. In Essbase 6.0-6.1, then, Direct I/O is the default and you have to change essbase.cfg to use Buffered I/O; in 6.2 and later the reverse is true (please correct me if I'm wrong, everyone). If you choose to use Direct I/O, I believe the conventional wisdom is to make the data file cache big enough to hold all the .pag files in your database. Otherwise, set it as large as possible. Good luck, Bruce
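For reference, a minimal sketch of where these settings live (hedged: the keywords below come from later Essbase documentation, so verify the exact syntax against the technical reference for your v6.x level; Sample.Basic is a placeholder database):

    ; essbase.cfg - server-wide I/O default (assumption: DIRECTIO is honored at your release)
    DIRECTIO FALSE

    /* MaxL (shipped with later 6.x releases; ESSCMD has equivalents) - per-database settings */
    alter database Sample.Basic set io_access_mode direct;
    alter database Sample.Basic set data_file_cache_size 300MB;
    alter database Sample.Basic set data_cache_size 100MB;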

Similar Messages

  • Data File Cache / Data Cache

    I have a few questions regarding the data file cache and data cache, based on the size of the application.
    All the settings are changed by using a maxl script.
    1. Data file cache = 50MB, data cache = 100MB.
    If I am using buffered I/O, will memory still be allocated to the data file cache?
    2. The DBAG says that the data cache and index cache should be set as small as
    possible for both buffered and direct I/O. One of my applications is around
    11 GB (data files: 11GB, index files: 450MB), and I have set my index cache to
    450MB and data cache to 700MB.
    Is that OK? If not: (a) what should my data cache size be?
    (b) How do I calculate the optimal data cache and index cache?
    3. Our AIX server has 6GB of memory. If I use direct I/O, can the sum of all
    my caches be 4GB?
    4. If I use buffered I/O, according to (2), what should be my cache sizes?
    Thanks
    Amarnath

    The DBAG states that the data file cache is not used with buffered I/O, so the answer to (1) should be no.
    For (2), there is a hint in the DBAG that you should check the hit ratio of the caches to verify sizing; the only calculated sizing advice given is for the calculator cache :-( So for (2b): watch the hit ratio. If it stays around 1.0, try decreasing the cache until the ratio drops slightly, and inspect the ratios from time to time.
    (3) I don't know; on 64-bit it should be no problem. But why would you do this anyway?
    An example from our settings: .pag total ~20GB, .ind ~2GB. The outline has 11 dimensions with a block size of ~340KB, largest dense dimension ~400 members, largest sparse ~4000 members, existing blocks ~2.7 million.
    The data cache is set to 256MB and the index cache to 64MB; our hit ratios are 1.0 for the index cache and 0.77 for the data cache. So our data cache could be larger, but retrieval performance is around 3.0 seconds, which is fine for our users.
    (4) Check your hit ratios and try to increase or decrease the caches in small steps (I'd tune the index cache first, then, once it's fine, the data cache).
    Hope it helped a bit.
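    If, like the poster, you drive the settings from a MaxL script, the tuning loop above boils down to statements like these (a sketch; Sample.Basic is a placeholder, the new sizes generally take effect on the next database start, and the hit ratios themselves are visible in the EAS database properties):

        alter database Sample.Basic set index_cache_size 64MB;
        alter database Sample.Basic set data_cache_size 256MB;
        /* restart the database, run a representative workload, then re-check the hit ratios */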

  • Dense Restructure 1070020 Out of disk space. Can't create new data file

    During a Dense Restructure we receive: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    Essbase 6.5.3 32-bit
    Windows 2003 32bit w/16GB RAM
    Database is on E: drive with 660GB space total, database is ~220GB.
    All cubes are unlimited
    We tried restoring from backup; same problem.
    Over the years the database has never been recalculated, exported and re-imported, or verified; only new data is loaded, followed by a dense restructure.
    Towards the end of a dense restructure (about 89 temporary .pan files written out of roughly 101 2GB .pag files), we get the error: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    There are still several hundred GB of free space available, and we can write to this free space outside of the Essbase application from within Windows.
    The server's file system is consistent and defragmented, and we can demonstrate use of the additional space. The hard drive controller and system do not report any hardware issues.
    Essbase.cfg file
    ; The following entry specifies the full path to JVM.DLL.
    JvmModuleLocation C:\Hyperion\Essbase\java\jre13\bin\hotspot\jvm.dll
    ;This statement loads the essldap.dll as a valid authentication module
    ;AuthenticationModule LDAP essldap.dll x
    DATAERRORLIMIT 30000
    ;These settings are here to deal with error 1040004
    NETRETRYCOUNT 2000
    NETDELAY 1600
    App log
    [Sat Oct 17 13:59:32 2009]Local/removedfrompost/removedfrompost/admin/Info(1007044)
    Restructuring Database [removedfrompost]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost/removedfrompost/admin/Error(1070020)
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008108)
    Essbase Internal Logic Error [7333]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008106)
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    log00002.xcp
    Assertion Failure - id=7333 condition='((!( dbp )->bFatalError))'
    - line 11260 in file datbuffm.c
    - arguments [0] [0] [0] [0]
    Additional log info from database start to restructure failure
    Starting Essbase Server - Application [removedfrompost]
    Loaded and initialized JVM module
    Reading Application Definition For [removedfrompost]
    Reading Database Definition For [removedfrompost]
    Reading Database Definition For [TempOO]
    Reading Database Definition For [WTD]
    Reading Database Mapping For [removedfrompost]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    Waiting for Login Requests
    Received Command [Load Database]
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Reading Outline For Database [removedfrompost]
    Declared Dimension Sizes = [289 125 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 119 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34391]
    Maximum Declared Blocks is [1960864521] with data block size of [72250]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17138]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [removedfrompost] can hold a maximum of [76] blocks.
    The Dyn.Calc.Cache for database [removedfrompost], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [removedfrompost]...
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\removedfrompost\removedfrompost.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Reading Outline For Database [TempOO]
    Declared Dimension Sizes = [277 16 2 1023 139047 ]
    Actual Dimension Sizes = [277 16 1 1022 138887 ]
    The number of Dynamic Calc Non-Store Members = [68 3 0 0 0 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [4432]
    Maximum Declared Blocks is [142245081] with data block size of [8864]
    Maximum Actual Possible Blocks is [141942514] with data block size of [2717]
    Essbase needs to retrieve [1] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [TempOO] can hold a maximum of [591] blocks.
    The Dyn.Calc.Cache for database [TempOO], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [TempOO]...
    Data cache size ==> [3145728] bytes, [144] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\TempOO\TempOO.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Reading Outline For Database [WTD]
    Declared Dimension Sizes = [2 105 2 11649 158778 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 6 ]
    Actual Dimension Sizes = [1 99 1 1293 127722 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 5 ]
    The number of Dynamic Calc Non-Store Members = [0 29 0 257 57 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [99]
    Maximum Declared Blocks is [1849604922] with data block size of [420]
    Maximum Actual Possible Blocks is [165144546] with data block size of [70]
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [WTD] can hold a maximum of [26479] blocks.
    The Dyn.Calc.Cache for database [WTD], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [WTD]...
    Data cache size ==> [3145728] bytes, [5617] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\WTD\WTD.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Set Database State]
    Writing Parameters For Database [removedfrompost]
    Writing Parameters For Database [removedfrompost]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [TempOO]
    Writing Parameters For Database [TempOO]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [WTD]
    Writing Parameters For Database [WTD]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [SetApplicationState]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    User [admin] set active on database [removedfrompost]
    Clear Active on User [admin] Instance [1]
    User [admin] set active on database [removedfrompost]
    Received Command [Restructure] from user [admin]
    Reading Parameters For Database [Drxxxxxx]
    Reading Outline For Database [Drxxxxxx]
    Reading Outline Transaction For Database [Drxxxxxx]
    Declared Dimension Sizes = [289 126 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 120 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34680]
    Maximum Declared Blocks is [1960864521] with data block size of [72828]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17347]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [Drxxxxxx] can hold a maximum of [75] blocks.
    The Dyn.Calc.Cache for database [Drxxxxxx], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Reading Parameters For Database [Drxxxxxx]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Performing transaction recovery for database [Drxxxxxx] following an abnormal termination of the server.
    Restructuring Database [removedfrompost]
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    Essbase Internal Logic Error [7333]
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    Exception error log completed -- please contact technical support and provide them with this file
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

    To avoid all of this, as a best practice we don't allow dense restructures on cubes larger than 30 GB.
    As an alternative, we export the level-0 data, clear the database, and reload the data; after that we aggregate the cube to store the data at all consolidation levels. See the MaxL sketch below.
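    In MaxL, that rebuild cycle looks roughly like this (a sketch: app.db and the file name are placeholders, and the final default calc stands in for whatever aggregation script you actually use):

        export database app.db level0 data to data_file 'lev0.txt';
        alter database app.db reset data;
        import database app.db data from data_file 'lev0.txt' on error abort;
        execute calculation default on database app.db;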

  • Can I force caching of data files?

    Our applet is an open-source molecular viewer, http://jmol.sourceforge.net
    It fetches molecular model data files from the web server.
    The larger molecular model files are a few hundred kb in size.
    They are being fetched using URL.openStream(), which uses the URLConnection class.
    It seems that the data files are not being cached in the local data cache.
    Q: What kinds of files get cached in the local data cache (.jpi_cache or .java/deployment/cache/javapi) ?
    Q: Is there anything I can do to ask that my data files be cached?
    Thanks,
    Michael

    The Java Plug-in caches files of the following types
    downloaded via HTTP/HTTPS:
    .jar (jar file)
    .zip (zip file)
    .class (java class file)
    .au (audio file)
    .wav (audio file)
    .jpg (image file)
    .gif (image file)
    So your options could be to jar up your files and access them using the jar utility classes (I'm not 100% sure this can be done in your situation), or, as a bit of a hack, to rename your files to one of the above.
    Hope this helps.

    I develop Java applets too, and I load .zip archives via URLConnection. I have the same problem as -michaelthoward-:
    My files (.zip) are almost NEVER cached. "Almost" means that if I load the applet from my hard drive (effectively from localhost), they will then be cached.
    At the moment I use the "dirty trick" -maherrj- mentioned too, renaming my .zip archives to .jpg.
    But even then, bigger files (about 300KB and up) are sometimes STILL NOT CACHED. This strange and unreliable behaviour of Sun's applet plug-in occurs from Java 1.2 through 1.5 beta 2.
    If anyone has ideas (I don't dare speak of fixes), please write.
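    If you control the applet bundle, a cleaner variant of the jar-up suggestion is to ship the models inside the applet's own jar and read them as resources, which rides on the plug-in's jar caching instead of per-file HTTP caching. A sketch (the resource path and URL are hypothetical; exception handling elided):

        // Load a model packaged in the applet jar; the jar itself is cached
        // by the Java Plug-in, so the model comes along for free.
        java.io.InputStream in = getClass().getResourceAsStream("/models/caffeine.zip");
        if (in == null)
            throw new java.io.FileNotFoundException("model missing from jar");

        // If you must fetch over HTTP, at least ask the connection to use caches;
        // whether the plug-in honors this for .zip is exactly the problem above.
        String modelUrl = "http://example.org/models/caffeine.zip"; // placeholder
        java.net.URLConnection conn = new java.net.URL(modelUrl).openConnection();
        conn.setUseCaches(true);
        java.io.InputStream stream = conn.getInputStream();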

  • HELP! File cache Maxed out at startup

    Hello everyone. I have used this forum to answer my everyday questions, but I have not found the answer to this one. Lately my Mac's file cache gets maxed out almost instantly: I have 32GB of RAM and nearly 28GB gets used up by the file cache. I have tried Memory Cleaner, but it will not clear the file cache; I have to use the terminal command "sudo purge" to delete it. Just yesterday I started up my Mac and 27GB was already in use by the file cache.
    I recently installed Lightroom 5.7.1 and Camera Raw 8.7, and the problems started after I installed one of these apps.

    Uninstall Memory Cleaner and let the OS handle memory; it does a better job. Uninstall using the developer's instructions.
    Try a restart.
    Do a backup, using either Time Machine or a cloning program, to ensure files/data can be recovered. Two backups are better than one.
    Try setting up another admin user account to see if the same problem continues. If Back-to-My Mac is selected in System Preferences, the Guest account will not work. The intent is to see if it is specific to one account or a system wide problem. This account can be deleted later.
    Isolating an issue by using another user account
    Try booting into Safe Mode using your normal account. Disconnect all peripherals except those needed for the test. Shut down the computer, wait 10 seconds, and power it back up. Immediately after hearing the startup chime, hold down the Shift key, and continue to hold it until the gray Apple icon and a progress bar appear, and again when you log in. Boot-up is significantly slower than normal.
    Safe Mode resets some caches, forces a directory check, and disables all startup and login items, among other things. When you reboot normally, the initial reboot may be slower than usual.
    If the system operates normally in Safe Mode, a third-party application may be causing the problem. Try deleting/disabling third-party applications after a restart, using each application's uninstaller. For each disable/delete you will need to restart, if you don't do them all at once.
    Safe Mode - Mavericks   also Yosemite
    Safe Mode - About

  • Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost)

    Fresh installation of Exchange Server 2013 on Windows Server 2012.
    Our first test account cannot access its email via Outlook but can access it fine through OWA. The following message appears: "Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost)".
    If I turn off Cached Exchange Mode, setting the email account not to cache does not resolve the issue, and I get a new error message: "Cannot open your default e-mail folders. The file (path\profile name).ost is not an Outlook data file (.ost)". Very odd, since Outlook creates its own .ost file when you run it for the first time.
    I cleared the local AppData Outlook folder and tested on a new laptop that has never connected to Outlook; the same error appears on any system.
    Microsoft Exchange RPC Client Access service is running.
    No warning, error or critical messages in the eventlog, it's like the healthiest server alive.
    Any help would be greatly appreciated. I haven't encountered this issue with previous versions of Exchange.

    So it looks like a lot of people are having this issue, and seeing as Exchange 2013 is still relatively new, there isn't much information around to answer it. I've spent a LOT of time trying to figure this out.
    Here is the answer. :) No, I don't know it all, but I'm going to try to give you the most reasonable answer to this issue, in the most logical way.
    The first thing I did when troubleshooting was to ignore Martina Miskovic's suggestion for Step 4 (http://technet.microsoft.com/library/jj218640(EXCHG.150)) because it didn't make sense to me: I was trying to connect Outlook inside the LAN, not outside. However, Martina's suggestion does fix the issue if it's applied in the correct context.
    This is where the plot thickens (it's stew). What she didn't mention is that SSL (which I consider practically useless in a business environment where the owner pretty much trusts everyone in the company - otherwise they wouldn't work there, a very good business practice in my eyes, btw) has become some sort of fetish with Microsoft lately, and Exchange 2013 is no exception.
    In Exchange 2003, 2007, and 2010 you could install Exchange and then just set up Outlook: when the manual Microsoft Exchange profile asked for a server name, you gave it, along with the name of the person you were setting up, and as long as the machine was on the domain, not much more was needed. IT JUST WORKS! :) What a concept: if the person is already on the premises of the business, GIVE THEM ACCESS. I guess that was too logical for Microsoft. If you were off premises, you could use things like Outlook Anywhere - which had its place under that scenario.
    In Exchange 2013 the world changed, and of course Microsoft doesn't feel like telling people in plain English - I'm sure there is an article somewhere, but I didn't find it. Exchange 2013 does not support direct Outlook configuration the way all of its previous versions did. Did your jaw drop? Mine did when I realized it. So now, when manual Outlook setup asks for your server name and you give it Exchange2013.yourdomain.local, it says it cannot connect. This happens because ALL connections - INTERNAL AND EXTERNAL - are now handled via Outlook Anywhere, and you can't even disable that feature and have it function the reasonable way.
    So the question remains: how do you configure Outlook? Under the server properties there is a nice section called Outlook Anywhere, where you get to configure its external and internal addresses. This is another thing that should be logical but wasn't for me: when I configured the external address differently from the internal one, it didn't work. So I strongly suggest you get it working with the same internal address first, and then ponder how you want to make it work for outside users.
    Once you have that set up, go to the virtual directories and configure the external and internal addresses there as well - this is actually what Step 4, the one Martina was referring to, has you do.
    With both external and internal addresses now the same, you might think you can configure Outlook manually - think again. One of the loveliest features of Outlook Anywhere, and the reason I never used it in the past, is that it requires a TRUSTED certificate.
    So it's not that Exchange 2013 requires a trusted certificate - it's that Exchange 2013 drops a feature that had been there since Windows 2000 and Exchange 5.5.
    So it's time to install an Active Directory Certificate Authority. Refer to this wonderful article for the exact steps: http://careexchange.in/how-to-install-certificate-authority-on-windows-server-2012/
    Even after you do that, it won't work, because you have to add the CA's root certificate - which you can now download from your internal certsrv site - to the Default Domain Policy. (Yes, some people say never to mess with the Default Domain Policy and always to make an additional one; it's up to you, and I don't see direct harm if you know what you want to accomplish.) See http://technet.microsoft.com/en-us/library/cc738131%28v=ws.10%29.aspx for the exact steps.
    This is the moment of ZEN! :) Do you feel the excitement? After all, it is your first time. Before we get too excited, first request and then install the certificate on the actual Exchange server via the GUI, and assign it to all the services you can (IIS, SMTP, and there is a third I forget - you get the idea).
    Now go to your client machine where you have Outlook open and browse to your Exchange server via https://exchang2013/ in IE. If you don't get any certificate errors, it's good. If you do, run gpupdate /force on both the client and the server to refresh the policy. Don't try to manually install the certificate from Exchange's website on the client; if you want to do anything manually, install the CA's root certificate - but if you added it to the domain policy, you shouldn't have to.
    Basically, the idea is to make sure you have a CA, that the CA lets you browse to Exchange with no certificate error, and that you can look at the certificate and see it's from a domain CA.
    NOW you can configure Outlook. EASY, grasshoppa - not the manual way. WHY? Because the automatic way will now work. :) Let it discover the Exchange server and populate it all - and tell you I'm happy! :)
    Open Outlook - BOOM! It works... Was it as good for you as it was for me?
    You may ask: why can't I just configure it manually? You CAN - it's just a nightmare. Go ahead and open the settings of the account that got auto-configured. How do you like that server name? It should read something like [email protected], and if you go to Advanced and then the Connection tab, you'll see Outlook Anywhere is checked as well. Look at the settings: the server's FQDN is there in two places, and one has that Mtdd-something:Exchange2013.yourdomain.local.
    So what is that GUID in the server name, and where does it come from? It's the identity of the user's mailbox, so that setting will be different for every user, but you can figure it out via the console on the Exchange server itself if you wish.
    Also note: if your SSL certs have any trouble, it will just act as if Outlook can't connect to the Exchange server, even though it is really declining the connection because the cert or cert authority is not trusted.
    So in short, Outlook Anywhere is EVERYWHERE! It has barely any GUI or config, and you're just supposed to magically know what these kinds of generic error messages mean... Server names are now GUIDs of the form [email protected] - THAT MAKES PERFECT SENSE, MICROSOFT! ...and you have to manage certs... and the only place you're going to find the name of the server is inside the d*** Outlook Anywhere settings, in the connection tab, under its own config button - CAN WE PUT THE CONFIG ANY FURTHER AWAY!
    Frustrating beyond reason - that should be Exchange's new slogan...
    Hope this will help people in the future and won't get deleted because it's bad PR for Microsoft.
    PS: If you want to pick a fight with me about how SSL is more secure - I don't want to hear it, go somewhere else...
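    For anyone scripting the host-name steps above, they reduce to a couple of Exchange Management Shell commands (a sketch with placeholder names; verify the parameters against your Exchange 2013 cmdlet help):

        # Point Outlook Anywhere's internal host name at the same FQDN you got working above
        Get-OutlookAnywhere -Server EXCH2013 |
            Set-OutlookAnywhere -InternalHostname "exchange2013.yourdomain.local" -InternalClientsRequireSsl $true
        # After publishing the CA root cert via group policy, refresh it everywhere:
        gpupdate /force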

  • Need to read data files originally created in LabVIEW in C++

    I originally created binary files in LabVIEW. These files are made by flattening LabVIEW data to a string, converting the string to a U8 byte array, and saving that array to a binary data file. A separate file (a FAT) keeps track of the variables written to the data file: a 2D string array (first element = variable name, second = starting byte location in the .dat file, third = byte length of the variable) is flattened to U32 and then written to a binary file.
    Opening the files in LabVIEW follows the inverse procedure. The FAT file is read as U32 binary data, converted to a byte array, converted to a string, and then unflattened from the string into a LabVIEW 2D string type. From this the starting location and length of each variable are extracted; these are used to read the data file, in U8, which is then converted to a string and unflattened to LabVIEW data according to the LabVIEW data type.
    I now need to open these files in a Linux environment using C++. I am at a loss as to how to emulate all of this data-type conversion, flattening, and unflattening in C++. Does anyone have any ideas?

    Faraclas wrote:
    > Thanks rolfk, I will do the reading. But just to make sure you don't misunderstand
    > me, everything works beautifully in LabVIEW.
    I gathered that. But reading through the description gave me the
    impression that a few conversions/typecasts may actually be superfluous
    without any change to the actual data.
    > The problem I am having is trying to open up these files in C++. This system has
    > been around for a couple of years now, so I cannot change the data structure in
    > the LabVIEW programs (well I could but I don't want to because it is really a very
    > flexible system that works well).
    Well, it really isn't that difficult, it just won't be a one hour job
    for sure. A few guidelines:
    1) LabVIEW flattened data contains the actual data as is for all fixed
    size elements (scalars and clusters of scalars). Any variable sized data
    (strings, arrays) is prepended with an int32 for each dimension
    containing the number of elements for that dimension.
    2) LabVIEW flattened data is always in Big Endian (Most significant Byte
    first) format. This means for normal C(++) programs developed on an x86
    system you will have to byteswap every single 16 bit and 32 bit integer.
    3) LabVIEW Typecast is similar to flattened data, except that the number of
    elements is not prepended for variable-sized data. Typecast can't convert
    any type containing variable-sized data; only an arbitrarily complex
    datatype made up entirely of fixed-size data, or a simple array or string,
    can be typecast.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
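    A minimal C++ sketch of Rolf's points 1 and 2 - reading a length-prefixed, big-endian flattened array on a little-endian Linux box (assumption: a plain 1-D int32 array; the FAT's string/offset parsing layers on top of the same primitives):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // LabVIEW flattens data big-endian; x86 is little-endian, so swap bytes.
        static uint32_t be32(const unsigned char *p) {
            return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
                   (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
        }

        // A flattened 1-D array: an int32 element count, then the elements.
        static std::vector<int32_t> readFlattenedI32Array(std::FILE *f) {
            unsigned char b[4];
            if (std::fread(b, 1, 4, f) != 4) return {};
            uint32_t n = be32(b);
            std::vector<int32_t> out;
            out.reserve(n);
            for (uint32_t i = 0; i < n; ++i) {
                if (std::fread(b, 1, 4, f) != 4) break;
                out.push_back(static_cast<int32_t>(be32(b)));
            }
            return out;
        }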

  • Access via command line to the data files for Address Book

    Snow Leopard Server 10.6.8
    Mac Mini 2.66 GHZ Intel Core 2 Duo 4 GB 1067 MHZ DDR3
    I rolled back from Mountain Lion Server to Snow Leopard Server because I needed mySql. At the time I was under AppleCare and they walked me through the steps. However, I ended up with Address Book issues.
    Addressbook user on the Snow Leopard Server was upgraded to Mountain Lion Server. During the rollback process, the Apple tech had me delete the Addressbook user.
    When we finished the rollback, he had me recreate the Addressbook user, but now it is linked to a new, empty data file. I cannot add new contacts nor edit or delete them. So I need to use the command line to view both data store files, find the one that has my data in it and re-attach it to my new Addressbook user.
    Can someone help me with the Command Line commands?
    Thank you,
    Cailyn

    Hi,
    There are a couple of more advanced solution I can think of and I'm actually working on one myself.
    * You can create your own JNLP client that allows you to pass in arguments. That sounds harder than it is: building a stripped-down JNLP client (e.g. no installer, no applets), using one of the two open-source clients as a start, probably takes just a couple of days and should be sufficient for in-house usage.
    * Another solution is what I'm working on now. You can wrap your own executable around javaws that takes your passed in arguments plus jnlp href and looks up the original in the cache and adds a new one to the cache that it passes on to javaws and suddenly everything works as it should. The magic will be revealed at http://www.geocities.com/vamp201
    - Gerald Bauer

  • How to consolidate data files using data pump when migrating 10g to 11g?

    We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large chunks in one file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know there is a "remap" option, but it only does one-to-one mapping. How can I map multiple old data files into one new data file?

    Hi,
    Data Pump is terribly slow; make sure you have as much memory as possible allocated to Oracle, but the bottleneck can be I/O throughput.
    Use PARALLEL option, set also these ones:
    * DISK_ASYNCH_IO=TRUE
    * DB_BLOCK_CHECKING=FALSE
    * DB_BLOCK_CHECKSUM=FALSE
    set high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    more:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
    that's it, patience welcome ;-)
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
    Edited by: g777 on 2011-02-02 09:53
    P.S.2
    Breaking news ;-)
    I am playing with storage performance now, and I turned ON the disk cache option (also called write-back cache); it goes along with at least RAID 0 and RAID 5, and enabling it doesn't lose any data on that volume. It gave me a 1.5x to 2x speed-up!
    Some say there is a risk of losing more data when an outage happens, but there is always such a risk; you merely stand to lose less. Anyway, if you can afford it (and with an import it's OK, as it is not production at that moment), I recommend trying it. It takes 15 minutes, but you can gain 2.5 hours out of 10 hours of normal importing.
    Edited by: g777 on 2011-02-02 14:52
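    Putting that advice together, the export/import pair looks roughly like this (a sketch; the directory object, dump-file names, and parallel degree are placeholders - the consolidation itself comes from pre-creating the target tablespaces with fewer, larger data files before running the import):

        expdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=exp.log FULL=Y PARALLEL=4
        impdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=imp.log FULL=Y PARALLEL=4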

  • How to update cgicmd.dat file during runtime?

    I'd like to know how to update the cgicmd.dat file at runtime. For example, I run a report one.jsp as
    http://<machine>:<port>/reports/rwservlet?one.jsp&USERID=uid/pwd@db&DESTYPE=cache&mode=bitmap&desformat=htmlcss
    Within this report there is a hyperlink that opens another report named two.jsp.
    Before creating this hyperlink, I'd like to update cgicmd.dat with the passed-in user ID, password, and connect string, so two.jsp can use this key for its user info.
    Then I can create the hyperlink as follows:
    srw.set_hyperlink('/reports/rwservlet?report=two.jsp'||
    '&cmdkey=userinfo&DESTYPE=cache&mode=bitmap&desformat=htmlcss');
    Thanks

    To my knowledge the cgicmd.dat is only read when the OC4J starts, so you would have to come up with another solution. Using Single-Sign-On (SSO) is quite a good idea, and it's there for cases like this.
    Regards,
    Martin Malmstrom
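    For reference, a cgicmd.dat key is a static mapping along these lines (a sketch; '%*' appends the remaining URL arguments - and since the file is read at startup, this is exactly why runtime updates don't take effect):

        userinfo: userid=uid/pwd@db %*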

  • ORA-01173: data dictionary indicates missing data file from system tablespa

    Hello,
    My database is 11gR1 and Linux is the OS. I thought of posting this as a new thread, as things have changed a lot.
    Two of my database files were lost due to hardware failure (the data files do not belong to the SYSTEM tablespace); the second was the data file for the undo tablespace.
    I created the control file with a command that does not mention the lost data files:
    CREATE CONTROLFILE REUSE DATABASE "BSSGLB" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 5
    MAXDATAFILES 600
    MAXINSTANCES 10
    MAXLOGHISTORY 1168
    LOGFILE
    GROUP 1 '/DB/app/oracle/oradata/bssglb/redo01.log' SIZE 500M,
    GROUP 2 '/DB/app/oracle/oradata/bssglb/redo02.log' SIZE 500M,
    GROUP 3 '/DB/app/oracle/oradata/bssglb/redo03.log' SIZE 500M
    -- STANDBY LOGFILE
    DATAFILE
    '/DB/app/oracle/oradata/bssglb/system01.dbf',
    '/DB/app/oracle/oradata/bssglb/sysaux01.dbf',
    '/DB/app/oracle/oradata/bssglb/users01.dbf',
    '/DB/app/oracle/oradata/bssglb/bss01.dbf',
    '/var/oracle/oradata/bssglb/bss02',
    '/var/oracle/oradata/bssglb/system02.dbf'
    CHARACTER SET US7ASCII
    I have all the files mentioned in the datafile section of the above command.
    But somehow, when I try to open the DB, it first asks for media recovery. When I recover the database, it then disconnects, saying:
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    Process ID: 4768
    Session ID: 170 Serial number: 5
    Contents of the alert log file are:
    Beginning crash recovery of 1 threads
    parallel recovery started with 3 processes
    Started redo scan
    Completed redo scan
    1 redo blocks read, 0 data blocks need recovery
    Started redo application at
    Thread 1: logseq 7, block 2, scn 20356182450
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
    Mem# 0: /DB/app/oracle/oradata/bssglb/redo01.log
    Completed redo application
    Completed crash recovery at
    Thread 1: logseq 7, block 3, scn 20356202453
    0 data blocks read, 0 data blocks written, 1 redo blocks read
    Sun Feb 20 08:16:23 2011
    Thread 1 advanced to log sequence 8
    Thread 1 opened at log sequence 8
    Current log# 2 seq# 8 mem# 0: /DB/app/oracle/oradata/bssglb/redo02.log
    Successful open of redo thread 1
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Sun Feb 20 08:16:23 2011
    SMON: enabling cache recovery
    Errors in file /Oracle/app/oracle/diag/rdbms/bssglb/bssglb/trace/bssglb_ora_4550.trc:
    ORA-01173: data dictionary indicates missing data file from system tablespace
    Error 1173 happened during db open, shutting down database
    USER (ospid: 4550): terminating the instance due to error 1173
    Sun Feb 20 08:16:25 2011
    Instance terminated by USER, pid = 4550
    ORA-1092 signalled during: alter database open...
    ORA-1092 : opiodr aborting process unknown ospid (4550_182910338848)
    Sun Feb 20 08:16:26 2011
    ORA-1092 : opitsk aborting process
    Guys please help me out of this situation.
    Regards, Imran

    STOP opening new threads for ongoing solution to SAME problem
    Re: I have copy of data files! How to install and add these files to database!
    Re: Failure of network storage!
    Re: I have copy of data files! How to install and add these files to database!
    Handle:      misterimran
    Email:      misterimran
    Status Level:      Newbie
    Registered:      Jan 15, 2002
    Total Posts:      1,109
    Total Questions:      130 (115 unresolved)
    so many questions & so few answers!
    Edited by: sb92075 on Feb 19, 2011 7:49 PM

  • About error ORA-01208: data file is an old version - not accessing current

    Hi friends,
    After getting the error stated below, my database instance crashes every time. There is nothing in the alert.log file, and the error never comes in any pattern; it can occur at any time during the week.
    I have checked with my application client: they were doing nothing at the time, and the logs they sent show no write process and no jobs running during this period.
    My Oracle version is 10.2.0.3 and the OS is Windows 2000.
    Thanks in advance.
    PM
    Thu Oct 04 21:45:37 2007
    Errors in file d:\oracle\admin\prd347\bdump\prd347_ckpt_1296.trc:
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 7 failed verification check
    ORA-01110: data file 7: 'E:\ORADATA\PRD347\CCDM_DATA_SMALL_T01_01.DBF'
    ORA-01208: data file is an old version - not accessing current version
    Thu Oct 04 21:45:37 2007
    Errors in file d:\oracle\admin\prd347\bdump\prd347_ckpt_1296.trc:
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 7 failed verification check
    ORA-01110: data file 7: 'E:\ORADATA\PRD347\CCDM_DATA_SMALL_T01_01.DBF'
    ORA-01208: data file is an old version - not accessing current version
    _________________________________________________________________

    Hi,
    I am attaching the CKPT trace file, which might be helpful to you all.
    V10 STYLE FILE HEADER:
    Compatibility Vsn = 169869312=0xa200000
    Db ID=2446524247=0x91d2ff57, Db Name='PRD347'
    Activation ID=0=0x0
    Control Seq=654516=0x9fcb4, File size=491520=0x78000
    File Number=7, Blksiz=16384, File Type=3 DATA
    Tablespace #7 - CCDM_DATA_SMALL_T01 rel_fn:7
    Creation at scn: 0x0781.e33c8c09 12/13/2004 15:16:45
    Backup taken at scn: 0x0000.00000000 01/01/1988 00:00:00 thread:0
    reset logs count:0x201c7917 scn: 0x0000.00000001 reset logs terminal rcv data:0x0 scn: 0x0000.00000000
    prev reset logs count:0x0 scn: 0x0000.00000000 prev reset logs terminal rcv data:0x0 scn: 0x0000.00000000
    recovered at 09/25/2007 12:17:31
    status:0x4 root dba:0x00000000 chkpt cnt: 211627 ctl cnt:211626
    begin-hot-backup file size: 0
    Checkpointed at scn: 0x079b.81db9e4f 10/04/2007 21:45:15
    thread:1 rba:(0x33cee.2.10)
    enabled threads: 01000000 00000000 00000000 00000000 00000000 00000000
    00000000 00000000 00000000 00000000 00000000 00000000
    Backup Checkpointed at scn: 0x0000.00000000
    thread:0 rba:(0x0.0.0)
    enabled threads: 00000000 00000000 00000000 00000000 00000000 00000000
    00000000 00000000 00000000 00000000 00000000 00000000
    External cache id: 0x0 0x0 0x0 0x0
    Absolute fuzzy scn: 0x0000.00000000
    Recovery fuzzy scn: 0x0000.00000000 01/01/1988 00:00:00
    Terminal Recovery Stamp 01/01/1988 00:00:00
    Platform Information: Creation Platform ID: 7
    Current Platform ID: 7 Last Platform ID: 7
    DATA FILE #7:
    (name #18) E:\ORADATA\PRD347\CCDM_DATA_SMALL_T01_01.DBF
    creation size=0 block size=16384 status=0xe head=18 tail=18 dup=1
    tablespace 7, index=7 krfil=7 prev_file=0
    unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
    Checkpoint cnt:211628 scn: 0x079b.81dba014 10/04/2007 21:45:20
    Stop scn: 0xffff.ffffffff 10/03/2007 02:00:10
    Creation Checkpointed at scn: 0x0781.e33c8c09 12/13/2004 15:16:45
    thread:0 rba:(0x0.0.0)
    enabled threads: 00000000 00000000 00000000 00000000 00000000 00000000
    00000000 00000000 00000000 00000000 00000000 00000000
    Offline scn: 0x0000.00000000 prev_range: 0
    Online Checkpointed at scn: 0x0000.00000000
    thread:0 rba:(0x0.0.0)
    enabled threads: 00000000 00000000 00000000 00000000 00000000 00000000
    00000000 00000000 00000000 00000000 00000000 00000000
    Hot Backup end marker scn: 0x0000.00000000
    aux_file is NOT DEFINED
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 7 failed verification check
    ORA-01110: data file 7: 'E:\ORADATA\PRD347\CCDM_DATA_SMALL_T01_01.DBF'
    ORA-01208: data file is an old version - not accessing current version
    error 1242 detected in background process
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01122: database file 7 failed verification check
    ORA-01110: data file 7: 'E:\ORADATA\PRD347\CCDM_DATA_SMALL_T01_01.DBF'
    ORA-01208: data file is an old version - not accessing current version
    ------------------------------------------------------------------------------------------------

  • Copying data files to separate partition

    I am using Oracle 9i Release 2 with Windows XP and I would like to store the tablespace or data files on a separate raw partition. I was told that this could increase performance. How would I go about doing this?
    Thank you,
    Malina

    First, to the best of my knowledge, Windows doesn't support raw partitions. I've only ever heard of using raw partitions on Unix.
    Second, this is one of those things that if you have to ask the question, it's probably not appropriate. Raw disk partitions are significantly harder to manage than normal partitions, so if you don't have the storage administration background to deal with this added complexity, it's really not worth it.
    Third, the performance difference between raw disk and normal partitions is pretty small and affects only certain operations. The difference for most systems will be small-- unless you have run out of other things to tune and really need a couple of % boost, I wouldn't even consider this.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • I have copy of data files! How to install and add these files to database!

    Dear All,
    I am using Oracle 11gR1 and Linux is the operating system.
    I have OS level copy of my data files. This also includes the system and user tablespace data files.
    Here is the list of datafiles:
    */DB/app/oracle/oradata/bssglb/system01.dbf*
    */DB/app/oracle/oradata/bssglb/sysaux01.dbf*
    */iscsi/oracle/db/undotbs05.dbf*
    */DB/app/oracle/oradata/bssglb/users01.dbf*
    */DB/app/oracle/oradata/bssglb/bss01.dbf*
    */var/oracle/oradata/bssglb/bss02*
    */var/oracle/oradata/bssglb/system02.dbf*
    */iscsi/oracle/db/bss03.dbf*
    */iscsi/oracle/db/bssdevp01.dbf*
    I need to copy these files to a fresh install oracle database, kindly let me know the steps or any helpful document.
    Thank you, Imran

    I created control files with command:
    CREATE CONTROLFILE REUSE DATABASE "BSSGLB" NORESETLOGS  NOARCHIVELOG
        MAXLOGFILES 32
        MAXLOGMEMBERS 5
        MAXDATAFILES 600
        MAXINSTANCES 10
        MAXLOGHISTORY 1168
    LOGFILE
      GROUP 1 '/DB/app/oracle/oradata/bssglb/redo01.log'  SIZE 500M,
      GROUP 2 '/DB/app/oracle/oradata/bssglb/redo02.log'  SIZE 500M,
      GROUP 3 '/DB/app/oracle/oradata/bssglb/redo03.log'  SIZE 500M
    -- STANDBY LOGFILE
    DATAFILE
      '/DB/app/oracle/oradata/bssglb/system01.dbf',
      '/DB/app/oracle/oradata/bssglb/sysaux01.dbf',
      '/DB/app/oracle/oradata/bssglb/users01.dbf',
      '/DB/app/oracle/oradata/bssglb/bss01.dbf',
      '/var/oracle/oradata/bssglb/bss02',
      '/var/oracle/oradata/bssglb/system02.dbf'
    CHARACTER SET US7ASCII
    ;
    I have all the files mentioned in the DATAFILE section of the above command.
    But somehow, when I try to open the DB, it first asks for media recovery. When I recover the database, it then disconnects, saying:
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    Process ID: 4768
    Session ID: 170 Serial number: 5
    Contents of the alert log file are:
    Beginning crash recovery of 1 threads
    parallel recovery started with 3 processes
    Started redo scan
    Completed redo scan
    1 redo blocks read, 0 data blocks need recovery
    Started redo application at
    Thread 1: logseq 7, block 2, scn 20356182450
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
      Mem# 0: /DB/app/oracle/oradata/bssglb/redo01.log
    Completed redo application
    Completed crash recovery at
    Thread 1: logseq 7, block 3, scn 20356202453
    0 data blocks read, 0 data blocks written, 1 redo blocks read
    Sun Feb 20 08:16:23 2011
    Thread 1 advanced to log sequence 8
    Thread 1 opened at log sequence 8
      Current log# 2 seq# 8 mem# 0: /DB/app/oracle/oradata/bssglb/redo02.log
    Successful open of redo thread 1
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Sun Feb 20 08:16:23 2011
    SMON: enabling cache recovery
    Errors in file /Oracle/app/oracle/diag/rdbms/bssglb/bssglb/trace/bssglb_ora_4550.trc:
    ORA-01173: data dictionary indicates missing data file from system tablespace
    Error 1173 happened during db open, shutting down database
    USER (ospid: 4550): terminating the instance due to error 1173
    Sun Feb 20 08:16:25 2011
    Instance terminated by USER, pid = 4550
    ORA-1092 signalled during: alter database open...
    ORA-1092 : opiodr aborting process unknown ospid (4550_182910338848)
    Sun Feb 20 08:16:26 2011
    ORA-1092 : opitsk aborting process
    Please help

  • Time Capsule by default backup only data files and excludes program files like Photoshop, Lightroom, and MS Office, etc... right? or do I have to exclude manually?

    Time Capsule by default backs up only data files and excludes program files like Photoshop, Lightroom, and MS Office, etc., right? If the machine crashes I can reinstall those, and I don't want to take up disk space for applications that I have CDs to reinstall. Or do I have to exclude them manually?

    Time Machine will back up all files except things like caches, trash, and temp files. You can select which files you would like to exclude in the Time Machine preferences: http://support.apple.com/kb/HT1427
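    If you prefer the command line, exclusions can also be managed with tmutil (a sketch; available on OS X 10.7 and later):

        # Exclude the Applications folder from Time Machine backups
        sudo tmutil addexclusion -p /Applications
        # Verify whether a path is excluded
        tmutil isexcluded /Applications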
