ASO Model Backup

Hi folks,
I need to back up some apps on a 7.1 server. Some of them are BSO, and for those we both export the data and copy the files (.ind, .pag, .tct, .esm, ...) after stopping the Essbase service. However, it appears we can't export ASO models without using a report script, which takes quite a long time, and I don't know which files to copy to back up an ASO app. I did not find this in the Admin Guide. Does anyone have a clue about it?
Thank you
Pierre
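A minimal MaxL sketch of the file-copy route for ASO (the application name ASOapp is a placeholder, and the exact tablespace layout varies by release, so verify the directories on your own install):

    /* Stop the app so its files are consistent on disk */
    alter system unload application 'ASOapp';
    /* Outside MaxL, copy the whole app directory at the OS level:
       the outline plus the default, temp, log, and metadata tablespace directories */
    alter system load application 'ASOapp';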

Hi,
I'm trying something similar: converting a BSO metaoutline to suit an ASO application.
I converted all the formulae in the EIS metaoutline to MDX, but it threw a warning.
When I performed a member load, around 50000 members were loaded but at the final step the outline was not saved. I got the following error message:
Outline verification failed. Member (Measures).    ESS_OUTERROR3_ASO_BAD_NONLEAFMBR
This is actually the accounts dimension. The guidelines mention that the Accounts dimension property must be set as "dynamic at dimension level", but I don't have the 'Outline build' tab under the properties for the accounts dimension.
Even after deleting all members in the accounts dimension that have formulae, I get the same error. How can I set the accounts dimension as dynamic, or is this problem caused by something else?
Thanks in advance.

Similar Messages

  • ASO and storing data to multiple disk volumes

    Hi,
    I use Essbase Administration Services 7.1 on Win2000 with an ASO-type OLAP application. I would like to set storage to disk D: (the default is C:). How do I set the storage disk volume for ASO-type applications? For block storage applications it is on the Storage tab of the database Properties dialog. What about ASO-type applications? Is it possible to store data on multiple disk volumes?
    Thanks,
    Grofaty

    It is a bit different for ASO models - you need to modify the tablespaces, which are found in the properties of the APPLICATION, not the database. In EAS, right-click the app and choose Properties, then go to the Tablespaces tab; that should be what you are looking for.
    Regards,
    Jade
    ---------------------------------
    Jade Cole
    Senior BI Consultant
    Clarity
    [email protected]
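    If you'd rather script the change than click through EAS, MaxL exposes the same tablespace maintenance. A sketch (the app name ASOapp and the D: path are placeholders; check the MaxL reference for your release):

        /* Add a location on D: to the default tablespace of the ASO app */
        alter tablespace ASOapp.'default' add file_location 'D:\\essbase_data';
        /* Optionally cap how much disk that location may use */
        alter tablespace ASOapp.'default' alter file_location 'D:\\essbase_data' set max_disk_size 10gb;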

  • Hot backup procedure

    Hello, I have a database that takes in between 80 to 600 transactions per day through an external feed. I need to implement a hot backup that will run once a day. I don't need to do point in time recovery because I can restore from the prior nightly backup and then replay the tlog file from the external feed. What is the best way to implement a hot backup. Is it as simple as running a sql procedure that will loop through all the tablespaces and put them in backup mode and then copy them to another location? I am open to all and any recommendations. The database is Oracle 9.2.0.1 running on Windows 2000 or 2003 server. Please provide any scripts if you can. Thank you and have a great day!
    David

    I wrote a dynamic hot backup script once, you can look into it if you want:
    -- YJAM - Dynamic backup template
    -- SQL*Plus Env
    SET TIM OFF TIMI OFF ECHO OFF VERIFY OFF FEED OFF TRIMS ON
    -- System-dependent variables
    DEFINE DEST_DIR=C:\Backup
    DEFINE DIR_SEP=\  -- win*=\    | unix=/
    DEFINE SH=host    -- win*=host | unix=!
    DEFINE COPIE=copy -- win*=copy | unix=cp
    DEFINE SUPPR=del  -- win*=del  | unix=rm
    DEFINE MARGE_ARCHIVELOGS=10
    -- Preparation for the archived logs
    COL FARCH NEW_VALUE FIRST_ARCHIVELOG NOPRINT;
    SELECT MAX(SEQUENCE#)-1  FARCH FROM V$LOG;
    PROMPT *************************************************
    SET SERVEROUTPUT ON SIZE 200000
    SPOOL &DEST_DIR.&DIR_SEP.part1.sql
    DECLARE
         -- Cursor to fetch the list of datafiles
         CURSOR cDatafiles IS SELECT TABLESPACE_NAME, FILE_NAME FROM DBA_DATA_FILES ORDER BY TABLESPACE_NAME;
         REP_BACKUP     VARCHAR2(500 CHAR) := '';  -- Main backup destination directory
         OTN            VARCHAR2(50 CHAR)  := ' '; -- Trailing flag used to trigger tablespace backup-mode switches
         PFILE_INFO     VARCHAR2(500 CHAR) := '';  -- Used to test which kind of init file is in use
    BEGIN
         -- Build the destination directory name
         SELECT INSTANCE_NAME INTO REP_BACKUP FROM V$INSTANCE;
         REP_BACKUP := '&DEST_DIR.&DIR_SEP'||REP_BACKUP||'&DIR_SEP'||TO_CHAR(SYSDATE,'YYYYMMDD')||'&DIR_SEP';
         DBMS_OUTPUT.PUT_LINE('&SH mkdir ' || REP_BACKUP);
         DBMS_OUTPUT.PUT_LINE('--');
         -- Process the datafiles
         FOR vDatafiles IN cDatafiles
         LOOP
              -- If the current tablespace is not the same as the previous one
              IF (OTN != vDatafiles.TABLESPACE_NAME) THEN
                   -- If it is not the first one, take the previous tablespace out of backup mode
                   IF (OTN != ' ') THEN
                        DBMS_OUTPUT.PUT_LINE('ALTER TABLESPACE ' || OTN || ' END BACKUP;');
                   END IF;
                   -- Update the trailing flag and put the new tablespace in backup mode
                   OTN := vDatafiles.TABLESPACE_NAME;
                   DBMS_OUTPUT.PUT_LINE('ALTER TABLESPACE ' || OTN || ' BEGIN BACKUP;');
              END IF;
              -- Copy the files!
              DBMS_OUTPUT.PUT_LINE('&SH &COPIE ' || vDatafiles.FILE_NAME || ' ' || REP_BACKUP);
         END LOOP;
         -- Avoid an edge effect: take the last tablespace out of backup mode
         DBMS_OUTPUT.PUT_LINE('ALTER TABLESPACE ' || OTN || ' END BACKUP;');
         DBMS_OUTPUT.PUT_LINE('--');
         -- Back up the control file, both as a binary control file and as a trace
         DBMS_OUTPUT.PUT_LINE('ALTER DATABASE BACKUP CONTROLFILE TO ''' || REP_BACKUP || 'control.ctl'';');
         DBMS_OUTPUT.PUT_LINE('ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS ''' || REP_BACKUP || 'trace.ctl'';');
         DBMS_OUTPUT.PUT_LINE('--');
         -- Check which kind of init file is in use
         SELECT VALUE INTO PFILE_INFO FROM V$PARAMETER WHERE NAME='ifile';
         IF (PFILE_INFO IS NULL) THEN
              -- Presumably an SPFILE or the default PFILE is in use
              SELECT VALUE INTO PFILE_INFO FROM V$PARAMETER WHERE NAME='spfile';
              IF (PFILE_INFO IS NULL) THEN
                   -- The default pfile is in use,
                   -- so temporarily create an spfile in order to back it up
                   DBMS_OUTPUT.PUT_LINE('CREATE SPFILE=''' || REP_BACKUP || 'spfile.ora'' FROM PFILE;');
                   DBMS_OUTPUT.PUT_LINE('CREATE PFILE=''' || REP_BACKUP || 'pfile.ora'' FROM SPFILE=''' || REP_BACKUP || 'spfile.ora'';');
                   DBMS_OUTPUT.PUT_LINE('&SH &SUPPR ' || REP_BACKUP || 'spfile.ora');
              ELSE
                   -- An spfile is in use,
                   -- so simply back it up
                   DBMS_OUTPUT.PUT_LINE('CREATE PFILE=''' || REP_BACKUP || 'pfile.ora'' FROM SPFILE;');
              END IF;
         ELSE
              -- A specific pfile (ifile) is in use
              DBMS_OUTPUT.PUT_LINE('&SH &COPIE ' || PFILE_INFO || ' ' || REP_BACKUP);
         END IF;
         DBMS_OUTPUT.PUT_LINE('--');
    END;
    /
    SPOOL OFF
    @&DEST_DIR.&DIR_SEP.part1.sql
    PROMPT *** Exec part 1
    SPOOL &DEST_DIR.&DIR_SEP.part2.sql
    DECLARE
         -- Cursor over the list of archived logs
         CURSOR cListearchive(pLow NUMBER, pHigh NUMBER) IS  SELECT NAME,SEQUENCE#
         FROM V$ARCHIVED_LOG
         WHERE SEQUENCE# BETWEEN pLow AND pHigh
         AND DEST_ID IN
         (SELECT DEST_ID
          FROM V$ARCHIVE_DEST
          WHERE DESTINATION IS NOT NULL
          AND UPPER(DEST_NAME) IN
                    (SELECT UPPER(NAME)
                     FROM V$PARAMETER
                     WHERE VALUE IS NOT NULL
                     AND NAME LIKE '%log_archive_dest_%'
                     AND UPPER(SUBSTR(VALUE,1,8))='LOCATION'))
         ORDER BY SEQUENCE#;
         LAST_ARCHIVELOG          NUMBER := 0;
         -- Boundary variables for the archived logs to back up
         FIRST_ARCHIVELOG     NUMBER := &FIRST_ARCHIVELOG;
         CURR_ARCHIVELOG          NUMBER := 0;
         REP_BACKUP     VARCHAR2(500 CHAR) := ''; -- Main backup destination directory
    BEGIN
         -- Build the destination directory name
         SELECT INSTANCE_NAME INTO REP_BACKUP FROM V$INSTANCE;
         REP_BACKUP := '&DEST_DIR.&DIR_SEP'||REP_BACKUP||'&DIR_SEP'||TO_CHAR(SYSDATE,'YYYYMMDD')||'&DIR_SEP';
         -- Working margin on the archived logs: force a log switch
         EXECUTE IMMEDIATE 'ALTER SYSTEM SWITCH LOGFILE';
         -- Back up the last &MARGE_ARCHIVELOGS archived logs.
         SELECT MAX(SEQUENCE#)-1 INTO LAST_ARCHIVELOG FROM V$LOG;
         FIRST_ARCHIVELOG := FIRST_ARCHIVELOG - &MARGE_ARCHIVELOGS;
         DBMS_OUTPUT.PUT_LINE('-- range: ' || FIRST_ARCHIVELOG || ' to ' || LAST_ARCHIVELOG);
         -- Walk the list of matching archived logs
         FOR vListearchive IN cListearchive(FIRST_ARCHIVELOG,LAST_ARCHIVELOG)
         LOOP
              -- Are we looking at a new archived log?
              IF (CURR_ARCHIVELOG != vListearchive.SEQUENCE#) THEN
                   DBMS_OUTPUT.PUT_LINE('&SH &COPIE  ' || vListearchive.NAME || ' ' || REP_BACKUP);
                   CURR_ARCHIVELOG := vListearchive.SEQUENCE#;
              END IF;
              -- Otherwise there is nothing to do.
         END LOOP;
    END;
    /
    SPOOL OFF
    PROMPT *** Exec part 2
    @&DEST_DIR.&DIR_SEP.part2.sql
    PROMPT *************************************************
    But it's provided without guarantee - it's been a long time since I used it, and it might have bugs. It's up to you to test it; backup is not a matter where you trust people :-)
    Of course, you could also use a simple RMAN script, depending on what you are trying to achieve exactly:
    RUN {
      allocate channel c01 type disk format 'c:\backup\%d%U';
      backup database;
      backup archivelog all delete input;
    }
    This is just a sample, and you need to choose how you'll use RMAN.
    Yoann.

  • ASO member selection

    we have a hierarchy as below in account dim
    Z43100
    68405
    Rate
    Z43100 (Shared Member)
    When I retrieve Z43100 in Excel, the member name changes to 68405, and when I look in member selection I see 68405 under Rate instead of Z43100.
    I am facing this problem in an ASO model, version 9.3.1.
    Can anybody help me with this?
    Thank you

    Thanks for the reply.
    My hierarchy where Z43100 is a shared member is like:
    Rate
    Z43100 (+) shared member
    Z43000 (/) shared member
    100 (*) [0:100] ------------- contains value 100
    I tried setting Never Share on Z43100 where it occurs for the first time in the hierarchy, but it didn't work; it still showed me the same result.
    Is there anything else I can try?
    Thank you

  • ASO - Tuning

    Hi,
    New to ASO and in need of a little insight on tuning. I have a 6-dim cube. Measures is really small (20 members), but 2 of the dimensions are quite large: Zip Code has about 42K members and Customer has about 700K members. Are there any settings in the .cfg file or in EAS (cache, etc.) that I should consider looking into? Users will never drill to the leaf level of Zip Code or Customer, but they will drill up and down on both dims to the intermediary levels, which may contain thousands of rows. Reading the admin guide isn't really giving me true insight into ASO tuning, if such a thing exists.
    Does outline order matter at all?
    Thanks

    That's not too big for an ASO model, so I don't think you would have too much trouble, but it depends on what you are doing that is dynamic.
    Dim order does not matter. The big thing is to try to make as many hierarchies stored as possible. Dynamic hierarchies are what will slow you down the most, as is complex MDX that has to execute across many members or large ranges.
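    One more lever worth knowing about for intermediary-level retrievals is materialized aggregate views. A minimal MaxL sketch (the app/db name Sales.Main and the 1.5 size cap are placeholders, not from this thread):

        /* Let Essbase pick and materialize aggregate views until the
           database grows to 1.5 times its pre-aggregation size */
        execute aggregate process on database Sales.Main
             stopping when total_size exceeds 1.5;

    Queries against intermediary levels can then often be answered from a nearby stored view instead of rolling up from level 0.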

  • ASO outline

    Hi all,
    I am getting an error while building an outline in an ASO model. The problem is that I get errors when building a dimension above 6 million members; up to 6 million I do not have any problem.
    ERROR - 1007119 - Loading New Outline for Database [S_BIG1] Failed.
    ERROR - 1241101 - Unexpected Essbase error 1007119.
    On later loads these are the new errors:
    ERROR - 1042012 - Network error [10054]: Cannot Send Data.
    ERROR - 1042012 - Network error [10054]: Cannot Send Data.
    ERROR - 1241101 - Unexpected Essbase error 1042012.
    TIA
    Jsprint

    Thanks for the reply. Yep, I tried creating a new application using ASO and then used the load rule to build a specific dimension in the outline. I made sure the outline verified correctly with no errors before attempting a load of the data. When loading I receive the following error:
    ERROR - 1007083 - Dimension build failed. There are many possible causes (for example, problem allocating memory). Check the server log file to locate the error that caused the failure.
    ERROR - 1241101 - Unexpected Essbase error 1007083.
    When I check the error log file it simply says: \\Outline verification errors:
    I remember reading something about using a Buffer_ID of 1? Is this important? Again, I'm still not even sure if a dimension build on an outline is possible for ASO. Anyone have any ideas? Thanks in advance.

  • 11.1.2 Planning – Mapping Reporting Application

    Hi John,
    I was reading one of your blog
    http://john-goodwin.blogspot.com/2010/06/1112-planning-mapping-reporting.html
    I want to know what the advantage of using "Mapping Reporting Application" is over replicated partitioning, and also what the real use of "Mapping Reporting Application" should be.
    One example of why you may want to use this functionality: say you have a reporting database, such as an ASO database reporting actuals, and you want to push a proportion of the forecast/budget data from the planning application into the ASO model. It could also be that you have a number of different cubes that you need to push different areas of data to from your planning application.
    When I first read about this new functionality I assumed it would create a replicated partition and push the data to the mapped database; I thought it would manage the partition so it wasn't destroyed with a refresh. This is not the way it works, though, but more about that later. Can you please explain in detail how this functionality works?
    Thanks & Regards,
    Avneet Singh Bhatia

    I don't know if there is any advantage; it is just a different way of going about it. The planning method exports and then loads the data. Whether that is better than a replicated partition I am not sure, but everybody has their own opinion.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Essbase 11.1.2 dimension build slow

    Hi All,
    I wonder if anyone has had similar problems or ideas of how to solve the problem I have.
    We've recently migrated to Essbase 11.1.2 running on some big SPARC 64 bit servers and everything is going well.
    We have a problem with dimension builds, though. They work, just very slowly, sometimes taking over a minute to build a dimension with just 10-20 members in it. With 22 dimensions to build it is taking over 20 minutes to build our dimensions - much more time than loading the gigabytes of data afterwards, and this is making the overnight process slower.
    The model was migrated from 11.1.1.2 and we converted the outline. The dimension builds on the old server took 4 minutes. The old rules files are still used, but as a test I tried creating a new rules file in the same application to load the metadata from a text file instead of an Oracle SQL server, and the same problem exists.
    We use deferred restructure on an ASO model, but the restructure is fine - just the 'BuildDimFile' session runs for over a minute and then starts again for over a minute for the next dim build. The number of members seems to make no difference, be it 10 or so or 60,000.
    Has anyone got any ideas why or seen similar? Or should I really move out of the dark ages and learn Essbase Studio!
    Thanks for your help,
    Dan

    It seems to be related to the number of dimensions.
    I tried creating a basic outline, two dimensions and loaded with a basic rules file. Took 8 seconds in total, no problem.
    I then tried copying in one of the existing rules files that builds a dimension with four members. Still no problem.
    I then went to the existing outline and deleted all but two dimensions and tried rebuilding with the rules file - it is still quick. I then added the dimensions back, by typing them, all as stored with no members underneath and it suddenly jumps to 40 seconds (of which 30 sec for the dimension build time, 10 sec for the restructure) using the same rules file and data. I'd expect the restructure time to go up, but not the build time.
    Possibly unrelated - EAS often produces an error message when loading an outline stating that I have -1536 MB of memory free and the outline is 0 MB, asking whether I still wish to load.
    Dan

  • Scripting Aggregate storage Views

    I cannot seem to find anywhere that I can use MaxL to script a view materialisation specifically. I can obviously tell it to aggregate up to a certain size, after having used the wizard to create the optimum views and cribbing the size of the views created. However, is there syntax to allow me to specify views explicitly, e.g. {0,1,0,2,2,3,3,2,3} to specify the levels of aggregation for a 9-dimensional ASO model? I would like to automate this as part of an overnight process.

    An example from the Sample:Basic database:

    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN("Year")
    <ROW("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report

    Note that no attempt was made here to eliminate shared member values.
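    On the scripting question itself: there doesn't seem to be MaxL syntax for naming a view by its level combination (like {0,1,0,2,2,3,3,2,3}), but the selection and materialization can still be automated overnight. A sketch (the app/db name ASOapp.Main, the size cap, and the view-file name are placeholders; the view_file variant appeared in later releases, so check the MaxL reference for your version):

        /* Aggregate until the database reaches 1.5 times its pre-aggregation size */
        execute aggregate process on database ASOapp.Main
             stopping when total_size exceeds 1.5;
        /* Or, where supported, replay a view selection saved from the aggregation wizard */
        execute aggregate build on database ASOapp.Main using view_file 'nightly';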

  • 64-Bit Essbase Optimization Tips

    We are upgrading from 9.3.1 to 11.1.1.3 and have decided to try 64-bit Essbase on a Windows server with Xeon processors. All went well in initial testing, my ASO models all ran faster on the new 64-bit server. However, we have a planning model that of course is BSO. That model is performing very poorly on the new 64-bit server (most calculations are taking 5-6 times longer). Just to prove that the difference was 64-bit and not the upgrade to 11.1.1.3, I installed the 32-bit version of 11.1.1.3 on another server and ran the same test. It performed fine, very similarly to our 32-bit 9.3.1 installation.
    I guess I'm looking for any ideas on tuning BSO applications, or the server itself, in a 64-bit Windows environment. Also, has anyone else had similar issues?
    Any help is much appreciated!
    Jeff

    You're the first person I've heard say that performance is slower on 64 bit. Although BSO Essbase is always...interesting.
    Did you change platforms (Windows to *nix?)?
    What specifically is slower -- aggregations, specific allocations, something else?
    64 bit Essbase has made my Planning life much, much easier -- I don't have to wring every bit of performance out of Essbase to get more than acceptable speed. I am enjoying the laziness. :)
    If you go onto ODTUG's website, click on Tech Resources, and search for "64" you'll find two presentations on optimizing for 64 bit, one by Edward Roske and the other by Brian Suter. If you're not an ODTUG member, you can sign up for a free associate membership.
    Okay, if I mention ODTUG, I have to mention 2010's Kaleidoscope conference in Washington DC, 27 June to 1 July. There are many, many tuning sessions that are worth everyone's time. Many of the posters on this board will be presenting there. If you like us on the board, you'll (hopefully) like us more in person.
    Regards,
    Cameron Lackpour

  • OVI Suite - Nokia shooting themselves in the foot ...

    I've been using PC Suite for quite some time, as I'm responsible for updates and general maintenance for the users where I work. PC Suite has been working like a charm, easy to use, easy to update. Even some of our senior users have been installing it themselves with ease.
    So.. when OVI suite came along, I considered this to be at least as easy and fast to use as PC Suite. But no.
    First of all, they demand you create an OVI Suite account, which is annoying. Extreme lag, phones crashing when connecting, the computer crashing when connecting phones (yes, newer models), backups and restores not working properly (hangs). And if you want to install a program via OVI Suite, you have to go via the internet?? Sending a download link to your phone takes forever... if it makes it to the phone at all. What is Nokia doing here?
    This is a serious blow for us. 90% of our users have Nokia, but I'm not sure whether this will continue in the future... too much hassle with OVI Suite. I might encourage them to go for other phones unless OVI Suite improves... soon!

    I agree, OVI Suite is horrible. I installed and uninstalled 2 different versions over the past few months and went back to PC Suite.
    OVI is far too unstable and isn't any easier to use. In fact, for what I need to do, OVI is a lot more difficult!
    The GUI is a laugh, and yes, I agree it constantly falls over and locks up when doing things.
    I tried to update a hundred contacts via USB and it fell over a number of times; I went back to PC Suite and it does it flawlessly.
    If you have PC Suite I urge you NOT to upgrade unless you absolutely have to!!!!!

  • I have a model 1,1 Mac Book in the black case.  How can I transfer about 30 GB of stuff to a MacBook Pro running Yosemite?  Can't do a Thunderbolt transfer.  The Seagate backup drive I bought doesn't support 10.5.8.  Thank you.

    How do I transfer 30 GB of stuff, mostly text documents, bookmarks, and a few hundred photos, from a MacBook model 1,1 in the black case, purchased in (2007?), running Leopard 10.5.8 with a 667 MHz bus, 2 GB RAM, and a 256 GB drive, to a MBP 13" Retina running Yosemite 10.10.1? Using Migration Assistant is out. I discovered after I bought it that the Seagate Backup Plus for Mac does not work with 10.5.8. One to One recommends backing up my data before I do that, which seems like a good idea. I have an 802.11n Time Capsule that also isn't working, so that is not presently an option. Thank you.

    The Seagate backup drive I bought doesn't support 10.5.8
    The only reason I can see for that is that it's formatted NTFS, which would likely be read-only in 10.5.x. If you can reformat it Mac OS Extended, which would erase everything on it now, then it should work for your purposes.

  • DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model

    I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2. All backups are working as expected except for one. The database in question is supposed to be backed up with a daily express full and hourly incremental schedule. Although the database is set to the full recovery model, the new DPM server says that recovery points will be created for that database based on the express full backup schedule. I checked the logs on the old DPM server and the transaction log backups were working just fine up until I stopped protection on the data source. The SQL server is 2008 R2 SP2. Other databases on the same server that are set to the full recovery model are working just fine. If we switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group, it properly sees the difference when we flip the recovery model back and forth. We also tried switching the recovery model on the failing database from full to simple and then back again, but to no avail. Both the SQL server and the DPM server have been rebooted. We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the database is really using the full recovery model.
    Is there anything that someone knows about that can trigger a false positive for recovery-model-to-backup-type mismatches?

    I was having this same problem and appear to have found a solution. I wanted hourly recovery points for all my SQL databases. I was getting hourly for some but not for others; the others were only getting a recovery point for the express full backup. I noted that some of the databases were in simple recovery mode, so I changed them to full recovery mode, but that did not solve my problem. I was still not getting the hourly recovery points.
    I found an article that seemed to indicate that SCDPM does not recognize any change in the recovery model once protection has started. My database was in simple recovery mode when I added it (auto) to protection, so even though I changed it to full recovery mode, SCDPM continued to treat it as simple.
    I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, and 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
    It worked. The original db was still only getting express full recovery points and the copy was getting hourly. I suppose that if I don't want to restore a production db with an alternate name I will have to remove the db from protection, verify that it is set to full, and then add it back to protection. I have not tested this yet.
    This is the article I read: 
    Article I read

  • FYI - Just installed an APC Back-UPS XS 1200 Model BX1200

    This is a simple FYI... Read on if you have an interest in backup UPS systems for your Mac.
    My experience with my neighborhood's power supply (or lack of it) this past week has made me think very hard about a UPS system. I did some research and ended up buying the APC Back-UPS XS 1200 Model BX1200 to service my PM G5 Dual 2.5GHz system. This unit provides 1200VA and 780 watts.
    Not only did I lose power abruptly several times this past week, but I have noticed at times when the power supply was AOK that my PM G5 would mysteriously shut down for no good reason - but of course I suspect the power draw has caused my puny EnerGizer ER-HM450 UPS Power Protection device to give up on me. It has a VA rating of 450 and watts of 200, and supposedly 2-8 mins of battery run time when power fails. This EnerGizer just wasn't doing the job for me.
    My PM has 3 ext FW400 and FW800 devices, and the PM case has two stock HDs and a 3x 500GB RAID-0 SwiftData 200 setup installed inside. I also have the Ultra 6800 graphics card along with a 3x FW800 port PCI card and of course a 4-port FirmTek card for the SwiftData devices. I also have the Apple 23" Alu flat panel display along with iSight and powered USB hubs. On top of this I have a Brother B&W laser printer HL-5170DN, a cable modem and an AirPort base station. Occasionally I will also plug in the power supply for my 17" PB and various other things such as a Mini DV camera, cell phone charger and so on.
    So as you can imagine the power draw can get quite high when all these devices demand their max power draw - especially the Brother laser printer.
    The whole setup draws power from a single wall socket having 20amp circuit breaker. This same 20amp circuit also powers other wall sockets and over lights in my study/office where I house the computer.
    So OK - I installed my new APC UPS unit and I have to say I'm very pleased with it. I now feel very much protected. It claims to provide me with some 30-40 mins of battery power after a power failure.
    The nice thing about this device is that it's Mac friendly (I did not have to install any of the software that came with the APC device); no sooner had I hooked up its data port to a USB port on the PM G5 than the System Preferences Energy Saver panel displayed a new UPS tab for setting up the various juicy things for the UPS system. It showed me the exact APC model I had just installed along with its UPS battery level percentage (it showed 100% full). All very cool stuff.
    I will take a look at the supplied APC software to see whether it has goodies beyond what Apple provides. This other software is called PowerChute and claims to have sophisticated power management - we will see. ;-))
    Just thought others might find this report useful.
    APC provides a useful web site for you to figure out and size the best APC unit to buy at http://www.apc.com/sizing
    BTW - picked up this APC model at my local Staples store for $154 + tax.
    Regards... Barry

    I've been looking around online for more "user" opinions on the matter of the importance of true sine waves versus approximated ones. I just got off the phone with an APC representative who explained the difference to me, but of course, since he works for APC, he thought the difference you pay for in a Smart-UPS is worth it. He said the basic Back-UPS units are made so that the company can stay competitive.
    I actually own a Smart-UPS 1500 - WAY overkill for my setup. I'm just running a PowerMac G5 and a 30" display. But when I was shopping for one, a friend convinced me that my equipment would last longer if I went with the Smart-UPS. But the load on that thing is at its lowest indicator and it kind of feels like a waste.
    However, I know we have bad electricity at our apartment in Los Angeles, and often get brownouts, so maybe it's worth having.
    Anyway, my question is, does anyone have any opinions on how much our Apple computers suffer (or do not suffer) from AVR (approximated voltage regulation)? Is it really worth the extra few hundred dollars to get a UPS that will deliver a true sine wave?
    I appreciate any insight.
    Thanks,
    Brian
    Quad-core 2.5 G5, Mac OS X (10.4.5), 30" Apple Cinema Display

  • N95 Backup issue (hang on model-spec backup)

    I wanted to update my phone, as I missed lots of backups (still on stock firmware), but I cannot make a backup.
    The backup process just hangs at "Phone model-specific backup". I tried this first on Win Seven - I thought it was a compatibility issue. Later I tried it on XP SP2 - all the same. I cannot back up anything.
    I remember it worked well on XP SP3 with the suite on the Nokia Software CD.
    And I suspect it will also fail during an update.
    Specifications: 
    N95 8GB
    Firmware 11.0.026
    01-11-07
    RM-320
    Using Nokia PC Suite 7.1.18.0 and 7.1.26.0 on XP SP2 and Seven 7100

    Update
    !Warning
    Currently, updating on Windows Seven is impossible. It won't accept the BB5 ADL Loader driver in any way, and without that driver it won't be able to upload new firmware.
    Kids, don't try this at home.
