RMAN level 0 and level 1 tablespace/database relationships

If I take an incremental level 0 backup of an entire database and then take a level 0 of an individual tablespace, does an incremental level 1 backup of that tablespace reference the level 0 of the tablespace or the level 0 of the entire database? If I change the order of the level 0 backups, does that change which backup the tablespace's level 1 references?
I guess what I'm asking is: does the level 1 incremental reference the latest level 0 regardless of whether it's for the whole database or just the tablespace, or does a level 1 always reference its own level 0?
Also, if I take a level 0 of the database Sunday night, and then a level 1 of the database every night after that as well as multiple level 1s of a tablespace throughout each day, will Tuesday night's database level 1 include all of Monday's tablespace level 1s?
I just can't seem to find Oracle documentation about these relationships.

Hemant K Chitale wrote:
"Database" and "Tablespace" are logical groupings for our convenience.
RMAN tracks backups at the datafile level.
I think that is exactly what I was looking for. I just want to clarify my understanding.
If I run a level 0 on the database Sunday night, a database differential level 1 each night, and a level 0 on a specific tablespace each morning at 6 AM followed by hourly tablespace differential incrementals for that tablespace only, then Monday night's level 1 will be an incremental of the entire database referencing Sunday's level 0, except for that specific tablespace, which will be incremental from that tablespace's last level 1. Does that sound right? If so, it sounds like to restore the entire database to its state on Tuesday at noon from media, I have to put these backups back on disk before running the RMAN restore:
- database level 0 from Sunday
- Monday night's database level 1
- Tuesday's tablespace level 0
- Tuesday's tablespace level 1s from 7 AM to noon
This means I can avoid putting Monday's tablespace incrementals back, right? From what I gather, the RMAN restore will put the blocks back into the datafiles at their correct SCNs, and then if there are any archived logs after that point in time, the RMAN recover will apply them. Does that all sound correct?
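For reference, a minimal sketch of that point-in-time restore, assuming the listed pieces have been copied back to a hypothetical /restore_area/ directory; the target time below is purely illustrative. Once the pieces are cataloged, RMAN works out on its own which level 0 and level 1 sets to use:

RMAN> CATALOG START WITH '/restore_area/';
RMAN> RUN {
        SET UNTIL TIME "TO_DATE('2013-01-08 12:00:00','YYYY-MM-DD HH24:MI:SS')";
        RESTORE DATABASE;
        RECOVER DATABASE;
      }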

Similar Messages

  • RMAN Views and Level 1 or 0 stats

I'm using RMAN to back up an 11gR2 DB.
I know that V_$RMAN_BACKUP_JOB_DETAILS records the stats for each backup taken.
However, both level 1 and level 0 RMAN backups show up as incremental backups.
Technically, they both are, but is there a view or something I can query which shows whether a level 0 backup or a level 1 backup has been taken?

Hi,
Technically, they both are, but is there a view or something I can query which shows whether a level 0 backup or a level 1 backup has been taken?
I believe Oracle does not report whether an incremental backup was level 0 or level 1 here; I could not find this information in any view of the catalog.
    INPUT_TYPE contains a value indicating the type of input for this backup. For possible values, see the RC_RMAN_BACKUP_TYPE view.
    SQL> select * from RC_RMAN_BACKUP_TYPE;
        WEIGHT INPUT_TYPE
             1 BACKUPSET
             2 SPFILE
             3 CONTROLFILE
             4 ARCHIVELOG
             5 DATAFILE INCR
             6 DATAFILE FULL
             7 DB INCR
             8 RECVR AREA
         9 DB FULL
Regards,
    Levi Pereira
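For what it's worth, a hedged follow-up sketch: the incremental level does appear to be recorded per backup set rather than per job, in V$BACKUP_SET (or RC_BACKUP_SET when using a recovery catalog), so a query along these lines should distinguish level 0 sets from level 1 sets:

SQL> SELECT recid, backup_type, incremental_level, completion_time
  2  FROM   v$backup_set
  3  ORDER  BY completion_time;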

  • Dimension's levels and level attributes

    Hi guys,
    Let's say I have following situation:
    Creating dimension CUSTOMERS with unique key CUS_ID.
    Levels: L_CUSTOMER and L_COUNTRY.
One hierarchy: H_CUSTOMER_REGIONAL with levels L_COUNTRY -> L_CUSTOMER.
    Now the question: What level attributes should I create?
    For level L_CUSTOMER obviously CUS_ID, which is a key level attribute, and CUS_NAME.
    For level L_COUNTRY what attributes should I create?
    I see two variants:
    1. COUN_ID (key level attribute) and COUN_NAME
    2. just one COUN_NAME and this will be my key level attribute.
    What guidelines should I follow here? I intend to use model in Discoverer later if it influences the design here.
Please advise.
    Thanks,
    Alex

    Alexandre,
It depends on whether you plan to use COUN_ID in a join with a summarized fact table - if yes, I would suggest having the ID; if not (pure star schema), there might not be much use for it.
    Regards:
    Igor

  • RMAN cumulative and differential level 1 backups taking too much time

    hi,
I am attempting a hot backup of my 600 GB database to tape using NMO 5 / EMC NetWorker 7.6.
My NetWorker server is on Windows Server 2003.
My Oracle database is on RHEL 4.5, architecture ia64.
Oracle DB version: 10.2.0.4.0.
Using ASM.
Using EMC storage as database storage.
Using tape backup media type LTO-Ultrium-5.
The same number of channels (4) is used for both level 0 and level 1.
There are 60 datafiles for the database.
I am attempting an incremental (hot) backup.
The incremental level 0 takes 90 minutes to complete,
but level 1 backups (both differential and cumulative) take almost the same time as the level 0 backup, almost 80 minutes,
even though the backup set size for level 0 is almost 500 GB and the size of any level 1 backup is no more than 200 MB.
I am confused as to whether level 0 and level 1 backups should take the same span of time.
Please help me reduce the time to complete the level 1 backups.
Thanks in advance
    thanks in advance

An RMAN incremental level 1 or higher backup has to read every block in the data files to identify whether any modifications have occurred, so without change tracking its run time is dominated by the scan, not by how much changed; only the backup size depends on the amount of change. Are you using the latest patches? There are known bugs that can cause performance problems with RMAN backup and recovery. Otherwise, check the Oracle documentation to troubleshoot RMAN.
Block change tracking, as already mentioned, introduced in 10g, can greatly speed up your incremental level 1 and higher backups.
From what I understand:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mydir/rman_change_track.f';
As soon as block change tracking is enabled, Oracle starts to record every block that is updated. The information is stored in a bitmap inside the BCT file, and every incremental backup causes a bitmap switch in the BCT file.
If a previous bitmap exists besides the current bitmap, an incremental level 1 backup will back up only the blocks indicated by the bitmaps (incremental level 1 backups are differential by default). If there is no previous bitmap, the RMAN backup will perform a conventional scan of the database as usual.
The bitmap logic also applies to cumulative level 1 incremental backups, which use all the bitmaps recorded since the bitmap switch of the last level 0 incremental backup. Due to the limit of 8 bitmaps, a cumulative incremental level 1 backup will have to perform a conventional scan of the database if you take a level 0 database backup followed by 7 differential incremental backups.
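As a hedged follow-up, whether change tracking is active (and the tracking file location and size) can be checked like this:

SQL> SELECT status, filename, bytes FROM v$block_change_tracking;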

  • Want to upgrade 2008 R2 to 2012. Upgrade Advisor giving errors relating to database compatibility level and server collation

I want to upgrade a SQL Server from 2008 R2 to 2012. When I run Upgrade Advisor, I get the following error messages:
Rule "Valid Database compatibility level and successful connection" failed.
The report server database is not a supported compatibility level or a connection cannot be established. Use Reporting Services Configuration Manager to verify the report server configuration and SQL Server management tools to verify the compatibility level.
Rule "Valid Database server collation and successful connection" failed.
The SQL Server Database Engine is not configured with a valid server collation and cannot be used as the Reporting Services SharePoint Shared Service catalog database.
The database called ReportServer has Collation = Latin1_General_Cl_AS_KS_WS and Compatibility level = 100.
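For reference, a minimal T-SQL sketch of how the two properties mentioned above can be checked (the database name is taken from the post):

SELECT SERVERPROPERTY('Collation') AS server_collation;
SELECT name, compatibility_level FROM sys.databases WHERE name = 'ReportServer';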

Hi Andrew,
Regarding the first error message, please check in Reporting Services Configuration Manager that you connect to the ReportServer database from the correct server and use a servername\instancename format connection string. For more details, please review this similar thread.
Regarding the second error, it is caused by the current SQL Server Database Engine using an incompatible server collation.
SQL Server 2012 Reporting Services SharePoint mode utilizes the SharePoint shared services architecture, and SharePoint does not support a SQL Server Database Engine configured with case-sensitive or binary server collations.
Since the SQL Server Database Engine server collation property cannot be changed, you will not be able to complete an upgrade of Reporting Services. You will need to migrate your Reporting Services installation to a new server which uses a compatible server collation. For more details, please review the following article:
Incompatible Database Engine Server Collation:
https://msdn.microsoft.com/en-us/library/hh759335%28v=sql.110%29.aspx?f=255&MSPPError=-2147217396
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • RMAN backup: full, then level 0 and level 1 weekly

    Please share your thoughts of backups.
Friday -> Take a full RMAN backup
Saturday -> Take a level 0 RMAN backup -> If recovery is needed, restore full and apply level 0?
Sunday -> Take a level 1 RMAN backup -> If recovery is needed, restore full and apply incremental 1?
Monday -> Take a level 2 RMAN backup -> If recovery is needed, restore full and apply incremental 1?
Tuesday -> Take a level 1 RMAN backup -> If recovery is needed, restore full and apply incremental 2?
Wednesday -> Take a level 2 RMAN backup -> If recovery is needed, restore full and apply incremental 1?
Thursday -> Take a level 1 RMAN backup -> If recovery is needed, restore full and apply incremental 2?
    Thanks

Please note that full and level 0 backups are the same...
    Sunday -> Take a level 0 rman backup
    Monday -> Take a level 1 rman backup
    Tuesday -> Take a level 1 rman backup
    Wednesday -> Take a level 2 rman backup
    Thursday -> Take a level 1 rman backup
    Friday-> Take a level 1 rman backup
    Saturday -> Take a level 1 rman backup
Taking a level 2 in the middle of the week means that, in case of any recovery, you need to apply only the changes since the last level 0, 1 or 2.
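For reference, a minimal sketch of the commands behind such a schedule; the CUMULATIVE keyword is what makes a level 1 capture all changes since the last level 0 instead of since the most recent incremental:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative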

  • RMAN level 0 and level 1

    Hi Everybody
I have to schedule my RMAN backups in the following manner:
1. A level 0 backup has to run once a week.
2. On the rest of the days, a level 1 backup has to run to disk.
3. If, during the week, we move all backups to some other destination, the schedule has to run a level 0 automatically instead of the incremental backup,
and level 1 backups then follow that level 0.
Can anyone help me with how to schedule this task so that it satisfies the above three conditions?

    Thanks Aman
My manager wants it to work like Windows backup: a level 0 runs once a week and level 1s follow until the end of the week, but if we remove or move the backups within the week (so the backup folder is empty), then a level 0 has to run instead of the next incremental backup.
For example, I currently schedule RMAN as follows, via a Windows scheduled task running a script:
1. Level 0 is scheduled on Saturday only.
2. Level 1 is scheduled from Sunday to Thursday.
My manager doesn't want it like this; the level 0 should not be tied to a day like Saturday. He wants the script to check the disk for whether a level 0 exists:
if it exists, it will replace the level 0 with a new one;
if it does not exist, it will create a new one.
Is it possible to run level 0 and level 1 from the same script?
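For reference, a minimal sketch of a check the script could branch on, assuming it first runs a crosscheck so that pieces moved off disk are marked expired; if the count comes back 0, run a level 0, otherwise run a level 1:

RMAN> CROSSCHECK BACKUP;

SQL> SELECT COUNT(*)
  2  FROM   v$backup_set s
  3  JOIN   v$backup_piece p
  4         ON s.set_stamp = p.set_stamp AND s.set_count = p.set_count
  5  WHERE  s.incremental_level = 0
  6  AND    p.status = 'A';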

  • Table relationship between hierarchy level and merchandise category

1) I want to find the merchandise category from the merchandise hierarchy level which is attached to the same merchandise category.
I have data/tables as below.
My input: M_WWG1C_class.
I want to fetch matkl in T023, or matkl in MARA, for the same article within the same merchandise category.
How can I make the link?
2) If my input is CAWN_atwrt (characteristic value), how can I reach matkl in T023 or matkl in MARA for the same article within the same merchandise category?

Oracle does not support the REPEATABLE READ transaction isolation level. It only supports the SERIALIZABLE, READ COMMITTED and READ ONLY isolation levels.
The default is READ COMMITTED.
While READ COMMITTED can see all data committed up to the point each statement executes within the transaction, SERIALIZABLE can only see data committed as of the start of the transaction.
Another difference between the two, with respect to row-level locking:
Both read committed and serializable transactions use row-level locking, and both will wait if they try to change a row updated by an uncommitted concurrent transaction. The second transaction that tries to update a given row waits for the other transaction to commit or roll back and release its lock. If that other transaction rolls back, the waiting transaction, regardless of its isolation mode, can proceed to change the previously locked row as if the other transaction had not existed.
However, if the other blocking transaction commits and releases its locks, a read committed transaction proceeds with its intended update. A serializable transaction, however, fails with the error "Cannot serialize access", because the other transaction has committed a change that was made since the serializable transaction began.
Read the following to clarify your concepts on transaction isolation levels and locking mechanisms:
http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96524/c21cnsis.htm#2414
    Chandar
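For reference, a minimal sketch of selecting these isolation levels in a session:

SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SQL> ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED;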

  • Can RMAN do table-level recovery

    Hi,
    I know that you can do database and tablespace level recovery using RMAN. Can you also do table-level recovery? How granular can you get?
    Thanks

    For table level recovery you can use flashback features:
    Flashback Drop - Oracle now provides a way to use flashback to restore tables that were dropped accidentally.
    Flashback Table - This feature introduces the FLASHBACK TABLE statement in SQL, which lets you quickly recover a table to a previous point in time without restoring a backup.
    etc
    Oracle Database has a group of features, known collectively as flashback, that provide ways to view past states of database objects, or to return database objects to a previous state, without using traditional point-in-time recovery.
    Flashback features of the database can be used to:
    * Perform queries that return past data.
    * Perform queries that return metadata showing a detailed history of changes to the database.
    * Recover tables or individual rows to a previous point in time.
    Flashback features use the Automatic Undo Management system to obtain metadata and historical data for transactions. They rely on undo data: records of the effects of individual transactions. Undo data is persistent and survives a database malfunction or shutdown. Using flashback features, you employ undo data to query past data or recover from logical corruptions. Besides your use of it in flashback operations, undo data is used by Oracle Database to do the following:
    * rollback active transactions
    * recover terminated transactions using database or process recovery
    * provide read consistency for SQL queries
    Please refer here: http://www.stanford.edu/dept/itss/docs/oracle/10g/appdev.101/b10795/adfns_fl.htm
Another method: if the table is very important and your DB is running in archivelog mode, you can recover your database to another host (skipping unnecessary tablespaces that are not needed, to save on space cost), then export that table and import it into your working DB, or create a database link between the databases and run: insert into user.table1 select * from user.table1@recovered_database;
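For reference, a minimal sketch of the two flashback statements mentioned above, using a hypothetical scott.emp table; note that FLASHBACK TABLE ... TO TIMESTAMP requires row movement to be enabled on the table first:

SQL> FLASHBACK TABLE scott.emp TO BEFORE DROP;

SQL> ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE scott.emp TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;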

  • Restore DB from Incremental Level 0 and Level 1 Differential

Due to aging servers, I need to move a database to a new server; therefore, I'm looking to back up the source DB using an RMAN level 0 and restore it as the target DB on another host. While the target DB is restoring from the level 0, I want to run a differential of the source DB and apply it to the target DB once the level 0 restore is complete. Is this possible? I searched through the RMAN documentation and it appears that it is not. I see that you can do a restore, but the restore is based on the date/time of the differential, which pulls in the level 0. Could I run two RESTORE DATABASE commands before running the RECOVER DATABASE and ALTER DATABASE OPEN RESETLOGS commands? The first RESTORE DATABASE command would restore the database from the level 0 backup and the second would restore from the differential backup. Any help is appreciated.
    TIA
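For what it's worth, a hedged sketch of one way this is commonly done: restore the level 0 once, then catalog the later differential pieces and let RECOVER apply them as incrementals rather than running a second RESTORE; the NOREDO option tells RMAN to stop after applying the incrementals instead of demanding archived logs (paths are hypothetical):

RMAN> RESTORE DATABASE;                      # from the level 0
RMAN> CATALOG START WITH '/backups/diff/';   # register the differential taken later
RMAN> RECOVER DATABASE NOREDO;
RMAN> ALTER DATABASE OPEN RESETLOGS;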

I have run into a problem with my setup.
The steps that I used to maintain the "standby" server are as follows:
take a level 0 backup (BACKUP INCREMENTAL LEVEL 0 DATABASE) on Sunday and recover on the standby server
take a level 1 backup (BACKUP INCREMENTAL LEVEL 1 DATABASE) on Monday and recover on the standby server
take a level 2 backup (BACKUP INCREMENTAL LEVEL 2 DATABASE) on Tuesday and recover on the standby server
take a level 3 backup (BACKUP INCREMENTAL LEVEL 3 DATABASE) on Wednesday and recover on the standby server
take a level 4 backup (BACKUP INCREMENTAL LEVEL 4 DATABASE) on Thursday and recover on the standby server
But when I tried to take a level 5, I realised that level 4 is the limit for incremental backups.
Does that mean I have to take a level 0 again on Friday and carry on in the above fashion?
Also, sometimes after recovery my SYSTEM datafile runs into errors:
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: 'F:\ORACLE\MPWR01\MPWR01\SYSTEM01.DBF'
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 07/22/2011 15:32:02
    RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of log thread 1 seq 176293 lowscn 6782347405 found to restore
RMAN-06025: no backup of log thread 1 seq 176292 lowscn 6782295731 found to restore
RMAN-06025: no backup of log thread 1 seq 176291 lowscn 6782139901 found to restore
RMAN-06025: no backup of log thread 1 seq 176290 lowscn 6781998071 found to restore
RMAN-06025: no backup of log thread 1 seq 176289 lowscn 6781865569 found to restore
RMAN-06025: no backup of log thread 1 seq 176288 lowscn 6781709167 found to restore
The archives it asks for are expected, as they are the archives generated after the incremental backup, but why does my SYSTEM datafile go into recovery? I am a bit confused about that.
It occurred after I applied level 2 on my "standby" server; when I applied level 3 over it, it did not occur.
I would like a discussion on this, guys!
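As a hedged aside on those RMAN-06025 errors: a plain RECOVER DATABASE applies the incrementals and then asks for archived logs up to the present time. When rolling a standby copy forward purely from incremental backups, the NOREDO option stops recovery after the incrementals are applied, so no archived-log backups are requested:

RMAN> RECOVER DATABASE NOREDO;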

  • How to improve query performance at the report level and designer level

How can I improve query performance at the report level and at the designer (universe) level?
Please let me know in detail.

First, it all depends on the design of the database, the universe and the report.
At the universe level, you have to check your contexts very carefully to get optimal performance out of the universe, and also your joins: keep your joins on key fields, which will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on),
and when you create a parameter, try to match it against the key fields in the database.
    good luck
    Amr

  • How to maintain E-Business Suite with latest product levels and bug/security fixes

    Hi All,
How do I maintain E-Business Suite with the latest product levels and bug/security fixes?
What are the backup strategies for the database and the E-BS suite?
What is meant by "gather user requirements"?
Can someone please explain briefly?
    Thanks

    Please post your question in the appropriate forum.
    E-Business Suite
    http://forums.oracle.com/forums/category.jspa?categoryID=3
    Thanks,
    Hussein

  • How to apply row level security against the database administrator

I would like advice on applying row-level security against the database administrator. We need to prevent the DBA from editing data in some table rows, or at least have an indication that data was corrupted.
There is no problem with viewing the data, so we considered a one-way hash function or a digital signature stored in the same table, but we see the following disadvantages:
Hash - the DBA may use the same hash function to update the stored hash after changing the sensitive row.
Digital signature - there is a need to manage and keep the private key in a safe place outside the DB.
Are there additional ways to achieve this aim?

Does VPD help prevent the DBA from editing/viewing data in specific rows? Yes.
If I understand correctly, the DBA has full access to the security policy used by VPD to control access, and can grant himself privileges that I don't want him to have. You can define which users are exempt from the policies, through the context or by granting the EXEMPT ACCESS POLICY privilege.
This includes DBAs.
The simple fact of being a DBA doesn't guarantee exemption.
Everything depends on the VPD configuration.
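For reference, a minimal sketch of attaching a VPD policy; the schema, table and function names here are hypothetical, and the policy function must return a predicate string such as owner_id = SYS_CONTEXT('USERENV','SESSION_USER'):

SQL> BEGIN
  2    DBMS_RLS.ADD_POLICY(
  3      object_schema   => 'APP',
  4      object_name     => 'SENSITIVE_TAB',
  5      policy_name     => 'ROW_FILTER',
  6      function_schema => 'SEC',
  7      policy_function => 'F_PREDICATE',
  8      statement_types => 'SELECT,INSERT,UPDATE,DELETE');
  9  END;
 10  /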

  • Schema level and table level supplemental logging

    Hello,
I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
SQL> alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    IMPLICIT YES NO
My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I also run the command below?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but which can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA
Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
If you require more details, refer to the Oracle GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0).
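As a hedged follow-up, the table-level supplemental log groups that ADD TRANDATA creates can be verified from SQL; the schema name below is a placeholder:

SQL> SELECT log_group_name, table_name, always
  2  FROM   dba_log_groups
  3  WHERE  owner = 'GG_SCHEMA';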

  • Row-level security at the Database level

    We need Row-level security at the Database level, where the user who logs in to Crystal reports, should be able to fetch only those rows from the database that he is entitled to see. For this, the login name of the user is passed to a stored procedure which sets the context of the DB session and restricts the data retrieved.
    We are not looking for row-level security where the data is first retrieved and then filtered based on the user login name. However, we are definitely looking for a way to set a context for a database session based on the user login name, even before we start fetching data. So effectively, the user who logs in will fetch only those rows which he is supposed to see.
    Issue:
We face the problem of not being able to pass a variable to the database stored procedure to set the context (something like 'BOUSER' for BO, which works, whereas 'CurrentCEUserName' for Crystal Reports doesn't work).
Please let us know whether we can use the 'CurrentCEUserName' variable in Crystal in the same way 'BOUSER' is used in ConnectInit for BO. We would like to know how we could pass any variable in Crystal Reports which holds the user login information to a stored procedure.
Also, please suggest alternate ways to achieve this security restriction, if any.
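For reference, a minimal sketch of the database side of the session-context approach described above, assuming an Oracle back end; all names are hypothetical, and the report's connection logic would need to call set_login before any data is fetched:

SQL> CREATE OR REPLACE CONTEXT app_ctx USING sec.ctx_pkg;
SQL> CREATE OR REPLACE PACKAGE sec.ctx_pkg AS
  2    PROCEDURE set_login(p_user IN VARCHAR2);
  3  END;
  4  /
SQL> CREATE OR REPLACE PACKAGE BODY sec.ctx_pkg AS
  2    PROCEDURE set_login(p_user IN VARCHAR2) IS
  3    BEGIN
  4      -- record the login name for this session; views and procedures can
  5      -- then filter rows with SYS_CONTEXT('app_ctx', 'login_name')
  6      DBMS_SESSION.SET_CONTEXT('app_ctx', 'login_name', p_user);
  7    END;
  8  END;
  9  /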

    Hi
A previous database had a personnel table with station name, district and region, and a field holding each person's logon name. We also had an activity table with fields describing the activity, plus the station, district and region it occurred in.
By linking the individual rows in the activity table to the personnel table on the station name field, we then used CurrentCEUserName to filter on the personnel. This returned only the records in the activity table where the station the activity took place at was the same as the station associated with the logged-on person.
The additional bonus was that if we linked on district or region we had the same result but at a greater level, i.e. all activity in the logged-on personnel's district or, if linked on region, their region.
The personnel table was maintained by the system administrators, so maintenance was low.
I hope this helps.
    Kevin
