BO Server Out of space

My BO server has run out of space (98% used); the server has only a boot drive.
I see a lot of log files in the logging directory.
Is it OK to remove the old log files from the bobje/logging directory?
Please let me know which files can be deleted (and from which directories) to free up space.
Thanks

Hi Denis,
Thanks for the reply.
These are the log files which get created in the logging directory.
$ ls -ltr
-rwxr-xr-x   1 root     root       14236 Sep  4 12:12 wca_20080902_064855_11229.log
-rwxr-xr-x   1 darkd01  dark         187 Sep  4 12:15 boe_cmsd_20080829_131646_23636.log
These are the environment variables.
$ env
_=/usr/bin/env
LANG=en_US.UTF-8
HZ=100
LC_MONETARY=en_US.ISO8859-1
LC_TIME=en_US.ISO8859-1
PATH=/oracle/product/client_10.2/bin:/usr/bin:
LC_ALL=en_US.UTF-8
ORACLE_BASE=/oracle
BO_BASE=/devl/darkbo
LOGNAME=darkd01
MAIL=/var/mail/darkd01
__pdos_trackpid__=19453
__pdos_program__=login
LC_MESSAGES=C
__pdos_deny_auth__=0
LC_CTYPE=en_US.ISO8859-1
__pdos_date__=1220541345
__pdos_mergeaud_save__=49206
__pdos_authaud_save__=0
SHELL=/bin/ksh
JAVA_HOME=/usr/java1.5.0_12
WAS_LOGS=/opt/IBM/was61/AppServer/profiles/wsp_ark01/logs/wss_ark01
BO_HOME=/devl/darkbo/bobje/bobje
HOME=/export/home/darkd01
LC_COLLATE=en_US.ISO8859-1
LC_NUMERIC=en_US.ISO8859-1
LD_LIBRARY_PATH=/oracle/product/client_10.2/lib32
TERM=vt100
ORACLE_HOME=/oracle/product/client_10.2
PWD=/devl/darkbo/bobje/bobje
TZ=US/Eastern
$
Also, I ran
$ cat ccm.config
and searched the output for the keyword 'trace', but nothing is there.
Thanks,
Sandip.
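For reference, the cleanup being asked about usually boils down to an age-based find over the logging directory. Below is a minimal sketch demonstrating the pattern on a throwaway directory; the real target path (likely $BO_BASE/bobje/logging on this system) and the 7-day cutoff are assumptions, so confirm your site's retention requirements before deleting anything.

```shell
#!/bin/sh
# Demonstrate the age-based cleanup pattern on a scratch directory.
# On the real server, point LOGDIR at the actual logging directory
# (probably $BO_BASE/bobje/logging here) and keep the dry run first.
LOGDIR=$(mktemp -d)
touch "$LOGDIR/wca_old.log" "$LOGDIR/boe_cmsd_new.log"
touch -t 202001010000 "$LOGDIR/wca_old.log"   # back-date one file

# Step 1, dry run: list .log files older than 7 days, delete nothing.
candidates=$(find "$LOGDIR" -name '*.log' -type f -mtime +7)
echo "Would remove: $candidates"

# Step 2, removal: only after the dry-run output looks right.
find "$LOGDIR" -name '*.log' -type f -mtime +7 -exec rm -f {} \;

remaining=$(ls "$LOGDIR")
echo "Kept: $remaining"
```

BO services keep their current log file open, so prefer an age cutoff over deleting everything, and avoid removing files with today's timestamp.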

Similar Messages

  • OneDrive for Business: "the server is out of space"

    hi,
    - on-premises SharePoint Server 2013
    - dedicated "my site" web application
    - the site collection has the right quotas
    - the server has a lot of space available
    The OneDrive for Business client app has strange behaviour:
    - I sync small files (< 1 MB): works
    - I sync one file of 300 MB; the client app (OneDrive for Business) gives me "the server is out of space"
    - I remove my 300 MB file from my client and sync one file of 50 MB: works
    - I sync another 10 files of 50 MB each and it WORKS
    - I retry syncing the 300 MB file and the client app again gives me "the server is out of space"
    - I remove the file with the error and try to sync a smaller file of 250 or 200 MB: same error
    - I try to sync other files of 50, 60, 70 MB that add up to more than 300 MB, and it WORKS!
    What is the problem?

    Have you looked at site quotas on the My Sites in Central Administration on the SharePoint server, as well as the database size, to validate that there are no database size restrictions in place, and lastly the free space on the volume holding the database files on the SQL Server?
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Home media server for Mac that you can also get to remotely. Any ideas? I am out of space on my MacBook Pro hard drive and Apple TV. Need some sort of media server for delivery to Macs, iPods, iPads, etc. Any suggestions?

    Hi,
    I have been using my MacBook Pro as a home media server, hosting most content on the MacBook, with an iPod touch, iPads, and Apple TV streaming/sharing content.  I have run out of space on both the MacBook Pro and the Apple TV and am looking to move the content to a home media server.  Any thoughts/suggestions?  I'd like something that I can access remotely too.  I have an old Slingbox and also have a static IP address.
    Any thoughts/suggestions would be most welcome!

    Don't worry I've sorted it! I just had to turn off Reminders as well in iCloud. Calendar then worked fine, even when I turned Calendar and Reminders back on.

  • On a Mac mini OS X server 10.8.5 TimeMachine cannot copy 2.5 TB to a 6 TB Thunderbolt disk, runs out of space, Carbon Copy Cloner works perfectly

    On a Mac mini OS X server 10.8.5 TimeMachine cannot copy 2.5 TB (from a Lacie 2big Thunderbolt data disk) to another 6 TB Thunderbolt disk, runs out of space, Carbon Copy Cloner works perfectly: claiming just 2.5 TB after the copy. Thunderbolt disk is
    LaCie 2big Thunderbolt Series 6 TB

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message that corresponds to an abnormal backup. Now
    CLEAR THE WORD "Starting" FROM THE TEXT FIELD
    so that all messages are showing, and scroll back in the log to the time you noted. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them to the Clipboard by pressing the key combination command-C. Paste (command-V) into a reply to this message.
    If all you see are messages that contain the word "Starting," you didn't clear the text field.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.

  • Oracle server running out of space

    We have a Linux (Debian) server which has Oracle 10g on it, and it is now running out of space. So we decided to add some more disk space (adding a new hard drive to the same server). But if we want to add more data files, how can I tell the DB to use the added disk space? Since all the data files exist in /home/oracle/oradata/orcl and their tablespace (USERS) is on the old disk, is it possible to tell the DB to use the added disk?
    Please let me know if that can be done and how.
    Thanks in advance.
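    For what the original question actually asks (using the new disk without dropping anything): a tablespace can span disks, so you can simply add a datafile on the new mount to the existing USERS tablespace. A sketch, run as a DBA; the path /u02/oradata/orcl/users02.dbf is a hypothetical location on the newly added drive:

    ```sql
    -- Add a datafile on the new disk to the existing USERS tablespace.
    -- The path below is hypothetical; use a directory on the new drive.
    ALTER TABLESPACE users
      ADD DATAFILE '/u02/oradata/orcl/users02.dbf' SIZE 1G
      AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
    ```

    New extents for objects in USERS can then be allocated from the new file; nothing needs to be moved or dropped.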

    Drop the tablespace, if you are sure you don't need it.
    ALTER TABLESPACE tools offline;
    DROP TABLESPACE tools;
    1* create tablespace noned datafile '/u02/app/oracle/oradata/DWDEV01/noned.dbf' size 10M extent management local
    SQL> /
    Tablespace created.
    shutdown immediate;
    oracle@debian:~/oradata/DWDEV01$ mv noned.dbf _noned.dbf
    oracle@debian:~/oradata/DWDEV01$ sqlplus sys/p as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Nov 11 18:44:13 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 285212672 bytes
    Fixed Size 1218992 bytes
    Variable Size 75499088 bytes
    Database Buffers 205520896 bytes
    Redo Buffers 2973696 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
    ORA-01110: data file 5: '/u02/app/oracle/oradata/DWDEV01/noned.dbf'
    1* alter database datafile '/u02/app/oracle/oradata/DWDEV01/noned.dbf' offline drop
    SQL> /
    Database altered.
    SQL> drop tablespace noned;
    drop tablespace noned
    ERROR at line 1:
    ORA-01109: database not open
    SQL> alter database open;
    Database altered.
    SQL> drop tablespace noned;
    Tablespace dropped.
    Message was edited by:
    gopalora

  • I want to save my old emails as I've run out of space on my service provider's server for my email.

    I can't archive, as that also takes up space on the server.
    What is the best way to do this and still access/read the old emails on my iMac, 10.8.4?

    You can just drag the emails to "On My Mac" which will save them locally. Or you can use a third-party archive utility. If you're using Mail, you might look into MailSteward which will archive all your mail, including attachments, into a database:
    http://www.mailsteward.com
    I've used it for years and have been quite satisfied with it.
    Regards.
    Disclaimer: any product suggestion and link given is strictly for reference and represents my opinion only. No warranties express or implied. I get no personal benefit from the sale of any product I may recommend in any of my posts in the Communities. Your mileage may vary. Void where prohibited. You must be this tall to ride. Objects in mirror may be closer than they appear. Preservatives added to improve freshness. Contestants have been briefed on some questions before the show. No animals were harmed in the making of this post.

  • ANS1329S (RC29)   Server out of data storage space

    Hi
    I got the error message below. How do I resolve it?
    Thanking you in advance for your responses.
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on p1 channel at 10/12/2007 00:35:17
    ORA-19502: write error on file "alcrm_635733077_24748_1", blockno 40520705 (blocksize=512)
    ORA-27030: skgfwrt: sbtwrite2 returned error
    ORA-19511: Error received from media manager layer, error text:
    ANS1329S (RC29) Server out of data storage space

    You cannot expect that everyone here knows TSM (this is an Oracle, not an IBM, forum), so it would be great to give some more information about your environment.
    Message (ANS1329S) is clear, TSM server is out of space. Storage pools may be exhausted or no more tapes available. This is not an Oracle/RMAN issue, you have to contact the person(s) responsible for your TSM environment.
    Werner

  • ADI0052E: TSM API Error: ANS1311E (RC11)   Server out of data storage space

    Hello guys,
    I have a problem with the backup on MaxDB. It always fails with the following error message
    ADI0052E: TSM API Error: ANS1311E (RC11)   Server out of data storage space
    Has anyone faced this problem before? Any solution how could this be solved?
    Thanks for your help.
    Regards,
    Dan

    Hi Dan,
    Check the TSM server and make sure the primary storage pool that is defined
    as the copy destination for the management class you are using is not full,
    and that you have not run out of scratch tapes. In either case a storage pool
    has filled up and cannot migrate data, causing the RC11. This is usually what
    an "RC11 Server out of data storage space" means.
    I am sharing the configuration for an Oracle database; you can check the similar configuration for a MaxDB database.
    When I defined the node on my TSM server:
    REG NODE hostname_oracle password maxnummp=2
    it was defined in the standard storage pool, and I moved it to another storage pool.
    Hope this is useful.
    Regards,
    Deepak Kori
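    As a first diagnostic step on the TSM side (assuming you have an administrative ID; admin/secret below are placeholders), you can check pool utilization and scratch-tape availability from the admin command line:

    ```shell
    # Placeholders: replace admin/secret with a real TSM administrative ID.
    dsmadmc -id=admin -password=secret "query stgpool"     # check Pct Util per pool
    dsmadmc -id=admin -password=secret "query libvolume"   # look for Status=Scratch volumes
    ```

    If every pool is near 100% utilization or no scratch volumes remain, that matches the RC11 explanation above.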

  • Hyper V on Win Server 2012 throwing false out of space alerts and then intermittently pausing VM's (When Veeam runs)

    I have a Windows Server 2012 R2 machine running Hyper-V with 6 virtual machines on it. My issue is that each of the machines has a status saying it is running out of disk space.  Neither of the hard drives on my host is low, and none of the VHDs are
    low... which, as long as they work, is not a big deal. The issue we're running into is that when Veeam runs backup jobs, some of my VMs will pause. The odd part is we might have 2 to 5 of the 6 machines pause maybe once a month, so it's not happening consistently.
    We've contacted Veeam support and they've been useless; they advised we upgrade to Veeam 8, which made no change. We have a number of other virtual hosts with virtual machines being backed up by Veeam that never have this issue... Is this a Hyper-V issue or
    a Veeam issue? Any advice would be appreciated!

    My boss had worked with Veeam on this issue and they tried an upgrade to resolve this, however, it's happened again a number of times in the last week. I went through my logs and I also looked at my storage (Posted below). The servers that paused today are (App4,
    FS3, FS4, FS5). App3 and FS1 have also paused before, however, not this time. Phone1 has not paused. So our total usage is 2.2 TB of a 2.45 TB drive. All the VHD files are stored on the virtual host D drive. I'm not understanding the status warnings, some
    say they are running out of space when there is indeed enough space on both the VHD files for C’s and D’s.
    I reached out to a vendor/contractor we use for complex issues and I posted his advice below; however, it's not making sense to me... We do not use snapshots and as of this moment we're not at a low space threshold. Is the issue as simple
    as we have too much on this server and the Veeam backups are triggering the host's D drive to go below the low space threshold?
    "When I set up virtual servers, I set up fixed disks and fixed memory usage; this keeps the memory cache from fluctuating, and I limit the disks to only what the server needs and then expand them as needed. Fixed disks perform
    better and it is a good way to avoid fragmentation and out-of-control dynamic disks. At this point, your issue is going to be freeing up space on the D: drive of the Hyper-V server – your snapshot logs (and possibly someday replication change logs) are going
    to continue to reach the disk space limit. There are some registry changes that some suggest to disable the safeguard, but remember, this is a safeguard, as the server needs free space on the Hyper-V server host to operate."
    Server Specs
    App3 - Dynamic - C: 47.36 GB used of 50 - D: 250 GB used of 250 - No status warning
    App4 - Dynamic - C: 193.12 used of 200 - D: - Disk running out of space
    FS1 - Dynamic - C: 55.4 used of 100 - D: 1000 used of 1000 - No status warning, however,  the D drive is out of space?
    FS3 - Dynamic - C: 55.62 used of 60 - D: - No status warning
    FS4 - Dynamic - C: 19.1 used of 60 - D: 32.19 used of 100 - Disk running out of space
    FS5 - Dynamic - C: 18.44 used of 127 - D: 74.04 used of 127 - Disk running out of space
    Phone1 - Fixed - C: 127 - No status warning
    Event Errors...
    'FS5' has been paused because it has run out of disk space on 'D:\VM Drives\FS5\'. (Virtual machine ID 15D4F67C-562A-4AB6-AB72-47E8722CCB85)
    'FS1' has been paused because it has run out of disk space on 'D:\vm drives\FS1\Virtual Hard Disks\'. (Virtual machine ID 304B2CAE-C9D2-401E-970B-F75874F6CF08)
    ^FS1 was not paused when I went in to start them this morning...

  • Out of space on server error

    Running data sync 1.2 build 579. If I send an email from my Droid 3 and attach a file, the email fails and I get the following error on the phone: "Message not sent. Out of space on the server." Is this normal? Anyone else seen this?
    Thanks,
    Mike

    go into groupwise connector settings and under "connector settings" there will be a blue link at the bottom of the section titled "advanced", click that. in there you will find a size limit for attachments and the default is 500Kb, i set mine to 10240 (10MB) so i could send pictures from my phone.
    j
    p.s. i have broken my left hand. please excuse grammar, punctuation, capitalization and spelling mistakes.
    Originally Posted by dlietz
    Hi Mike,
    I know this is an old thread but did you know what the solution was for this problem? I have a system at build 730 having the same issue. Seems as though small file attachments go out fine, bigger ones don't.
    Thanks.
    Dan

  • Wedding photography running out of space. Need help with thousands of photos, and workflow

    So my studio does wedding photography. My partner and I both take photos.  We have about 30,000 photos just from the last 2 years.  We are running out of space big time. We both have 1 terabyte magnetic hard drives in our computers. I have about 80 GB left and she has about 120 GB left. Needless to say, since I myself take about 50 GB of photos per wedding, I'm not going to last long. We use Lightroom.
    So far I have managed to back up everything on an external 2 TB hard drive for both of us. I save both RAW and JPEG files to it. I also have everything backed up on an online server.  So here's my first issue.  What I've been doing is, for the previous year (completed weddings), I delete all the RAWs from the computers. I leave the JPEGs on the computer in case I need them quickly. I keep all the RAWs on the external hard drives and online.  But even doing this, deleting the RAWs from completed weddings off the internal hard drives, I'm still almost out of space. My computer is so slow because of this even though it's a top-of-the-line computer with 6 gigs of triple channel RAM. Lightroom runs very slowly and lags badly.
    So my first question is:  By deleting the RAWs off of my computer, if I need to go back a few years from now and re-save the RAW files or re-edit, will Lightroom remember my develop settings or will I have to completely re-edit every photo?  I have been saving a catalog file for each year, and start a new one every year.  I do not use XMP sidecar files; I save everything to the Lightroom database.  As long as I have the catalog file backed up, am I safe? Or am I doing this wrong? Because if I open an old catalog file, it will say all the photos are not found on my computer (obviously).
    My second question is:  I am going to be investing in more hard drives and want to do it right this time around. I want to invest in SSD drives for maximum performance.  Here is what my plan is.  One SSD drive for windows/programs, one SSD drive for lightroom scratch disk, and one 1TB "Velociraptor" 10,000rpm magnetic hard drive for keeping the actual images on.  Is this not right? Am I planning this out wrong?
    Here is our current workflow. I am very welcome to suggestions:
    1) Shoot wedding.
    2) Import RAW files from CF card to computer. It goes in a folder for the client with the images going into different sub folders by category such as "reception" and "ceremony"
    3) Import folder into lightroom4.
    4) Edit photos. I do not use XMP sidecar, I use database.
    5) Export files in JPEG form into the same client folder.
    6) At the end of the year I save the catalog, make a new catalog for the new year. I delete the RAW files off of my hard drive after backing them up on 2TB external hard drive. I leave the jpegs and the client's folder on the computer hard drive.

    My guess is that you are not actually deleting the original RAWs from your internal hard drives and have lots and lots of crud around that you are not aware of, so you should go through your internal drives with a fine-toothed comb. Don't forget to actually empty the trash when you delete! 1 TB is more than enough disk space for storing the Lightroom catalog file, previews and some JPEGs. 30,000 photos is really not that much. My main catalog has >50,000 images in it, with the RAW files all stored on external hard drives. The catalog itself is on my internal 256 GB SSD that also houses the previews (so I can use it with the RAW files offline) and the operating system, as well as a lot of software (Photoshop, Office, iWork, Aperture, etc.). It is more than enough space, as it still has more than 150 GB free. I automatically back up everything, including the external HD that stores the RAW images, to separate external HDs stored in different places.

  • Oracle Home running out of space

    Hi,
    I have installed Oracle Application Server 10g and Oracle 10g on a Linux server.
    Both products have been installed on the /oracle partition, which is about 11 GB.
    The database control and data files have been set up on another partition, /oradata.
    After a few months I seem to be running out of space on the /oracle partition.
    I think it might be due to some log files. Can anyone please give me ideas on which files I can safely purge in the application server home and database home to create some space on the /oracle partition?
    Thanks

    hi Prativ,
    For oracle application server,
    The log files in the three locations below can be purged; if you want, you can take a backup of the older log files to another mount point before deleting them.
    1st $ORACLE_HOME/webcache/logs
    2nd $ORACLE_HOME/j2ee/OC4J_BI_Forms/log/OC4J_BI_Forms_default_island_1/
    Purge default-web-access.log and server.log
    3rd $ORACLE_HOME/Apache/Apache/logs
    Regards
    Fabian
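    One caveat worth adding to the list above: Apache and OC4J keep their current log files open, and removing an open file does not return the space until the process restarts. Truncating in place (after an optional compressed backup) avoids that. A small sketch of the pattern on a scratch file; substitute a real path such as $ORACLE_HOME/Apache/Apache/logs/access_log in practice:

    ```shell
    #!/bin/sh
    # Truncate-in-place pattern for logs a running process still holds open.
    # LOG is a scratch file for demonstration; use the real log path in practice.
    LOG=$(mktemp)
    echo "old entries" > "$LOG"

    gzip -c "$LOG" > "$LOG.gz"   # optional: keep a compressed backup first
    : > "$LOG"                   # empty the file without disturbing open handles

    size=$(wc -c < "$LOG" | tr -d ' ')
    echo "bytes after truncate: $size"
    ```

    Plain rm works for rotated files that nothing holds open; use the truncate form only for the active log.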

  • Jdev 11.1.1.5 Error – Out of Space for Code Cache for Adapters

    JDK Version - jdk160_24
    Linux is 64 bit
    Memory properties used for starting the Admin Server (setDomainEnv.sh) -
    -Xms512m -Xmx1024m
    Memory for SOA Server (setSOADomainEnv.sh)-
    DEFAULT_MEM_ARGS="${DEFAULT_MEM_ARGS} -XX:PermSize=128m -XX:MaxPermSize=512m"
    PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
    Here is the log -
    java.lang.VirtualMachineError: out of space in CodeCache for adapters
         at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:204)
         at bc4j.com_xyz_iot_model_entity_IotRequestEO_Id_null.gs.run(bc4j.com_xyz_iot_model_entity_IotRequestEO_Id_null.gs.groovy:1)
         at oracle.jbo.ExprEval.internalEvaluateGroovyScript(ExprEval.java:1208)
         at oracle.jbo.ExprEval.doEvaluate(ExprEval.java:1261)
         at oracle.jbo.ExprEval.evaluateForRow(ExprEval.java:1083)
         at oracle.jbo.server.AttributeDefImpl.evaluateTransientExpression(AttributeDefImpl.java:2141)
         at oracle.jbo.server.EntityImpl.initDefaultExpressionAttributes(EntityImpl.java:1036)
         at oracle.jbo.server.EntityImpl.create(EntityImpl.java:989)
         at oracle.jbo.server.EntityImpl.callCreate(EntityImpl.java:1163)
         at oracle.jbo.server.ViewRowStorage.create(ViewRowStorage.java:1151)
         at oracle.jbo.server.ViewRowImpl.create(ViewRowImpl.java:472)
         at oracle.jbo.server.ViewRowImpl.callCreate(ViewRowImpl.java:489)
         at oracle.jbo.server.ViewObjectImpl.createInstance(ViewObjectImpl.java:5568)
         at oracle.jbo.server.QueryCollection.createRowWithEntities(QueryCollection.java:1937)
         at oracle.jbo.server.ViewRowSetImpl.createRowWithEntities(ViewRowSetImpl.java:2458)
         at oracle.jbo.server.ViewRowSetImpl.doCreateAndInitRow(ViewRowSetImpl.java:2499)
         at oracle.jbo.server.ViewRowSetImpl.createRow(ViewRowSetImpl.java:2480)
         at oracle.jbo.server.ViewObjectImpl.createRow(ViewObjectImpl.java:10857)
         at com.xyz.iot.model.am.IOTDiscountAMImpl.createAndAssociate(IOTDiscountAMImpl.java:477)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    I have already tried:
    1) Setting the JVM option -XX:ReservedCodeCacheSize=64m in setDomainEnv.sh or in the start file for starting WLS
    2) Reducing logging severity levels from trace to warning so as to reduce the log file size.

    A Google search on the exception seems to imply that this option should fix it, so increase it to 128m and see what happens. Also, regarding "Memory for SOA Server":
    do realize that SOA is a resource-hungry monster. The heap settings you have may be very inadequate.
    Edited by: gimbal2 on Feb 2, 2012 1:33 AM
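    As a concrete fragment (the 128m value is a starting point to tune, not a definitive fix), the change could look like this in setDomainEnv.sh or the WLS start script:

    ```shell
    # Raise the JIT code cache from the 64m already tried to 128m.
    JAVA_OPTIONS="${JAVA_OPTIONS} -XX:ReservedCodeCacheSize=128m"
    export JAVA_OPTIONS
    ```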

  • Sql server log shipping space consumption

    I have implemented SQL Server log shipping from the HQ to the DR server.
    The secondary databases are in standby mode.
    The issue is that after configuring it, my DR server is running out of space very rapidly.
    I have checked the log shipping folders where the .trn files reside, and they are of a very decent size, and retention is configured for twenty-four hours.
    I checked the secondary databases and their size is exactly the same as that of the corresponding primary databases.
    Could you guys please help me identify the reason behind this odd space increase?
    I would be grateful if you could point me to some online resources that explain this matter in depth.

    The retention is happening. I have checked the folders; they do not have records older than 24 hours.
    I don't know, maybe it's because on the secondary (DR) server there is no full backup job running; is that why the LDF file is getting bigger and bigger? But then again, as far as my understanding goes, we cannot take a full database backup in standby
    mode.
    The TLog files of log shipped DBs on the secondary will be same size as that of primary. The only way to shrink the TLog files on secondary (I am not advising you to do this) is to shrink them on the primary, force-start the TLog backup job, then the copy
    job, then the restore job on the secondary, which will sync  the size of the TLog file on the secondary. 
    If you have allocated the same sized disk on both primary and secondary for TLog files, then check if the Volume Shadow Copy is consuming the space on the secondary
    Satish Kartan www.sqlfood.com

  • TWO iTunes accounts running out of space on each laptop. Moving one account fully to a NEW machine with a 3 TB NAS drive. Can I move the second media library to the NAS drive? I.e., two iTunes libraries on the NAS drive working independently

    Two iTunes libraries (mine and my wife's); both machines are running out of space.
    My fix: moving the whole iTunes account library etc. to a new PC with a 2 TB NAS drive (Iomega, Seagate, LaCie, or similar).
    Can I copy the bulky iTunes library files from my wife's laptop to the NAS drive?
    Is this easy to do?
    ALSO, she will still have iTunes on her laptop, just none of the bulky library,
    as the iTunes program will be pointed at her files (library on the NAS drive)?
    If someone has done a walkthrough of something like this I'd really appreciate it.
    LASTLY, I did hear that with the new iTunes updates it stops NAS drives streaming media? (So is a NAS drive no use if Apple is not allowing streaming for an iTunes server?)

    By default, the ITL is always on the C: drive.
    So it seems you moved yours to the exHD. Just moving it doesn't do anything, though... did you ever do a shift-start of iTunes and use it as your library?
    Press AND HOLD the shift key until you get a prompt from iTunes to choose a library ITL file.
    I would do that now before you go any further, to make sure which library file you want to use.
    I would assume the one with the most recent date is the one you want.
    iTunes uses the last library opened. You don't always have to do a shift-start, in other words.
    However, if at any point you start iTunes without the exHD connected, it automatically goes back to the C: drive.
    In that case, you WOULD need to do another shift-start to use the ITL on the exHD.
    It's option-start on a Mac, BTW.
    You can move the whole folder over to your new MBP and do an option-start.
    Clear as mud?
