Is it possible to control tape library slots 1-10 for file system backup?

hi ..
I am new to OSB; I just installed and set it up, and I have a question below. I hope an expert can help me.
Environment: testing
RHEL 5.5
Tape library with 20 slots
File system backup
1. Is it possible for OSB to use only slots 1-10 for file system backup? Amanda can be configured to use slot X through slot Y.
2. How do I label the tapes in slots 1-10 with obtool? How do I control which tape OSB auto-loads for the next backup? And where do I check the log that says the next tape is tape-02?
thanks ..

hi dcooksey
How do I set the use list for a tape drive (for example, so that tape drive A only uses slots 1-10) from obtool or the web tool?
Because I am new to backup solutions and OSB (I have always used Ghost or Acronis to clone images), my thinking is as below; please correct me if I am wrong:
slots 1-10 for daily backups
slots 11-16 for full system backups
slots 17-20 reserved (these tapes are used only for a full system backup before applying any application upgrade patches)
Daily backup Monday-Friday on a two-week cycle (no backups on Saturday and Sunday), with the server application offline.
Full system backup on Friday (the 1st and 14th on the calendar): every two weeks, after the daily backup completes.
For an application upgrade: perform a full system backup after the daily backup, then release the server to the application team to perform the upgrade.
So how do I set up my media families for the above scheme? Is the slot configuration controlled by the media family? (See the obtool sketch below.)
hope you can help ...
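A minimal obtool sketch of one way to lay this out, assuming a library named lib1 and a drive named drive1; the drive's "use list" restricts which storage elements (slots) the drive will use, and media families group the volumes (media families themselves do not reference slots). Option names and duration syntax vary by OSB version, so check obtool help for each command before relying on this:
# Restrict drive1 to storage elements (slots) 1-10 via its use list.
# Assumes drive1 already exists; see "obtool help chdev" for the exact option.
obtool chdev --uselist 1-10 drive1
# One media family per pool (retention values are illustrative).
obtool mkmf --writewindow 7days --retain 14days daily
obtool mkmf --writewindow 7days --retain forever fullsys
obtool mkmf --writewindow 7days --retain forever reserve
# Pre-label the tapes currently in slots 1-10 into the daily family
# (see "obtool help labelvol" for the option spelling in your release).
obtool labelvol --drive drive1 --family daily 1-10
# Show each slot's contents and volume state; the OSB scheduler log
# records which volume was chosen for each job.
obtool lsvol --library lib1 --long
The media family to write to is then chosen on each backup schedule or job, so the daily schedule only writes to volumes labeled into daily, which, together with the use list, keeps it on slots 1-10.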

Similar Messages

  • URM Adapter for File System issue.

    Hi, I am just starting out on using the URM Adapter for File System and I have a few questions about issues I am facing.
    1.     When I try to create multiple searches and map them to Folders/Retention Categories in URM, it does not work. I am able to map one search via one URM source to one Folder/Retention Category (without my custom attribute from question 1). However in Adapter’s Search Preview I am able to perform a search on the documents successfully. Would different searches require different URM sources in Adapter?
    2.     Does the adapter work with other Custom Attributes? I have added an attribute in addition and in the same way as "URMCrawlTimeGMT" is added in Oracle Secure Enterprise Search (I created a custom Document Service and Pipeline to add a metadata value) and in the URM Adapter’s config.properties file and when I create a search in Adapter based on the custom attribute, it does not map the documents into URM. I am however able to search the documents in Adapter’s Search Preview window with the custom attribute displaying correctly.
    Any help with this topic would be really appreciated. Thank you.
    Regards,
    Amar

    Hi Srinath,
    Thanks for the response, as to your questions,
    1. I am not sure how to enable Records Manager in adapter mode. But I am able to login to the Records Manager web page after starting it up through StartManagedWebLogic.cmd URM_server1.
    2. The contents of the file system should be searchable in Records Manager, and we should be able to apply retention policies to the documents in the file system. I do not need SES itself, but apparently the adapter has SES as a prerequisite.
    Upon further investigation I found that in the AGENT_DATA table the values being inserted were "User ID" (UA_KEY) and NULL (UA_VALUE), so I made the UA_VALUE column nullable and was able to get past that step. Is this the wrong approach to fixing the issue?
    Could you please let me know about enabling Records Manager in adapter mode? I am not able to find documentation on it online; I have been through the Adapter installation and administration guides. Thank you once again.
    Regards,
    Amar

  • Tape Library Concern - Eject & Reinsertion after each backup

    Here is what I am currently using.
    System overview:
    -HP Proliant DL380p Gen 8
    -HP D2600 2TB 3G SATA LFF - 24TB Storageworks array
    -Quantum Scalar i40
    (2) HP LTO-5 tape heads
    (25) Licensed slots
     Tape library is configured as one partition
    Windows 2012 R2 - fully patched
    DPM 2012 R2 Data Center Edition with 7/2014 Update Rollup 3
    I have used DPM for about 6 years, since the 2007 version, but I have always used stand-alone LTO-3 & LTO-5 tape drives. I
    have used tape libraries before that time, but with a different product.
    Here is the scenario I am seeing. Any tape jobs that are scheduled at the same time will
    pull a tape into each tape drive and backups are performed. All is working normally from that perspective. The problem I see is that after each backup completes, the tape is removed from the drive, placed back in its slot, removed again from the
    slot, and then re-inserted into the drive to continue with the next backup job that was scheduled at the same time as the previous one.
    This seems highly inefficient and time consuming, not to mention a lot of extra wear & tear
    on the library & tapes. Is this behavior normal? If so, can it be changed? Or is it a bug?
    I would expect the tape to remain in the drive until all backups are completed, assuming it
    had available capacity and was not off-site ready. It appears that if multiple servers are in the same protection group, the tape is not removed between those backups; the issue appears when the next separate protection group is backed up.
    I'd even be willing to manually move the tape and leave it in the drive, but I don't see
    that kind of library control with DPM.
    I saw another thread from August where someone mentioned seeing this behavior when doing verifies
    on their backups.
    Thoughts?  PSS call?
    Thanks!

    I'm seeing that if a protection group consisting of hundreds of small SQL databases is configured for short-term disk and long-term tape, and configured to use 2 drives, then after EACH DATABASE the tape is rewound and re-slotted, only to be re-inserted
    into the drive and fast-forwarded to the end for the next database to be written.
    If you've got hundreds of databases (many of them small, like msdb, model, etc.), this can push the time out to 3+ minutes per database taped, even for a 5MB database.
    My current workaround is to configure the protection group to only allow the use of a single drive.
    Doing so seems to reduce the time per database from 5 minutes down to 30 seconds, presumably just REW and FF tape movement.
    Even though the time is reduced, I'm still disappointed the progress between databases doesn't continuously stream. I mean, 100 small 5MB databases should only take 7 seconds if streaming at 70MB/s to tape. Instead it's 30 seconds per 5MB database.
    :(

  • Is it possible that these properties are only available for files and not for folders?

    Hello All,
    I have created a few properties like ReqNo (takes an Integer) and ReqDate (accepts a Date).
    I have set the above 2 properties as mandatory, created a group, and added this group in 'allgroups' such that I see a tab wherein I can fill in these properties.
    My question is:
    Is it possible that these properties are only available for files and not for folders? The reason being that even when I create a new folder, I have to fill in values for the above two mentioned properties.
    These properties need to be mandatory; I cannot make them optional.
    Please help me solve this mystery.
    Awaiting reply.
    Thanks and Warm Regards,
    Ritu

    Hi Ritu,
    If you want the property to be available only for files and not folders, enter data only into the "Document Validity Patterns" field in the Property config.
    Regards
    Paul

  • Reloading iphoto library and streamlining iphoto file system.

    I had a hard-disk failure recently, and lost a lot of photos: basically all the full size files. Only the thumbnails remained on the system disk, so when I had the new drive installed, the engineer made these the iphoto library.
    Fortunately, I had most of my photos backed up on a separate hard drive (in the 'Originals' folder). What is the best way of putting these into my iPhoto library in place of the thumbnails already there? (i.e., I don't want to see both in iPhoto.)
    I'd put the iPhoto library on the separate hard-disk, but unbeknownst to me, about a year ago, iPhoto began saving imports on the system drive again, so these were the pictures I lost.
    On another point, since I installed iPhoto 6, the location of photos has become confusing, with the whole pre-iPhoto 6 photo library duplicated in folders called 'Originals' and 'Data'. Neither of these maintains the original, easily navigable and logical date-hierarchy file structure; instead, all photos are saved in a folder marked 2007, by roll number. The date folders which remain only contain a 'data' folder holding .attr files. Not very practical.
    In short the whole thing is a mess: for some photos the system has saved 5 copies in various places.
    Is there anything that can be done to consolidate the iPhoto library and put it back on a date-based filing system?

    raggabishp
    To start at the end:
    Is there anything that can be done ... put it back on a date-based filing system?
    No, once you use iPhoto 6, that's the file system. But it's not that difficult to follow: A Note about the iPhoto Library Folder:
    In this folder there are various files, which are the Library itself and some ancillary files. Then you have three folders:
    (i) Originals holds the photos as they were downloaded from your camera or scanner.
    (ii) Modified contains edited pics: shots that you have cropped, rotated or changed in any way.
    This allows the Photos -> Revert to Original command - very useful if you don't like the changes you've made.
    (iii) Data holds the thumbnails that the app needs to show you the photos in the iPhoto Window.
    Finding the Picture file is easy: There are three ways (at least) to get files from the iPhoto Window.
    1. Drag and Drop: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. File -> Export: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. Show File: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    Rolls in the iPhoto Window correspond exactly with the Roll Folders in the Originals Folder in the iPhoto Library Folder. You can move photos between Rolls, you can rename rolls, edit them, create them, as long as you do it via the iPhoto Window. Check out the Info Pane (wee 'i', lower left) the name and date fields are editable. Edit a Roll Name using the Info Pane, the Roll Folder in iPhoto Library Folder/Originals will also have the new name.
    So the structure is different, but - especially if you use the Film Rolls view - very straightforward.
    There is no easy way to rid yourself of these 'thumbs become originals'. If the thumbs have the same filenames as the Originals, then you could overwrite the thumbs with the Originals, but you would need to do that on a file by file basis, I'm afraid.
    Other than that, compare the full size pics with the thumbs and trash the duplicates is all I can suggest.
    Regards
    TD

  • Unix shell: Environment variable works for file system but not for ASM path

    We would like to switch from file system to ASM for data files of Oracle tablespaces. For the path of the data files, we have so far used environment variables, e.g.,
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    This works just fine (from shell scripts, PL/SQL packages, etc.) if ORACLE_DB_DATA denotes a file system path, such as "/home/oracle", but it doesn't work if the environment variable denotes an ASM path like "+DATA/rac/datafile". I assume it has something to do with "+" being a special character in the shell; however, escaping it as "\+" didn't work. I tried with both bash and ksh.
    Oracle managed files (e.g., set DB_CREATE_FILE_DEST to +DATA/rac/datafile) would be an option. However, this would require changing quite a few scripts and programs. Therefore, I am looking for a solution with the environment variable. Any suggestions?
    The example below is on a RAC Attack system (http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home). I get the same issues on Solaris/AIX/HP-UX on 11.2.0.3 also.
    Thanks,
    Martin
    ==== WORKS JUST FINE WITH ORACLE_DB_DATA DENOTING FILE SYSTEM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA=/home/oracle
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 20:57:09 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> !ls -l ${ORACLE_DB_DATA}/bma.dbf
    -rw-r----- 1 oracle asmadmin 2105344 Aug 24 20:57 /home/oracle/bma.dbf
    SQL> drop tablespace bma including contents and datafiles;
    ==== DOESN’T WORK WITH ORACLE_DB_DATA DENOTING ASM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA="+DATA/rac/datafile"
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 21:08:47 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON
    ERROR at line 1:
    ORA-01119: error in creating database file '${ORACLE_DB_DATA}/bma.dbf'
    ORA-27040: file create error, unable to create file
    Linux Error: 2: No such file or directory
    SQL> -- works if I substitute manually
    SQL> CREATE TABLESPACE BMA DATAFILE '+DATA/rac/datafile/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> drop tablespace bma including contents and datafiles;

    My revised understanding is that it is not a shell issue with replacing +, but an Oracle problem. It appears that Oracle first checks whether the path starts with a "+" or not. If it does not (file system), it performs the normal environment variable resolution. If it does start with a "+" (ASM case), Oracle does not perform environment variable resolution. Escaping, such as "\+" instead of "+" doesn't work either.
    To be more specific regarding my use case: I need the substitution to work from SQL*Plus scripts started with @script, PL/SQL packages with execute immediate, and optionally entered interactively in SQL*Plus.
    Thanks,
    Martin
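    A hedged workaround sketch: since Oracle skips environment-variable resolution when the path starts with "+", let SQL*Plus do the substitution instead of relying on Oracle. Passing the location as a script parameter (or DEFINE-ing a substitution variable interactively) covers @scripts and interactive use; the script name create_ts.sql is illustrative:
    -- create_ts.sql: the datafile location arrives as SQL*Plus parameter &1,
    -- substituted into the text before Oracle ever parses the statement.
    CREATE TABLESPACE BMA DATAFILE '&1/bma.dbf' SIZE 2M AUTOEXTEND ON;
    # invoked from the shell, which resolves the variable:
    sqlplus -s "/ as sysdba" @create_ts.sql "${ORACLE_DB_DATA}"
    -- interactively, a substitution variable does the same job:
    DEFINE ORACLE_DB_DATA = +DATA/rac/datafile
    CREATE TABLESPACE BMA DATAFILE '&ORACLE_DB_DATA/bma.dbf' SIZE 2M AUTOEXTEND ON;
    For PL/SQL with EXECUTE IMMEDIATE, the value would need to be concatenated into the statement string rather than substituted.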

  • Issues with setting appropriate ownership for file system

    Hi All,
    We are using the ACFS file system. For some of the mount points we change ownership in the rc.local file, so that all permissions remain intact when the server restarts, but the permissions are not being applied. I guess the ASM disks are mounted only after rc.local executes. Is there anywhere else we can place scripts to change the ownership of the ACFS mount points, so that proper Unix permissions are set up when the disks are mounted?
    Thanks & Regards,
    Vikas Krishna

    To configure raw devices if you are using Red Hat Enterprise Linux 4.0:
    To confirm that raw devices are enabled, enter the following command:
    # chkconfig --list
    Scan the output for raw devices. If you do not find raw devices, then use the following command to enable the raw device service:
    # chkconfig --level 345 rawdevices on
    After you confirm that the raw devices service is running, you should change the default ownership of raw devices. When you restart a Red Hat Enterprise Linux 4.0 system, ownership and permissions on raw devices revert by default to the root user. If you are using raw devices with this operating system for your Oracle Clusterware files, then you need to override this default.
    To ensure correct ownership of these devices when the operating system is restarted, create a new file in the /etc/udev/permissions.d directory, called oracle.permissions, and enter the raw device permissions information. Using the example device names discussed in step 5 of the previous section, the following is an example of the contents of /etc/udev/permissions.d/oracle.permissions:
    # OCR
    raw/raw[12]:root:oinstall:0640
    # Voting Disks
    raw/raw[3-5]:oracle:oinstall:0640
    # ASM
    raw/raw[67]:oracle:dba:0660
    After creating the oracle.permissions file, the permissions on the raw devices are set automatically the next time the system is restarted. To set permissions to take effect immediately, without restarting the system, use the chown and chmod commands:
    chown root:oinstall /dev/raw/raw[12]
    chmod 640 /dev/raw/raw[12]
    chown oracle:oinstall /dev/raw/raw[3-5]
    chmod 640 /dev/raw/raw[3-5]
    chown oracle:dba /dev/raw/raw[67]
    chmod 660 /dev/raw/raw[67]
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b28759/preparing.htm#CHDGEEDC
    Edited by: Babu Baskar on Apr 18, 2010 1:33 PM
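    Back to the original ACFS question (the reply above covers raw devices): one hedged option is to keep the chown in rc.local but make it wait until the ACFS file system is actually mounted. A minimal sketch, where /acfs/app and oracle:oinstall are illustrative placeholders; a cleaner alternative is registering the script as a Clusterware resource with a dependency on the ACFS file system resource:
    #!/bin/sh
    # Wait up to ~5 minutes for the ACFS mount to appear, then fix ownership.
    MOUNTPOINT=/acfs/app            # assumption: your ACFS mount point
    for i in $(seq 1 60); do
        if mount | grep -q " ${MOUNTPOINT} "; then
            chown -R oracle:oinstall "${MOUNTPOINT}"
            chmod 775 "${MOUNTPOINT}"
            exit 0
        fi
        sleep 5
    done
    echo "ACFS mount ${MOUNTPOINT} never appeared" >&2
    exit 1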

  • Using old file system backup for Cloning

    I took an offline backup of Oracle 11i (11.5.10.2) 15 days ago. Before backing up the file system, I verified that all the latest Rapid Clone patches were applied. No changes or patch work in APPL_TOP or the DB have been done since that backup. Now I need to clone this instance; how can I use this backup for cloning?
    Rapid Clone scripts create and generate some files/directories, so I am not sure whether my old file system backup will work or not. What is the best way to use an old backup for cloning, and which files and directories, in addition to the old file system backup, do I need to copy to the target system?
    Thanks for reviewing and suggestions.
    Samar

    Samar,
    If you ran preclone before backing it up, your backup should be valid for cloning.
    Section 2.1 in the cloning doc has to be covered by the backup.
    These docs should clear up your doubts on cloning:
    Cloning Oracle Applications Release 11i with Rapid Clone
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1
    FAQ: Cloning Oracle Applications Release 11i
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216664.1
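    For reference, a hedged sketch of the standard 11i preclone steps the reply assumes were run before the backup (<CONTEXT_NAME> stands in for your context name):
    # DB tier, as the oracle OS user:
    cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
    perl adpreclone.pl dbTier
    # Apps tier, as the applmgr OS user:
    cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
    perl adpreclone.pl appsTier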

  • 888k Error in ULS Logs for File System Cache

    Hello,
    We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
    Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
    Corresponding to the 100% CPU spike, the following appear in the ULS logs:
    "File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
    When these appear, the ULS logs show hundreds of them back-to-back, flooding the logs.
    I have yet to figure out how to stop these and bring the CPU usage down while the incident is happening, and how to prevent them in the future.
    While the incident is happening, I have tried clearing the configuration cache, shutting the timer jobs down on each server, deleting all the files but cache.ini in the folder listed above, resetting cache.ini to 1, and restarting the timer. The CPU will
    drop momentarily during this process, but as soon as all the timer jobs are restarted, the CPUs jump back to 100% on the same servers.
    This week as part of my weekly maintenance I thought I'd be proactive and clear the cache even though the behavior wasn't happening, and all CPUs were normal. As soon as I finished, the CPU on two servers that were previously fine jumped to 100% and wouldn't
    come down. Needless to say, users complain of latency when servers are at 100% CPU.
    So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else works, including IISReset and stopping/starting the admin and timer job services. Being Production systems, reboots during the
    middle of the day are bad.
    Any ideas? I have scoured the Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my instance, does not get rid of these issues, and can even trigger them.
    Thanks,
    Joseph Irvine

    Take a look at http://support.microsoft.com/kb/952167 for the list of recommended exclusions per Microsoft.
    Trevor Seward

  • File Size capped at 32bit for File System Data soruce?

    I have a report that uses the "File System Data" source. I am using it to find files older than 30 days of type BAK or TRAN. That all works well, but the "File Size" field does not display the correct information. For example, I have a file that is 191GB, or 205,192,528,384 bytes, but the report displays it as 4,294,967,295, which corresponds to the maximum of an unsigned 32-bit INT. Can anyone confirm that this is a limitation of the driver? Is there a 64-bit CRDB_FILESYSTEM.DLL?
    Regards

    Hi Thomas
    What version of CR are you using? Please look for the version in the Help | About screen of the designer.
    - Ludek
    Senior Support Engineer AGS Product Support, Global Support Center Canada

  • Disallow for file system access

    Hi All,
    I would like to disallow access to the file system; how can I do that with a permission object? I have only seen examples of how to set constraints on the locations that can be accessed.
    Regards

    I'm sorry but I don't understand the problem. (Maybe someone else will.)
    So you have told me that when you pass null to a file permission object, you find out the mask is "NONE"?
    You want to stop users accessing some files?
    You need to build code that will stop the user accessing certain files.
    You want to know if you can put restrictions on the whole VM (what do you mean by VM?) or just the context?
    I believe this Link could be helpful for you.
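    If the goal is to deny file-system access JVM-wide, one classic approach is a custom SecurityManager that rejects every FilePermission. This is only a sketch (and SecurityManager is deprecated in recent JDKs), not a hardened sandbox:
    import java.security.Permission;

    public class NoFileAccessManager extends SecurityManager {
        @Override
        public void checkPermission(Permission perm) {
            // Deny any file permission (read, write, delete, execute); allow the rest.
            if (perm instanceof java.io.FilePermission) {
                throw new SecurityException("File system access denied: " + perm.getName());
            }
        }

        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new NoFileAccessManager());
            try {
                new java.io.FileReader("/etc/passwd");   // now rejected
            } catch (SecurityException expected) {
                System.out.println("Blocked as intended: " + expected.getMessage());
            }
        }
    }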

  • Can I use a 1 TB EHD which has an iPhoto library on it for Time Machine backup?

    Or do I need a separate EHD for that? I'm trying to make an ancient 2008 MBP (running OS X 10.6.8, 2.5 GHz Intel Core 2 Duo, 2 GB 667 MHZ DDR2 SDRAM) last me for another 6-9 mos. With my hard drive about full I ordered an EHD and copied my iPhoto library to it before deleting events on iPhoto on my computer. This freed up 95 GB of space. I hate to admit that I've never backed up my computer.
    Thanks for your help.

    Hi,
    you can use the drive with Time Machine while still using it as a normal disk. Be careful, however: if the external drive dies, your photos will be lost forever.
    The best approach would be to have Time Machine back up to a second external drive and include your first drive, which contains the iPhoto library, in the backup.
    That way you'll have 2 full versions of your data.
    Thank you for using Apple Support Communities.
    All the best,
    James

  • Search event logs for file system access

    I'm looking to create a script that will allow me to search Windows 2012 security event logs for access to specific folders. Ideally it would allow the granularity to search for read access events (4663) and to specify which users to view. One
    example would be to show events for drive F:\ where the folder name is JSmith (including subfolders) and the username is not JSmith.
    I've tried something like this, but can't see how to filter.
    Get-EventLog security | ? {$_.Message.contains("F:\JSmith")}

    Is the match explicit? How can I use wildcards? How can I exclude events?
    I recommend asking a search engine and doing some initial research. Here's a starter:
    https://technet.microsoft.com/en-us/library/hh849682.aspx
    http://blogs.msdn.com/b/powershell/archive/2009/06/11/windows-event-log-in-powershell-part-ii.aspx
    http://blogs.technet.com/b/ashleymcglone/archive/2013/08/28/powershell-get-winevent-xml-madness-getting-details-from-event-logs.aspx
    http://blogs.technet.com/b/heyscriptingguy/archive/2011/01/24/use-powershell-cmdlet-to-filter-event-log-for-easy-parsing.aspx
    https://richardspowershellblog.wordpress.com/2009/03/08/get-winevent/
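    A hedged sketch of one way to do the filtering: Get-WinEvent with -FilterHashtable pulls only event 4663 from the Security log (much faster than piping everything through Where-Object), and the message matching handles the folder and user; the F:\JSmith and JSmith values are just the ones from the question:
    # Server-side filter on log name and event ID, client-side filter on message text.
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } -MaxEvents 5000 |
        Where-Object {
            $_.Message -like '*F:\JSmith*' -and           # folder of interest (wildcards work)
            $_.Message -notlike '*Account Name:*JSmith*'  # exclude accesses by JSmith himself
        } |
        Select-Object TimeCreated, Id, Message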

  • Intermedia text search error for file system

    I would like to search text from a file stored in the file system. I have run the following procedure, but I get an error when I create the index.
    BEGIN
    CTX_DDL.CREATE_PREFERENCE('search_docroot_pref','FILE_DATASTORE');
    CTX_DDL.SET_ATTRIBUTE('search_docroot_pref','path','c:/temp/abc');
    END;
    Now when I create the index with the following syntax:
    CREATE INDEX mysearch_ind ON mytable(mycolumn) INDEXTYPE IS
    CTXSYS.context parameters('datastore search_docroot_pref');
    I get the following errors.
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: interMedia Text error:
    DRG-50704: Net8 listener is not running or cannot start external procedures
    ORA-28575: unable to open RPC connection to external procedure agent
    ORA-06512: at "CTXSYS.DRUE", line 126
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 54
    ORA-06512: at line 1
    Can anybody tell me where I am wrong?
    Thanks,

    Hi
    I was also facing the same problem. My Net8 connection and listener are also OK, but I am getting the same errors.
    Raju
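    DRG-50704 usually means the listener is not set up for external procedures, which Oracle Text (interMedia) needs for the FILE_DATASTORE. A hedged sketch of the classic extproc configuration; the host, ORACLE_HOME path, and IPC key are illustrative, and the listener must be restarted afterwards:
    # listener.ora -- an IPC endpoint plus an extproc SID
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0)))
        (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521)))
      )
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
          (PROGRAM = extproc)
        )
      )
    # tnsnames.ora -- the alias the database uses to reach extproc
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        (CONNECT_DATA = (SID = PLSExtProc))
      )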

  • Tree for file system

    hi friends,
    I want to represent the file system using a JTree; please send a code fragment.
    Also, how can I find out in Java which operating system the app is running on?
    Thanks in advance

    Hello,
    These tips may help:
    List the names of all files in a particular directory
    http://www.java-tips.org/java-se-tips/java.io/list-the-names-of-all-files-in-a-particular-directory.html
    How to follow a directory structure
    http://www.java-tips.org/java-se-tips/java.io/how-to-follow-a-directory-structure.html
    How do I list all drives - filesystem roots - on my system
    http://www.java-tips.org/java-se-tips/java.io/how-do-i-list-all-drives---filesystem-roots---on-my-system.html
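    Along the same lines as those tips, a small self-contained Swing sketch; the depth cap and window size are arbitrary, and System.getProperty("os.name") answers the second question:
    import java.io.File;
    import javax.swing.*;
    import javax.swing.tree.DefaultMutableTreeNode;

    public class FileTreeDemo {
        // Build tree nodes from the file system, limited to the given depth.
        static DefaultMutableTreeNode buildNode(File f, int depth) {
            String label = f.getName().isEmpty() ? f.getPath() : f.getName();
            DefaultMutableTreeNode node = new DefaultMutableTreeNode(label);
            File[] children = f.listFiles();   // null for non-directories or unreadable dirs
            if (children != null && depth > 0) {
                for (File c : children) {
                    node.add(buildNode(c, depth - 1));
                }
            }
            return node;
        }

        public static void main(String[] args) {
            // Which operating system is the app running on?
            System.out.println("Running on: " + System.getProperty("os.name"));

            DefaultMutableTreeNode root = new DefaultMutableTreeNode("File systems");
            for (File r : File.listRoots()) {   // all drives / filesystem roots
                root.add(buildNode(r, 1));      // keep the demo shallow
            }
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("File system tree");
                frame.add(new JScrollPane(new JTree(root)));
                frame.setSize(400, 500);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }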
