What is better: exporting each schema to its own file, or all schemas to one file?

Hi, I want to know whether it is better to have a script that exports each Oracle user to its own file, or to do a full database export with all schemas. For example:
Method 1
==========
exp system/<password> FILE=user1.dmp OWNER=user1
exp system/<password> FILE=user2.dmp OWNER=user2
exp system/<password> FILE=user3.dmp OWNER=user3
Method 2
==========
exp SYSTEM/password FULL=y FILE=dba.dmp LOG=dba.log CONSISTENT=y
I know that exp and imp are deprecated (10g+), but they are the only way to maintain backward compatibility with 9i.
If I use method 2 I get one big file, for example 15 or 20 GB, while with method 1 I have 15 or 20 files of 1 or 2 GB each. So which is the better option?
Another drawback of method 1 is that whenever I create a new schema, I have to add a line to the script to export it.
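For what it's worth, the file-size concern with method 2 can be softened: exp can split one logical export across several dump files with the FILESIZE parameter. A sketch (dry run, so the command is only echoed; the 2 GB piece size and the file list are assumptions to adapt to your release and naming):

```shell
#!/bin/sh
# Sketch: method 2 with the dump split into pieces via exp's FILESIZE
# parameter (check it is supported on your release). exp prompts for more
# file names if the list runs out. Dry run: the command is only echoed;
# remove "echo" (and use a real password) to execute it.
CMD='exp SYSTEM/password FULL=y FILESIZE=2GB FILE=dba1.dmp,dba2.dmp,dba3.dmp LOG=dba.log CONSISTENT=y'
echo "$CMD"
```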

There could be a number of reasons to settle on a single method. A single-schema export may be the choice when you do not want a backup of the entire database, or when the database is too large to export all at once. So the choice is yours: if you need all the schemas of the database, export the whole database; otherwise, export just the specific schemas.
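For method 1, the per-schema list does not have to be hard-coded. A minimal sketch, assuming the schema names are kept in a variable (they could equally be pulled from dba_users via sqlplus); the exp commands are only echoed as a dry run:

```shell
#!/bin/sh
# Sketch: method 1 without one hard-coded line per user. A new schema only
# needs to be added to SCHEMAS (or the list could be generated from
# dba_users). $PASSWORD in the output is a placeholder, not expanded here.
# Dry run: remove "echo" to actually run the exports.
SCHEMAS="user1 user2 user3"
for owner in $SCHEMAS; do
  echo "exp system/\$PASSWORD FILE=${owner}.dmp LOG=${owner}.log OWNER=${owner}"
done
```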

Similar Messages

  • What is better: Export as *.MOV or Export as *.MP4 ?

    I have a couple of small DigiCam *.MP4 videos. Some of them I have to rotate 90 deg clockwise, to edit, to re-encode and to save as a new video.
    When coming to the "export" step I have the choice between the original *.MP4 (=MPEG-4) format and *.MOV (=Apples Quicktime).
    Which one should I take?
    I guess both are not lossless. So which one has the minimum loss in quality when re-encoding and saving?
    Are there other considerations?
    Peter

    Anytime you work with a potentially destructive process you need to test it and be very confident in it. Do test option II quite well -- I have seen full data exports not actually export all the data. So you need to be very confident it is working as expected. As Glen pointed out some cubes have data loaded high (which is not a best practice as if you load data high it should be allocated back down such that an aggregation will not clobber it).
    In regards to Cameron's findings about a restructure working just as fast as a level 0 exp / load / agg, this is a case of a disk bottleneck. A restructure process in all versions older than 11.1.2.2 was always single threaded, so if you have a well tuned aggregation which does make use of multiple cores, and your disk subsystem isn't a huge bottleneck, the
    level 0 exp / load / agg will ALWAYS be faster. That said, it's not uncommon to see Essbase running on poorly architected hardware, which means Cameron's finding may reflect the more typical environment.
    Regards,
    John A. Booth
    http://www.metavero.com

  • [Forum FAQ] How do I export each group data to separated Excel files in Reporting Services?

    Introduction
    There is a scenario where a report is grouped by one field, and the users want to export each group's data to a separate Excel file. By default, we can directly export only one file at a time on the report server. Is there a way to split
    the report based on the group, then export each report to an Excel file?
    Solution
    To achieve this requirement, we can add a parameter with the group values to filter the report based on the group, then create a data-driven subscription for the report which gets the file name and parameter value from the group values.
    In the report, create a parameter named Name which uses the Name field as its Available Values (supposing the group is grouped on the Name field).
    Add a filter as below in the corresponding tablix:
    Expression: [Name]
    Operator: =
    Value: [@Name]
    Deploy the report. Then create a data-driven subscription with Windows File Share delivery extension for the report in Report Manager.
    In step 3 of the data-driven subscription, specify a query that returns the Name field with the same values as the group in the report.
    In step 4 (Specify delivery extension settings for Report Server FileShare), below the “File name” option, select “Get the value from the database”, then select the Name field.
    Below ‘Render Format’ option, select Excel as the static value.
    In step 5, we can configure the parameter Name to “Get the value from the database”, then select the Name field.
    Then specify that the subscription executes only once.
    References:
    Create a Data-Driven Subscription
    Windows File Share Delivery in Reporting Services
    Applies to
    Reporting Services 2005
    Reporting Services 2008
    Reporting Services 2008 R2
    Reporting Services 2012

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • I have a sequence with 50+ clips, export each separately?

    I have a sequence with 50+ clips, each separated by a small gap. Can Premiere 'scene detect' and export each as a separate file, or do I need to manually adjust the work bar to each clip and then export?

    You could use the Project Manager: set it to trim clips with no handles to a new location. You'll then end up with a folder full of clips; if you want those converted to a specific output format you can just do a batch export of that folder. It's a very autopilot way of achieving a result, but if there are any effects or re-sizing applied to the clips, this method would not work.

  • Use of SQL Profiles where each schema has very different data distribution and volumes

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    Our architecture has multiple client schemas in the same database. Each schema has the same code base (tables, procedures, etc.), but each client's data is very different in terms of volumes and skew/distribution per table. This architecture was chosen based on cost; I know it's not ideal, but it can't change.
    I am fairly seasoned in performance management and so know the usual tricks, such as when to eat up the table using parallel full table scans. I couldn't further optimise a given statement for our largest table. I'll call it TSPCI; it has monthly partitions (2 years) and totals about 35 GB in the largest client schema.
    Anyway, I was surprised when ADDM suggested that I could achieve 98% improvement if I were to use a given SQL Profile. Great?
    So, here's my issue - I've found that the same SQL_ID is shared across all those different client schemas: I can't see how to get it to pick/use the SQL Profile in only a particular client schema - let's call it NEX - and not in another (lets call it COL).
    If I generate a SQL Profile as NEX, has it analysed and built the SQL Profile based on the NEX schema, and is it therefore invalid/undesirable to have that SQL Profile used in the COL schema?
    I suppose that I could add a small change (say /*+ NEX */) to the SQL in the NEX schema to make the given sql unique there and then generate a SQL Profile for that..........
    What am I missing here?

    Well, I can confirm the behaviour: I accepted a SQL Profile for a given SQL in one schema and verified that it is used in another schema (where the data volume and distribution are very different).
    I can also confirm the workaround: simply add a hint to the SQL to make it unique, so that different SQL Profiles can be used for the otherwise identical SQL in different schemas.
    I'm happy enough with this workaround, but I'll leave this thread unanswered in case someone can suggest a better approach.

  • Export: ". exporting post-schema procedural objects and actions"

    Hi all,
    I am trying to do a full database export from the server and an import to another PC.
    I tried exporting and received the following warnings:
    . exporting synonyms
    . exporting views
    . exporting referential integrity constraints
    . exporting stored procedures
    . exporting operators
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting triggers
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    EXP-00008: ORACLE error 903 encountered
    ORA-00903: invalid table name
    ORA-06512: at "SYS.DBMS_RULE_EXP_RL_INTERNAL", line 311
    ORA-06512: at "SYS.DBMS_RULE_EXP_RULES", line 142
    ORA-06512: at line 1
    EXP-00083: The previous problem occurred when calling SYS.DBMS_RULES.schema_info_exp
    . exporting user history table
    . exporting default and system auditing options
    . exporting statistics
    Export terminated successfully with warnings.
    I have been through the forum and was advised to run catalog.sql, then rerun the exp command.
    I did as suggested, but the export hangs at the last line:
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    About to export the entire database ...
    . exporting tablespace definitions
    . exporting profiles
    . exporting user definitions
    . exporting roles
    . exporting resource costs
    . exporting rollback segment definitions
    . exporting database links
    . exporting sequence numbers
    . exporting directory aliases
    . exporting context namespaces
    . exporting foreign function library names
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions <- hangs here
    Is there a way to resolve this, or should I try another approach?
    Any help is greatly appreciated. Thank you.
    Thanks and Regards
    San

    I am also trying to figure all of this out; I will answer as much as I know. Thanks.
    What is the export utility version?
    Using exp, "Export: Release 9.2.0.1.0"
    What is the import utility version?
    Using imp, "Import: Release 9.2.0.1.0"
    What version of Oracle database are you trying to export?
    9.2.0.1.0
    Into what version of Oracle database are you trying to import?
    9.2.0.1.0
    What are the database character sets and values of environment variable 'NLS_LANG' for each case?
    Not sure about this, but I didn't change any character set parameters, so it should be
    WE8MSWIN1252
    Using WinXP OS, <- quite problematic, having a hard time trying to configure. :(

  • EXP trouble while exporting pre-schema

    Hi all
    I'm running 9.2.0.6 on SUSE Linux 8.2.
    While trying to export my repository database I've noticed strange behaviour:
    exp tells me 'exporting pre-schema procedural objects and actions' ....
    and nothing more happens.
    Export log still has a size of 0.
    Where can I take a closer look at what's going on?
    thanks for your hints
    regards
    Franz Langner

    The export is an additional part of a backup job. It hasn't worked this week.
    To make sure it's not a job failure, I started the export from the shell a few times with different execution times. Each time I got the same result: the process stops at pre-schema, with no log entry.
    I took a look on Metalink and followed the advice to run "catexp.sql" again, but there was no change.
    As I saw, the error has been reported as a problem on Metalink since 9.0.3.

  • What format to export to?

    I have a completed project for a movie about 2h in length. I need to know what format to export it into before I take the resulting file and make it into a blu-ray. I am interested mainly in what format(s) is used in the "industry"/by the pros, at this step. As high quality as possible would be a bonus, too.
    Thanks.

    To clarify/consolidate some of the above info, the official Blu-ray specification is, well...pretty specific. The content on the disc must be either MPEG-2 or H.264, and of a specific kind of MPEG-2 or H.264. You can take an .avi or whatever into Encore, but Encore will transcode that material to the proper format to meet Blu-Ray specs before burning. Even if it is H.264, but not the "right kind" such as meant for YouTube, it will have to be transcoded to meet Blu-Ray specs.
    In AME, you should see "H.264 Blu-ray" and "MPEG-2 Blu-Ray" options. Further, there will then be multiple presets available for each, and that is where the "Match the timeline" comes into play. If using HDV source material, such as 1080i 29.97 at 1440x1080 resolution, then you will find a 1440x1080 preset to match. For full HD, there are 1920x1080 options and make sure to match frame rate. Not all frame rates are supported though, so there may be a conversion needed.
    As H.264 is a more efficient codec, it is the better choice for quality, though at a high bitrate MPEG-2 and H.264 may have similar quality. H.264 can pack more quality into a smaller file, so for longer videos (lower bitrate) then H.264 has the definite advantage. Basically no reason to even use MPEG-2, unless maybe doing some quickie proof and you think MPEG-2 will encode quicker to meet a deadline perhaps.
    Once you choose the correct preset, the only adjustment in the settings to consider is the bitrate - if it is too high, the encoded file may be too large to fit the disc, then Encore will be forced to transcode (re-encode) the material again to a lower bitrate. There is a size estimate shown at lower left of AME window, but that is not always accurate. You may want to consult an online "bitrate calculator" - you just punch in the specs of the disc (DVD or Blu-ray, length, etc.) and it will recommend a data rate to use.
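    The bitrate-calculator arithmetic Jeff mentions can be sketched in a few lines. This assumes a single-layer 25 GB (decimal) Blu-ray disc and a rough 10% headroom; a real calculator also subtracts the audio streams and muxing overhead:

```shell
#!/bin/sh
# Back-of-envelope bitrate ceiling for the ~2 hour project in question.
# Assumption: single-layer 25 GB (decimal) Blu-ray; integer arithmetic only.
DISC_GB=25
DUR_S=7200                              # 2 hours in seconds
TOTAL_MBIT=$((DISC_GB * 8 * 1000))      # disc capacity in megabits
MAX_RATE=$((TOTAL_MBIT / DUR_S))        # hard ceiling, Mbit/s
SAFE_RATE=$((MAX_RATE * 90 / 100))      # leave ~10% headroom for audio etc.
echo "max ~${MAX_RATE} Mbit/s, safe target ~${SAFE_RATE} Mbit/s"
```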
    Thanks
    Jeff Pulera
    Safe Harbor Computers

  • Error while exporting a schema using data pump

    Hi all,
    I have an 11.1.0.7 database and am using expdp to export a schema. The schema is quite large and has roughly 4 GB of data. When I export using the following command,
    expdp owb_exp_v1/welcome directory=dmpdir dumpfile=owb_exp_v1.dmp
    I get the following error after running for around 1 hour.
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA [TABLESPACE_QUOTA:"OWB_EXP_V1"]
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 7839
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    4A974B9C 18237 package body SYS.KUPW$WORKER
    4A974B9C 7866 package body SYS.KUPW$WORKER
    4A974B9C 2744 package body SYS.KUPW$WORKER
    4A974B9C 8504 package body SYS.KUPW$WORKER
    4A961BF0 1 anonymous block
    4A9DAA4C 1575 package body SYS.DBMS_SQL
    4A974B9C 8342 package body SYS.KUPW$WORKER
    4A974B9C 1545 package body SYS.KUPW$WORKER
    4A8CD200 2 anonymous block
    Job "SYS"."SYS_EXPORT_SCHEMA_01" stopped due to fatal error at 14:01:23
    This owb_exp_v1 user has DBA privileges. I am not sure what is causing this error. I have tried running it three times, but in vain. I also tried increasing the sort_area_size parameter; even then, I get this error.
    Kindly help.
    Thanks,
    Vidhya

    Hi,
    Can you let us know the last object type it was working on? It would be the line in the log file that looks like:
    Processing object type SCHEMA_EXPORT/...
    Thanks
    Dean
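    As a side note for gathering what Dean asks for: expdp writes its log to export.log in the directory object by default, and naming the logfile explicitly makes the last "Processing object type" line easier to find. A sketch (dry run, so the command is only echoed; the credentials and dmpdir are taken from the post above):

```shell
#!/bin/sh
# Sketch: rerun the failing export with an explicit logfile so the last
# "Processing object type SCHEMA_EXPORT/..." line is easy to locate.
# Dry run: remove "echo" to execute against a real database.
CMD='expdp owb_exp_v1/welcome directory=dmpdir dumpfile=owb_exp_v1.dmp logfile=owb_exp_v1.log'
echo "$CMD"
```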

  • With multiple Apple ID's, how can I tell what I own on each of them?

    I'm posting this in iCloud because I couldn't figure out a clear better forum, and it seems like people dealing with iCloud all have experience (aka horror stories) of dealing with conflicting/lost/blocked Apple ID's.  If this is the wrong place, let me know (or if you're a god/ess just move it to the right forum).
    I've just gone through the ridiculously laborious and frustrating process of sorting out how many Apple IDs I've collected over the years (answer: 4), figuring out what email addresses and forum/community usernames I have and how to actually get at them, etc.  I must say Apple's whole "Apple ID" login/security system is an embarrassing trainwreck.  The number of people who just give up and say "oh well, I guess I just have to create a new Apple ID" is staggering.
    So now that I've ranted ... I know I can't merge my Apple ID's into one account (because that would be too helpful) ... but is there any way to tell what I own on each account, without connecting my iPhone(4) up to my PC and letting iTunes delete everything owned by my main Apple ID, just to see what I might own under some other ID?  I have one ID that has the vast majority of my music and apps, I'm just curious to know if I ever acquired some media on other ID's before I abandon them.
    thanks for any advice/pointers you can provide...

    Now I have a big problem: this account name uses an email address which I can no longer use in 2015, and I cannot change it to the same one I use in the App Store. It seems I may have to get another email address just for this forum, which sounds silly, but it may be the only solution. Where can I change the email for this forum?

  • I have a huge file which is in GB and I want to split the video into clip and export each clip individually. Can you please help me how to split and export the videos to computer? It will be of great help!!

    I have a huge file which is in GB and I want to split the video into clip and export each clip individually. Can you please help me how to split and export the videos to computer? It will be of great help!!

    What version of Premiere Elements do you have and on what computer operating system is it running?
    Please review the following workflow.
    ATR Premiere Elements Troubleshooting: PE11: Project Assets Organization for Scene and Highlight Grabs from Collection o…
    But please also determine if your project goal is supported by
    a. format of your source
    and
    b. computer resources
    More later based on details that you will post.
    ATR

  • What is better: Builder 2 or plug-in

    Hi guys!
    As I couldn't find much about this, here is my question. Maybe it is interesting enough for some experts to have a small discussion.
    What is better to use: the Builder or the plug-in?
    I don't really need a design view, so pure code would be enough for me. And what I hate about the Builder is its performance when compiling large projects. Is this better if you use Eclipse (which version?) together with the plug-in? Can you have both installed and use the same project workspace to "share" the projects, depending on what you want to do?
    What gives me the best performance for writing, compiling and debugging?
    THX

    See iPhone. See iPhone charged to 100%. See user using iPhone. User opens an app and uses it. When done user goes into recently used list and "closes" app. User does this each time when using an app. User gets 145 texts. 71 hours later iPhone only has 8% battery charge. See the battery in red. User charges it.
    See iPhone. iPhone is charged to 100% See user using iPhone. See user opening apps and never closing them on the recently used app list. See recently used list grow to 66. User gets 141 texts. User gets 2 more phone calls than last time. 71 hours and 1 minute later iPhone has 8% battery charge. See the battery in red. User charges it.
    See user get only 1 minute difference in battery life. See user not bother to close apps anymore. See user also explain this is not exactly scientific. See user release Spot to chase down Dah-veed. See user and Jane get busy now.

  • What is better???--sockets question

    I have a server that receives one client (one thread each) per socket.
    Is it better to have them all on the same socket?
    Where is the bottleneck: in the processor or in the bandwidth?

    ok
    and what is better for my server:
    1) receive each client on a different socket
    2) receive all clients on one socket
    AFAIK you would need a new port for each server socket. How then can your clients know which port to connect to? Therefore I think (2) is the proper solution.
    For a general implementation of a tcp server, have a look at my solution at http://www.ebi.ac.uk/~kirsch/monq-doc/monq/net/TcpServer.html . You can download the code by following the link at the bottom of that page. Comments/questions welcome.
    Harald.

  • Store large volume of Image files, what is better ?  File System or Oracle

    I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
    I am not sure whether to store the images in the file system or in the database (BLOB or CLOB).
    Until now I have only used the file system.
    Could someone who already has experience with storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8,000,000 images, and it will grow by 3,000,000 a year.
    Each image will be between 200 KB and 8 MB, but the mean is 300 KB.
    I am using Oracle 10g Release 1. I read in other forums about PostgreSQL and Firebird that it isn't good to store images in the database because the database always crashes.
    I need to know whether it is the same with Oracle, and why. Can I trust Oracle for this large service? Are there tips for storing files in the database?
    Thanks for the help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in a database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database, and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
    Justin

  • How to export one schema

    hi,
    I want to export only one schema from the database; does anyone know the command?
    Amy

    Here you have a complete example to export/import a schema.
    [oracle@ozawa bin]$ export ORACLE_SID=LQACNS1
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$ sqlplus /nolog
    SQL*Plus: Release 9.2.0.1.0 - Production on Thu Mar 4 09:43:57 2004
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    SQL> conn / as sysdba
    Connected.
    SQL>
    SQL> create user joelp identified by joelp
    2 default tablespace users
    3 temporary tablespace temp;
    User created.
    SQL>
    SQL> grant connect, resource to joelp;
    Grant succeeded.
    SQL>
    SQL> conn joelp/joelp
    Connected.
    SQL>
    SQL> create table t1 ( c1 number );
    Table created.
    SQL> insert into t1 values (10);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> quit
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$ exp joelp/joelp file=joel.dmp log=joel.log
    Export: Release 9.2.0.1.0 - Production on Thu Mar 4 09:47:40 2004
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user JOELP
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user JOELP
    About to export JOELP's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export JOELP's tables via Conventional Path ...
    . . exporting table T1 1 rows exported
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$ sqlplus joelp/joelp
    SQL*Plus: Release 9.2.0.1.0 - Production on Thu Mar 4 09:48:41 2004
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL> drop table t1;
    Table dropped.
    SQL> quit
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$ imp joelp/joelp file=joel.dmp
    Import: Release 9.2.0.1.0 - Production on Thu Mar 4 09:49:16 2004
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Export file created by EXPORT:V09.02.00 via conventional path
    import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
    . importing JOELP's objects into JOELP
    . . importing table "T1" 1 rows imported
    Import terminated successfully without warnings.
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$ sqlplus joelp/joelp
    SQL*Plus: Release 9.2.0.1.0 - Production on Thu Mar 4 09:49:28 2004
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL> select * from t1;
    C1
    10
    SQL> quit
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    [oracle@ozawa bin]$
    [oracle@ozawa bin]$
    Joel Pérez
    http://otn.oracle.com/experts
