Long-running transactions and the performance penalty

If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
I didn't find any description of such penalties.
So my questions to gurus:
Does it create some additional persistence point(s) / commit(s) in an LR orchestration/scope?
Where do these persistence points happen, especially in an LR orchestration?
Leonid Ganeline [BizTalk MVP] BizTalk Development Architecture

The wording may make it sound so, but IMHO, if during the build of an orchestration we get carried away with scope shapes we end up with more persistence points, which do affect performance, so one additional point should not make so much of a difference. It may have been added because of end-user feedback: people may have opted for long running transactions without realizing the performance overheads, and in subsequent performance optimization sessions with Microsoft put it on the product enhancement list as "provide us with an indication if we're going to incur performance penalties". A lot of people design orchestrations the way they write code (not saying that is a bad thing), using the scope shape along the lines of a try/catch block, and with Microsoft marketing long running transactions and compensation blocks as USPs for BizTalk, people did get carried away into using them without understanding the implications.
Not saying that there are no additional persistence points added, just wondering whether adding one is sufficient to warrant the warning. But if I nest enough scope shapes and mark them all as long-running, they may add up.
Looking at things other than persistence points, I tried to think about how one might implement a long running transaction (nested, incorporating atomic scopes, etc.). Would you be able to leverage the .NET transaction object (something the pipelines use and execute under), or would that model not handle the complexities of a long running transaction, which by its very definition can span days or months? Keeping .NET transaction objects active, or serializing/deserializing them into the operating context, would cause more issues.
Regards.

Similar Messages

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million rows, 74 million rows and 570 million rows respectively. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that everything from log 0 to current must be kept for the long-running transaction. From what I can see the long-running transaction is the cache group load. Is this expected? I was expecting the commit in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times in the load, so I can see that the commit is taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long running transaction -
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. ttLogHolds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttLogHolds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle -
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark
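    (Not part of the original thread, but for anyone reproducing this: the same ttLogHolds check can be polled from a small Java program while the load runs. This is only a sketch; the DSN, credentials and poll interval are placeholders, and it assumes the TimesTen JDBC driver is on the classpath.)
    Code
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;

    public class LogHoldsMonitor {
        public static void main(String[] args) throws Exception {
            // Placeholder direct-connect URL; adjust DSN/credentials for your datastore.
            String url = "jdbc:timesten:direct:dsn=Entity;UID=cacheadm;PWD=secret";
            try (Connection con = DriverManager.getConnection(url)) {
                while (true) {
                    // ttLogHolds is the same built-in procedure called from ttIsql above.
                    try (CallableStatement cs = con.prepareCall("{ call ttLogHolds }");
                         ResultSet rs = cs.executeQuery()) {
                        ResultSetMetaData md = rs.getMetaData();
                        while (rs.next()) {
                            StringBuilder row = new StringBuilder("< ");
                            for (int i = 1; i <= md.getColumnCount(); i++) {
                                if (i > 1) row.append(", ");
                                row.append(rs.getString(i));
                            }
                            System.out.println(row.append(" >"));   // log number, offset, hold type, description
                        }
                    }
                    Thread.sleep(10_000);   // poll every 10 seconds
                }
            }
        }
    }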

  • Sequential Convoy and Long running transaction: Messages still referenced

    Hi everyone
    Being a BizTalk developer since 2006, this thing still stumps me.
    I have a sequential convoy singleton orchestration that debatches messages using a rcvPipeline. The orchestration is needed in a FIFO scenario. In order to execute a rcvPipeline within an orchestration I need to encapsulate it within an atomic transaction scope.
    In order to have an atomic scope the orchestration needs to be long running. I have also encapsulated the atomic transaction within a scope (using a long running transaction) to have exception handling.
    Everything works fine except for one major detail:
    When the orchestration executes, the messages are still in the MessageBox. I can even click on the orchestration instance in the MGMT console and look at the message! Tracking is disabled for the receive port as well as for the orchestration. Still, the messages do not get cleaned up.
    I have set my DTA purge to 1 hour and it works fine, but the messages are still referenced by the orchestration.
    My guess is that the long running transactions do not complete (although it looks like they should), and since the transaction is not completed the messages are not removed from the message box.
    So, to summarize: Is it possible to combine long running transactions and a singleton orchestration?
    //Mikael Sand (MCTS, ICC 2011) -
    Blog Logica Sweden

    So after a day of looking for the solution it is quite clear that you are right in that the atomic transaction does not commit. I added a compensation block with trace info and it is never hit.
    I also experimented with the isolation level on the atomic transaction and that did nothing.
    Lastly, I also made the send port direct-bound and also tried "specify later" binding to a physical port.
    The messages are still being referenced by the orchestration! What can I do to make the atomic transaction commit?
    //Mikael Sand (MCTS, ICC 2011) -
    Blog Logica Sweden

  • Long running transactions.

    Hi all,
    In real-world systems, how efficient are long running flat transactions in EJB 2.0, given that such transactions tend to lock several tables that are part of the transaction?
    Could any one explain in detail or direct to any websites that share the real time experiences with transactions in ejb enviornment.
    Thanks in advance.

    Long running transactions are real performance killers; avoid them if you can.
    It's actually quite easy (and sensible) to break down long transactions into smaller transactions with a little common sense.
    For example, funds transfer. Say you want to transfer money from one account to another account from country A (which is only open on Tuesdays) to country B (only open on Fridays).
    Option A: Start a long running transaction on Tuesday, commit it on Friday.
    The resources used to do this are very much wasted - and will cause performance degradation.
    Option B: Transfer the funds from Country A to Country C (which is open Tuesday and Friday) - in one transaction on Tuesday. Transfer funds from Country C to country B in a second transaction on Friday. If C to B fails, transfer the funds back - if it succeeds inform A of the success. It's more complex, but there's no free lunch. (This is how real banks do it - ever got a refund from the bank? Well, no, but you know what I mean; it's also why it takes five days for a cheque to clear etc etc).
    This is only a narrative - but there are very few cases where this cannot work in a real transaction.
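    To make the narrative a little more concrete, here is a minimal, hypothetical Java sketch of Option B using bean-managed transactions with UserTransaction. The AccountGateway interface and the method names are invented for illustration; the point is only that each leg commits its own short transaction, and a failed second leg is compensated with another short transaction rather than being held open from Tuesday to Friday.
    Code
    import javax.transaction.UserTransaction;

    public class FundsTransferService {

        private final UserTransaction utx;       // obtained from JNDI or injected by the container
        private final AccountGateway accounts;   // hypothetical DAO over the three accounts

        public FundsTransferService(UserTransaction utx, AccountGateway accounts) {
            this.utx = utx;
            this.accounts = accounts;
        }

        /** Leg 1, run on Tuesday: A -> C in its own short transaction. */
        public void transferAtoC(String transferId, long amount) throws Exception {
            utx.begin();
            try {
                accounts.debit("A", amount);
                accounts.credit("C", amount);
                accounts.recordLeg(transferId, "A->C", amount);   // remember state for Friday
                utx.commit();
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }

        /** Leg 2, run on Friday: C -> B, compensating back to A if it fails. */
        public void transferCtoB(String transferId, long amount) throws Exception {
            utx.begin();
            try {
                accounts.debit("C", amount);
                accounts.credit("B", amount);
                utx.commit();
                accounts.notifySuccess(transferId);               // inform A of the success
            } catch (Exception e) {
                utx.rollback();
                compensate(transferId, amount);                   // compensating transaction, not a rollback
            }
        }

        private void compensate(String transferId, long amount) throws Exception {
            utx.begin();
            try {
                accounts.debit("C", amount);
                accounts.credit("A", amount);
                accounts.recordLeg(transferId, "C->A compensation", amount);
                utx.commit();
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }
    }

    /** Hypothetical gateway; substitute your own entity beans / DAOs. */
    interface AccountGateway {
        void debit(String account, long amount);
        void credit(String account, long amount);
        void recordLeg(String transferId, String leg, long amount);
        void notifySuccess(String transferId);
    }
    The same shape works with container-managed transactions by putting each leg in its own REQUIRES_NEW method.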

  • Problem handing long running transaction

    Hi,
    I have a long running transaction, and there is a high possibility that
    someone else will make changes to one of the same objects in that
    transaction. How is everyone handling this situation? Is it best to
    catch the JDOException thrown and attempt the transaction one more time
    or to return to the user on "failure"?
    Also, would it be possible with Kodo to set pessimistic transactions for
    just this case and use optimistic transactions for "read" operations?
    Kam

    Also, would it be possible with Kodo to set pessimistic transactions for just this case and use optimistic for "read" operations?

    You can easily switch between optimistic and pessimistic transactions:
    pm.currentTransaction().setOptimistic(val);
    You can also use pessimistic transactions for everything, and set the nontransactional read property to true to do reads outside of transactions.
    Note, however, that you shouldn't use pessimistic locking for long-running transactions in most cases. They use too many DB resources. It's better to use optimistic transactions and deal with the errors (or give them to the user).
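    For the first part of the question (catch the exception and retry once), a rough sketch using plain JDO calls might look like the following. This is not Kodo-specific; the single retry and the Runnable callback are arbitrary choices for illustration, and the work should re-read and re-apply its changes on each attempt.
    Code
    import javax.jdo.JDOOptimisticVerificationException;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;

    public class RetryingUpdate {

        /** Run the given work in an optimistic transaction, retrying once on a commit conflict. */
        public static void runWithOneRetry(PersistenceManager pm, Runnable work) {
            Transaction tx = pm.currentTransaction();
            tx.setOptimistic(true);            // optimistic for ordinary read/update work

            for (int attempt = 1; attempt <= 2; attempt++) {
                tx.begin();
                try {
                    work.run();                // re-read and re-apply the changes each attempt
                    tx.commit();               // a conflict, if any, surfaces here
                    return;
                } catch (JDOOptimisticVerificationException e) {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    if (attempt == 2) {
                        throw e;               // second failure: surface it to the user
                    }
                }
            }
        }
    }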

  • How to measure query run time and monitor performance

    Hi All,
    A simple question: how do I measure query run time and monitor performance? I want to see parameters like how long the query took to execute, how much space it took, etc.
    Thank you.

    Hi,
    Some ways:
    1. Use transaction ST03, expert mode.
    2. Tables RSDDSTAT*.
    3. Install BW statistics (technical content).
    There are docs on this, also the BI Knowledge Performance Center.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
    Business Intelligence Performance Tuning [original link is broken]
    also take a look
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/31b6b490-0201-0010-e4b6-a1523327025e
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'.

  • Is there any time out defined for long running transaction?

    Hi,
    I have to write one big data-transfer script. A transaction is not strictly required here, but I was planning to use one.
    Please tell me, is there any timeout for long running transactions? I have to run the script from the database itself.
    Yours sincerely

    Can you show us an example of your script? You can divide the transaction into small chunks to reduce time and locking/blocking as well.
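    To illustrate the chunking idea (keeping to Java for consistency with the other snippets on this page, although the same pattern can be written directly in T-SQL if the script has to run inside the database), a minimal JDBC sketch might look like this; the connection string, table name, batch size and WHERE clause are all placeholders:
    Code
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ChunkedDelete {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string for the Microsoft JDBC driver.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true")) {
                con.setAutoCommit(false);
                int affected;
                do {
                    try (Statement st = con.createStatement()) {
                        // Process a small batch per transaction so locks are released quickly
                        // and the log never has to hold one giant open transaction.
                        affected = st.executeUpdate(
                            "DELETE TOP (10000) FROM dbo.StagingRows WHERE processed = 1");
                    }
                    con.commit();   // commit each chunk separately
                } while (affected > 0);
            }
        }
    }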
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Explicit commit during a long-running transaction in EclipseLink

    Hi,
    I am currently upgrading a J2EE application from OAS with Toplink Essentials to WL 10.3.3 with Eclipselink and have the following issue with transactions.
    The application was developed to have long-running transactions for business reasons in specific scenarios. However, some other queries must be created and committed along the way to make sure that we have this specific data in the database before the final commit. This call (and subsequent code) is in an EJB method that has the "@TransactionAttribute(TransactionAttributeType.REQUIRED)" defined on it. Taking this out gives me the same behaviour.
    The application has the following implementation of the process, which fails:
    Code
    EntityManager em = PersistenceUtil.createEntityManager();
    em.getTransaction().begin();
    PersistenceUtil.saveOrUpdate(em,folder);
    em.getTransaction().commit(); --->>>>FAILS HERE
    Error
    javax.ejb.EJBTransactionRolledbackException: EJB Exception: : javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Cannot call Connection.rollback in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
    So I tried the following to see if it would work, but I believe that this ends the transaction, and anything after that will fail since it requires a transaction to continue:
    PersistenceUtil.getUnitOfWork().writeChanges();
    PersistenceUtil.getUnitOfWork().commit();
    Error
    javax.persistence.TransactionRequiredException: joinTransaction has been called on a resource-local EntityManager which is unable to register for a JTA transaction.
    Can anyone help me as to how to commit a transaction within the long running transaction in this environment? I also want to be sure that the long-running transaction does not fail or is not stopped along the way.
    Thanking you in advance

    You seem to be using JTA, so you cannot use JPA transactions; you must define your transactions in JTA, such as in your SessionBean.
    When using JTA you should never use:
    em.getTransaction().begin();
    If you do not want to use JTA, then you need to set your persistence unit to RESOURCE_LOCAL in your persistence.xml.
    You also need to ensure you use a non-JTA DataSource.
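    As a sketch of one common pattern (not code from this thread): push the work that must be committed along the way into a second session bean method marked REQUIRES_NEW, so the container gives it its own transaction and commits it while the outer REQUIRED transaction keeps running. Folder stands in for the poster's existing entity and IntermediateSaver is a hypothetical helper bean.
    Code
    // FolderService.java
    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class FolderService {

        @PersistenceContext
        private EntityManager em;          // JTA-managed; no em.getTransaction() calls

        @EJB
        private IntermediateSaver saver;   // must be called through the EJB proxy, not this.saveNow()

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void longRunningProcess(Folder folder) {
            // ... long-running work inside the outer JTA transaction ...
            saver.saveNow(folder);         // committed immediately in its own transaction
            // ... more work; the outer transaction commits (or rolls back) at the end ...
        }
    }

    // IntermediateSaver.java
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class IntermediateSaver {

        @PersistenceContext
        private EntityManager em;

        /** Runs in a new, independent JTA transaction that commits when this method returns. */
        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void saveNow(Folder folder) {
            em.merge(folder);
        }
    }
    If the application really needs to manage transactions itself instead, the alternative James describes is a RESOURCE_LOCAL persistence unit with a non-JTA DataSource, in which case em.getTransaction() is legal again.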
    James : http://www.eclipselink.org

  • I can no longer run iTunes since the latest update

    I can no longer run iTunes since the latest update.

    Go to Control Panel > Add or Remove Programs (Win XP) or Programs and Features(Later)
    Remove all of these items in the following order:
    iTunes
    Apple Software Update
    Apple Mobile Device Support (If this won't uninstall press on)
    Bonjour
    Apple Application Support
    Reboot, download iTunes, then reinstall, either using an account with administrative rights, or right-clicking the downloaded installer and selecting Run as Administrator.
    See also HT1925: Removing and Reinstalling iTunes for Windows XP or HT1923: Removing and reinstalling iTunes for Windows Vista, Windows 7, or Windows 8
    Should you get the error iTunes.exe - Entry Point Not Found after the above reinstall then copy QTMovieWin.dll from:
    C:\Program Files (x86)\Common Files\Apple\Apple Application Support
    and paste into:
    C:\Program Files (x86)\iTunes
    The above paths would be for a 64-bit machine. Hopefully the same fix with the " (x86)" omitted would work on 32-bit systems with the same error.

  • How to run long running report in the background

    Oracle Application Server 10g
    I need to run long running reports in the background, because while the report is running the cursor shows loading and control doesn't return to the user.
    This feature exists in Oracle Forms 6i by setting the following parameter:
    RUN_PRODUCT( REPORTS, 'r_1', ASYNCHRONOUS, RUNTIME, FILESYSTEM, pl_id, NULL);
    How can I accomplish the same in OAS 10g?

    Hi,
    I've done this in 11g. I think it will be the same in 10g as well
    Following are the steps in 11g using run_report_object.
    1. Read metalink note: Using the Reports Server Queue PL/SQL Table and API - RW_SERVER_JOB_QUEUE [ID 72531.1]
    2. Implement RW_SERVER_JOB_QUEUE table as per above notes.
    3. When submitting the report, run it in the background:
    a. SET_REPORT_OBJECT_PROPERTY(lo_report_object, REPORT_EXECUTION_MODE, ASYNCHRONOUS);
    b. SET_REPORT_OBJECT_PROPERTY(lo_report_object, REPORT_COMM_MODE, BATCH);
    4. Display the job_id to user (notification).
    5. Create a new form to view reports based on RW_SERVER_JOB_QUEUE, which provides you the status, etc. (You may have to create a way to identify the username who submitted the job, so that not all users see all the jobs in RW_SERVER_JOB_QUEUE.)
    6. Optionally you can use NOTIFYSUCCESS=email to notify the user when the report is finished.
    Cheers
    LS

  • I'm trying to download sheet music and need to run the Sibelius Scorch add-on in 32-bit mode. I have a Mac running Lion and the latest Safari. I have downloaded Firefox 6 and run it in 32-bit mode but still I can't see the music score to print. Can anyone help?

    I'm trying to download sheet music and need to run the Sibelius Scorch add-on in 32-bit mode. I have a Mac running Lion and the latest Safari. I have downloaded Firefox 6 and run it in 32-bit mode but still I can't see the music score to print. Can anyone help?

    Hello,
    There are no files like that in that folder (com.apple.safari). They all have long numbers with .jpeg or .png at the end. That's what I meant by I cannot find any files like those mentioned in all the help articles I read today. BTW, it's still crashing - no particular pattern to it at all.
    The only two things I have downloaded recently are a new version of Flash and a Mac-driven update to Office.
    Thank you very much for your attempt to help us.
    J.

  • Will rollback failure cause long-running transaction?

    We are getting the following error for one transaction
    [TimesTen][TimesTen 5.1.35 CLIENT]Communication link failure. System call select() failed with OS error 110. This operation has Timed Out. Try increasing your ODBC timeout attribute or check to make sure the target TimesTen Server is running
    After that the application tries to do a rollback, but the rollback fails.
    Will this transaction become a long-running transaction on the server?

    Have you filed a metalink SR to get help on this issue?
    -scheung

  • Call long running process and return immediately

    Hi everyone,
    Here's my problem: I am calling an on-demand application process when clicking on a button. It's a very long running process, and while waiting for the response my browser stops responding (white page).
    So, is there a way to call a process and return immediately to the JavaScript? The process will set a field to a given value when it finishes. On the JavaScript side, I can then check the value of this field in a loop to know whether the process has ended. But I don't know how to return just after the application process call...
    Thanks and best regards,
    Othman

    I presume that you can achieve that by means of a batch job.
    You can use an automatic page reload every X seconds until a certain "flag" (or similar mechanism) changes state, checking it in a before-header process. In the meantime you can display an animated GIF, for instance.
    Once the processing is completed you remove the reload timer from the header, or you branch to a different page using a programmatic technique like the procedure owa_util.redirect_url.
    Bye,
    Flavio
    http://www.oraclequirks.blogspot.com/search/label/Apex

  • When I try to open an image by double clicking on it in Bridge, I get a message telling me to log in to Creative Cloud.  I am running CS6, and the default should be to open files in Photoshop 6 or in Adobe Raw (if it's a Raw file).  I don't want to log in

    When I try to open an image by double clicking on it in Bridge, I get a message telling me to log in to Creative Cloud. I am running CS6, and the default should be to open files in Photoshop 6 or in Adobe Raw (if it's a Raw file). I don't want to log into CC since I am not a subscriber, and this means that I have to work around it, going back to Bridge and telling it to open the file in Adobe RAW. However, this does not work for older PSD files, which for some reason cannot be opened in RAW. How do I return to the process of simply allowing RAW files to open automatically in Adobe RAW, and simply right clicking on the image in Bridge to bring up the option of opening it in Photoshop?

    <moved from Adobe Creative Cloud to Bridge General Discussion>

  • My MacBook Air no longer shows battery and the light on the connector is not on, can someone help me please?

    My MacBook Air no longer shows battery and the light on the connector is not on. Can someone help me please?

    I saw that someone had updated to Mavericks and had the same battery and fan issues, so I reset the SMC and everything is OK now.
