Disk space transaction and temp table lock in Oracle

Hi,
Today many sessions got stuck on disk space transaction locks and temp table locks. I am seeing these locks for the first time in my production database.
Is there any workaround to avoid this contention?
Thanks
Prakash GR

Post your version (all 3 decimal places).
Post the SELECT statement and results that have led you to this conclusion.
Other than the fact that you have seen a number, what, precisely, is the issue?
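
For reference, a minimal sketch of such a SELECT, assuming the locks being reported are the usual suspects here, the ST (space transaction) and TT (temp table) enqueues:

SELECT l.sid, s.username, l.type, l.lmode, l.request, l.ctime
FROM   v$lock l
JOIN   v$session s ON s.sid = l.sid
WHERE  l.type IN ('ST', 'TT');   -- ST = space transaction, TT = temp table enqueue

If these turn out to be ST waits, the classic culprit is space management in dictionary-managed tablespaces or sorts spilling into a permanent tablespace; a locally managed temporary tablespace is the usual relief.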

Similar Messages

  • HT5675 This update, upon booting, has caused my disk space to fill up and my system freezes up. The same happens to my coworker's computer. We both watch the remaining disk space decrease and the "spinning rainbow appears" and we must restart. What do we

    Software Update: Java for OS X 2013-002 1.0
    This update, upon booting, has caused my disk space to fill up and my system freezes up. The same happens to my coworker's computer. We both watch the remaining disk space decrease and the "spinning rainbow appears" and we must restart. What do we do?

    First, empty the Trash if you haven't already done so.
    Use a tool such as OmniDiskSweeper (ODS) to explore your volume and find out what's taking up the space. You can delete files with it, but don't do that unless you're sure that you know what you're deleting and that all data is safely backed up. That means you have multiple backups, not just one.
    Proceed further only if the problem hasn't been solved.
    ODS can't see the whole filesystem when you run it just by double-clicking; it only sees files that you have permission to read. To see everything, you have to run it as root.
    Back up all data now.
    Install ODS in the Applications folder as usual.
    Triple-click the line of text below to select it, then copy the selected text to the Clipboard (command-C):
    sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V). You'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. If you see a message that your username "is not in the sudoers file," then you're not logged in as an administrator.
    I don't recommend that you make a habit of doing this. Don't delete anything while running ODS as root. If something needs to be deleted, make sure you know what it is and how it got there, and then delete it by other, safer, means.
    When you're done with ODS, quit it and also quit Terminal.

  • How to know the disk space (filled and not filled) on a MacBook Pro?

    How do I know how much disk space is filled and how much is free on my MacBook Pro? What is the easiest way to find out? How can I see the percentage of disk space that is filled on my MacBook Pro (2012 model MD101)?

    Hi ...
    What's taking up disk space > OSX Tips The Storage Display
    Click the Apple menu at top left of your screen. From the drop-down menu click About This Mac > More Info > Storage.
    Make sure there's at least 15% free disk space.

  • [INS-32021] Insufficient disk space on this volume for the selected Oracle

    Folks,
    Hello. I am installing Oracle 11gR2 RAC Grid using the installer located at /home/.../grid/runInstaller.
    I am on step 4 of 8 in the Wizard, whose contents are as follows:
    Oracle Base: /u01/11g_grid
    Software Location: /u01/app/grid
    Cluster Registry Type: ASM
    OSASM Group: dba
    When I click the "Next" button I get this error:
    " [INS-32021] Insufficient disk space on this volume for the selected Oracle home
    Cause - The selected Oracle home was on a volume without enough disk space.
    Action - Choose a location for Oracle home that has enough space (minimum of 3,017MB) or free up space on the existing volume. "
    I test my "home" directory using the command: [ora11g@rac1 home]$ df -k
    Its Ouput:
    Filesystem: /dev/sda1 and tmpfs
    1K-blocks: 20,308,020 and 848,184
    Used: 18,888,968 and 0
    Available: 370,816 and 848,184
    Use: 99% and 0%
    Mounted on : / and /dev/shm
    I run another command: [ora11g@rac1 u01]$df -k and got the same output as above.
    My questions are:
    First, is "Oracle home" the directory "/home/..." or the directory "/u01/..." ?
    Second, how to increase the space for the correct directory to be over 3017MB (Minimum) ?
    Thanks.

    Folks,
    Hello. Thanks a lot for replying.
    I am installing Oracle 11gR2 RAC using 2 Virtual Machines (rac1 and rac2). The virtual disk for each VM is 20 GB, whose raw device is /dev/sda (partition /dev/sda1 is the same size).
    In order to install Grid Infrastructure, I added 5 hard disks of 10 GB each to the first VM, rac1, then created 5 ASM disks respectively as a disk group. Let me describe them as follows:
    OS physical file: C:\VM_RAC\sharerac\asmdisk1.vmdk (asmdisk2.vmdk asmdisk3.vmdk asmdisk4.vmdk asmdisk5.vmdk)
    Corresponding raw devices: /dev/sdb (sdc sdd sde sdf)
    Corresponding partition: /dev/sdb1 (sdc1 sdd1 sde1 sdf1)
    Corresponding ASM disk: ASMDISK1 (ASMDISK2 ASMDISK3 ASMDISK4 ASMDISK5)
    On step 5 of 8 in the Wizard, I select all 5 ASM disks as the ASM disk group.
    The clusterware files are supposed to be stored in the ASM disk group.
    But it seems that on step 4 of 8 in the Wizard, the installer puts the Oracle home /u01 onto VM rac1's hard disk /dev/sda1 and hits the above error message: not enough disk space.
    My questions are:
    First, why does the installer store the Oracle home /u01 in /dev/sda1 when it should be in the disk group /dev/sdb1 (sdc1 sdd1 sde1 sdf1)?
    Second, is the installer doing something wrong?
    Third, how do I get the installer to store the clusterware files in the ASM disk group?
    Thanks.

  • I've followed all the recommended steps for disk space limitations and still cannot download emails. Recommendations please.

    I'm running the latest OS on my iMac, and I do not use anti-virus software. I have deleted all but a few emails from my Trash and compacted all folders. The Alert message, "There is not enough disk space to download new messages. Try deleting old mail, emptying the Trash folder, and compacting your mail folders, and then try again." continues to appear.

    The issue can be caused by a lack of physical disk space, a lack of quota on the disk (user account full), file locking and contention (such as an antivirus or backup program trying to access the same space at the same time; Time Machine features in some people's problems), or, as the message suggests, large Thunderbird files. As you have looked at the Thunderbird issues, perhaps it is time to look at the broader picture.

  • Issue on space: disk space increases and decreases automatically

    Hi All,
    In our QA environment for the past 2 days we have been facing a space issue with some unusual behaviour. We took thread dumps at the time and found this:
    "172.30.104.53 [1373723166026] <closed>" daemon prio=3 tid=0x0000000101dca800 nid=0x135f in Object.wait() [0xfffffffd8a9ff000]
       java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            - waiting on <0xfffffffdf9743658> (a com.day.j2ee.servletengine.HttpListener$Worker)
            at java.lang.Object.wait(Object.java:485)
            at com.day.j2ee.servletengine.HttpListener$Worker.await(HttpListener.java:587)
            - locked <0xfffffffdf9743658> (a com.day.j2ee.servletengine.HttpListener$Worker)
            at com.day.j2ee.servletengine.HttpListener$Worker.run(HttpListener.java:612)
            at java.lang.Thread.run(Thread.java:662)
    Please let me know where exactly the issue is in this thread dump.

    If you are seeing daily increases and decreases in your disk utilization (for example it grows during the day or overnight but by morning the disk space has recovered), what you are seeing is the impact of Tar optimization.
    The Tar Persistence Manager is the underlying storage mechanism for CRX. Data is stored in append-only tar files. This means that when you update a node or property, the new values are written to the Tar files and the indexes are updated to point to the new location, but the old data is also still left in the file. This mechanism allows for a much faster write path. So the more frequently you update existing content in your repository, the larger your Tar files become.
    There is a process called Tar File Optimization that by default is scheduled to run from 2 AM to 5 AM server time. This process identifies all the orphaned data in the Tar files and deletes it, thereby reducing the size of the tar files on disk.
    So if you are in heavy content-migration mode, or moving large amounts of content between instances, you can see large swings in your disk space utilization as the Tar files balloon up during the day and then shrink back down overnight. In some cases, depending on how large your repository is, the 3 hours allotted by default are not sufficient to complete the optimization, so you may not be recovering all your disk space. During normal production operations this will average out over time and the 3-hour window is enough. However, during periods of heavy usage, especially during QA or content migration, you may find that your tar files keep growing. If that happens you need to pick a period of time, say over a weekend, and trigger the Tar File Optimization to run until complete to recover as much of your disk space as possible.
    See http://helpx.adobe.com/crx/kb/TarPMOptimization.html for details on Tar File Optimization.
    As someone else pointed out, you may also have an issue with your data store, which requires a different clean-up method.
    Another possible culprit is your Lucene index files. Depending on your data model and repository size you can see swings in your Lucene indexes because, like the Tar files, they periodically clean themselves up, and large amounts of content change can make this cycle more pronounced.
    This blog post discusses both Tar File Optimization and Data Store garbage collection in more depth: http://blog.aemarchitect.com/2013/06/17/importance-of-aem-maintenance-procedures-for-non-production-boxes/

  • Identifying deadlocked resources in graph with 1 row lock and 1 table lock

    Hi, I have run into repeated occurrences of the deadlock graph at the bottom of this post and have a few questions about it:
    1. It appears that proc 44, session 548 is holding a row lock (X). Is the waiter, proc 30, session 542, trying to acquire a row lock (X) as well, or an exclusive table lock (X) on the table containing that row?
    2. Under what circumstances would something hold a row exclusive table lock (SX) and want to upgrade it to a share row exclusive table lock (SSX)?
    3. Our table cxml_foldercontent has a column 'structuredDataId' with a FK to cxml_structureddata.id and an ON DELETE SET NULL trigger. Would this help explain why an "update" to one table (e.g. cxml_foldercontent) would also need to acquire a lock on a foreign table, cxml_structureddata?
    4. What is the difference between "Current SQL statement:" and "Current SQL statement for this session:"? That terminology is confusing. Is session 542 executing the "update" or the "delete"?
    5. In the "Rows waited on:" section, is it saying that session 542 is waiting on obj - rowid = 0000BE63 - AAAL5jAAGAAA6tZAAK, or that it has the lock on that row and other things are waiting on it?
    A couple of notes:
    - the cxml_foldercontent.structuredDataId FK column has an index on it already
    Deadlock graph:
                           ---------Blocker(s)--------  ---------Waiter(s)---------
    Resource Name                    process session holds waits  process session holds waits
    TX-003a0011-000003d0        44       548     X               30        542             X
    TM-0000be63-00000000       30       542     SX              44        548     SX    SSX
    session 548: DID 0001-002C-000002D9     session 542: DID 0001-001E-00000050
    session 542: DID 0001-001E-00000050     session 548: DID 0001-002C-000002D9
    Rows waited on:
    Session 542: obj - rowid = 0000BE63 - AAAL5jAAGAAA6tZAAK
      (dictionary objn - 48739, file - 6, block - 240473, slot - 10)
    Session 548: no row
    Information on the OTHER waiting sessions:
    Session 542:
      pid=30 serial=63708 audsid=143708731 user: 41/CASCADE
      O/S info: user: cascade, term: unknown, ospid: 1234, machine:
                program: JDBC Thin Client
      application name: JDBC Thin Client, hash value=2546894660
      Current SQL Statement:
    update cascade.cxml_foldercontent set name=:1 , lockId=:2 , isCurrentVersion=:3 , versionDate=:4 , metadataId=:5 , permissionsId=:6 , workflowId=:7 , isWorkingCopy=:8 , parentFolderId=:9 , relativeOrder=:10 , cachePath=:11 , isRecycled=:12 , recycleRecordId=:13 , workflowComment=:14 , draftUserId=:15 , siteId=:16 , prevVersionId=:17 , nextVersionId=:18 , originalCopyId=:19 , workingCopyId=:20 , displayName=:21 , title=:22 , summary=:23 , teaser=:24 , keywords=:25 , description=:26 , author=:27 , startDate=:28 , endDate=:29 , reviewDate=:30 , metadataSetId=:31 , expirationNoticeSent=:32 , firstExpirationWarningSent=:33 , secondExpirationWarningSent=:34 , expirationFolderId=:35 , maintainAbsoluteLinks=:36 , xmlId=:37 , structuredDataDefinitionId=:38 , pageConfigurationSetId=:39 , pageDefaultConfigurationId=:40 , structuredDataId=:41 , pageStructuredDataVersion=:42 , shouldBeIndexed=:43 , shouldBePublished=:44 , lastDatePublished=:45 , lastPublishedBy=:46 , draftOriginalId=:47 , contentTypeId=:48  where id=:49
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    delete from cascade.cxml_structureddata where id=:1

    Mohamed Houri wrote:
    What is important for a foreign key is to be indexed (of course, if the parent table is deleted/merged/updated, or if a performance reason imposes it). Whether this index is unique or not doesn't matter (as far as I know). But you should ask yourself the following question: what is the meaning of having a 1-to-1 relationship between a parent and a child table? If you succeed in creating a unique index on your FK, this means that each PK value corresponds to at most one FK value. Isn't it? Is this what you want to have?
    Thanks, as I mentioned above, cxml_structureddata is actually the child table of cxml_foldercontent, with 1 or more records' owningEntityId referring to rows in cxml_foldercontent. The reason for the FK on cxml_foldercontent.structuredDataId is a little ambiguous, but it is explained above.
    Will a TX enqueue held in mode X always be waited on by another TX enqueue row lock (X)? Or can it be waited on by an exclusive (X) table lock?
    Not really clear.
    Sorry, are you saying my question is unclear, or that it's not clear which type of eXclusive lock session 542 is trying to acquire in the first line of the trace? Do you think the exclusive lock being held by session 548 in the first line is on rows in cxml_foldercontent (due to the ON DELETE SET NULL on these child rows) or on the rows in cxml_structureddata that it is actually deleting? Is there any way for me to tell for certain?
    The first enqueue is a TX (transaction enqueue) held by session 548 in mode X (exclusive). This session is the blocking session. At the same time, the locked row is waited on by the blocked session (542), and the wait is for mode X (exclusive). So, put simply, we have session 542 waiting for session 548 to release its lock (perhaps by committing or rolling back). At this point we are not yet in the presence of a deadlock.
    The second line of the deadlock graph shows that session 542 is the blocking session: it holds a TM enqueue (DML lock) in mode SX (shared exclusive), while session 548 (the waiting session) is blocked by session 542 and is waiting to get SSX mode.
    So 548 is blocking session 542 via a TX enqueue, and session 542 is blocking session 548 via a TM enqueue: that is the deadlock. Oracle will then immediately choose a victim session arbitrarily (542 or 548) and kill its process, letting the remaining session continue its work. That is your situation explained.
    Thanks, any idea why session 548 (the session running the DELETE from cxml_structureddata, per the graph's second line) would be trying to upgrade its lock to SSX? Is this lock mode required to update a child table's foreign key columns when using an ON DELETE SET NULL trigger? Having read more about SSX, I'm not sure I understand in which cases it's used. Is there a way for me to confirm with 100% certainty which tables the TM enqueue locks are being held on? Is session 548 definitely trying to acquire SSX mode on my cxml_foldercontent table, or could it be the cxml_structureddata table?
    (a) Verify that all your FKs are indexed (be careful that the FK columns should be at the leading edge of the index).
    Thanks, we've done this already. When you say the "leading edge", do you mean for a composite index? These indexes are all single-column.
    (b) Verify the logic of the DML against cxml_foldercontent.
    Can you be more specific? Any idea what I'm looking for?
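    On the question of confirming which table a TM enqueue covers: the ID1 part of the TM resource name is the object_id in hex, so it can be decoded directly. A minimal sketch, assuming access to DBA_OBJECTS:

    SELECT owner, object_name, object_type
    FROM   dba_objects
    WHERE  object_id = TO_NUMBER('0000BE63', 'XXXXXXXX');  -- 0xBE63 = 48739

    Here 48739 matches the "dictionary objn" already printed in the trace, so the TM lock in the second line of the graph is on that one object.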

  • Need help with Low disk space issue and blue screen

    Hi everyone. I just saw a message stating low disk space on my MacBook Pro, bought last November, and tried plugging in an external hard drive to remove some pictures and free up some space, but it seems the computer did not have enough space left to start up and run the drive. I then tried to restart and ended up with a blue screen, and I have no idea how to fix this problem. I phoned support, but they say I have no technical support left; I do have warranty, and would need to either try the community here or take the unit to an Apple Store for an appointment. The store is an hour and a half from me, and I really want to see if there is another fix that could let me start up again, remove some files, and then add the external drive to remove more. I was blown away by the low amount of storage. I looked for the iCloud option last night to upload there, as I was told about this by a UK client of mine, and now see it is not up and running. Any advice or help from the community would be greatly appreciated, as this is my business and travelling laptop. Cheers, Dean <")))><

    Great to hear Dean, thanks!
    Further notes: OS X needs about 15% or 10 GB of free space minimum, but will run much faster/safer with 30-40% or 50 GB of free space... free space is not really ours to use.
    Another tool to help clear up assorted things is Applejack...
    http://www.macupdate.com/info.php/id/15667/applejack
    After installing, reboot holding down Command-S, then when the DOS-like prompt shows, type in...
    applejack AUTO
    Then let it do all six of its things.
    At least it'll eliminate some questions if it doesn't fix it.
    The 6 things it does are...
    Correct any Disk problems.
    Repair Permissions.
    Clear out Cache Files.
    Repair/check several plist files.
    Dump the VM files for a fresh start.
    Trash old Log files.
    First reboot will be slower, sometimes 2 or 3 restarts will be required for full benefit... my guess is files relying upon other files relying upon other files! :-)
    Disconnect the USB cable from any Uninterruptible Power Supply so the system doesn't shut down in the middle of the process.

  • Global temp table differences in Oracle 10g and 11g

    Hi All,
    We are planning to upgrade Metasolv applications from 6.0.15 (currently using 10g) to 6.2.1 (using 11g). We use global temp tables in 10g. I just want to know whether there is any impact on the global temp tables if we upgrade from 10g to 11g; if so, can you please explain clearly?
    Please and thanks.

    FAQ on new features: Re: 14. What's the difference between different versions of the database?
    This can be used as a reference for all your queries.
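    For what it's worth, global temporary table syntax is the same in 10g and 11g; a minimal sketch (table and column names are placeholders):

    CREATE GLOBAL TEMPORARY TABLE gtt_demo (
      id   NUMBER,
      note VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;  -- rows vanish at commit; ON COMMIT PRESERVE ROWS keeps them for the session

    The data stays session-private in both versions, so the upgrade itself should not change how the application sees these tables.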

  • Please help - Can not use stored procedure with CTE and temp table in OLEDB source

    Hi,
    I am creating a simple package. It has an OLE DB source, a Derived Column transformation, and an OLE DB destination.
    Now, for the OLE DB source I have a stored procedure with a CTE, and there are many temp tables inside it. When I run it as EXEC <Procedure name>, I get an error like "The metadata could not be determined because the statement with the CTE ... uses a temp table."
    Please help me figure out how to resolve this.

    I guess you write to the temp tables that get created at the time the procedure runs.
    Instead, take a staged approach: run an Execute SQL task to populate them, then pull the data using the source.
    You must set RetainSameConnection to TRUE to be able to use the temp tables.
    Arthur My Blog
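    A minimal sketch of that staged approach (procedure and table names here are placeholders): populate a global temp table from an Execute SQL task, then point the OLE DB source at a plain SELECT, which lets the metadata resolve normally. RetainSameConnection=True keeps ##staged visible across both steps:

    -- Execute SQL task:
    CREATE TABLE ##staged (id INT, amount DECIMAL(10,2));
    INSERT INTO ##staged (id, amount)
    EXEC dbo.MyCteProc;   -- hypothetical procedure that uses CTEs and temp tables internally

    -- OLE DB source query:
    SELECT id, amount FROM ##staged;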

  • CF 6.1, cfquery, MySQL, and temp tables

    I need to use a temp table for a page, and I'm finding that temporary tables are not temporary when created via CF. The temporary table is not being dropped when the page is done processing, and what's worse, when running the page from multiple browsers the temporary table is SHARED across requests, defeating the whole purpose of a "private" temporary table for each request.
    I have a feeling this has something to do with the way CF maintains its connection pool, but I'd like to know the real story from anyone who knows. And even better, I'd like to know how to prevent this from happening :)
    The fact that the table isn't dropped automatically doesn't bother me; I can always drop it manually. But the fact that I can request the page on one computer, then go to another computer and request it, and end up using the same temp table? That I consider a deal breaker.
    Any ideas?

    > the table isn't dropping automatically doesn't bother me
    I think the reason the temp tables are not being dropped automatically is that CF caches connection(s) to the database. Perhaps it's better to drop temp tables yourself than to have CF re-create connections to the database for each client request?
    I think there is a setting in CF Admin where you can tell a DSN NOT to maintain connections across client requests. That might do what you are looking for, but it may not be what you want.
    Good luck!
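    Since MySQL temporary tables live for the lifetime of the connection rather than the request, one defensive pattern for pooled connections is to drop before creating and drop again when done (tmp_report is a placeholder name):

    DROP TEMPORARY TABLE IF EXISTS tmp_report;  -- clear any leftover from a previous request on this connection
    CREATE TEMPORARY TABLE tmp_report (
      id    INT PRIMARY KEY,
      total DECIMAL(10,2)
    );
    -- ... populate and query tmp_report ...
    DROP TEMPORARY TABLE tmp_report;            -- don't leave it behind for the next request

    Two simultaneous requests on different pooled connections each get their own copy; the sharing only happens when requests reuse the same cached connection.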

  • Disk space allocation and FileVault

    I recently had an issue with disk space in my home folder which I would like to share because it might be applicable to quite a few private (non corporate) users.
    The matter has been resolved; I just wanted to let you know that you should be VERY CAREFUL about using FileVault on your home folder, because that's what got me into trouble. My HD said there was 233 GB free; my home folder, encrypted with FileVault, said there was 43 MB (yes, MB) free, so obviously downloading podcasts or CDs was out of the question. I thought, what am I going to do with the other 140 GB of space on my iPod Classic if I can't use iTunes? After much stuffing about copying things to the root level, I was able to unlock FileVault (which was asking for 30-odd GB of free space to decrypt) and copy my music etc. back into my home folder, which had somehow been corrupted. Now, when I open my home folder it tells me I have as much free space as my HD. Problem solved!! I will never use FileVault again. The AppleCare guy said my files were still safe as long as I didn't leave the computer logged in and unattended, even from the internet, with the Firewall up.

    You don't say how many tracks, bit rate, etc., but you could certainly fill that drive with 7 sessions. I know I could.
    You should have a 250 GB data drive, but certainly you should never have less than 10% free space if possible.
    I don't know about the LaCie disk allocation, but changing it while data was on it does not sound like a safe idea.
    I'd suggest you reformat the drive to its defaults. Pro Tools can be a bit picky, and you may be asking it to do something it does not like, especially since you can't bounce to disk; that tells me it can't read the drive correctly. I'd also do a block test on the drive.
    I hope that helps. I use MOTU and have in the past used a 30 GB drive for data, but that was 16 bit 48 K.
    Now I do everything at least 24 bit 88.1k on 250 GB drives. I have 3 backups.

  • Get Unix disk space available and assign it to a pl/sql variable

    Hello,
    From within PL/SQL, how do I get the disk space available (df -k) from the OS and assign that number to a PL/SQL variable? I want to write a stored procedure that will look at how much space is left on an OS mount point. Thank you.

    You can check this thread
    http://asktom.oracle.com/pls/ask/f?p=4950:8:13164696856450032216::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:952229840241
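    For an 11g-style sketch without leaving SQL, an external table with a PREPROCESSOR can expose df output as rows that PL/SQL can read. Here exec_dir, df_k.sh, disk_free and dummy.txt are all placeholder names, and df_k.sh would be a small wrapper such as: /bin/df -k | awk 'NR>1 {print $6, $4}'

    CREATE TABLE disk_free (
      mount_point  VARCHAR2(200),
      kb_available NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY exec_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        PREPROCESSOR exec_dir:'df_k.sh'
        FIELDS TERMINATED BY WHITESPACE
      )
      LOCATION ('dummy.txt')   -- must exist; the preprocessor ignores its content
    )
    REJECT LIMIT UNLIMITED;

    DECLARE
      l_free_kb NUMBER;
    BEGIN
      SELECT kb_available INTO l_free_kb
      FROM   disk_free
      WHERE  mount_point = '/u01';
    END;
    /

    A Java stored procedure that shells out to df is the other common route.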

  • Using OleDbDataAdapter Update with InsertCommands and getting blocking locks on Oracle table

    The following code snippet shows the use of OleDbDataAdapter with an InsertCommand. This code is producing many inserts on the Oracle table and is now suffering from contention, all on the same table. How does the OleDbDataAdapter produce
    inserts from a dataset? What characteristics do these inserts inherit in terms of batch behavior, or do they naturally contend for the same resource?
    oc.Open();
    for (int i = 0; i < xImageId.Count; i++)
    {
        // Create the Oracle adapter using a SQL statement that returns no rows, just the structure
        OleDbDataAdapter da =
            new OleDbDataAdapter("SELECT BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, " +
            "DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE FROM sysadm.PS_RI_INV_PDF_MERG WHERE 1 = 2", oc);
        // Create a data set
        DataSet ds = new DataSet("documents");
        da.Fill(ds, "documents");
        // Loop through invoices and add a row for each one
        string[] sInvoices = invoiceNumber.Split(',');
        foreach (string sInvoice in sInvoices)
        {
            // Create a data set row
            DataRow dr = ds.Tables["documents"].NewRow();
            // ... map the data
            // Add the row to the dataset
            ds.Tables["documents"].Rows.Add(dr);
        }
        // Create the insert command
        string insertCommandText =
            "INSERT /*+ append */ INTO PS_table " +
            "(SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, " +
            "EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) " +
            "VALUES (INV.nextval, :BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME, " +
            ":BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)";
        // Add the insert command to the data adapter
        da.InsertCommand = new OleDbCommand(insertCommandText);
        da.InsertCommand.Connection = oc;
        // Add the params to the insert
        da.InsertCommand.Parameters.Add(":BUSINESS_UNIT", OleDbType.VarChar, 5, "BUSINESS_UNIT");
        da.InsertCommand.Parameters.Add(":INVOICE", OleDbType.VarChar, 22, "INVOICE");
        da.InsertCommand.Parameters.Add(":ASSIGNMENT_ID", OleDbType.VarChar, 15, "ASSIGNMENT_ID");
        da.InsertCommand.Parameters.Add(":END_DT", OleDbType.Date, 0, "END_DT");
        da.InsertCommand.Parameters.Add(":RI_TIMECARD_ID", OleDbType.VarChar, 10, "RI_TIMECARD_ID");
        da.InsertCommand.Parameters.Add(":IMAGE_ID", OleDbType.VarChar, 8, "IMAGE_ID");
        da.InsertCommand.Parameters.Add(":FILENAME", OleDbType.VarChar, 80, "FILENAME");
        da.InsertCommand.Parameters.Add(":BARCODE_LABEL_ID", OleDbType.VarChar, 18, "BARCODE_LABEL_ID");
        da.InsertCommand.Parameters.Add(":DIRECT_INVOICING", OleDbType.VarChar, 1, "DIRECT_INVOICING");
        da.InsertCommand.Parameters.Add(":EXCLUDE_FLG", OleDbType.VarChar, 1, "EXCLUDE_FLG");
        da.InsertCommand.Parameters.Add(":DTTM_CREATED", OleDbType.Date, 0, "DTTM_CREATED");
        da.InsertCommand.Parameters.Add(":DTTM_MODIFIED", OleDbType.Date, 0, "DTTM_MODIFIED");
        da.InsertCommand.Parameters.Add(":IMAGE_DATA", OleDbType.Binary, System.Convert.ToInt32(filedata.Length), "IMAGE_DATA");
        da.InsertCommand.Parameters.Add(":PROCESS_INSTANCE", OleDbType.VarChar, 10, "PROCESS_INSTANCE");
        // Update the table (da.Update issues one INSERT round-trip per added row)
        da.Update(ds, "documents");
    }

    Here is what Oracle is showing as blocking locks, along with the SQL that has been identified for each of the SIDs. Not sure why there is contention; there are no triggers or joined tables in this piece of code.
    Here is the SQL all of the SIDs below are running:
    INSERT INTO sysadm.PS_RI_INV_PDF_MERG (SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) VALUES (SYSADM.INV_PDF_MERG.nextval,
    :BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME, :BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1150 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1156 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX3
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 6 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1726 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
    SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 2016 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX2
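    A useful first step here is to see what the blocked sessions are actually waiting on; a minimal sketch against the RAC-wide view, assuming an 11g-style GV$SESSION:

    SELECT inst_id, sid, event, seconds_in_wait,
           blocking_instance, blocking_session
    FROM   gv$session
    WHERE  blocking_session IS NOT NULL;

    The EVENT column distinguishes, for example, a TX row-lock wait from sequence (SQ enqueue) or index-block contention, which point to very different fixes for a multi-instance insert load like this one.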

  • MacBook Pro Disk Space Question and Time Machine

    Hi all,
    I know this is probably a "noob" question, but I have been backing up my laptop (2009 Mac OS X 10.6.8) onto an external hard drive set up for Time Machine, and now my MacBook Pro only has 1.15 GB available of free space after the many years of usage and saving things onto it (such as images, movies, etc.). I had a question: Are these files backed up onto Time Machine and so can I delete them from my laptop without losing them? I am concerned about losing them and am not sure how Time Machine really works; if I delete these files from my laptop in order to free up space, then backup my computer onto Time Machine, will they be deleted from the Time Machine drive also?? I need to free up a lot of space on my laptop but also don't want to lose any pictures and videos from over the years.
    Thanks in advance.

    570ad 
    My question is, how do I effectively "copy over" files onto my new external hard drive? Is it as easy as connecting the hard drive via USB?
    Easy as drag and drop, yes indeed. You could almost do it with your eyes closed.
    The entire user account? No, just get all the files you created, saved, made, or worked on, ALL VITAL DATA you "don't dare lose", etc. Drag it over; make folders on the HD showing where things are, etc.
    There are of course a thousand ways to organize folders and data on an external HD; pick what suits you.
    Keep it simple.
