Mirrored RAID taking too long for Data Rebuild

I created a RAID 1 (mirrored) set using Disk Utility on my Mac Pro, with 2 x 4 TB WD enterprise-grade HDDs. I copied 10 GB of data onto this 4 TB RAID volume, then removed one of the HDDs and installed it in my other Mac Pro. It mounted as a normal HDD and showed all the data I had copied. To check whether Disk Utility would rebuild the data, I deleted part of it (2 GB out of the 10 GB) and put the drive back into the system on which I had created the RAID set.
Initially I was surprised to find that the deleted data was also gone from the RAID volume, but on launching Disk Utility it started rebuilding the RAID set.
However, the estimated time the system showed for rebuilding 2 GB of data was 9 hours.
Is this normal?
I intend to fill this 4 TB RAID volume with approximately 3.5 TB of data. If one of the drives fails, how long should Disk Utility take to rebuild that amount of data?
Is there any way to speed up this process?
Would Apple's RAID card help?
Any suggestions will be appreciated.
Thanks

I know this is an old thread, but I'd be interested in what you came up with as a solution.
Here's my RAID journey thus far.
https://discussions.apple.com/message/25119317#25119317
Again, a gig of info and it's taking 1 day 10 hours.
I hope there's a solution somewhere.
Drew

Similar Messages

  • Taking too long for loaded data to become visible in Reporting

    Hi experts,
    I have a cost center cube which takes a very long time to show the "Loaded data is not yet visible in Reporting" icon for the request. I do not have an aggregate on the cube, and no other requests are red/yellow. What is the reason for this strange behaviour? I ran the consistency check; there are no overlaps or errors for the request.
    Any help is appreciated.
    Thanks in advance
    D Bret

    Hi Dave,
    Try running RSRV for the cube and then refresh the data in the cube:
    RSRV --> Combined test --> Transaction data --> Status of data in the InfoCube.
    If you find any errors after running this check, repair them and then refresh the cube's data.
    Hope this helps.
    Thanks & Regards,
    Suchitra.V

  • OracleDataReader.Read taking too long for the first record

    Hi Guys
    I am new to ODP.NET. This is my first time working with Oracle and ODP.NET. I wrote a small stored procedure which returns 25 rows through a sys_refcursor. I am capturing that in an OracleDataReader and processing the data in a while loop. The first Read takes around 10 seconds; from the second read on, it is pretty quick. Stored procedure performance is really good when I run it from SQL*Plus or from a direct execute call in .NET. Both the web server and the database are on the same network, which is really good.
    Any idea what's going on? Any links, articles, or other help will be greatly appreciated.
    I am using Oracle 10g with ODP.NET 10.2.0.2.20, and .NET Framework 3.5 with Visual Studio 2008.
    Here is the code I am using (it is really simple):
    Private Sub FillSearchResults(ByVal DR As Oracle.DataAccess.Client.OracleDataReader)
        Dim objSearch As OCPAFL.DAL.Search
        Dim objList As New List(Of Parcel)
        Try
            ' Walk through the returned rows
            While DR.Read()
                ' ... do the processing here
            End While
        Catch ex As Exception
            Throw   ' rethrow without resetting the original stack trace
        End Try
    End Sub
    Thanks
    Chandra

    Hi,
    I don't have any good ideas as to what may be causing it based on the limited information provided, but here's what I'd do to troubleshoot it further:
    1) Enable client sqlnet tracing (level 16) with timestamps on, and/or ODP tracing, reproduce the complaint, and look in the trace for a gap in the timestamps, to isolate where the time is being consumed. Is there one 10-second gap? Numerous 2-second gaps? Is the client waiting on data from the server?
    2) Enable a 10046 trace on the database before executing the proc from the app and check the resulting database trace. Are binds occurring differently? Is anything different in the execution plan using ODP vs SQL*Plus?
    Oh, also, support for .NET 3.5 starts with the 11.1.6.20 ODP, but I rather doubt that's causing the problem.
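    For reference, a minimal sketch of step 2, assuming you can alter the session the procedure runs in (level 12 captures waits and binds):
    ALTER SESSION SET tracefile_identifier = 'odp_first_fetch';
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    -- execute the stored procedure / fetch from the ref cursor here, then:
    ALTER SESSION SET events '10046 trace name context off';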
    Hope it helps,
    Greg

  • Query is taking too long to fetch data

    Hi,
    Working on EBS Version 11.5.10.2
    The query below takes 7 minutes to retrieve the job orders. I need some help to improve it so it runs faster.
    select inventory_item_id
    ,job
    ,item_code
    ,item_description
    ,subinventory_code
    ,locators
    ,quantity_open
    ,sum(transaction_quantity) qoh
    from
    (select wrv.inventory_item_id
    ,wdj.wip_entity_name Job
    ,rtrim(wrv.concatenated_segments) item_code
    ,wrv.item_description
    ,mil.subinventory_code
    ,mil.segment1||'.'||mil.segment2||'.'||mil.segment3 locators
    ,wrv.quantity_open
    ,transaction_quantity
    from wip_discrete_jobs_v wdj
    ,wip_requirement_operations_v wrv
    ,mtl_onhand_quantities moh
    ,mtl_item_locations mil
    where wdj.wip_entity_id = wrv.wip_entity_id
    and moh.inventory_item_id = wrv.inventory_item_id
    and mil.inventory_location_id = moh.locator_id
    and wdj.job_type_meaning = 'Standard'
    and wdj.status_type_disp = 'Released'
    and wrv.quantity_open > 0
    and wrv.organization_id = 0
    and moh.organization_id = 0
    and mil.organization_id = 0
    --and rtrim(wrv.concatenated_segments) = '2124 2014-336'
    and wdj.wip_entity_name in ('D477334') --,'D612664')
    )
    group by
    inventory_item_id
    ,job
    ,item_code
    ,item_description
    ,subinventory_code
    ,locators
    ,quantity_open
    Thanks and Regards
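    A minimal sketch of a first diagnostic step, assuming SQL*Plus access: capture the execution plan with dbms_xplan to see where the time goes (substitute the full query above for the simplified example):
    explain plan for
      select count(*) from wip_discrete_jobs_v;
    select * from table(dbms_xplan.display);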


  • Download taking too long for Creative Suite 5.5 Design

    How can I speed up the download time from 7 hours?

    Do a direct download from prodesigntools.com, without using the download manager.
    http://prodesigntools.com/adobe-cs5-5-direct-download-links.html

  • MGP Tuning: MGP Processing taking too long

    Hi All,
    I have a question related to MGP processing. MGP processing is taking too long for us, close to 900 seconds. Is there a way to tune and optimize this MGP process? Right now we have only 3 users syncing, and this kind of sync delay for users in the testing environment is not acceptable to the client. I am aware of Consperf. Should I run Consperf for all the publication items to optimize their sync process, or only for those publication items that are taking too long in the Compose/Apply phase?
    Thanks in advance for all suggestions.

    Hi Rekounas,
    what would be your recommendation for some 70 snapshots and 50 users? And is there any recommendation for the mobile server? We have the mobile server on a dedicated machine, which is quite an old piece of hardware. I presume that it should just have some amount of RAM available for caching of MGP data and that it shouldn't consume much CPU power - is that right? At the moment the compose phase of MGP takes some 25-30 seconds per user when there is new data to process...

  • Custom Webdynpro text field is taking too long to accept input values

    Dear All,
    I have created a custom Web Dynpro for PO header fields in SRM. This WD contains a lot of fields. When I try to put the cursor in one particular text field, it takes too long for the cursor to appear in that input field. There is no problem with other fields which have search helps associated with them; the field with the problem is just a plain text field.
    Please help.
    Thanks.

  • AirPlay Screen Mirroring (Mavericks) disconnects frequently with "Feedback taking too long to send" message

    My company has several TVs with AppleTVs (3rd generation units) connected in our conference rooms so we can "Screen Mirror" our Mac laptops via AirPlay during meetings. Many employees have complained that AirPlay Screen Mirroring drops frequently during meetings for no apparent reason.
    In attempts to determine the cause of the issue, I removed the AppleTV units from Wi-Fi and hard-wired them all to the LAN (100Mbps/Full duplex, no switchport errors seen on the Cisco switch). I upgraded the AppleTVs to the latest firmware. I had our AppleTV users ensure they were running MacOS Mavericks with the latest software updates installed. I had the Mac laptops hard-wired into the LAN during meetings in the conference rooms. None of these changes resolved the AirPlay issue.
    I reviewed the MacOS "/var/log/system.log" file from the laptops of several users that reported issues. I found a pattern that seemed to indicate that the "coreaudiod" process reported "Feedback taking too long to send" several times before the AppleTV connection was terminated. Also, from a network trace (using "tcpdump") taken during an unexpected AirPlay Screen Mirroring disconnection, I could see that the Mac laptop sent a TCP FIN packet to the AppleTV unit (this would indicate that the MacOS laptop initiated the closing of the AirPlay connection).
    I have included the relevant log file entries below. Please note that the LAN internal to our company is "solid" and there have been no connectivity issues detected or reported during the times the AirPlay sessions were disconnected.
    I believe I have found a workaround for this issue. By going into "System Preferences" > "Sound" and changing the "Output" device BACK to the "Internal Speakers" (rather than the AirPlay destination), the AirPlay Screen Mirroring connection seems to remain stable.
    My questions are:
    - Is anyone else experiencing this type of problem? Any other solutions recommended?
    - Is there a way to change the AirPlay defaults so that Screen Mirroring only sends the video (not audio)?
    - Does anyone know what the log file entries indicate (i.e., what does "Feedback taking too long to send...." mean)?
    - Is there any fix planned for this issue?
    From: "/var/log/system.log":
    Jan 16 10:50:16 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:16.454404 AM [AirPlay] ### Feedback taking too long to send (1 seconds, 1 total)
    Jan 16 10:50:18 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:18.524517 AM [AirPlay] ### Feedback taking too long to send (4 seconds, 2 total)
    Jan 16 10:50:20 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:20.533639 AM [AirPlay] ### Feedback taking too long to send (6 seconds, 3 total)
    Jan 16 10:50:22 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:22.548168 AM [AirPlay] ### Feedback taking too long to send (8 seconds, 4 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554522 AM [AirPlay] ### Feedback taking too long to send (10 seconds, 5 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554809 AM [AirPlay] ### Report network status (3, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:26 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:26.545531 AM [AirPlay] ### Feedback taking too long to send (12 seconds, 6 total)
    Jan 16 10:50:28 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:28.559050 AM [AirPlay] ### Feedback taking too long to send (14 seconds, 7 total)
    Jan 16 10:50:30 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:30.628868 AM [AirPlay] ### Feedback taking too long to send (16 seconds, 8 total)
    Jan 16 10:50:32 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:32.655638 AM [AirPlay] ### Feedback taking too long to send (18 seconds, 9 total)
    Jan 16 10:50:34 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:34.641952 AM [AirPlay] ### Feedback taking too long to send (20 seconds, 10 total)
    Jan 16 10:50:36 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:36.659854 AM [AirPlay] ### Feedback taking too long to send (22 seconds, 11 total)
    Jan 16 10:50:38 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:38.653594 AM [AirPlay] ### Feedback taking too long to send (24 seconds, 12 total)
    Jan 16 10:50:40 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:40.659279 AM [AirPlay] ### Feedback taking too long to send (26 seconds, 13 total)
    Jan 16 10:50:42 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:42.745549 AM [AirPlay] ### Feedback taking too long to send (28 seconds, 14 total)
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.532853 AM [AirPlay] ### Endpoint "AppleTV" feedback error: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533151 AM [AirPlay] ### Feedback failed: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533273 AM [AirPlay] ### Error with endpoint "AppleTV": -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533427 AM [BonjourBrowser] Reconfirming PTR for AppleTV._airplay._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533588 AM [BonjourBrowser] Reconfirming PTR for 9C207BBD8EA1@AppleTV._raop._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533839 AM [AirPlay] ### AirPlay report: Network dead for 10+ seconds after 159 seconds, screen, nm "AppleTV", tp WiFi, md AppleTV3,1, sv 190.9, rt 0, fu 0, rssi -54
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534104 AM [AirPlay] ### Report network status (5, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534315 AM [AirPlay] Deactivating virtual display stream for quiesce
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543682 AM [AirPlayScreenClient] Stopping session
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543815 AM [AirPlay] Quiescing endpoint 'AppleTV'
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543907 AM [AirPlayScreenClient] Stopping session internal
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.544218 AM [AirPlayScreenClient] Stopped session internal
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.544266 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.544297 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.553084 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.554904 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.557604 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.560307 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: Display 0x04280880: GL mask 0x21; bounds (0, 0)[1920 x 1080], 62 modes available
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: GLCompositor: GL renderer id 0x01022727, GL mask 0x0000001f, accelerator 0x00004ccb, unit 0, caps QEX|MIPMAP, vram 2048 MB
    I am happy to provide more information if needed.
    Thank you.
    -Tim

    I'm currently still experiencing this as well. I've confirmed it occurs on 10.9.1, 10.9.2, and 10.9.3 on MacBook Pro Retinas, and on Mid-2012 and 2013 MacBooks. It happens on multiple ATVs, not just one; all are updated to 6.1.1, and a simple reboot seems to fix it temporarily, but it does come back. All the ATVs connect to the network via wireless, not Ethernet. These are 3rd-gen ATVs, but I checked the serial numbers and they do not match the bad batch of Apple TVs from 2013 that Apple offered up for replacement due to the bad firmware update. None of the computers have the firewall turned on. Here are the two logs that we always find after the issue occurs (logs are recent, happened this morning):
    5/30/14 8:57:36.017 AM coreaudiod[183]: 2014-05-30 08:57:36.016946 AM [AirPlay] ### Feedback taking too long to send (30 seconds, 17 total)
    5/30/14 8:57:36.332 AM coreaudiod[183]: 2014-05-30 08:57:36.331492 AM [AirPlay] ### Feedback failed: -6723/0xFFFFE5BD kCanceledErr
    The user will get disconnected from AirPlay anywhere between 30 seconds and 3 minutes after logging on, and can reconnect, but then once again gets disconnected after the same time period. One interesting thing to note is that when the "Feedback taking too long to send" error starts occurring and the countdown to disconnect starts ticking down to 30, it refers solely to audio not being sent over the network; video is working just fine. If I try to play sound I get another log entry and the sound doesn't play through the speakers. After a reboot, sound works fine and the feedback errors do not show up. I've also tried switching to Internal Speakers (since it switches to AirPlay speakers by default) after connecting to AirPlay and seeing the feedback timer start in the Console logs, but even then the log continues to say it's taking too long to send, and it disconnects in 30 seconds.
    This issue has been ongoing for months; I've got a ticket logged as far back as January with this occurring, but it's infrequent enough that we've just rebooted and moved on. I'd say it's an issue that affects about 5%-10% of meetings, but that's an entire meeting that can't AirPlay until someone comes down and reboots the unit.
    I don't often post in this forum, but this is still an active issue with no resolution, proof that it's occurring on other people's systems, and no firmware updates have been released to correct it. It'd be nice to know of any workarounds other than buying lamp timers for each conference room just to get a functional ATV, or putting up a sign that says: if you get disconnected every 3 minutes, reboot the ATV. The whole reason we're using Apple products is ease of use; otherwise I'd put together a much cheaper solution myself. Any help or recommended troubleshooting steps would be fantastic at this point.

  • Report script taking too long to export data

    Hello guys,
    I have a report script to export data out of a BSO cube. The cube is 200 GB in size, but the exported text file is only 10 MB. It takes around 40 minutes to export this file.
    I have exported data of this size in less than a minute from other databases, but this one is taking way too long.
    I also have a calc script for the same export, but that too takes 20 minutes, which is not reasonable for a 10 MB export.
    Any idea why a report script could take this long? Is it due to the huge size of the database, or is there a way to optimize the report script?
    Any help would be appreciated.
    Thanks

    Thanks for the input guys.
    My DATAEXPORT takes half the time of my report script export, so yes, it is much faster, but still not reasonable (20 minutes for one month of data) compared to other DBs that export very quickly.
    In my calc I am just FIXing on level 0 members for most of the dimensions against the specific period, year and scenario. I have checked the conditions for an optimal report script, and I think mine is fine.
    The outline has about 15 dimensions and only two of them are dense. Do you think the reason might be the huge size of the DB along with too many sparse dimensions?
    I appreciate your help on this.
    Thanks

  • I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the file, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup

    I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the file, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
    Also, how does the iPhone 4 perform with iOS 7.1.1?
    T0X1C

    rabidrabbit wrote:
    Can I back up my iPhone 4S to my ipad 3 (64 gb)?
    no
    rabidrabbit wrote:
    However, now I don't have enough space in iCloud to backup either device. Why not?
    iCloud only gives so much space for free storage; if you exceed the 5 GB limit you have to pay for additional storage.

  • R/3 Extraction taking too long to load data into BW

    HI There,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking endless time.
    Even the extractor checker RSA3 is taking too long to execute. I don't know why it's taking so long,
    since there is not much data to justify such a long time.
    I enhanced the datasource with three fields from BSEG using a user exit.
    Is that the reason why it's taking too long? Does a user exit slow down the extraction process?
    What measures should I take to speed up the process?
    Thanks for your time
    Vandana

    Thanks for all you replies.
    Please go through the steps I've gone through:
    - Installed the Business Content; it is version 3.5
    - Changed the update rules and transfer rules and migrated the datasource to BI 7
    - Enhanced 0FI_AP_4 to include three fields from the BSEG table
    - Ran RSA3; the new fields are showing, but the loading is quite slow
    - Commented out the code and ran RSA3; with little difference, the data shows up
    - Removed the comments and ran it again; it's fine, though it takes a little more time than the previous step, but the data shows up
    - Replicated the datasource into BW
    - Created the InfoPackage and started the init process (before this I deleted the previously stored init request)
    - Data isn't loading; please see the error message below.
    Diagnosis: The data request was a full update. In this case, the corresponding table in the source system does not contain any data.
    System Response: Info IDoc received with status 8.
    Procedure: Check the data basis in the source system.
    - Checked the transformation between datasource 0FI_AP_4 and InfoSource ZFI_AP_4,
       and I did NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
    - Replicated the datasource 0FI_AP_4 again, but no change.
    Now I don't know what's happening here.
    When I check the datasource 0FI_AP_4 in RSA6, I can see the three new fields from BSEG.
    When I check RSA3, I can see the data getting populated with the three new fields from BSEG.
    When I check the fields in the datasource 0FI_AP_4 in BW, I can see the three new fields. That shows
    that the connection between BW and R/3 is fine, doesn't it?
    Now, can anyone please suggest how to go forward from here?
    Thanks for your time
    Vandana

  • Moving 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long

    Hello Friends,
    The background: I am working as conversion manager, and we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic and move the data to System Test, UAT and Production.
    Scenario:
    Moving 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
    My questions are:
    What is the best option?
    If we use SSIS it's very slow, taking 17 hours (sometimes it gets stuck and won't let us do any other processing).
    If I use my own script (a stored procedure) it takes only 1 hour 40 minutes. I would like to know whether there is a better way to speed this up, and why SSIS is taking so long.
    When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing to the transaction log?
    Thanks
    Karthikeyan Jothi

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
    Best Regards, Uri Dimant SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
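    To illustrate the "commit every N rows" idea raised in the question, here is a minimal T-SQL sketch (table and column names are hypothetical), copying rows in batches with one transaction per batch rather than one giant transaction:
    DECLARE @batch INT = 100000, @copied INT = 1;
    WHILE @copied > 0
    BEGIN
        BEGIN TRANSACTION;
        INSERT INTO SystemTest.dbo.Txn (Id, Payload)
        SELECT TOP (@batch) s.Id, s.Payload
        FROM Conversion.dbo.Txn AS s
        WHERE NOT EXISTS (SELECT 1 FROM SystemTest.dbo.Txn AS t WHERE t.Id = s.Id);
        SET @copied = @@ROWCOUNT;   -- capture the count before COMMIT resets it
        COMMIT TRANSACTION;
    END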

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts; these scripts move data for a date range to a different table. Each script has two parts: first copy data from the original table to the archive table, then delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate is the primary key. Please help... more info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to a table in schema3.OTW, and then we delete all those rows from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them; see the sketch below.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
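    A minimal sketch of the partitioning approach, assuming a range partition on the creation date (column list abbreviated, names hypothetical):
    CREATE TABLE mon_txns_part (
      id_txn       NUMBER(12)  NOT NULL,
      dat_creation DATE        NOT NULL
      -- remaining MON_TXNS columns would go here
    )
    PARTITION BY RANGE (dat_creation) (
      PARTITION p_2012_q1 VALUES LESS THAN (DATE '2012-04-01'),
      PARTITION p_2012_q2 VALUES LESS THAN (DATE '2012-07-01'),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    -- Archiving a quarter then becomes a metadata operation instead of a large DELETE:
    ALTER TABLE mon_txns_part DROP PARTITION p_2012_q1 UPDATE GLOBAL INDEXES;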

  • Data manager jobs taking too long or hanging

    Hoping someone here can provide some assistance with the 4.2 version. We are specifically using BPC/OutlookSoft 4.2 SP4 (and are in the process of upgrading to BPC 7.5). Three-server environment: SQL, OLAP and Web.
    Problem: Data Manager jobs in each application of a production appset with five applications are either taking too long to complete for very small jobs (single entity/single period data copy/clear, under 1000 records) or completely hanging for larger jobs. This has been an issue for the last 7 days. During normal operation, small DM jobs ran in under a minute and large ones took only a few minutes.
    Failed attempts at resolution thus far:
    1. Processed all applications from the OLAP server
    2. Confirmed issue is specific to our appset and is not present in ApShell
    3. Copied packages from ApShell to application to eliminate package corruption
    4. Windows security updates were applied to all three servers, but I assume this would also impact ApShell.
    5. Cleared tblDTSLog history
    6. Rebooted all three servers
    7. Suspected antivirus; however, the problem persists with antivirus disabled on all three servers.
    Other Observations
    There are several tables in the SQL database named k2import# and several stored procedures named DMU_k2import#. My guess is these did not get removed because I killed the hung jobs. I'm not sure whether their existence is causing any issues.
    To make a long story short, how can I narrow down at which point the jobs are hanging, or what is taking the longest time? I have turned on Debug Script, but I don't have documentation to make sense of all the info it produces. What exactly happens when I run a Clear package? At this point, my next step is to run SQL Profiler to get a look at what is going on behind the scenes on the SQL server. I also want to rule out the COM+ objects on the web server, but I'm not sure where to start.
    Any help is greatly appreciated!!
    Thank you,
    Hitesh

    Hi ,
    The problem seems to be related to the database. Do you have any maintenance plan for the database?
    It is specific to your appset because each appset has its own database.
    I suspect you have to run sp_updatestats (Update Statistics) against your database, and I think the issue with your hanging jobs will be solved.
    The DMU_k2import tables come from hung imports; you can delete these tables, because they just grow the size of the database and are certainly not used anymore. A sketch of both steps follows below.
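    A minimal sketch, assuming SQL Server and hypothetical appset database and leftover table names:
    USE MyAppSetDB;                    -- the appset database (name is an assumption)
    EXEC sp_updatestats;               -- refresh statistics on all user tables
    DROP TABLE dbo.DMU_k2import123;    -- drop a leftover table from a hung import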
    Regards
    Sorin Radulescu

  • java.sql.SQLException: ORA-01801: date format is too long for internal buffer

    Hi,
    I am getting the following exception when trying to insert data into a table through a stored procedure:
    oracle.apps.fnd.framework.OAException: java.sql.SQLException: ORA-01801: date format is too long for internal buffer
    When I execute this stored procedure from an anonymous block, it executes successfully, but when I use an OracleCallableStatement to execute the procedure I get this error.
    Please let me know how to resolve this error.
    Is this error something to do with the database configuration?
    Thanks & Regards
    Meenal

    I don't know if this will help, but we were getting this error in several of the standard OA Framework pages, and after much pain and aggravation it was determined that visiting the Sourcing Home Page was changing the time zone. For most pages this just changed the time zone that dates were displayed in, but some had this ORA-01801 error and others had an ORA-01830 error (date format picture ends before converting entire input string). Unfortunately, if you are not using Sourcing at your site, this probably won't help, but if you are, have a look at patch # 4519817.
    Note that to reproduce the same error, you can try the following query (I got this error on 9.2.0.5 and 10.1.0.3):
    select to_date('10-Mar-2006', 'DD-Mon-YYYY________________________________________________HH24:MI:SS') from dual;
    It appears that you can't have a date format mask longer than 68 characters.
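    For comparison, a sketch of the same conversion with a normal-length mask, which should succeed:
    select to_date('10-Mar-2006 00:00:00', 'DD-Mon-YYYY HH24:MI:SS') from dual;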
