Best Practices to Mass Delete Leads

Hi Gurus
Can you please guide on this - we need to mass delete Leads from the system. I can use the CRM_ORDER_DELETE FM, but I want to know if there are any best practices to follow, or anything else that I should consider before deleting these transactions from the system.
We have our archiving policy under discussion, which may take some time, but due to the large volume of redundant data we have some performance issues. For example, when searching for leads and using ACE, the system goes through all the lead data.
That is the reason we are planning to delete those old records. My concern is: when using CRM_ORDER_DELETE, would it clear all the tables for those deleted transactions, and are there any best practices to follow?
Thanks in Advance.
Regards.
-MP

Hello,
As the root is a single-label domain, you can only get rid of it by migrating to a new forest. Therefore you should build a lab first and test the steps. As a tool you could use ADMT.
http://blogs.msmvps.com/mweber/2010/03/25/migrating-active-directory-to-a-new-forest/
Also, you might rethink whether your design really needs an empty root; there is no technical requirement for one, and it only costs additional hardware and licenses.
Keep in mind that the new forest MUST use different domain/NetBIOS names, otherwise you cannot create the trust required for the migration steps.
You can NOT switch a sub domain to the root and vice versa.
Best regards
Meinolf Weber
MVP, MCP, MCTS
Microsoft MVP - Directory Services
My Blog: http://blogs.msmvps.com/MWeber
Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

Similar Messages

  • Best Practice for Mass Deleting Transactions

    Hi Gurus
    Can you please guide on this - we need to mass delete Leads from the system. I can use the CRM_ORDER_DELETE FM, but I want to know if there are any best practices to follow, or anything else that I should consider before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to the large volume of redundant data we have some performance issues. For example, when searching for leads and using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is: when using CRM_ORDER_DELETE, would it clear all the tables for those deleted transactions, and are there any best practices to follow?
    Thanks in Advance.
    Regards.
    -MP
    Edited by: Mohanpreet Singh on Apr 15, 2010 5:18 PM

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a datatable...
    Hope this helps.
    Thanks,
    RK.

  • Best practices for mass reimaging? Having unmanaged duplicate clients with mismatched Resource IDs. (2012 R2)

    We reimage about 5000-8000 clients each summer. Last summer we were still at RTM, but have since moved on to R2. We build images (Win7 SP1) in vSphere and capture via a capture media TS. Last year's image didn't get the SCCM client uninstalled prior to capture,
    so we would have random issues with computers going to unmanaged status and some that would show up without a client cert. To avoid this issue we stripped the client out prior to capture this year.
    I believe this is how we handled the reimage process last year as well, but I am not positive about that. We were also dealing with a lot of new laptops last summer, whereas they obviously have existing records this summer. Since SCCM replaces the wired MAC
    address with the wireless MAC (laptops) we can't just toss these into an OSD collection because it won't pickup the OSD advertisement / PXE. (Is there any workaround for this?) Since this is the case, we are blowing away each client's AD account and DDR in
    SCCM, then doing mass import of hostname and wired MAC into SCCM, dumping them into the appropriate OSD collection, and they image unless they happen to pickup last year's PXE deployment that first has to be cleared, or unless they had a motherboard replaced
    and our MAC database didn't get updated. We did the mass import a week ago and the manual entries are listed with hostname and MAC and entry date of 7/9/2014. This week we started imaging. Almost immediately after reimaging (at which time the AD record is
    created upon rejoining domain) we see a second account show up in SCCM from AD Discovery with dates of 7/14/2014 and 7/15/2014. Neither account is managed or shows that SCCM client is installed, but it shows a site code.
    The manual entry lists an agent name, agent site, wired MAC, name, NetBIOS name, Resource ID of 167xxxxx, assigned site, and CCM GUID.
    The AD Discovery record shows agent name, agent site, domain, IPv4 and IPv6 addresses, name, NetBIOS name, primary group ID, domain, Resource ID of 20971xxxxx, resource name and type, SID, assigned sites, container name, UAC, etc.
    Why won't these records merge and show up as being properly managed? I am not yet sure if they fix themselves after one record or the other is deleted. Obviously this process isn't working well and it removes the clients from their direct membership collections
    and AD groups. I'd think that all of this could be avoided if we just had the wired MAC persist in the DDR.

    Last year's image didn't get SCCM client uninstalled prior to capture so we would have random issues with computers going to unmanaged status and some that would show up without a client cert. To avoid this issue we stripped the client out prior to capture.
    There is no reason or need to do this. There is no correlation between the two as long as the client agent was properly prepared (which does happen with capture media although you should strongly consider using a build and capture task sequence). Clients
    are perfectly capable of living within an image -- I do it all the time and it is a common practice.
    Since SCCM replaces the wired MAC address with the wireless MAC (laptops) we can't just toss these into an OSD collection because it won't pickup the OSD advertisement / PXE. (Is there any workaround for this?)
    This is not correct and thus also unnecessary, as ConfigMgr will use the MAC address *or* the SMBIOS GUID of the system to determine targeting during OSD. The SMBIOS GUID is an immutable unique ID set by the OEM that is also part of the resource record in ConfigMgr.
    Jason | http://blog.configmgrftw.com

  • B04 - Best practice with regards delete privilege and folders

    Hi,
    Many depts: sales, finance, etc. Take the Sales Dept as an example.
    We have just installed BO4 and want to give the Sales Dept their own folder and allow users full control over their own documents within the Sales Folder, while ensuring they don't have permission to delete other Sales users' reports in this folder nor delete the Sales folder itself.
    Would the predefined Schedule access privilege achieve this?
    We want to ensure the delete privilege is tight, so that users can only delete their own reports and can't delete folders.
    Many Thanks

    Hi Philip,
    You can achieve this by creating your own Access Level where you can include the right "Delete object that the user owns".
    For the requirement: Sales Group should only be able to see Sales Folder:
    Assign View Object right to Sales Group first on the Public Folder keeping the option "Apply to sub-objects" unchecked. Then you can assign the custom Access Level to the Sales Folder which includes the right as per your requirement.
    Regards,
    Nakul Mehta

  • Best practice for deletion of SAP standard configuration

    Does anyone have any documentation related to deletion of standard SAP configuration? My client is requesting deletion of all the standard delivered company codes, and I believe it is best practice never to delete them. I am looking for documentation that supports this.

    Rhonda,
    I have never seen a system where the standard delivered company codes did not exist, so I can't say what the implications of deletion are.  I suppose it is possible to do....
    For documentation,  I guess you could search through service.sap.com/support. 
    I would think you would respond to the request as follows:
    Tell the client it will be extra work to delete the items.  Which will translate into additional billable fees.
    Tell the client (in writing) that you cannot predict what all possible outcomes will be, but that you are willing to work with him to fix anything that gets broken in the process.  For additional billable fees.
    In the end, the client is paying for your advice and your services, and he owns the license and the system it runs on. He can have anything he wants; there is no other 'right' or 'wrong'.
    I suggest you don't delete anything in client 000.  Most companies keep this as a reference.  If the client later decides to reconstruct, this would be a nice source of info.
    Since company codes greatly affect FI/CO, you might want to post the question in one of those forums.
    Best Regards,
    DB49

  • Capturing deletes in the source -best practices

    What are the best practices for capturing deletes in the source (10g)? I need to bring the data into the data warehouse. Asynchronous CDC can do the job, but is there anything I should be aware of? Can somebody speak to the best practices for implementing this? Other options?
    Thanks in advance.
    Edited by: Rinne on Sep 23, 2010 11:05 AM

    Rinne wrote:
    Deletes don't happen often at all. Just about 10 records in a month. But I do need to track them daily. I have a daily job that goes against the source and gets the data out. Currently, I'm relying on a timestamp, but I need to change this to get the deletes.
    If you can afford it (i.e. you have control over the application that uses the source database), you may want to only mark the records as "DELETED" (e.g. add a flag to the table that is set to indicate that the record is deleted). That way you change a DELETE into an UPDATE.
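    For illustration, a minimal SQL sketch of that soft-delete idea (the table and column names here are made up):
    -- Add a flag column instead of physically deleting rows:
    ALTER TABLE orders ADD (deleted_flag CHAR(1) DEFAULT 'N');
    -- The application "deletes" by updating the flag, so CDC sees an UPDATE:
    UPDATE orders SET deleted_flag = 'Y' WHERE order_id = 12345;
    -- The daily extract can then pick up deletions like any other change:
    SELECT order_id FROM orders WHERE deleted_flag = 'Y';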

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is in the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. It will be critical, in order for Cisco to be considered as a purchase option, that the UC-5x0 integrate well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have had a lab to follow instead of having to learn from each mistake.
    Some of the challenges include:
    1. Where should the UC-540 be placed: as the gateway for QoS, or behind a validated UC-5x0 router/security appliance combination?
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
    3. Which device should handle DNS?
    My documentation (and I recommend that any Cisco lab/best practice guidance include it as well) will assume the following real-world scenario, the same which applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High Speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional Line of Business application or Terminal Server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack, which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts are appreciated.

    Progress Report;
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following;
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, backup your current configuration via the CCA,
    Next, telnet into the UC540, log in, and cut and paste the following to 1:1 NAT the two additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy your existing entries as I have done below, and paste them into a notepad.
    Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server (mine shown below), as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory
    wr mem
    One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you are getting Error 721 and/or Error 800, you will need to use the following commands to add to ACL 104:
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way of allowing VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info if and when I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.

  • AD Sites and Services and Best Practices

    Hey All,
    I am new to OES, but not new to AD. I am in an environment in which DSfW was recently setup to support VDI testing.
    I notice that there is no configuration under AD Sites and Services. We have multiple sites, with DCs set up at each site. The consequence of not having Sites and Services configured is that machines/users in site "A" are logging in through site "B" domain controllers. Obviously, this is not ideal nor best practice. Secondly, this leads me to wonder how the domain controllers are replicating, since I do not see NTDS entries in the Sites and Services MMC for the domain controllers, yet I do see that AD data is replicating by comparing databases (simply adding a new user on one DC, I see it added on the secondary DCs). So I know it's replicating, but apparently not using the AD schema?
    One other question I have about DSfW regards the migration from a mixed environment to a full AD environment. We are deploying AD primarily due to VDI initiatives, and are currently only testing this. Looking further down the road for planning purposes, I have to wonder if it's possible to stand up a 2008 R2 server, join it to the domain, dcpromo it, transfer the FSMO roles, then decommission the DSfW systems. This would leave us with a purely Windows DC environment for authentication. Is this something people have done before? Is it a recommended path for migrating? Because I also see others creating a second AD environment, then building trusts between DSfW's domain and the "new" domain (assuming these are not in the same forest). That would be less than ideal.
    Thanks in advance for any responses...

    Originally Posted by jmarton
    DSfW does not currently support "sites and services", but it's on the roadmap and currently targeted for OES 11 SP2.
    Excellent! I feel sane now :) I can live with this, as long as it's expected/normal.
    It sounds like you need sites and services, but once that's in DSfW,
    why migrate from DSfW to MAD if DSfW works for your VDI initiative?
    You are correct. I am simply planning and making sure all the options are in play here.
    I would rather not get too deeply reliant on DSfW if it will make any possible future migration more difficult. Otherwise, DSfW is extremely convenient... I am impressed, actually.
    I also believe there may be a way we can control the DC used for specific "contexts" (or OUs, as Microsoft calls them). So if I have a group of users in a particular OU who reside at a particular branch, I think I should be able to set their preferred domain controller... and if so, that means Sites & Services becomes nearly irrelevant. I would be interested to talk to people who are using DSfW with multiple sites in play.

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on best practices for regularly deleting large amounts of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all the records which are older than 3 days, every day.
    For on-premise SQL Server, I can use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting a large amount of records, in my automation/web job code I limit the number of deleted records to 5000, with a batch delete count of 1000, each time the record-deleting stored procedure is called:
    1. Get the total count of old records (older than 3 days)
    2. Get the total iterations: iterations = (total count / 5000)
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     DECLARE @table TABLE ([RecordId] INT PRIMARY KEY);  -- ids selected for this purge run
     INSERT INTO @table
     SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());
     DECLARE @RowsDeleted INTEGER;
     SET @RowsDeleted = 1;
     WHILE (@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01';
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
      SET @RowsDeleted = @@ROWCOUNT;
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records - far too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions for improving the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times more than average (avg). E.g. suppose the average per hour is 10,000 and peak time is 5 times more, which gives 50,000 per hour. This doesn't have to be precise.
    3. The desirable maximum number of records deleted per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. But since they are not even, and maximum inserts can be 5 times more per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that deletes records with creation time < today - 3 days - (4,320 - 4.32 * I) minutes, so the cutoff sweeps forward and ends at exactly 3 days ago. (I is the iteration number, from 1 to 1,000.)
    In this way the number of records deleted in each batch is not even and not known exactly, but it should mostly stay within 5,000; and even though you run a lot more batches, each batch will be very fast.
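    A minimal T-SQL sketch of this idea, assuming the table/column names from the earlier post and an index on [CreateTime] (the slice count is a tunable assumption):
    DECLARE @Slices INT = 1000;   -- number of delete batches (step 2 above)
    DECLARE @I INT = 1;
    DECLARE @Cutoff DATETIME;
    WHILE @I <= @Slices
    BEGIN
        -- The cutoff advances by (3 days / @Slices) per iteration and ends at exactly 3 days ago,
        -- so each DELETE touches only one small time slice.
        SET @Cutoff = DATEADD(SECOND, -CAST((259200.0 / @Slices) * (@Slices - @I) AS INT), DATEADD(DAY, -3, GETDATE()));
        DELETE FROM [MyTable] WHERE [CreateTime] < @Cutoff;
        SET @I = @I + 1;
    END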
    Frank

  • Best practice for upgrading task definition without deleting task instances

    What is the best practice for upgrading a task definition in a production system without deleting or terminating task instances?
    If I try to update a task definition with task instances running, I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice for handling this? I tried to force an update through the console, but that didn't work. I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.

  • Best practice when deleting from different table simultainiously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written to at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and datamodels are concerned. Are there "gotchas", like having to delete the row in the foreign key table first?
    I am reading thread:http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil
    is there a "best practice" method for

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application - you can automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user goes and explicitly deletes the children... it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master, the deletes automatically cascade. I think this is something you typically declare when creating the FK constraint (delete cascade and update cascade rules).
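    For example, a minimal SQL sketch of declaring the cascade at table-creation time (the names here are made up):
    CREATE TABLE master (
        id INTEGER PRIMARY KEY
    );
    CREATE TABLE detail (
        id        INTEGER PRIMARY KEY,
        master_id INTEGER NOT NULL,
        CONSTRAINT fk_detail_master FOREIGN KEY (master_id)
            REFERENCES master (id) ON DELETE CASCADE
    );
    -- Deleting a master row now automatically deletes its detail rows:
    DELETE FROM master WHERE id = 42;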
    hth,
    v

  • Best Practice for Deleted AD Users

    In our environment, we are not using AD groups; users are being added individually. We are running the User Profile Service, but I am aware that when a user is deleted in AD, they stay in the content database in the UserInfo table so that some metadata can be retained (created by/modified by/etc.).
    What are best practices for whether or not to get rid of them from the content database(s)?
    What do some of you consultants/admins out there do about this? It was brought up as a concern to me that they are still being seen in some list permissions/people picker, etc.
    Thank you!

    Personally, I would keep them to maintain metadata consistency (Created By, etc., as you say). I've not had it raised as a concern anywhere I've worked.
    However, there are heaps of resources online for deleting such users (even in bulk via PowerShell). As such, I am unaware of cases where deleting them caused major problems.
    w: http://www.the-north.com/sharepoint | t: @JMcAllisterCH | YouTube: http://www.youtube.com/user/JamieMcAllisterMVP

  • Bad bind variable & best practice for delete

    I am working with three tables and am very new to SQL. I need to create a procedure that will accept an ID, go through two sub-tables, and delete child records. Item is the main table. I am passing the ID into the procedure and I want to use it as below, but I keep getting a bad bind variable error message. I have verified that the column is set up as a number and that my procedure accepts a number. I would also like someone to review this for best practices, as I am new to procedures.
    PROCEDURE DeleteItem (p_ItemID IN NUMBER, p_RowsAffected OUT number)
    IS
    p_RowsAffected NUMBER;
    -- select the itemdetail for the analysis
    CURSOR c_itemdetail
    IS
    SELECT
    itemdetailid
    FROM itemDETAIL
    WHERE itemid = :p_ItemID;
    BEGIN
    -- loop through each itemdetail and delete the itemdetailoutlay
    FOR r_itemdetail IN c_itemdetail
    LOOP
    BEGIN
    DELETE FROM ITEMDETAILOUTLAY
    WHERE itemdetailid = r_itemdetail.itemdetailid;
    COMMIT;
    END;
    END LOOP;
    -- delete the itemdetail
    BEGIN
    DELETE FROM ITEMDETAIL
    WHERE itemid = :p_ItemID;
    COMMIT;
    END;
    -- delete the main item
    BEGIN
    DELETE FROM ITEM
    WHERE itemdid = :p_ItemID;
    COMMIT;
    p_RowsAffected := SQL%ROWCOUNT;
    END;
    END DeleteItem;

    Hi,
    Welcome to the forum!
    As you may notice, this site normally compresses white-space. Whenever you post code, or any formatted text, on this site, type these 6 characters: {code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
    I don't think you mean to use bind variables anywhere, so don't use colons before any variable names.  You were doing this correctly with p_RowsAffected; do the same thing with p_ItemID.
    Try this:
    PROCEDURE DeleteItem (p_ItemID IN NUMBER, p_RowsAffected OUT number)
    IS
    -- p_RowsAffected NUMBER;     -- Don't name local variables the same as arguments
    -- select the itemdetail for the analysis
    CURSOR c_itemdetail
    IS
    SELECT
    itemdetailid
    FROM itemDETAIL
    WHERE itemid = p_ItemID;     -- No : before p_ItemID
    BEGIN
    -- loop through each itemdetail and delete the itemdetailoutlay
    FOR r_itemdetail IN c_itemdetail
    LOOP
    BEGIN
    DELETE FROM ITEMDETAILOUTLAY
    WHERE itemdetailid = r_itemdetail.itemdetailid;
    COMMIT;
    END;
    END LOOP;
    -- delete the itemdetail
    BEGIN
    DELETE FROM ITEMDETAIL
    WHERE itemid = p_ItemID;     -- No : before p_ItemID
    COMMIT;
    END;
    -- delete the main item
    BEGIN
    DELETE FROM ITEM
    WHERE itemdid = p_ItemID;     -- No : before p_ItemID
    COMMIT;
    p_RowsAffected := SQL%ROWCOUNT;
    END;
    END DeleteItem;
    The most important "best practice" with PL/SQL is to avoid doing it whenever possible.
    If SQL offers a way to do the same thing, it's usually best not to code anything in PL/SQL.
    Have you considered foreign key constraints, with "ON DELETE CASCADE"?  That way, you could simply "DELETE FROM item", and all the dependent rows in the other tables would automatically be deleted.   You wouldn't need to remember to call a procedure like this; in fact, you would have no need for a procedure like this.
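    For illustration, a rough sketch of adding such constraints to your existing tables (the key column names are assumed from your posted code, and any existing foreign key constraints on these columns would need to be dropped or replaced first):
    ALTER TABLE itemdetail
        ADD CONSTRAINT fk_itemdetail_item
        FOREIGN KEY (itemid) REFERENCES item (itemid)
        ON DELETE CASCADE;
    ALTER TABLE itemdetailoutlay
        ADD CONSTRAINT fk_outlay_itemdetail
        FOREIGN KEY (itemdetailid) REFERENCES itemdetail (itemdetailid)
        ON DELETE CASCADE;
    -- After which a single statement removes an item and all its dependent rows:
    DELETE FROM item WHERE itemid = 42;  -- or p_ItemID inside a (now much simpler) procedure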
    Given that you do have such a procedure:
    You're doing row-by-row processing, which some mad wags like to call "slow-by-slow" processing.
    For example, you're explicitly finding each ItemDetailID separately, and deleting each one separately, like this:
    ... CURSOR c_itemdetail
    IS
    SELECT
    itemdetailid
    FROM itemDETAIL
    WHERE itemid = p_ItemID;
    -- loop through each itemdetail and delete the itemdetailoutlay
    FOR r_itemdetail IN c_itemdetail
    LOOP
    BEGIN
    DELETE FROM ITEMDETAILOUTLAY
    WHERE itemdetailid = r_itemdetail.itemdetailid;
    COMMIT;
    END;
    END LOOP;
    It's more efficient for the system (and less coding for you) if you let SQL handle as much as possible, so do this instead:
         DELETE FROM ItemDetailOutlay
         WHERE ItemDetailID IN
              ( SELECT itemdetailid
                FROM   itemDETAIL
                WHERE  itemid = p_ItemID
              );
    Do you really want to COMMIT 3 times? 0 or 1 times might be better.
    What happens if there is some kind of error, say, after you've deleted rows from ItemDetailOutlay and ItemDetail, but before you've deleted from Item? Wouldn't you want the entire transaction to fail, and leave all three tables in a consistent state? If so, either have the calling procedure COMMIT, or have a single COMMIT at the end of DeleteItem.
    Edited by: Frank Kulash on May 6, 2010 2:25 PM

  • Best Practice: files that should never be moved or deleted

    Hi there.
    I've just gone through my computer for the first time in a couple of years and done a massive cleanup, but without really knowing the consequences, I guess you could say, and now things are acting a little strange.
    (I feel silly even writing this, knowing that all the IT heads out there are sighing a unanimous 'what an idiot'.)
    Can anyone please provide a sort of 'best practice' guide to the files/file types that should absolutely stay on the hard drive, or in the Apps folder, Utilities folder, etc.? And are there any that should 'never' be deleted?
    .. or even some program that sort of saves you from yourself by keeping those files safe from being a) moved and b) deleted?
    Would be a huge help!
    Thanks in advance ~
    Message was edited by: BluesySue

    Apple has already hidden about half of the folders on a typical OS X install - the ones that make up the Unix core of the OS. So unless you know how to find those, you'll never have to worry about them.
    Aside from that, you just want to keep away from the System folder and any of the Library folders. Nothing else should be too crucial to the OS running.
    Of course if you wanted to elaborate on "a little strange" we might be able to be a bit more helpful.

  • Is there a best practice for deleting a published report?

    Post Author: matthewh
    CA Forum: General
    Is there a best practice for deleting a published report from C:\Program Files\Business Objects\BusinessObjects Enterprise 11.5\FileStore on Crystal Reports Server, or can I just delete the subfolders? Does it reference them elsewhere? I have a load of old reports I need to shed, but can't see how to do it.

    Hi,
    You can refer to the SRND guide. As per the document (page 292), you can add a maximum of 50 agents per team.
    http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ippcenterprise9_0_1/design/guide/UCCE_BK_S06086EC_00_srnd-9-0-1.pdf
    Also, you can check the Bill of Materials document for UCCE, under the section "Operating Conditions, Unified ICM, Unified CC", for the numbers you should configure in UCCE.

    Hi, im working with HCM Processing & Forms, using the methods provided standard by SAP, and im trying to get a certain Binding(the dropdown) filled dynamicly. Ive tried looking for this on the internet/sdn but i cant seem to find anything usefull. Ca