Suggestions on Best Practice: Setting Up Guitar Patches

Hello! I've thumbed through all the different amps in Amp Dsnr and noted the ones that work for my genre. Now I'd like to make each one of these sounds a patch I can cycle through. How do I do that most quickly and effectively?

I don't think it's about the number of patches, but how many channel strips there are in the patches and what the strips have on them. My recommendation would be to put as many channel strips as possible at the set level, or even at the concert level.
For example, if you have 10 songs (10 patches) that all use the same amp setting, put the amp strip at the set level and all 10 patches inside that set. That one strip will then be available to all 10 patches without putting extra strain on the Mac by duplicating it. Or, if you're on MainStage 2, instead of using sets you could alias the channel strip to as many patches as necessary (copy/paste as alias).
Good luck!

Similar Messages

  • Best Practice setting up NICs for Hyper-V 2008 R2

    I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one on the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines: one is a DC and the other is a member local DHCP server. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IP from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to
    point to each of the NICs using the "external connection". The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses external connectivity for a while, clients can no longer get IP
    addresses from the local DHCP server.
    1. NIC on management VLAN -- static IP -- physical host
    2. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- virtual server DHCP
    3. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- virtual server domain controller
    4. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- extra
    5. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- extra
    Thanks in advance

    Looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for
    your simple remote office.
    Management NIC: Looks good. (Teaming would be better, but only if you had two different switches to protect against link failures at the switch level; that doesn't seem relevant in this case.)
    NIC for the data network VLAN: I would use one NIC in your case, if you have the ability to trunk multiple VLANs at the switch level to the NIC. That way you set the VLAN you want to access on each VM's NIC, and your
    virtual switch configuration stays very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6. There is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags. Again,
    if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
    Disable all the other NICs.
    Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are they getting their addresses from normally? Where are your name resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Suggest the Best Practice for Procurement of Commodities like crude, copper

    Dear Gurus,
    Please suggest the best practice for the following business process.
    My client has a procurement need for a commodity whose prices fluctuate, say crude oil: the prices change every day. The client would like to pay the vendor on the day of goods receipt, but it may be that the price on the GR day is much higher. How can we control this kind of procurement? Presently I have activated the price variance through invoice posting, but this is not working.
    Can you suggest best practice for procurement of commodities?
    Thank you for your consistent support.
    Regards
    Vinod Kakade
    Edited by: vinodkakade on Jul 14, 2011 2:34 PM

    Hi Vinod,
    If you know the price by the time the GR is to happen, you can ask the vendor to send the confirmations just prior to that, with the correct price.
    Please refer to this thread:
    Change in PO Price after goods receipts and goods issue
    (though the thread is marked unanswered).
    Regards
    Shailesh

  • Suggestions on best Mac set up for home photographer

    Looking for suggestions on the best Mac setup for a home photographer and video editor using iPhoto and CS6 products, for example. I have an 8-year-old Mac and am looking to start with a new clean build. I don't need to get too fancy; it can be an out-of-the-box Mac solution.

    Any Mac will be faster than an 8-year old one (especially since that Mac is almost certainly PowerPC-based).
    You've posted in the Mac Pro forum, but ironically that's the one Mac that wouldn't really make sense (they're very expensive and are generally overkill for most home applications).
    The iMacs are probably what you want to look at - I would go into an Apple Store and look at the two sizes to determine what works best for you. The 27" has expandable RAM; the 21" doesn't, so you are advised, if you buy the smaller, to max out the RAM from Apple to 16GB when you buy it.
    Don't forget that they lack optical drives now, so you'll need an external USB drive if you use a lot of CD/DVDs. (Doesn't need to be an Apple Superdrive.)
    Matt

  • Best Practices to do Software Patching and Software Deployment for a bigger environment like 300K computers

    Hi Friends,
    I am looking for low-level suggestions (and a PPT/document etc., too). The client base is 300K users spread globally (mainly across three different regions), and the requirements are:
    1) Methodology for software patching: can we patch all in one go, or do we have to divide by region etc.?
    2) How many clients can be targeted for software patching in one go (e.g., can we target 20K clients at once)? I know other factors such as bandwidth play a key role here, but I am looking for answers from real-world experience.
    3) What methodology should we follow for critical/emergency updates?
    Regards
    Tanoj
    OSLM ENGINEER - SCCM 2007 & 2012

    There is no single best practice for patching; if there were, then SCCM would ship preconfigured :).  As an example, Microsoft internally patches 300,000 workstations with 98% success in about a week, according to their own podcast:
    Microsoft Podcast
    That said, I do follow a few rules when building a patching plan for a client. Maybe you'll find them helpful:
    Always use a "soak tier". I forget where I first heard the term, but the idea is to have a good cross-section of users get patches one or more weeks before your general deployment. This will help identify potential issues with a patch
    before it hits general release. Make sure said group is NOT just the IT department ... we make the worst guinea pigs (we aren't known for closing out end-of-the-month billing or posting legal documents).
    When it comes to workstations, avoid needlessly phased deployment. 99% of the time, using local time zones is enough of a phased deployment. Unlike servers with very particular boot and patching orders, workstations can simply be patched. You
    have enough collections in your environment ... so any new collection for patching should be justified.
    Keep your ADR count down. It's tempting to build a new ADR for everything (workstations, general servers, Exchange servers, etc.). The problem is that best practice also has you building a new SUG every time each ADR runs ... so you end up getting
    flooded with update groups and that much more maintenance. When possible, simply use maintenance windows to break up patching schedules instead of using mostly duplicate ADRs that only have separate start dates.
    Use Orchestrator. To me, Orchestrator is to Software Updates what MDT is to Operating System Deployment: effectively mandatory. Even if you don't have complicated cluster updates you need to automate with SCO integrated into SCCM (there
    are great examples on the web if you do), you can at the very least create runbooks to manage the monthly maintenance you otherwise have to handle manually in SCCM (which is a lot, IMO). I have monthly runbooks that delete expired updates from SUGs,
    consolidate SUGs older than 6 months into a single annual group, and even create new update packages (and update all ADRs to use them) every 6 months to keep a single repository from getting too large.
    I'm sure others out there can give you more advice ... but that's my two cents.

  • Best Practice: SAPGUI Version and Patch Upgrades

    Hello -
    Does anyone have thoughts/information on best practices relating to SAPGUI version and patch upgrades?
    Obviously, sometimes upgrades are forced upon us (e.g. 7.10 for Vista) and in other cases they may just be considered "nice to have".
    Either way, it always means regression testing and deployment effort. How do we balance the benefit and the cost?
    Thanks, Steve

    Hi Steve,
    You're right about the first part: yes, we (usually) patch twice a year.
    Now for the rest:
    An uninstall will only happen on release changes (6.20 -> 6.40 -> 7.10), i.e. about every 4-5 years as SAP releases them.
    Patches are applied to the installation server, and the setup on the client will only update changed program parts. For example, upgrading 6.40 -> 7.10 took about 10 minutes (incl. uninstall); applying patch 1 took less than 5 minutes.
    I recommend you read the "SAP Frontend Installation Guide - 7.10", which you can find at SMP alias sapgui; navigate to Media Library - Literature. It explains setting up the installation server (sounds like a big thing, but it isn't much more than creating a share), creating packages, applying updates, etc.
    Peter
    Points always appreciated

  • Best Practice setting sessionScope Variable

    Using JDeveloper 11.1.1.4.0
    Where and how do I set a sessionScope variable in the following situation? I developed a named criteria search on the EMP table's view object to search by EMPNO, then dragged the "SearchByEmpno" named criteria onto my page and selected "Query and Table" from the choices. When a user enters an EMPNO to search by, I want to set a sessionScope variable to that value so that it can be used in other routines in the session. Someone suggested that I set the value in the Impl class for the view object, but that violates the separation of model and view controller. Is the proper place the page's query component, the output table, or elsewhere, and how do I do it without disturbing the normal functions of the query or table, if possible?
    Thanks,
    Troy

    You can intercept the query using a query listener, as mentioned in the blog post below, and set the value in sessionScope:
    http://andrejusb.blogspot.com/2009/06/working-with-view-criteria-items.html
    Thanks,
    Navaneeth
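    For illustration, here is a minimal sketch of such a query listener in a managed bean, following the pattern from the blog post above. The bean name, the sessionScope key ("searchEmpno") and the result binding name in the method expression ("SearchByEmpnoQuery") are assumptions; take the real binding name from your page definition, and wire the bean up via the query component's queryListener property.

        import javax.el.ELContext;
        import javax.el.ExpressionFactory;
        import javax.el.MethodExpression;
        import javax.faces.context.FacesContext;
        import oracle.adf.share.ADFContext;
        import oracle.adf.view.rich.event.QueryEvent;
        import oracle.adf.view.rich.model.AttributeCriterion;
        import oracle.adf.view.rich.model.Criterion;
        import oracle.adf.view.rich.model.QueryDescriptor;

        public class EmpSearchBean {
            // Referenced from the af:query component: queryListener="#{...processQuery}"
            public void processQuery(QueryEvent queryEvent) {
                QueryDescriptor qd = queryEvent.getDescriptor();
                // Walk the criteria entered in the search form and grab the EMPNO value.
                for (Criterion c : qd.getConjunctionCriterion().getCriterionList()) {
                    if (c instanceof AttributeCriterion) {
                        AttributeCriterion ac = (AttributeCriterion) c;
                        if ("Empno".equals(ac.getAttribute().getName())
                                && !ac.getValues().isEmpty()) {
                            // Store the search value in sessionScope for later use.
                            ADFContext.getCurrent().getSessionScope()
                                      .put("searchEmpno", ac.getValues().get(0));
                        }
                    }
                }
                // Delegate to the default query handling so the table still refreshes.
                FacesContext fctx = FacesContext.getCurrentInstance();
                ELContext elctx = fctx.getELContext();
                ExpressionFactory ef = fctx.getApplication().getExpressionFactory();
                MethodExpression me = ef.createMethodExpression(elctx,
                        "#{bindings.SearchByEmpnoQuery.processQuery}",  // assumed binding name
                        Object.class, new Class[] { QueryEvent.class });
                me.invoke(elctx, new Object[] { queryEvent });
            }
        }

    This keeps the logic in the view-controller layer, as you wanted, and leaves the view object's Impl class untouched.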

  • Best Practice: Setting up Agents for cross-training

    The post that sparked this topic:
    http://forum.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Unified%20Communications%20and%20Video&topic=Contact%20Center&topicID=.ee6fe12&fromOutline=&CommCmd=MB%3Fcmd%3Ddisplay_location%26location%3D.2cc2d609
    My contribution to this topic:
    The Scenario:
    Agent2 is a primary resource for Q2, which takes a lot of calls. At any given time there are always at least 5 calls in queue. Agent1 is a primary resource for Q1, which takes fewer calls than Q2, and rarely has calls waiting in queue. Agent1 is special, because he/she is cross-trained in Q2 and helps out when needed. Agent1 should never take a call for Q2 if a call for Q1 is waiting; regardless of how long the caller in Q2 has been waiting.
    The Problem:
    CSQs select their resources independently of what is going on in other CSQs. They only look at their own available resource pool. If a resource is available, that resource becomes the selected resource to handle the current contact; regardless of that resource's other CSQ associations.
    Agent1 runs the risk of helping Q2 callers who have been waiting longer than Q1 callers, even though he/she should be primarily helping Q1 callers.
    The Setup:
    Agents
    Agent1 (Skills: Q1 [8]; Q2 [4])
    Agent2 (Skills: Q2 [8])
    Skills
    Q1
    Q2
    CSQs
    Q1_t1 (Most Skilled; Skill Q1 - 6 and above)
    Q1_t2 (Most Skilled; Skill Q1 - 1 and above)
    Q2_t1 (Most Skilled; Skill Q2 - 6 and above)
    Q2_t2 (Most Skilled; Skill Q2 - 1 and above)
    The Solution:
    You create a tiered structure out of your CSQs.
    Instead of having 10 levels of skill to choose from, you have 5. You can think of this like a 5 star rating for your agents.
    We take advantage of the fact that scripts are interruptible: if at any time during a queue loop an agent becomes available, they will be placed into the reserved state immediately.
    We also take advantage of the fact that, if a resource is Ready in a second tier queue, then we know that there are no callers waiting in their primary queue. Otherwise, the resource would be reserved, talking, or not ready.
    In your Q2 script, select from Q2_t1 first.
    If queued and if Get Reporting Statistics shows > 0 resources Ready in Q2_t2, then select from Q2_t2. Dequeue if queued or if a Connect step failure occurs.
    This creates a situation where Agent1, who is skilled in both CSQs, empties his/her primary queue (Q1_t1) before ever taking a call from his/her secondary queue (Q2_t2). If no calls are waiting in Q1, then he/she is still eligible to help out Q2.
    Possible Problems:
    1. There would be a change in the way you look at reporting.
    2. Each queue is now two CSQs, because you cannot change the skill criteria in a script.
    3. In a rare instance the secondary script could get the report stats, see 1 resource ready, and right as it executes the select resource step, the primary script executes its own select resource step. Agent1 is now talking to a secondary contact, and his/her primary contact has to wait.
    The likelihood of this happening increases as the number of callers waiting in Q2 increases.
    Conclusion:
    What are some of your thoughts on this topic?
    How have you solved cross-training previously?
    What would you add, subtract, or modify from my proposed solution?

    Hi Anthony,
    I just found your post about cross-training and I can only say it is great!
    Actually it is really close to the behaviour I have to implement for a customer:
    - A 2-level helpdesk: level 1 takes all the calls, level 2 takes the calls that level 1 could not solve,
    - Agents of level 2 can help those of level 1 if they are available (or if the number of calls in queue is too high; that point needs to be decided),
    - Level 1 is a team of agents,
    - Level 2 is divided into 2 agent teams, each one dedicated to a specific kind of incident.
    What I planned is the following (I reused your naming and presentation to explain it):
    Agents
    For level 1: Agent1 to Agent20 (Skills: S1 [8])
    For level 2, team 1: Agent21 to Agent30 (Skills: S1 [4]; S2 [8])
    For level 2, team 2: Agent31 to Agent40 (Skills: S1 [4]; S3 [8])
    Skills
    S1
    S2
    S3
    CSQs
    Q1_t1 (Most Skilled; Skill S1 - 6 and above)
    Q1_t2 (Most Skilled; Skill S1 - 1 and above)
    Q2 (Most Skilled; Skill S2 - 6 and above)
    Q3 (Most Skilled; Skill S3 - 6 and above)
    In the first script
    Select resources from Q1_t1 first.
    If queued, and if Get Reporting Statistics shows > 0 resources Ready in Q1_t2, then select from Q1_t2. Dequeue if queued or if a Connect step failure occurs.
    When Agent1 to Agent20 answer a call and cannot solve the issue, they transfer the call to the script of Q2 or Q3, depending on the kind of issue.
    In the second script
    There is a single script for queues Q2 and Q3: it is executed differently using a "name of queue" parameter.
    Select resources from Q2/Q3.
    Do you think it would be the best way to answer the need?
    Also, I have understood that the dequeue step is used for statistics (to remove a call from the statistics of a queue): is that correct, or is there another use here?
    Many thanks for your answer!
    Julien

  • Any suggestions for best practice when creating assets for tablets?

    General community question,
    I'm about to start creating a tablet demo of a few assets that have already been created for web that need adapting for tablet (namely iOS on iPad). I was wondering if anyone could share some pointers about what to avoid doing or some top tips to ensure that they can be converted as quickly and easily as possible, particularly given the state of the current version of EA.
    Any suggestions or links to relevant websites would be welcome!
    Cheers,
    D


  • 4GB RAM Windows 2000 Server, Suggest the Best SGA Setting

    I am new to this and want to install a database with the configuration defined above.
    Maybe the SGA should be up to 50% of the 4GB RAM, or so?
    Thanks in advance
    Nasim

    Hi,
    Be careful with rules of thumb (RoT):
    Your DBA's rule of thumb (RoT) here is "you want to use 40/50% of RAM for the SGA, leaving the other 50% for the dedicated servers (processes -- they allocate PGA) and 10% or so for the OS and related processes".
    On a 4GB box, that works out to roughly 1.6-2GB for the SGA.
    From Tom Kyte here.
    To help you, you can start by reading some posts on this subject:
    http://asktom.oracle.com/pls/asktom/f?p=100:1:1662390917258479::NO:RP::
    Anyway, after a first try, take several Statspack snapshots and adjust the size if needed.
    Nicolas.

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of patching for security and functionality, then it should be stated here!
    Please clarify!
    Where can I find the current information, so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state an Oracle-recommended best practice; they only speak to the specific patch package they describe. These do not help me in making an enterprise statement of practices and standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    cnawrati wrote:
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Um. OK.
    cnawrati wrote:
    Please clarify!
    Of whom are you asking for a clarification?
    cnawrati wrote:
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state Oracle recommended Best Practice, they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    Who is the "you" to which you refer?
    cnawrati wrote:
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Be our guest.
    cnawrati wrote:
    Do we have any Best Practice document about PSU patches available for customers?
    What do you mean "we", Kemosabi? This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Best Practice for Portal Patches and effort estimation

    Hi,
    One of our clients is applying the following patches:
    1. ECC 6.0 SP15 (currently SP14)
    2. ESS/MSS SP15 (currently SP14, with some level of functional customization)
    3. EP 7 SP18 (currently SP14)
    We would like to know the best practice for applying portal patches, and the effort estimation for redoing the portal development on the new patch level.
    o What is the overall level of effort of applying portal patches?
    o How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    o What is the impact of having a single NWDI instance across the portal landscape during the patch process?
    Regards,
    Revathi Raju.

    Hi Revathi,
    o What is the overall level of effort of applying portal patches?
    The overall effort to apply the patch is approximately 0.5-1 day for a NW7 system. This excludes the patch file download, since that depends on your download speed.
    o How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    It depends on your customization. Normally it won't be affected if you created the custom application separately from the SAP standard application.
    o What is the impact of having a single NWDI instance across the portal landscape during the patch process?
    Anything related to NWDI might need to be re-deployed from NWDI itself.
    Thanks
    Regards,
    AZLY

  • Best Practices: How to generate an XML file from a ResultSet

    Hi all,
    Could someone please suggest best practice for generating an XML file from a ResultSet? I am developing a web application in Java with an Oracle database, and one of my tasks is to generate an XML file when the user, for example, clicks a "download as XML" button on the JSP. The application is basically like an order with line items. I am using Struts, and my first thought has been to have an action class which extends Struts' DownloadAction and creates the XML file through StAX's iterator API. I intend to have a POJO with properties for all columns of my order and line-item tables, so that for each order I get all line items and:
    1. Write the order details, then
    2. Through an iterator, write the line items of that order to the XML file.
    I would greatly appreciate comments or suggestions on the best way to do this, or any pointers on the Web.
    alex

    Use an OracleWebRowSet, from which an XML representation of the result set can be obtained.
    http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/oracle10g/webrowset/Readme.html
    http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/jcrowset.htm
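    As a minimal sketch of that suggestion, here is the same idea using the standard javax.sql.rowset API (Java 7+), of which OracleWebRowSet is Oracle's implementation. The connection details and the orders/order_lines tables are hypothetical:

        import java.io.FileWriter;
        import java.io.Writer;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import javax.sql.rowset.RowSetProvider;
        import javax.sql.rowset.WebRowSet;

        public class OrderXmlExport {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT o.order_id, l.line_no, l.item, l.qty"
                       + " FROM orders o JOIN order_lines l ON l.order_id = o.order_id"
                       + " ORDER BY o.order_id, l.line_no");
                     Writer out = new FileWriter("orders.xml")) {
                    WebRowSet wrs = RowSetProvider.newFactory().createWebRowSet();
                    wrs.populate(rs);   // cache the JDBC result set
                    wrs.writeXml(out);  // serialize it in the standard WebRowSet XML format
                }
            }
        }

    Note that writeXml produces a flat, row-oriented XML document; if you need the nested order/line-items structure you described, the StAX approach you outlined is still the right tool.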

  • What is the best practice for deleting a large amount of records?

    hi,
    I need your suggestions on best practice for regularly deleting a large amount of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I can use SQL Server Agent jobs, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting too large an amount of records, my automation/web job code limits the amount of deleted records to 5000, with a batch delete count of 1000, each time it calls the delete-records stored procedure:
    1. Get the total amount of old records (older than 3 days)
    2. Get the total number of iterations: iterations = (total count / 5000)
    3. Call the SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     BEGIN
      -- Table variable declaration added here; it was omitted in the original post.
      -- (Assumes RecordId is an integer key.)
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY);
      -- Snapshot up to @MaxCount of the oldest record IDs.
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());
      DECLARE @RowsDeleted INTEGER;
      SET @RowsDeleted = 1;
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01';
       -- Delete the snapshotted IDs in batches of @BatchCount.
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
       SET @RowsDeleted = @@ROWCOUNT;
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is really too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, so the total time is around
    11 hours.
    Any suggestion to improve the deleting records performance?

    This is one approach.
    Assume:
    1. There is an index on CreateTime.
    2. The peak-time insert rate is N times the average; e.g., if the average per hour is 10,000 and peak time is 5 times more, that gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not even, and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that deletes records with creation time < today - (3 days - 4.32 * i minutes), where i is the iteration number from 1 to 1,000.
    This way the number of records deleted per batch is uneven and not known in advance, but it should mostly stay within 5,000; you run many more batches, but each batch is very fast.
    Frank
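    For illustration, here is a minimal web-job sketch of this sliding-cutoff idea, written as a slight variant that walks the cutoff forward only up to the 3-day boundary, so rows newer than 3 days are never touched. The table and column names ([MyTable], [CreateTime]) are the ones from the thread; the connection string and the one-day-backlog assumption are placeholders to adjust:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;

        public class RollingPurgeJob {
            public static void main(String[] args) throws Exception {
                final long MINUTE = 60_000L;
                long boundary = System.currentTimeMillis() - 3L * 24 * 60 * MINUTE; // the 3-day line
                long backlog  = 24L * 60 * MINUTE; // assume at most ~1 day of rows behind the line
                int  batches  = 1000;              // sized so each slice deletes well under 5,000 rows
                long step     = backlog / batches; // ~1.44 minutes of inserts per slice

                // Placeholder connection string for SQL Azure (Microsoft JDBC driver).
                String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                           + "databaseName=yourdb;user=youruser;password=yourpassword";

                try (Connection conn = DriverManager.getConnection(url);
                     PreparedStatement del = conn.prepareStatement(
                         "DELETE FROM [MyTable] WHERE [CreateTime] < ?")) {
                    // Walk the cutoff forward from (boundary - backlog) to the boundary;
                    // each DELETE removes only one small time slice of rows.
                    for (int i = 1; i <= batches; i++) {
                        del.setTimestamp(1, new Timestamp(boundary - backlog + i * step));
                        int rows = del.executeUpdate();
                        System.out.printf("slice %d: deleted %d rows%n", i, rows);
                    }
                }
            }
        }

    Because the cutoff only advances a couple of minutes per statement, each DELETE stays small and fast regardless of the total backlog, and no snapshot table or TOP loop is needed.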

  • How to make Forms less dependent on the client PC? (Best Practice?)

    Hello,
    We are on Forms 10g (and I also noticed the same problem with previous versions).
    We build quite big applications that use WebUtil and JInitiator (not the Java plugin). The application is used via the web only.
    At deployment of the applications, we experienced many difficulties due to the specific configuration of Internet Explorer (or another browser, e.g. Firefox) on each person's PC in the company.
    I suppose this situation has been experienced by others too...
    Do you have any suggestions of best practices for making such applications less dependent on the client PC configuration?
    Thanks in advance,
    Olivier

    Try to use HTTP; I don't think directly opening a socket can pass through if your proxy hasn't forwarded those ports. And I don't think the choice of UDP vs. TCP matters for this!
    Hi All,
    I have problems with my applet working through a proxy. I am using a client applet which makes a socket connection to a Java application running on the same PC as the web server.
    Everything works fine when I am directly connected; however, it doesn't work when connected through a proxy. I am using object output/input streams for the exchange of data between the client applet and the server Java application. How do I overcome this problem?
    Currently I am using WinGate 3.0.2 as my proxy (I didn't see any firewall setting there), but I intend to make the applet work through any proxy. Would using a higher-level protocol such as UDP help in this regard?
    This is urgent. Please reply ASAP...
    best rgds,
    prithvi
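    As an illustration of the "use HTTP" suggestion, here is a minimal applet-side sketch that exchanges serialized objects with the server over HTTP instead of a raw socket. URL connections go through the browser/JVM proxy settings automatically, which is exactly why this tends to work where direct sockets fail. The servlet URL is hypothetical, and the objects exchanged must be Serializable:

        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class HttpTunnelClient {
            // POSTs one request object to the server and reads one reply object back.
            public static Object sendRequest(URL servletUrl, Object request) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) servletUrl.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                // Same object streams as before, but carried over HTTP so the
                // proxy treats this as an ordinary web request.
                try (ObjectOutputStream out = new ObjectOutputStream(conn.getOutputStream())) {
                    out.writeObject(request); // request must be Serializable
                }
                try (ObjectInputStream in = new ObjectInputStream(conn.getInputStream())) {
                    return in.readObject();
                }
            }
        }

    On the server side, a matching servlet would read the request object from the POST body in doPost and write its reply object to the response stream.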
