Incremental processing

Hi, I am new to SSAS. Could you please share any links on how to implement incremental processing in a cube (step by step)?

Hi Vamsi1980,
According to your description, you are looking for some links about how to implement incremental processing, right?
In SSAS, you can process the whole table, split the table into several partitions and process a single partition, merge partitions, or incrementally process a single partition by using ProcessAdd, which is the topic of the article below.
Please refer to the link below.
http://www.sqlservergeeks.com/sql-server-incremental-cube-processing-of-adventureworks-cube-in-ssas/
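As a rough illustration of what such a ProcessAdd script looks like, the command is an XMLA Process of type ProcessAdd with an out-of-line query binding that selects only the new fact rows. This is only a sketch: the database, cube, measure group, partition and data source IDs, and the WHERE clause, are placeholders to replace with the IDs from your own cube.
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <CubeID>MyCube</CubeID>
        <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
        <PartitionID>MyPartition</PartitionID>
      </Object>
      <Type>ProcessAdd</Type>
    </Process>
  </Parallel>
  <!-- Out-of-line binding: the query must return only rows not already in the partition -->
  <Bindings>
    <Binding>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
      <PartitionID>MyPartition</PartitionID>
      <Source xsi:type="QueryBinding">
        <DataSourceID>MyDataSource</DataSourceID>
        <QueryDefinition>SELECT * FROM FactSales WHERE DateKey &gt; 20140101</QueryDefinition>
      </Source>
    </Binding>
  </Bindings>
</Batch>
Note that ProcessAdd appends whatever the query returns, so rows that are already in the partition would be double counted.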
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • Query on Increment Process

    Our customer wishes to handle the increment process in the following manner:
    1. For employees who have put in MORE than 1 year of service in the company as on 1st April, the increment amount is to be processed through the Annual Increment process. In this method, the increment is effective from 1 April, but the actual process may be executed in April or in one of the subsequent months (i.e. with retro effect).
    2. For employees who have LESS than 1 year of service in the company, the Annual Increment process won't apply. Instead, their increment should be processed on the day they complete one year. E.g. if the Joining Date = 25 May 2011, the first increment should be effective from 25 May 2012. This increment may be processed on 25 May 2012 itself or on any date after this (retro effect). When the Annual Increment process is again executed on 1 April 2013 (i.e. the subsequent year), the increment amount for these employees should be pro-rated, based on the date they completed one year of service (e.g. from 25 May 2012 to 31 March 2013, in the above example). From the next financial year (i.e. 1 April 2014) onwards, these employees will get the full increment through the normal Annual Increment processing (without any pro-ration).
    Is it possible to handle the above requirement directly through the standard Increment Process available in SAP? Or, how should we go about this?
    -Shambvi

    Hi
    Please refer to the document on the following Tx code: HRPBSIN_SALARY_INCRT

  • Salary Increment Process

    Dear Experts,
    I am working on the increment process.
    My client requires a 3% rise on the following heads:
    Increment amount = (01. Pay Band Pay + 02. Grade Pay) x 3%, and the new Pay Band Pay = 01. Pay Band Pay + increment amount.
    For better understanding: 15000 + 5000 = 20000 (Basic).
    Increment = 15000 x 3% + 5000 x 3% = 600; 15000 + 600 = 15600 is the new Pay Band Pay after the increment.
    The new Basic will be: 15600 + 5000 = 20600, w.e.f. 01.07.2012.
    The increment is based on the following points:
    01. If there is Loss of Pay, the increment should be calculated on the earned amount.
    02. The increment amount is to be rounded off, e.g.:
    601.00 is rounded up to Rs.670.00
    669.00 is rounded up to Rs.670.00
    600.70 is rounded up to Rs.660.00
    Regards,
    Anthony K.

    Hi,
    The data is not committing in the DB when I tried it with ON-UPDATE and PRE-UPDATE triggers.
    PROCEDURE increment_process IS
      m_gross_sal NUMBER;
      p_rslt      VARCHAR2(200);
      p_status    VARCHAR2(20);
    BEGIN
      -- Remove any previous staging row for this employee
      DELETE FROM incr_temp WHERE ecode = :emp_code;
      -- Find the current gross salary
      m_gross_sal := aod_gross_salary(:emp_orgn, :emp_code, 'A');
      -- Stage the increment record
      INSERT INTO incr_temp (ecode, curr_sal,
                             increment_amt, total_aod,
                             status, incr_type)
      VALUES (:emp_code, m_gross_sal,
              :incr_amt, m_gross_sal + :incr_amt,
              'N', 'I');
      -- Note: forms_ddl('commit') issues a database commit directly and does not
      -- run Forms commit processing, so pending changes in the form are not posted.
      forms_ddl('commit');
      update_emp_increment(:emp_orgn, :emp_code,
                           TRUNC(TO_DATE(TO_CHAR(:new_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')), NULL,
                           :incr_amt, p_rslt,
                           :parameter.p_user, TO_DATE(TO_CHAR(SYSDATE, 'DD/MM/YYYY'), 'DD/MM/YYYY'), 'I',
                           p_status);
    END;
    PROCEDURE desig_updation IS
      v_count NUMBER := GET_BLOCK_PROPERTY('employee_master', QUERY_HITS);
    BEGIN
      GO_BLOCK('employee_master');
      FIRST_RECORD;
      -- Walk through every queried record in the employee_master block
      FOR i IN 1 .. v_count LOOP
        IF (:desig IS NOT NULL) AND (:new_date IS NOT NULL)
           AND (:emp_desig <> :desig) AND (:new_date >= :emp_desig_date) THEN
          -- Apply the new designation/grade and audit columns
          :emp_desig      := :desig;
          :emp_grade      := :grade;
          :emp_desig_date := :new_date;
          :emp_upd_by     := :global.usr;
          :emp_upd_on     := :system.current_datetime;
          -- Optionally run the increment as well
          IF (:radio_group = 2) AND (:incr_amt IS NOT NULL) THEN
            increment_process;
          END IF;
        END IF;
        IF :system.last_record = 'TRUE' THEN
          EXIT;
        ELSE
          NEXT_RECORD;
        END IF;
      END LOOP;
    END;

  • Incremental partition processing with changing dimensions?

    Today I tried out an incremental processing technique on my cube. I have a partition by date which had 100 rows and an account dimension which had 50 rows.
    I executed a process full, then added 10 rows to the fact table, modified 2 rows in the dimension, and added 10 rows to the dimension...
    I imagined that I could just do a process full on the dimension and a process update on the partition, but upon doing that my cube was in an "unprocessed" state, so I had to perform a process full... Is there something I did wrong, or do updates to dimensions require full rebuilds of all partitions?
    This was just an example on small data sets. In reality I have 20+ partitions, 500 million rows in the fact table, and 90 million in the dimension.
    Thanks in advance!
    Craig

    ".. i imagined that I could just do a process full on the dimension and process update on the partition, but upon doing that my cube was in an "unprocessed" state so i had to perform a process full .." - try doing a ProcessUpdate on the dimension
    instead. This paper explains the difference:
    Analysis Services 2005 Processing Architecture
    ProcessUpdate applies only to dimensions. It is the equivalent of incremental dimension processing in Analysis Services 2000. It sends SQL queries to read the entire dimension table and applies the changes—member updates, additions,
    deletions.
    Since ProcessUpdate reads the entire dimension table, it begs the question, "How is it different from ProcessFull?" The difference is that ProcessUpdate does not discard the dimension storage contents. It applies the changes in a "smart" manner that
    preserves the fact data in dependent partitions. ProcessFull, on the other hand, does an implicit ProcessClear on all dependent partitions. ProcessUpdate is inherently slower than ProcessFull since it is doing additional work to apply the changes.
    Depending on the nature of the changes in the dimension table, ProcessUpdate can affect dependent partitions. If only new members were added, then the partitions are not affected. But if members were deleted or if member relationships changed (e.g.,
    a Customer moved from Redmond to Seattle), then some of the aggregation data and bitmap indexes on the partitions are dropped. The cube is still available for queries, albeit with lower performance.
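    For illustration, a ProcessUpdate on a dimension is a short XMLA command that can be run from SSMS or a SQL Server Agent job. Here is a minimal sketch with placeholder IDs (the Object element takes the database and dimension IDs, which can differ from the display names):
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <DimensionID>Dim Account</DimensionID>
      </Object>
      <Type>ProcessUpdate</Type>
    </Process>
    If the ProcessUpdate drops aggregations or bitmap indexes on some partitions, a subsequent ProcessIndexes on those partitions rebuilds them without re-reading the fact data.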
    - Deepak

  • Incremental Load using Do Not Process Processing Option

    Hi,
    I have an SSAS Tabular model which is set to Do Not Process. How do I refresh and add new data to the model without changing the processing option?

    Hi Liluthcy,
    In a SQL Server Analysis Services tabular model, processing has the following options:
    Default – This setting specifies that Analysis Services will determine the type of processing required. Unprocessed objects will be processed and, if required, attribute relationships, attribute hierarchies, user hierarchies, and calculated columns will be recalculated. This setting generally results in a faster deployment time than using the Full processing option.
    Do Not Process – This setting specifies that only the metadata will be deployed. After deploying, it may be necessary to run a process operation on the deployed model to update and recalculate the data.
    Full – This setting specifies that the metadata is deployed and a process full operation is performed. This ensures that the deployed model has the most recent updates to both metadata and data.
    So you need to run a process operation to update the data.
    Reference:
    http://www.sqlbi.com/articles/incremental-processing-in-tabular-using-process-add
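    As a minimal sketch (the DatabaseID below is a placeholder for your model's database ID), such a process operation can be sent as an XMLA command from SSMS or a scheduled job; use ProcessDefault instead of ProcessFull if you only want to process whatever is currently unprocessed:
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>MyTabularModel</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>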
    Regards,
    Charlie Liao
    TechNet Community Support

  • Subcontracting Service Process

    Hi,
    In my scenario I am sending material, i.e. shirts (item), to a vendor for an ironing process as a subcontracting service through an order. I would like to receive the material back from the vendor with reference to the PO, and payment for the service performed is to be made through invoice verification with reference to the service entry sheet.
    My thought on the process is to create a subcontracting PO on the subcontracting vendor along with the articles. The article will be issued to the vendor as a subcontracting article with reference to the subcontracting PO. After completion of the service it will be returned to us with reference to the PO, and a GRN will be performed. Since this is regular practice, I would like to create a service master for the respective services, which will be incorporated in the PO similar to a service purchase order. Based on this purchase order an SES will be created, and payment for the service will be settled through invoice verification with reference to the service entry sheet.
    Please let me know whether my thinking is in line or not. If not, please suggest an alternative. You can also reach me by phone.
    Thanks & regards,
    Sanjay Rahangdale
    09327162228

    Thanks. In the IS Retail system, I want to send shirts to a reprocessing vendor and get them back after reprocessing. After receipt of the same, I will process the payment for the service provided by the vendor.
    For example:
    Shirt 30001001, qty 100, is sent to vendor ABC for refinishing (ironing), and after the refinishing process I will get back the 100 qty.
    The refinishing charge is Rs 2 per unit, so I have to pay Rs 200 to the vendor and settle the bills.
    At the same time I want to know the inventory stock at the vendor's end, as the vendor is supplying/returning shirts in an incremental process.
    Please revert.
    Sanjay rahangdale

  • How do I run a full process from SSIS ???

    Hi all
    I run BPC 5.1 SP3, and I need to automate a series of jobs, but the system is giving me problems and I hope someone can help out.
    I need to automate a full optimize and then a full process of our AppSets.
    I know that in SP3, the Appsets are taken offline for the full optimize and are then left offline, so in-between I run the "SystemAvailableTask" to set the Appsets back online.
    Additionally, all dependencies are removed from the FACT table for the optimized AppSets, which are only rebuilt by performing the full process afterwards.
    Anyway, the problem I have is that the optimize is running quite happily from SSIS, as is the SystemAvailableTask.
    Up to this point in the job, everything works as intended, so I now have a fully optimized Appset, which is available for users to access.
    However, the Full Process job then fails.
    I have run the Full Process as a standalone job from SSIS and it takes 2m36s to run, but fails to rebuild any dependencies.
    When I ran it from the SAP Admin program, it took 9m57s to run and rebuilt everything correctly.
    I am currently only offered 1 option in the SSIS package, which is to run a Full or an Incremental process, so I select Full.
    However, on the right-hand side, there are various other options available (such as bApplicationProcess, PROCESSMODE (set to "3"), and PROCESSOPTION (set to "1")).
    Should I be changing any of the settings on the right to make the job run properly, or should I be doing this differently?
    Obviously, I need to make this work from SSIS, as I can't schedule a full process any other way, so I would be extremely grateful for any help you can offer.
    Thanks
    Craig
    Edited by: Craig Aucott on Aug 25, 2009 10:21 AM

    The easiest way to do this is to write a Tuxedo server (i.e., using only
    ATMI and no CORBA stuff) that does the following:
    1.) In tpsvrinit(), the last thing that it should do is a tpacall to the
    service contained in this server (and nowhere else) with the TPNOREPLY
    flag.
    2.) In the method that implements the service, do your database work, sleep
    for a little while, do another tpacall to itself with the TPNOREPLY flag,
    and return.
    Hope this helps,
    Robert
    Ram Ramesh wrote:
    Hello folks:
    How can I run a background process that runs under WLE's control.
    What I am looking for is a way to have a process that runs in an
    infinite loop and polls the database to see if there is any background
    work that needs to be done. But I still want the process to be managed
    by WLE for fault tolerance.
    Thanks,
    Ram Ramesh
    [email protected]

  • Processing Tabular Model

    1  We are developing a Tabular Model. In this model there are likely to be about 15 dimension tables and 2 main fact tables.
    2  One of the fact tables is very large, approximately 70 million rows, and will increase to 150 million rows in about 2-3 years' time.
    3  So on the cube we have created partitions and are using incremental processing. What we have done is to create a partition definition table with a header and lines. At the header level we store the name of the measure group on which we wish to create partitions, and in the lines table we create the definition of each partition. Using an SP we mark off those rows of the partition lines which we wish to reprocess. Such partitions are then dropped and recreated. So far this is working well.
    4  I want to generalize this solution so that it works across different projects without any changes.
    Now I have two questions :
    Question 1 :
    If I make changes in the tabular project and deploy the same, I believe all partitions will get deleted and all the data will need to be pulled in again. This will happen even if I add a calculated measure. Is there any method to overcome this?
    Question 2 :
    What is the mechanism for processing only certain measure tables incrementally and all other tables fully? In my above example only one table has partitions. So if I want to process only the current partition of that table, and all other tables fully, how do I achieve this?
    Sanjay Shah
    Prosys InfoTech, Pune, India

    1) If you only add a measure or a calculated column, you do not need to read data from the data source. If you have problems with deployment within VS, consider using the Deployment Wizard.
    2) A complete description of process strategies is included in a chapter of our book (http://www.sqlbi.com/books/microsoft-sql-server-2012-analysis-services-the-bism-tabular-model).
    In general, you can control which partition/tables you want to process and in which way, using XMLA scripts, PowerShell and other tools. The easiest way to create an XMLA script is using the Generate Script feature in SSMS when you use the process wizard.
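    For example, to process only the current partition of the large fact table and leave the other tables untouched, an XMLA batch along the following lines can be used (the IDs below are placeholders; the exact script is best generated with the Generate Script feature mentioned above). ProcessData loads just that partition's rows, and the closing ProcessRecalc rebuilds calculated columns, relationships and hierarchies for the whole database:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" Transaction="true">
      <Process>
        <Object>
          <DatabaseID>MyTabularModel</DatabaseID>
          <CubeID>Model</CubeID>
          <MeasureGroupID>Fact Sales</MeasureGroupID>
          <PartitionID>Fact Sales Current</PartitionID>
        </Object>
        <Type>ProcessData</Type>
      </Process>
      <Process>
        <Object>
          <DatabaseID>MyTabularModel</DatabaseID>
        </Object>
        <Type>ProcessRecalc</Type>
      </Process>
    </Batch>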
    Marco Russo http://ssasworkshop.com http://www.sqlbi.com http://sqlblog.com/blogs/marco_russo

  • Full vs Delta processing

    Hello, can someone please help me understand the difference between the two, and whether BPC MS 7 has issues with delta processing? Thanks.

    Hi,
    When we process a dimension, BPC decides how to process it depending on the nature of the updates made to the dimension.
    Full processing: whenever a new property is added, a member is inserted in the middle of the worksheet, or the hierarchy is changed, BPC does a full process.
    Incremental processing: whenever a member is added that does not affect the hierarchy, a dimension formula is modified, or a property value is added, BPC does an incremental process.
    Hope this helps.

  • Running BO 6.5 and XI 3.0 on the same machine

    I have installed 6.5 and Xi 3.0 on the same workstation. Both versions run fine when I run them one at a time (open one, work on it, then exit before starting the other).
    However, when I run 6.5 and Xi 3.0 simultaneously, then the next time I try to run either of them, I receive a message asking me to install again.
    Has anyone run into this?
    Michael O

    Hello Constantino,
    Is there no way this will work? As I have several customers thinking about or planning an upgrade/migration to XI 3.0, running legacy and new software side-by-side on the same machine is a very important issue, especially as migrations are usually an incremental process, requiring both versions to co-exist for some time.
    I have never had any problems running 5.x / 6.x and XI r2 on the same machine, even simultaneous, that's why I'm so surprised at the sudden change.
    Any input would be welcome! Thank you!
    Kind regards,
    Kristof Speeckaert

  • How can I get Photoshop Elements to import photos that it wrongly indicates are in the wrong format or already in the catalog?

    Had 17,000 photos on my hard drive before installing Photoshop Elements 11 today. Had them well organized in a number of folders under My Pictures. PS found and imported only 467 photos. I tried Search, Browse, and Import Files; I was able to highlight folders and individual photo files. When I clicked Get Media, it processed for 20 seconds and came back with the message that "Files are not in correct format or are already in the catalog." In fact, they are in the correct format (jpg - easily viewed with other programs) and they are NOT visible in the catalog. I went into Control Panel and enabled the Watch service, listed all of the files that PS had not imported, and still cannot show more than 467 of my photos. Very frustrating; any suggestions? Running Win 7 on a 6-month-old Sony that plays Blu-ray, with 8 GB RAM and a quad-core Intel processor.

    This forum is amazing - thanks to all who responded over the weekend. For future users, some replies I found helpful:
    1. Allow a lot of time for the cataloging process. I had not realized it would be such a slow and incremental process, but a day later it is continuing to add to the catalog - I am now up to a couple thousand in the catalog.
    2. Turning off Organize and Stack visually-similar, and the corrective functions, during the initial process speeds things up.
    3. Go into Control Panel, Administrative Tools, Services and activate the Photoshop Elements watch function as soon as you install Elements.
    4. It was helpful to repair/optimize.
    5. If you realize it is really messed up, uninstalling Elements and re-installing may work better than lots of little tweaks and fixes.
    Now if I can just figure out how to get photos ordered by actual date taken rather than the random future dates that my camera or previous version of Elements 8 apparently assigned....

  • Why does syncing my iPod touch take far longer than it did before iOS 5?

    Syncing the iPod before the introduction of iCloud never took more than a few minutes, and during the process iTunes reported how many photos or songs were being added or deleted based on what had changed in the iPhoto and iTunes libraries since the last sync.
    Now the sync process begins with a major "backup" effort followed by lengthy stages in which it is not visible whether, or which, photos or songs are being added or deleted. The process takes up to an hour, and I have been forced on more than one occasion to RESTORE the iPod in order to get the sync to work at all.
    I carry my entire iPhoto library (about 20'000 photos, excl. videos) and iTunes library (7500 songs) on the 64 GB iPod, for which there is plenty of space. Why is synchronization no longer just the incremental process it originally was, and why does it now consume far too much time?
    - Eran

    I think it's normal...but you can try to do it again.

  • HT201412 What can I do when my iPhone won't charge and "This accessory may not be supported" is displayed?

    I have recently upgraded to iOS 7.0.2. Sometimes it will charge up to 15% or a little more, but it trickles down after that. The upgrade was a slow, incremental process. I had to restart it several times.

    A couple of weeks ago I tried cleaning it with air and with a can of control cleaner that I used to use on variable resistors and capacitors. I thought that improved things, but it did not seem to last. Today I sprayed isopropyl alcohol into the receptacle and the plug. It seems to be slowly charging this evening with those two pieces of equipment.

  • The DOs and DON'Ts of ICS

    Written by: Sumit Jain
    After working extensively on Informatica Cloud, I have collected some to-dos and guidelines for the Informatica Cloud product. You may know them beforehand, but I thought I would compile a list and share it with a wider audience:
    1. Create a Naming Conventions document detailing the standard naming conventions for the different types of Connections, Tasks, Taskflows and Schedules within Informatica Cloud. All developers should follow these naming conventions rigorously. It has been observed that when multiple people work simultaneously, they tend to follow their own naming standards; in the end there are a lot of tasks, it is very difficult to identify them, and the administrator has to spend a good amount of time finding the correct ones.
    2. Add a meaningful description to all your tasks, connections, taskflows and schedules, in such a way that they convey the purpose of their use and do not confuse other users.
    3. The machine the Informatica Cloud Secure Agent runs on must always be on. It must not be in sleep mode or "idle". This might be indicated when the Agent status fluctuates between an active/inactive state. Make sure this computer stays on, or re-install the agent on a computer that stays on continuously.
    4. If you are using CSV files as source or target, make sure that you match up the date format in the associated flat-file connection by dropping down the Date Format list and choosing the matching format. If there isn't a matching format in the drop-down list, you will need to explicitly format the date in Step 5 of the Data Synchronization task by using the TO_DATE transformation function.
    5. If there is a requirement to perform a lookup on Salesforce objects, do not create a direct lookup. A direct lookup on a Salesforce object will call the Salesforce object for each record processed, and performance will decrease considerably. Instead, write the data of the Salesforce object to a flat file and then use the flat file for the lookup.
    6. For incremental processing, use the "$LastRunTime" and "$LastRunDate" variables in conjunction with a source field of "date" type. Informatica Cloud supports and maintains these variables automatically. For example, if your source has a field called LASTMODIFIEDDATE, you could set up your filter such that LASTMODIFIEDDATE > $LastRunTime. If your schedule then runs the task on a daily basis, each day you only have to deal with the new/changed records from the previous day instead of worrying about ALL records.
    7. If the Informatica Cloud Secure Agent is running on a Linux or Unix server, it will not support MS SQL Server as source or target.
    8. In a multi-user environment where the number of tasks to be created is very high, create Views based on logical groups so that similar tasks can be seen in a single task view. Similarly, you can create views for connections and taskflows.
    9. SYSDATE is the current datetime; you can use it to denote the current date and time.
    10. Use logical operators like IIF and DECODE to encode conditional logic in the expression in Step 5 of the Data Synchronization task.
    This has been posted on the community page as well: https://community.informatica.com/docs/DOC-3772
    What are some of your best practices? Please share with us in the comments section below. Thanks!

    Overview
    In part 1 of this series, I discussed why I thought that the new app platform from Salesforce.com ("Salesforce 1") was far from perfect and described this as "the App Gap". Sure, Salesforce 1 is a vastly improved mobile experience for every Salesforce user, but it still provides no help in two crucial areas, namely: there is no ability for business users to quickly deploy mobile apps by themselves, and there is no automation to help users efficiently complete more than one Salesforce activity at a time.
    I also explained in part 1 that these shortcomings can be easily addressed by adding Informatica Cloud Extend to any Salesforce 1 implementation. So for the remainder of this article I'll explain how to configure your Salesforce org to leverage Informatica Cloud Extend. Then you'll be closing the Salesforce 1 'App Gap' in no time.
    Step 1. Modify VisualForce Launch Pages
    It turns out that Informatica Cloud Extend guides can run easily from Salesforce 1. This is because Salesforce 1 now lets the user navigate to an object within the mobile app, and then the user can run a Cloud Extend guide from that object. But in order to get a Cloud Extend guide to run in Salesforce 1, the VisualForce page for launching CE guides must be modified first. VisualForce pages have an option for "Available for Salesforce mobile apps", so check that option.
    Step 2. Replace 'Managed' Cloud Extend VisualForce Pages
    The VisualForce pages for standard Salesforce objects are part of the Cloud Extend managed package, so users cannot edit them (we manage these pages in order to improve your Cloud Extend experience; in addition, we also update them as needed with new Cloud Extend releases). However, that doesn't mean you can't replace the relevant VisualForce pages, so that's what we're going to do. The doc on how to do so is here:
    http://help.cloudextend.com/salesforce/documentation/#UserGuide/AdministeringCloudExtend/CustomizingStandardPages.htm
    For our example, let's replace the Opportunities VisualForce page. The VisualForce markup for the replacement page is:
    <apex:page standardController="Opportunity">
      <ce4sf20_001:AeSalesGuides objectType="Opportunity" objectId="{!Opportunity.Id}" extraInfo="{!JSENCODE(Opportunity.Name)} ({!JSENCODE(Opportunity.Account.Name)})"/>
    </apex:page>
    To replace this page, I first went to the Opportunity screen layout editor and removed the existing VisualForce page for Cloud Extend guides. Then I replaced it with the "mobile-enabled" VisualForce page that I just created.
    Step 3. Testing that it Works!
    The final step is to test that it works. Open the Salesforce 1 app and navigate to an Opportunity object: the Cloud Extend guide launcher appears (I only had the "Update Selected Opportunity" guide published for smartphones; if there were other Cloud Extend mobile guides published for Opportunities, they would have also appeared in this list). Next I click "Update Selected Opportunity" and a new window within Salesforce 1 launches. Clicking on the Cloud Extend "Update Selected Opportunity" guide again starts the guide running. When the guide finished, I clicked the arrow in the top left of Salesforce 1 to take me back to the Opportunity object that first initiated the Cloud Extend guide.
    So there you have it! A quick way to integrate Informatica Cloud Extend into the new Salesforce 1 application.
    Of course there may be other ways to integrate Cloud Extend into Salesforce 1, and we will certainly be looking at those options going forward.

  • OO Batch Model and optimised Java for batch???

    Hi All,
    I'm looking to see if there is any literature on OO models for batch processing and on optimising batch Java.
    Thoughts & comments welcome.........
    I have an existing batch process running on a mainframe which is very successful. We would like to leverage this by building a similar batch process to run 'anywhere' so likely options are Java/Unix.
    There are many patterns/models etc. for OO-based GUI / interactive processes, but very few (that I have found) for batch.
    I have worked mainly with mainframe batch and online applications and come with the baggage that any activity that can be processed in batch should be, to avoid overloading the online container (CICS region, web server etc.).
    I believe that this continues to be true, and the particular data we are processing also benefits from the efficiency of batching the data together to eventually store it on tape.
    In view of not finding any literature (which I doubt is the case), it seems that the problem is the same, so probably the solution is also similar.
    In the procedural solution, a Jackson structure (or similar) would have been designed, which would then be reflected in the procedures built into the code.
    I expect that if classes were defined instead of procedures, certainly at a higher level, then the design would still be OK.
    (So at the higher level you have a 'main' class, which instantiates a 'read' IO object, a processing object which handles the actual processing activity, and a 'write' IO object.)
    The level to which you would combine procedures together, or further split them out, would then be the main point of discussion.
    (However, I am open to the above suggestion being completely wrong.)
    Then there is efficient configuration when processing...
    When running on the mainframe, the code is loaded once and the memory for all the working storage structures is created up front. During actual processing there is no instantiating of classes or running of the garbage collector, etc. I re-use the same memory for each new record read in / processed / written out, and all the code is normally loaded once when first called and re-used until all records are processed.
    Is there any way that I can replicate this within Java, either in its own JVM or running in a container such as WebSphere? When processing the volume of data that we do (20 million DB entries + 40GB of document data on average), anything not optimised is costing money and available processing time.

    > I suspect that batching is underused though, rather than overused.
    > Can you elaborate on that? What kind of conditions would you advocate batching for?
    > Running daily, monthly, etc. reports. Or something that feeds those.
    I don't disagree, I just don't even see this as 'batching'. Batching to me is when you take something that could be done incrementally and purposely do it in large groups at set times or time periods. If you have a daily report and you do it daily, you're just doing the most obvious approach. It might not even be the most efficient.
    > I have some experience working with batch Java applications running in Unix. And I can tell you that they did not improve anything.
    > I suspect I would agree with that. I am not advocating that the batching be done in Java. Just that the idea of an 'incremental' process that requires moving data versus a 'batch' process that doesn't isn't something that I would normally consider a good idea.
    I think we are thinking about different things. I'm really just talking about incremental or real-time vs. batching.
    > They were actually the source of many of our issues. They added arbitrary time lags during times of low volumes, sometimes adding 30 minutes or more to the processing of a transaction as it waited for the next scheduled batch. They also made our backlogs worse in times of high volumes because the incoming data flow was uneven; we would often get big batches of data from partner systems (more batching, gotta love it) that hit us when our batch process was sleeping. 5, 10, 15 minutes would pass where the server ran at 10% capacity while huge backlogs were piling up. It didn't make anything better. It was just causing idling.
    > What were the timeliness requirements for the processing? Did it need to be completed by 2am in the morning? Or could it have really just been completed on demand?
    It was B2B transactions; ASAP was the time requirement. I guess the upper limit was 6 hours or so. But batching didn't really decrease the processing time per transaction anyway, and the server was never dedicated to the batch or anything, so there were still context switches.
    > It just seems to me that in Java, with all the nice threading we have access to, the server should never be idle, and if you cannot handle your volume you are better off adding more servers, not attempting to batch things.
    I have created applications that were intended to run 'batch' jobs which could be spread across servers. Those particular processes had to finish within a very narrow time span as well - about two hours as I recall. There was an incremental as well as a batch functionality that needed to be run for this. The batch functionality ran on the database; the incremental took the batched results and handled the incremental part. Although management was never willing to dedicate more than one server to the processing, so I guess it wasn't that important to them.
    I have seen apps that claimed they were 'fast' because they did all of the incremental processing outside of the database. The design required moving, literally, the entire database over the network to other servers which would then process it. Processing it in the database would have taken orders of magnitude less time. And that was time-sensitive data. I can't remember if that app allowed for multiple boxes to do the processing. I do know that the people working on it could never figure out the bottleneck (it was scaling to something like 12 hours a day, which was not acceptable).
    I guess I don't see doing it on the DB as implying batching. We use triggers to drive processes in Java, COBOL, whatever.
