Improving performance by splitting a cube

I've heard that sometimes the best way to deliver drastic (calc/retrieve) performance improvements is to split a single cube into 2 or more databases within a single application.
A. Is there any truth to this?
B. If so, is there any good reading material with real world examples that I could use to educate myself on the subject?

If there is a lot of interdimensional irrelevance, then splitting a cube into multiple cubes can have a positive effect. The issue with putting them under the same application is that they share memory, which can slow them both down; on the other hand, if you use XREF between them, that shared memory can make it faster. The other advantage might be if you have one app for actual data and one for budget: budget data is typically entered at a less detailed level, which could account for faster retrievals.
If you are just looking for faster retrievals, then perhaps an ASO reporting cube could be helpful.
I don't know of any documentation on this topic.

Similar Messages

  • Split of Cubes to improve the performance of reports

    Hello Friends. We are implementing Finance GL line items for BMW's global automobile operations, and services outsourced to Japan have increased the data volume to 300 million records over the 2 years since we went live. We have 200 company codes.
    How to improve performance:
    1. Please advise on splitting the cubes based on year and company code, which is region based; that is, Europeans would run the report out of one cube and the same report for America would run on another cube.
    But the question here is: if I make 8 cubes (2 for each year: 1 for current-year company code ABC and 1 for current-year DEF; 2 for each year: 1 for previous-year ABC and 1 for previous-year DEF; 2 for each year: 1 for archive-year ABC and 1 for archive-year DEF),
    then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that company code value and building a customer exit variable for the InfoProvider would add a lot of work.
    Is there a good way to do this? Does splitting cubes make sense based on company code, or only on year?
    Please suggest an excellent step-by-step approach for splitting cubes holding 60 million records over 2 years; growth will be the same for the
    next 4 years since more company codes are coming.
    2. Please advise whether splitting the cube will improve report performance, or make it worse, since the query now needs to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either based on year or based on company code (i.e. region). While loading the data, write code in the start routine that filters the data. For example, if you are loading data for three regions, say 1, 2, and 3, your code will be something like:
    DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR REGION EQ '3'.
    This will load the data corresponding to region 1 into your cube.
    You can build your reports either on these cubes directly, or you can put a MultiProvider above these cubes and build the report on that.
    Thanks..
    Shambhu

  • While browsing cube data in Excel the circle pointer starts to spin and Excel goes into a not-responding state; any recommendations to improve performance in Excel?

    hi,
    While browsing the cube data in Excel, the circle pointer starts to spin and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?
    I have 20 measures and 8 dimensions.
    Filtering data by dimensions in Excel takes a very long time.
    Ex:
    I browsed 15 measures in Excel and filtered data based on time (quarter-wise) and the product dimension. It is taking a long time to get the data.
    Can you please help on this issue.
    Regards,
    Samba

    Hi Samba,
    What are the versions of your Office Excel and SQL Server Analysis Services? It would also be helpful if you could share detailed machine resource information captured while the issue occurs.
    In addition, we don't know your cube structure and the underlying relationships. But you can take a look at the following articles to troubleshoot the performance issue:
    Improving Excel's Cube Performance:
    http://richardlees.blogspot.com/2010/04/improving-excels-cube-performance.html
    Excel Against a SSAS Cube Slow: Performance Improvement Tips:
    http://www.msbicentral.com/Blogs/tabid/131/articleType/ArticleView/articleId/136/Excel-Against-a-SSAS-Cube-Slow-Performance-Improvement-Tips.aspx
    Regards, 
    Elvis Long
    TechNet Community Support

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? We are currently running applications on BI-BPS which, because of a huge period range, have a performance issue.
    Could you please share whether BIA would help with BPS's read and write operations, and to what extent performance can be increased?
    An early reply would be appreciated, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing is while running the query on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests. It combines this data with the plan buffer cache and produces the result.
    Hence, if you are facing an issue with query response time, BIA will definitely help.
    Regards,
    Praveen

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can anyone explain the steps to improve the performance of dimension and fact tables?
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table, and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects: the ones that have low cardinality, that is, relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension, and separate the dimension accordingly.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • How to run queries in parallel to improve performance

    I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables. So, even though my query is the same, I need to run it on 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
    To be specific, I will call the operation below to get data:
    declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)*
    src0:SOA_1MIN_POOL_METRIC(),
    src1:SOA_1MIN_POOL_METRIC(),
    src2:SOA_1MIN_POOL_METRIC(),
    src3:SOA_1MIN_POOL_METRIC(),
    src4:SOA_1MIN_POOL_METRIC(),
    src5:SOA_1MIN_POOL_METRIC(),
    src6:SOA_1MIN_POOL_METRIC(),
    src7:SOA_1MIN_POOL_METRIC(),
    src8:SOA_1MIN_POOL_METRIC(),
    src9:SOA_1MIN_POOL_METRIC(),
    src10:SOA_1MIN_POOL_METRIC(),
    src11:SOA_1MIN_POOL_METRIC()
    This method acts as a proxy; it aggregates data from the 12 data tables:
    src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
    src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
    The data source of each table is different (src0, src1, etc.). How can I run these queries in parallel to improve performance?

    Thanks Mike.
    The async function works; from the log I could see the queries being executed in parallel.
    But the behavior is confusing: with the same input it sometimes gives me the right result, and sometimes (especially when a few other applications are running on the machine) it throws the exception below:
    java.lang.IllegalStateException
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
         at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
         at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
         at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
         at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
         at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
         at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
         at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
         at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
         at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
         at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
         at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
         at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
         at RDSWS.getMetricAggNumber(RDSWS.jws:240)
         at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
         at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
         at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
    Below is my code example. First I get data from all 12 queries, each enclosed in the fn-bea:async function; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned their data yet, but the aggregation has already started?
    The metricName, serviceName, opName, and $soaDbRequest values are simply passed in from the operation parameters.
    let $METRIC_RESULT :=
            fn-bea:async(
                for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
               fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
             ... //12 similar queries
            for $Metric_data in $METRIC_RESULT    
            group $Metric_data as $Metric_data_Group        
            by   $Metric_data/ROLE_TYPE as $role_type_id  
            return
            <ns0:RawMetric>
                <ns0:endTime?></ns0:endTime>
                <ns0:target?>{$role_type_id}</ns0:target>
    <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
    <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
    <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
    <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
    </ns0:RawMetric>
    Could you tell me why the result is unstable? Thanks!
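    If the poster's own guess is right (aggregation starting before every async branch has returned), the fix is to join all branches before grouping. Outside ALDSP, the same fan-out/merge pattern can be sketched in plain Java; runQuery below is a hypothetical stand-in for one per-shard data-source call, not the ALDSP API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Sketch: fan the same query out to 12 shards in parallel, then merge.
    public class ParallelShardQuery {
        // hypothetical stand-in for one per-shard call (src0..src11 in the ALDSP example)
        static List<String> runQuery(int shard) {
            return List.of("row-from-shard-" + shard);
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(12);
            List<Future<List<String>>> futures = new ArrayList<>();
            for (int shard = 0; shard < 12; shard++) {
                final int s = shard;
                futures.add(pool.submit(() -> runQuery(s)));
            }
            List<String> merged = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                merged.addAll(f.get()); // blocks until that shard has returned
            }
            pool.shutdown();
            System.out.println(merged.size() + " rows merged"); // aggregate only after the join
        }
    }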

  • Improve Performance with QaaWS with multiple RefreshButtons??

    HI,
    I read that a connection opens a maximum of 2 QaaWS. I want to improve performance.
    Currently I refresh 6 connections with one button. Would it improve performance if I split this 1 button with 6 connections into 3 buttons with 2 connections each?
    Thanks,
    BWBW

    Hi
    HTTP 1.1 limits the number of concurrent HTTP requests to a maximum of two, so your dashboard will actually be able to send and receive at most 2 requests simultaneously; the third will stand by until one of the first two is handled.
    QaaWS performance is mostly affected by database performance, so if you plan to move to LO to improve performance, I'd recommend you use LO from WebI parts, as if you use LO to consume a universe query, you will experience similar performance limitations.
    If you actually want to consume WebI report parts, and need report filters, you can also consider XI 3.1 SP2 BI Services, where performance is better than QaaWS, and interactions are also easier to implement.
    Hope that helps,
    David.
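    To picture the two-connection ceiling David describes, here is a small illustrative Java model (not dashboard code, just an assumption-level analogy): six requests submitted at once to a pool of two complete in three waves, exactly as six QaaWS calls behind one button would.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Models HTTP 1.1's two-connections-per-host limit with a two-thread pool.
    public class TwoConnectionLimit {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService browser = Executors.newFixedThreadPool(2);
            for (int i = 1; i <= 6; i++) {
                final int req = i;
                browser.submit(() -> {
                    System.out.println("request " + req + " started");
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                    System.out.println("request " + req + " finished");
                });
            }
            browser.shutdown();
            browser.awaitTermination(10, TimeUnit.SECONDS); // three waves of two
        }
    }

    Splitting one button into three therefore changes when requests are queued, not how many run at once.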

  • Query performance on remote/virtual cube

    All:
    We are on BW 3.5.
    Can anyone suggest anything about improving query performance on remote/virtual cubes? Analysis shows query performance is suffering at the database level.
    I am looking for advice beyond hardware and database parameters. The current hardware and database parameters seem to work fine with basic cubes.
    Another solution is a datamart, but can anything be done before, or without, going to a datamart?
    Any help will be appreciated.
    Thanks

    Hi,
    In this case, try to find where most of the time is spent by using transaction ST03.
    If most of the time is consumed in the front end, rearrange the query by putting fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
    Use the reporting agent to schedule the query in the background to fill the cache, then rerun the query.
    Reg,
    Vishwa

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB, and LOB index use the same tablespace. I missed adding this point (moving the LOB to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to?
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations for this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job on the client, the XML data gets deleted.
    Regarding binding of LOB from client side, will check on that side also to reduce round trips.
    By changing the block size, I can keep db_32K_cache_size=2G and keep this table in the CACHE. If I directly put my table in the CACHE, it will age out all other operations from the buffer, which makes things worse for us.
    This insert is part of transaction( Registration of a finger print) and this is the only statement taking time as of now compared to other statements in the transaction.
    Thanks,
    Arun

  • How to preload sound into memory to improve performance?

    Hello all
    I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each), so they can be preloaded into memory. But I don't really know how to do that... This is my current code. Performance is really important here, so the faster users can hear the sounds, the better...
    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;
    public class PlaySound implements ActionListener {
         private Clip clip = null;
         public void play(String name) {
              if (clip != null) {
                   clip.stop();
                   clip = null;
              }
              loadClip(name);
              if (clip != null)   // loadClip may have failed
                   clip.start();
         }
         private void loadClip(String fnm) {
              try {
                   AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
                   AudioFormat format = stream.getFormat();
                   DataLine.Info info = new DataLine.Info(Clip.class, format);
                   if (!AudioSystem.isLineSupported(info)) {
                        JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
                   } else {
                        clip = (Clip) AudioSystem.getLine(info);
                        clip.open(stream);
                        stream.close();
                   }
              } catch (Exception e) {
                   JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
              }
         }
         public void actionPerformed(ActionEvent e) {
              // required by ActionListener; unused in this snippet
         }
         public static void main(String[] args) {
              new PlaySound().play("a wav file name");   // play is an instance method
         }
    }
    I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!

    The message above should be:
    OMG, me dumb you smart Florian...
    Thank you for your suggestion... While it may not be the best or anything close to what I thought it would be, it's certainly one way to do it and better than what I've got now...
    Thanks again Florian, I really appreciate it!!
    BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB
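    For later readers, a minimal sketch of the preloading idea itself (file names are assumed; the key point is that Clip.open reads the audio into memory up front, so play time does no file I/O):

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;
    import javax.sound.sampled.*;

    // Open every Clip once at startup; start() is then nearly instant.
    public class PreloadedSounds {
        private final Map<String, Clip> clips = new HashMap<>();

        public void load(String name) throws Exception {
            AudioInputStream stream = AudioSystem.getAudioInputStream(new File(name + ".wav"));
            Clip clip = (Clip) AudioSystem.getLine(new DataLine.Info(Clip.class, stream.getFormat()));
            clip.open(stream);         // pulls the whole file into memory now
            stream.close();
            clips.put(name, clip);
        }

        public void play(String name) {
            Clip clip = clips.get(name);
            if (clip == null) return;  // unknown name: nothing to play
            clip.stop();
            clip.setFramePosition(0);  // rewind so the clip can be replayed
            clip.start();
        }
    }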

  • How to improve performance of MediaPlayer?

    I tried to use the MediaPlayer with an On2 VP6 FLV movie.
    Showing a video with a resolution of 1024x768 works.
    Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s leads to the video signal lagging a couple of seconds behind the audio signal. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
    Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
    Does somebody know a solution for these problems?
    Cheers
    masim

    Duplicate thread: How to improve performance of attached query

  • How To Perform Lot Split Transactions Using Transaction Open Interface (MTI)

    Can anyone give me some guidance on how to perform lot split transactions using MTI?
    I am using the following code:
    DECLARE
    l_transaction_type_id NUMBER := 83;
    l_transaction_action_id NUMBER := 41;
    l_transaction_source_type_id NUMBER := 13;
    l_org_id NUMBER := 1884;
    l_txn_header_id NUMBER;
    l_txn_if_id1 NUMBER;
    l_txn_if_id2 NUMBER;
    l_txn_if_id3 NUMBER;
    l_parent_id NUMBER;
    l_sysdate DATE;
    l_item_id NUMBER :=287996;
    l_user_id NUMBER;
    l_distribution_account_id NUMBER;
    l_exp_date DATE;
    BEGIN
    --For Lot Merge, there should be only one resultant lot.
    --The transaction_quantity populated in MTI/MTLI should be the entire
    --quantity that is available to transact for the org/sub/item/locator/LPN in
    --that particular lot number.
    --Get transaction_header_id for all the MTIs
    SELECT APPS.mtl_material_transactions_s.NEXTVAL
    INTO l_txn_header_id
    FROM sys.dual;
    --Get transaction_interface_id of resultant record
    SELECT APPS.mtl_material_transactions_s.NEXTVAL
    INTO l_txn_if_id1
    FROM sys.dual;
    l_parent_id := l_txn_if_id1;
    l_sysdate := SYSDATE;
    l_user_id := -1; --substitute with a valid user_id
    l_distribution_account_id := NULL; --needed for lot translate
    l_exp_date := NULL; --set if required
    --Populate the MTI record for resultant record
    INSERT INTO MTL_TRANSACTIONS_INTERFACE (
    transaction_interface_id,
    transaction_header_id,
    Source_Code,
    Source_Line_Id,
    Source_Header_Id,
    Process_flag,
    Transaction_Mode,
    Lock_Flag,
    Inventory_Item_Id,
    revision,
    Organization_id,
    Subinventory_Code,
    Locator_Id,
    Transaction_Type_Id,
    Transaction_Source_Type_Id,
    Transaction_Action_Id,
    Transaction_Quantity,
    Transaction_UOM,
    Primary_Quantity,
    Transaction_Date,
    Last_Update_Date,
    Last_Updated_By,
    Creation_Date,
    Created_By,
    distribution_account_id,
    parent_id,
    transaction_batch_id,
    transaction_batch_seq,
    lpn_id,
    transfer_lpn_id
    ) VALUES (
    l_txn_if_id1, --transaction_interface_id
    l_txn_header_id, --transaction_header_id
    'INV', --source_code
    -1, --source_line_id
    -1, --source_header_id
    1, --process_flag
    3, --transaction_mode
    2, --lock_flag
    l_item_id, --inventory_item_id
    null, --revision
    l_org_id, --organization_id
    'EACH', --subinventory_code
    1198, --locator_id
    l_transaction_type_id, --transaction_type_id
    l_transaction_source_type_id, --transaction_source_type_id
    l_transaction_action_Id, --l_transaction_action_id
    100000, --transaction_quantity
    'EA', --transaction_uom
    100000, --primary_quantity
    l_sysdate, --Transaction_Date
    l_sysdate, --Last_Update_Date
    l_user_id, --Last_Updated_by
    l_sysdate, --Creation_Date
    l_user_id, --Created_by
    l_distribution_account_id, --distribution_account_id
    l_parent_id, --parent_id
    l_txn_header_id, --transaction_batch_id
    2, --transaction_batch_seq
    NULL, --lpn_id (for source MTI)
    NULL --transfer_lpn_id (for resultant MTIs)
    );
    --Insert MTLI corresponding to the resultant MTI record
    INSERT INTO MTL_TRANSACTION_LOTS_INTERFACE(
    transaction_interface_id
    , Source_Code
    , Source_Line_Id
    , Process_Flag
    , Last_Update_Date
    , Last_Updated_By
    , Creation_Date
    , Created_By
    , Lot_Number
    , lot_expiration_date
    , Transaction_Quantity
    , Primary_Quantity
    ) VALUES (
    l_txn_if_id1 --transaction_interface_id
    , 'INV' --Source_Code
    , -1 --Source_Line_Id
    , 'Y' --Process_Flag
    , l_sysdate --Last_Update_Date
    , l_user_id --Last_Updated_by
    , l_sysdate --Creation_date
    , l_user_id --Created_By
    , 'Q0000.1' --Lot_Number
    , l_exp_date --Lot_Expiration_Date
    , 100000 --transaction_quantity
    , 100000 --primary_quantity
    );
    --Get transaction_interface_id of the second resultant record
    SELECT APPS.mtl_material_transactions_s.NEXTVAL
    INTO l_txn_if_id3
    FROM sys.dual;
    --Populate the MTI record for the second resultant record
    INSERT INTO MTL_TRANSACTIONS_INTERFACE (
    transaction_interface_id,
    transaction_header_id,
    Source_Code,
    Source_Line_Id,
    Source_Header_Id,
    Process_flag,
    Transaction_Mode,
    Lock_Flag,
    Inventory_Item_Id,
    revision,
    Organization_id,
    Subinventory_Code,
    Locator_Id,
    Transaction_Type_Id,
    Transaction_Source_Type_Id,
    Transaction_Action_Id,
    Transaction_Quantity,
    Transaction_UOM,
    Primary_Quantity,
    Transaction_Date,
    Last_Update_Date,
    Last_Updated_By,
    Creation_Date,
    Created_By,
    distribution_account_id,
    parent_id,
    transaction_batch_id,
    transaction_batch_seq,
    lpn_id,
    transfer_lpn_id
    ) VALUES (
    l_txn_if_id3, --transaction_interface_id
    l_txn_header_id, --transaction_header_id
    'INV', --source_code
    -1, --source_line_id
    -1, --source_header_id
    1, --process_flag
    3, --transaction_mode
    2, --lock_flag
    l_item_id, --inventory_item_id
    null, --revision
    l_org_id, --organization_id
    'EACH', --subinventory_code
    1198, --locator_id
    l_transaction_type_id, --transaction_type_id
    l_transaction_source_type_id, --transaction_source_type_id
    l_transaction_action_Id, --l_transaction_action_id
    100000, --transaction_quantity
    'EA', --transaction_uom
    100000, --primary_quantity
    l_sysdate, --Transaction_Date
    l_sysdate, --Last_Update_Date
    l_user_id, --Last_Updated_by
    l_sysdate, --Creation_Date
    l_user_id, --Created_by
    l_distribution_account_id, --distribution_account_id
    l_parent_id, --parent_id
    l_txn_header_id, --transaction_batch_id
    3, --transaction_batch_seq
    NULL, --lpn_id (for source MTI)
    NULL --transfer_lpn_id (for resultant MTIs)
    );
    --Insert MTLI corresponding to the resultant MTI record
    INSERT INTO MTL_TRANSACTION_LOTS_INTERFACE(
    transaction_interface_id
    , Source_Code
    , Source_Line_Id
    , Process_Flag
    , Last_Update_Date
    , Last_Updated_By
    , Creation_Date
    , Created_By
    , Lot_Number
    , lot_expiration_date
    , Transaction_Quantity
    , Primary_Quantity
    ) VALUES (
    l_txn_if_id3 --transaction_interface_id
    , 'INV' --Source_Code
    , -1 --Source_Line_Id
    , 'Y' --Process_Flag
    , l_sysdate --Last_Update_Date
    , l_user_id --Last_Updated_by
    , l_sysdate --Creation_date
    , l_user_id --Created_By
    , 'Q0000.1' --Lot_Number
    , l_exp_date --Lot_Expiration_Date
    , 100000 --transaction_quantity
    , 100000 --primary_quantity
    );
    --Get transaction_interface_id of Source record-1
    SELECT APPS.mtl_material_transactions_s.NEXTVAL
    INTO l_txn_if_id2
    FROM sys.dual;
    --Populate the MTI record for Source record-1
    INSERT INTO MTL_TRANSACTIONS_INTERFACE (
    transaction_interface_id,
    transaction_header_id,
    Source_Code,
    Source_Line_Id,
    Source_Header_Id,
    Process_flag,
    Transaction_Mode,
    Lock_Flag,
    Inventory_Item_Id,
    revision,
    Organization_id,
    Subinventory_Code,
    Locator_Id,
    Transaction_Type_Id,
    Transaction_Source_Type_Id,
    Transaction_Action_Id,
    Transaction_Quantity,
    Transaction_UOM,
    Primary_Quantity,
    Transaction_Date,
    Last_Update_Date,
    Last_Updated_By,
    Creation_Date,
    Created_By,
    distribution_account_id,
    parent_id,
    transaction_batch_id,
    transaction_batch_seq,
    lpn_id,
    transfer_lpn_id
    ) VALUES (
    l_txn_if_id2, --transaction_interface_id
    l_txn_header_id, --transaction_header_id
    'INV', --source_code
    -1, --source_line_id
    -1, --source_header_id
    1, --process_flag
    3, --transaction_mode
    2, --lock_flag
    l_item_id, --inventory_item_id
    null, --revision
    l_org_id, --organization_id
    'EACH', --subinventory_code
    1198, --locator_id
    l_transaction_type_id, --transaction_type_id
    l_transaction_source_type_id, --transaction_source_type_id
    l_transaction_action_Id, --transaction_action_id
    -200000, --transaction_quantity
    'EA', --transaction_uom
    -200000, --primary_quantity
    l_sysdate, --Transaction_Date
    l_sysdate, --Last_Update_Date
    l_user_id, --Last_Updated_by
    l_sysdate, --Creation_Date
    l_user_id, --Created_by
    l_distribution_account_id, --distribution_account_id
    l_parent_id, --parent_id
    l_txn_header_id, --transaction_batch_id
    1, --transaction_batch_seq
    NULL, --lpn_id (for source MTI)
    NULL --transfer_lpn_id (for resultant MTIs)
    );
    --Insert MTLI corresponding to the Source record-1
    INSERT INTO MTL_TRANSACTION_LOTS_INTERFACE(
    transaction_interface_id
    , Source_Code
    , Source_Line_Id
    , Process_Flag
    , Last_Update_Date
    , Last_Updated_By
    , Creation_Date
    , Created_By
    , Lot_Number
    , lot_expiration_date
    , Transaction_Quantity
    , Primary_Quantity
    ) VALUES (
    l_txn_if_id2 --transaction_interface_id
    , 'INV' --Source_Code
    , -1 --Source_Line_Id
    , 'Y' --Process_Flag
    , l_sysdate --Last_Update_Date
    , l_user_id --Last_Updated_by
    , l_sysdate --Creation_date
    , l_user_id --Created_By
    , 'Q0000' --Lot_Number
    , l_exp_date --Lot_Expiration_Date
    , -200000 --transaction_quantity
    , -200000 --primary_quantity
    );
    END;

    The first MTI record should be the source record, i.e., it should have a negative transaction quantity.
    The new set of MTI records should have positive transaction quantities.
    Also ensure that the sum of the transaction quantities for the set is 0; for example, a source line of -200,000 balanced by two resultant lots of +100,000 each.
    What is the error that you are getting?
    Thanks,
    Hrishi.

  • How to improve performance when there are many TextBlocks in ItemsControl items?

       Hi,
       I'm trying to find a way to improve performance in a situation where an ItemsControl uses UI and data virtualization and each item in that control has 36 TextBlocks. Basically each item is a single string; there are so many TextBlocks to allow assigning different brushes to different parts of the string. Performance of this construction is terrible. I have 37 items visible on the screen, and if I try to scroll up or down it scrolls into black space and then takes a second or two to show the items.
       I tried different things. For example, the most successful performance-wise was to replace the TextBlocks with Borders and then draw bitmaps. In other words, I prepared 127 bitmaps, one for each character (I need ASCII only), and then used those bitmaps to set Border.Backgrounds. It improved performance about 1.5 - 2 times but consumed much more memory (which is not surprising, of course). The required amount of memory is so big that it throws OutOfMemoryException on a 512MB emulator but works on 1GB. As a result I don't think it is a good solution.
       Another thing that worked perfectly is to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5 - 10 times, but I lose the ability to set different colors for different parts of the string. It seems that performance degrades dramatically as the number of TextBlocks increases. Is there another technique to draw strings, where literally each character can be a different color, with decent performance?
    Thank you
    Alex

       Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5 - 2 times faster), but it is not even close to the case with just a couple of TextBlocks in the ItemsControl item. Any other ideas?
    Alex

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that executing and fetching records from the database takes a very long time. I have also created statistics, but with no benefit. What do I have to do to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine and Windows XP with 512 MB RAM on the client machine?
    Please give me advice on improving performance.
    Thank u...!

    "What and where to change parameters and values?" Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
    I am working on a distributed application that uses Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process the message, then delete the message from the queue. The module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine, in that they meet the logical requirements. A pretty typical queue scenario.
    Now, coming to the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple cycles of performance profiling and then improved the identified "HOT" paths/functions.
    It all came down to a point where the Azure queue pull and delete calls are the two most time-consuming operations. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), and this paid off by reducing processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel to improve overall performance.
    i am processing these messages in parallel so as to improve on overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue;
    // assume nothing fancy inside, vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        // DoSomething does some background processing
        try { DoSomething(bMessage); }
        catch { /* log exception */ }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling results show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure queue delete call itself. Is there a better, faster way to perform the delete, a bulk delete, etc.?
    With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete call I am making? As of now the queue has overloaded methods for deleting a message: one accepts a message object and another accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification of the question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to issue the delete in parallel at the same time you run the work in DoSomething, and add the message back into the queue if DoSomething fails. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, queue the delete to a threadpool after the work succeeds: fire and forget. However, if you're processing at 25/sec and 90% of the time sits in the delete, you'd quickly accumulate delete calls for the threadpool and never catch up. At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.
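    A rough Java sketch of the "fire and forget" variant Darin describes; deleteMessage here is a hypothetical placeholder for the SDK's id-plus-pop-receipt delete call (not a specific Azure API), and the bounded pool is the guard against the catch-up problem he warns about:

    import java.util.concurrent.*;

    // Hand each delete to a small background pool so the worker moves on
    // to the next message instead of blocking on the delete round trip.
    public class AsyncDeleter {
        // Bounded queue: if deletes can't keep up, the submitting thread runs
        // the delete itself instead of accumulating work forever.
        private final ExecutorService deletePool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());

        public void deleteAsync(String messageId, String popReceipt) {
            deletePool.submit(() -> {
                try {
                    deleteMessage(messageId, popReceipt); // placeholder for the real call
                } catch (Exception e) {
                    // log only; an undeleted message simply reappears after its
                    // visibility timeout and is processed again
                }
            });
        }

        // hypothetical stand-in for the queue SDK's delete(messageId, popReceipt)
        private void deleteMessage(String messageId, String popReceipt) { }
    }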

Maybe you are looking for

  • How to populate the field Discount Percent (DIS) in the outbound S1ORDEXC?

    Hello all, I am working with the Spec2000 to send out the IDOC S1ORDEXC. Does anyone know how to populate the field Discount Percent (DIS)? This field length of Discount Percent is 2 characters, but our customer discount in the Princing Conditions (S

  • New versions of swf files wont download automatically?!

    Let's say I have a main swf file. If I edit that and upload, chances are good that it will be re-downloaded on my client's browsers when they refresh. However, if that swf then loads a swf using "loadMovie()" that has been edited and uploaded, the br

  • Displaying MS Word File in browser

    Hello, Can anyone pls tell about the method to display MS word file present in a folder in my site on the browser. I dont want it to give it as download dialog box. Thanks Sharath

  • Query SQL with variables Parameters and user defined tables

    Hi  everyone I got a problem about Query SQL [dbo].[@CSOURCE]  is a user defined table select * from [dbo].[@CSOURCE] you can get the result in following code    name T01      newspaper T02      TV T03      radio T04      friends when I execute the f

  • Scrolling Feeds !Newbie Alert!

    Hi, I have a basic website which I'm trying to make a little more interesting. I have added a news feed which has around 5 items on it per day and comprises of small pictures and text. However, the news feed is the whole length of the page and I kind