I need help to improve speed

Hello,
I have a JTable with 33 necessary columns. The row count can be anywhere from 1 to 50, which gives a maximum of 1650 values to save to the database.
The current system creates an object for each row (33 values), fills them into an ArrayList and sends it to my DAO, which takes the objects one by one and stores them. The speed this way is "okay, could be better".
When retrieving the data, I read row by row, create an object, fill my ArrayList, then go through it object by object and put the values into each table row.
1 row goes fast, 5 rows take some time, 50 rows take forever...
What can speed things up here? If you need some code, please let me know.
I've been thinking of something thread based, 1 thread for each row; does that sound smart?
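A note on the threading idea: one thread per row probably won't buy much, because each row still costs a separate database statement; what usually matters is doing the whole save/load in as few statements as possible and keeping it off the Event Dispatch Thread so the UI doesn't freeze. A minimal sketch, assuming hypothetical saveAndReload() and fillTable() helpers that wrap the four steps described below:

     // Minimal sketch (not the original code): run the database work on ONE background
     // thread with javax.swing.SwingWorker so the UI stays responsive.
     // saveAndReload() and fillTable() are hypothetical methods wrapping steps 1-4 below.
     new SwingWorker<ArrayList, Void>() {
          protected ArrayList doInBackground() throws Exception {
               return saveAndReload();         // steps 1-4: save, change week, query
          }
          protected void done() {
               try {
                    fillTable(get());          // update mdlRapport back on the EDT
               } catch (Exception e) {
                    e.printStackTrace();
               }
          }
     }.execute();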

My JTable lists "meetings" for one week; one row presents one meeting's data.
There are 2 JButtons used for changing the week, either +1 or -1 from the week number you are standing in. I will now show everything that happens with one click. That's 4 steps.
Step 1: Saving the current data (the meetings for the current week)
     private void saveMøter() {
          // remove the existing rows for this week, then re-insert everything
          dao.slettMøter(år, uke);
          ArrayList møter = new ArrayList();
          for (int i = 0; i < mdlRapport.getRowCount(); i++) {
               UkerapportMøte møte =
                    new UkerapportMøte(
                         år,
                         uke,
                         (String) mdlRapport.getValueAt(i, 0),
                         (String) mdlRapport.getValueAt(i, 1),
                         (String) mdlRapport.getValueAt(i, 32),
                         (String) mdlRapport.getValueAt(i, 33));
               UkerapportMøte møteX = formatMøte(møte);
               møter.add(møteX);
          }
          dao.lagreMøter(møter);
     }
Comments:
dao.slettMøter(år, uke): this method removes everything from the database for the current week.
dao.lagreMøter(møter): this method saves everything for the current week to the database. That method is posted below; maybe that is the bottleneck.
     public void lagreMøter(ArrayList møter) {
          Statement stmt = null;
          String sql = null;
          UkerapportMøte møte = null;
          try {
               for (int i = 0; i < møter.size(); i++) {
                    møte = (UkerapportMøte) møter.get(i);
                    // one INSERT statement is built by string concatenation per meeting
                    sql = " INSERT INTO Ukerapport VALUES("
                              + møte.getÅrID()
                              + "," + møte.getUkeID()
                              + ",'" + møte.getMøtedag()
                              + "','" + møte.getMøteKlokke()
                              + "','" + møte.getMøteSted()
                              + "','" + møte.getMøteKunde()
                              + "','" + møte.getPrivatBilP()
                              + "','" + møte.getPrivatBilS()
                              + "','" + møte.getPrivatVillaP()
                              + "','" + møte.getPrivatVillaS()
                              + "','" + møte.getPrivatServP()
                              + "','" + møte.getPrivatServS()
                              + "','" + møte.getNæringslivAntP()
                              + "','" + møte.getNæringslivAntS()
                              + "','" + møte.getMersalgKrP()
                              + "','" + møte.getMersalgKrS()
                              + "','" + møte.getMersalgAntP()
                              + "','" + møte.getMersalgAntS()
                              + "','" + møte.getLivP()
                              + "','" + møte.getLivS()
                              + "','" + møte.getSparAntP()
                              + "','" + møte.getSparAntS()
                              + "','" + møte.getSparKrP()
                              + "','" + møte.getSparKrS()
                              + "','" + møte.getKollpensjonAntP()
                              + "','" + møte.getKollpensjonAntS()
                              + "','" + møte.getKollpensjonKrP()
                              + "','" + møte.getKollpensjonKrS()
                              + "','" + møte.getFinansAntP()
                              + "','" + møte.getFinansAntS()
                              + "','" + møte.getFinansKrP()
                              + "','" + møte.getFinansKrS()
                              + "','" + møte.getRef()
                              + "','" + møte.getProv()
                              + "','" + møte.getForening()
                              + "','" + møte.getKommentar()
                              + "')";
                    // a new Statement is created and executed for every single row
                    stmt = con.createStatement();
                    int x = stmt.executeUpdate(sql);
               }
          } //end try
          catch (SQLException e) {
               System.out.println("UkerapportDAO: Klarer ikke å utføre spørringen: " + sql);
               System.out.println("--lagreMøter() " + e.getMessage() + "\n");
          } finally {
               try {
                    if (stmt != null)
                         stmt.close();
               } catch (SQLException sqlex) {
                    System.out.println("UkerapportDAO: Klarer ikke å lukke");
                    System.out.println("--lagreMøter() " + sqlex.getMessage() + "\n");
               }
          }
     }
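Building one INSERT string per row and executing it with a fresh Statement means a parse plus a network round trip for every row, which matches the "1 row fast, 50 rows forever" pattern. Below is a minimal sketch (not the original DAO) of the same save done with a PreparedStatement and JDBC batching; it assumes java.sql.PreparedStatement is imported, that getÅrID()/getUkeID() return int, and that the remaining getters return String, as the quoted concatenation above suggests.

     // Hedged sketch: one PreparedStatement reused for every row, sent as a single
     // batch inside one transaction.
     public void lagreMøter(ArrayList møter) throws SQLException {
          StringBuffer ph = new StringBuffer("?");
          for (int i = 1; i < 36; i++) ph.append(",?");          // 36 placeholders
          String sql = "INSERT INTO Ukerapport VALUES (" + ph + ")";
          PreparedStatement ps = null;
          try {
               con.setAutoCommit(false);                         // one commit for the whole week
               ps = con.prepareStatement(sql);
               for (int i = 0; i < møter.size(); i++) {
                    UkerapportMøte møte = (UkerapportMøte) møter.get(i);
                    ps.setInt(1, møte.getÅrID());
                    ps.setInt(2, møte.getUkeID());
                    ps.setString(3, møte.getMøtedag());
                    ps.setString(4, møte.getMøteKlokke());
                    ps.setString(5, møte.getMøteSted());
                    ps.setString(6, møte.getMøteKunde());
                    ps.setString(7, møte.getPrivatBilP());
                    ps.setString(8, møte.getPrivatBilS());
                    ps.setString(9, møte.getPrivatVillaP());
                    ps.setString(10, møte.getPrivatVillaS());
                    ps.setString(11, møte.getPrivatServP());
                    ps.setString(12, møte.getPrivatServS());
                    ps.setString(13, møte.getNæringslivAntP());
                    ps.setString(14, møte.getNæringslivAntS());
                    ps.setString(15, møte.getMersalgKrP());
                    ps.setString(16, møte.getMersalgKrS());
                    ps.setString(17, møte.getMersalgAntP());
                    ps.setString(18, møte.getMersalgAntS());
                    ps.setString(19, møte.getLivP());
                    ps.setString(20, møte.getLivS());
                    ps.setString(21, møte.getSparAntP());
                    ps.setString(22, møte.getSparAntS());
                    ps.setString(23, møte.getSparKrP());
                    ps.setString(24, møte.getSparKrS());
                    ps.setString(25, møte.getKollpensjonAntP());
                    ps.setString(26, møte.getKollpensjonAntS());
                    ps.setString(27, møte.getKollpensjonKrP());
                    ps.setString(28, møte.getKollpensjonKrS());
                    ps.setString(29, møte.getFinansAntP());
                    ps.setString(30, møte.getFinansAntS());
                    ps.setString(31, møte.getFinansKrP());
                    ps.setString(32, møte.getFinansKrS());
                    ps.setString(33, møte.getRef());
                    ps.setString(34, møte.getProv());
                    ps.setString(35, møte.getForening());
                    ps.setString(36, møte.getKommentar());
                    ps.addBatch();                               // queue the row, don't execute yet
               }
               ps.executeBatch();                                // all rows in one go
               con.commit();
          } finally {
               if (ps != null) ps.close();
               con.setAutoCommit(true);
          }
     }

Even without the batch, just reusing one PreparedStatement inside a single transaction usually cuts the save time a lot, because the statement is parsed once and there is only one commit instead of one per row; placeholders also handle empty values and apostrophes without manual quoting.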
Step 2:
I update the values for week and year, depending on whether I go one week up or down. I don't find it necessary to show that code.
Step 3: Remove everything from the JTable
     private void clearTable() {
          // remove from the last row down; removing row i while counting up
          // shifts the remaining rows and skips every other one
          for (int i = mdlRapport.getRowCount() - 1; i >= 0; i--) {
               mdlRapport.removeRow(i);
          }
     }
Comments: I don't think there's anything to do here...
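As a side note, DefaultTableModel can also clear all rows in one call, which avoids firing a table event per removed row. A one-line sketch, assuming mdlRapport is a DefaultTableModel:

     private void clearTable() {
          mdlRapport.setRowCount(0);     // drops every row and fires a single change event
     }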
Step 4: Get the new meetings from the database.
private void loadMøter() {
          ArrayList nyeMøter = dao.getMøter(år, uke);
          mdlRapport.setRowCount(nyeMøter.size());
          for (int i = 0; i < nyeMøter.size(); i++) {
               UkerapportMøte møte = (UkerapportMøte) nyeMøter.get(i);
               if (møte.getMøtedag().equals("Blank")) {
                    mdlRapport.setValueAt("", i, 0);
               } else {
                    mdlRapport.setValueAt(møte.getMøtedag(), i, 0);
               }
               if (møte.getMøteKlokke().equals("Blank")) {
                    mdlRapport.setValueAt("", i, 1);
               } else {
                    mdlRapport.setValueAt(møte.getMøteKlokke(), i, 1);
               }
               // ... the same pattern repeats for columns 2-30 (skipped in this post) ...
               if (møte.getProv().equals("0")) {
                    mdlRapport.setValueAt("", i, 31);
               } else {
                    mdlRapport.setValueAt(møte.getProv() + "", i, 31);
               }
               if (møte.getForening().equals("Blank")) {
                    mdlRapport.setValueAt("", i, 32);
               } else {
                    mdlRapport.setValueAt(møte.getForening(), i, 32);
               }
               if (møte.getKommentar().equals("Blank")) {
                    mdlRapport.setValueAt("", i, 33);
               } else {
                    mdlRapport.setValueAt(møte.getKommentar(), i, 33);
               }
          }
     }
Comments: I skipped a lot of code here. The real method is of course much longer; I just kept the first ifs and the last ones.
ArrayList nyeMøter = dao.getMøter(år, uke): this method is posted below.
     public ArrayList getMøter(int årID, int ukeID) {
          Statement stmt = null;
          ResultSet rs = null;
          String sql = "SELECT * FROM Ukerapport WHERE ukeID=" + ukeID + " AND årID=" + årID;
          list = new ArrayList();
          try {
               stmt = con.createStatement();
               rs = stmt.executeQuery(sql);
               while (rs.next()) {
                    // one UkerapportMøte per row, built from all 36 columns
                    møte = new UkerapportMøte(
                         rs.getInt(1), rs.getInt(2), rs.getString(3), rs.getString(4),
                         rs.getString(5), rs.getString(6), rs.getString(7), rs.getString(8),
                         rs.getString(9), rs.getString(10), rs.getString(11), rs.getString(12),
                         rs.getString(13), rs.getString(14), rs.getString(15), rs.getString(16),
                         rs.getString(17), rs.getString(18), rs.getString(19), rs.getString(20),
                         rs.getString(21), rs.getString(22), rs.getString(23), rs.getString(24),
                         rs.getString(25), rs.getString(26), rs.getString(27), rs.getString(28),
                         rs.getString(29), rs.getString(30), rs.getString(31), rs.getString(32),
                         rs.getString(33), rs.getString(34), rs.getString(35), rs.getString(36));
                    list.add(møte);
               }
               rs.close();
               stmt.close();
          } catch (SQLException sqlex) {
               System.out.println("Error: Klarer ikke å utføre spørringen. " + sqlex.getMessage());
               System.out.println("--getMøter()");
          }
          return list;
     }
I think this is the slow part. The if tests were made because I had trouble saving empty meeting values in the database, so when a meeting value is missing I save it as "Blank", and when I read the data I check for "Blank" and, if true, set the value to "".
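A note on the "Blank" workaround: with a PreparedStatement you can simply store NULL when a value is missing (setString with a null, or setNull) and normalise on the way back in, so the per-column if/else in loadMøter collapses. A tiny hypothetical helper, just to illustrate the idea:

     // Hypothetical helper, not part of the original code: turn a missing value
     // (NULL from the database, or the old "Blank" sentinel) into "" for the table.
     private static String displayValue(String dbValue) {
          if (dbValue == null || dbValue.equals("Blank")) {
               return "";
          }
          return dbValue;
     }

     // usage inside loadMøter():
     //     mdlRapport.setValueAt(displayValue(møte.getMøtedag()), i, 0);

Since getMøter already pulls the whole week with a single SELECT, the read side is one round trip; the bigger wins are usually the save side (the delete plus a batched insert) and keeping all of it off the Event Dispatch Thread, as sketched near the top. Using a PreparedStatement with ? parameters for the SELECT is also safer than concatenating ukeID and årID, though it won't change the speed much.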
Can you help me make this faster?

Similar Messages

  • Need help on improving expdp speed

    I just tested an export of one table of 3.5 GB; it took almost an hour and a half.
    See logs here:
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:27:53
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=exp_t454.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 22.59 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "ADMIN"."T454" 3.833 GB 3340156 rows
    Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
    /u01/export/admin_migration/exp_admin_t454_01.dmp
    /u01/export/admin_migration/exp_admin_t454_02.dmp
    Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 23:55:15
    my par file looks like this:
    tables=admin.t454 DIRECTORY=data_pump_dir dumpfile=exp_admin_t454_%U.dmp logfile=exp_admin_t454.log parallel=3 filesize=5000m compression=all
    in the middle of expdp, I ran a status of the job and got this:
    admin1 $ expdp system attach=SYS_EXPORT_TABLE_01
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:49:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Password:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Job: SYS_EXPORT_TABLE_01
    Owner: SYSTEM
    Operation: EXPORT
    Creator Privs: TRUE
    GUID: BEC5BBC2966860B0E0430AEC944B60B0
    Start Time: Saturday, 28 April, 2012 22:28:07
    Mode: TABLE
    Instance: admin1
    Max Parallelism: 3
    EXPORT Job Parameters:
    Parameter Name Parameter Value:
    CLIENT_COMMAND system/******** parfile=exp_t454.par
    COMPRESSION ALL
    State: EXECUTING
    Bytes Processed: 0
    Current Parallelism: 3
    Job Error Count: 0
    Dump File: /u01/export/admin_migration/exp_admin_t454_%u.dmp
    size: 5,242,880,000
    Dump File: /u01/export/admin_migration/exp_admin_t454_01.dmp
    size: 5,242,880,000
    bytes written: 4,096
    Dump File: /u01/export/admin_migration/exp_admin_t454_02.dmp
    size: 5,242,880,000
    bytes written: 28,672
    Worker 1 Status:
    Process Name: DW01
    State: WORK WAITING
    Worker 2 Status:
    Process Name: DW02
    State: EXECUTING
    Object Schema: admin
    Object Name: T454
    Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
    Completed Objects: 1
    Total Objects: 1
    Completed Rows: 1,695,732
    Worker Parallelism: 1
    Export>
    The database version is 11.1.0.7, and os is aix.
    I wonder what I can do to speed up the expdp. I have to do a migration soon that requires exporting a 1 TB database with expdp.
    Thanks in advance.

    Is the table partitioned ? Have you tried traditional export to see how long it takes ?
    Pl see these MOS Docs for possible causes
    Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]
    Bug 12780993 - Poor Datapump EXPDP performance for ESTIMATE phase [ID 12780993.8]     
    Data Pump Export (EXPDP) Runs Very Slow After Upgrade From 11.1.0.6 to 11.1.0.7 [ID 1075468.1]     
    Oracle DataPump Export (EXPDP) Is Slow On Partitioned Tables [ID 1300895.1]     
    Expdp Slow for a Small Table [ID 950995.1]     
    Slow Performance of DataPump Export during Estimate Phase [ID 1354535.1]     
    HTH
    Srini

  • EP6 sp12 Performance Issue, Need help to improve performance

    We have a Portal development environment with EP6.0 sp12.
    What we are experiencing is a performance issue. It's not extremely slow, but slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 secs for the first home page to appear. Also, we currently have the Portal hooked up to 3 xAPPS systems and one BW system. The time taken for a BW query to appear (with selection screen) is also more than 10 secs. However, access to one other xAPPS system is comparatively faster.
    Is there a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
    A simple guide, easy to implement, with immediate effect is what we are looking for in the short term.
    Thanks
    Arunabha

    Hi Eric,
      I have searched but didn't find the Portal Tuning and Optimization Guide as you suggested. Can you help me find this?
    Subrato,
      This is good and I would obviously read through it. The issue here is that this is only for the network.
      But do you know any other guide which is very basic (maybe 10 steps) and shows the process step by step? It would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often
    But I am really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
    It would be very helpful if you or anybody (who has actually done some performance tuning) can send a basic list of steps that I can do immediately, instead of reading through these large guides.
    I know I am looking for a shortcut, but this is the need of the hour.
    Thanks
    Arun

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the below query. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL update, used the ORDERED hint, and created a temp table and updated the target table using the same. The methods which I used did not improve any performance. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Any suggestions or solutions for improving performance are appreciated
    SQL query:
    update targettable tt
       set mnop = 'G'
     where (x, y, z) in
           (select a.x, a.y, a.z
              from table1 a
             where (a.x, a.y, a.z) not in
                   (select b.x, b.y, b.z
                      from table2 b
                     where 'O' = b.defg))
       and mnop = 'P'
       and hijkl = 'UVW';

    987981 wrote:
    I was trying to improve the performance of the below query. I tried the following methods used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and upadated the target table using the same. The methods which I used did not improve any performance. And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
    From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
    That is not a small workload. Simple example. Let's say that the 2 million row find is 1ms/row and the 2 million row write is also 1ms/row. This means a 66 minute workload. Due to the number of rows, an increase in time/row either way, will potentially have 2 million fold impact.
    So where is the performance problem? Time spend finding the 2 million rows (where other tables need to be read, indexes used, etc)? Time spend writing the 2 million rows (where triggers and indexes need to be fired and maintained)? Both?

  • Need help with wireless speeds..

    I just got myself a WRT54GS wireless router and i have noticed the speeds being extremely slow. I've gone through all of the threads and didn't get anything that could help.  I have tried what some mention, about changing the beacon interval to 50 and the Frag and RTS to 2304. Also i have changed the wireless channel a couple of times. Still nothing..
    Now onto my connection in general: I have cable, and the speeds when directly connected are very fast. I typically download at about 300 KB/s+ and I can get MB/s download speeds as well. But while hooked up wirelessly it has been going at about 20 KB/s. When I try to watch streaming videos they break up and so on, due to the slow speed. BTW, I don't believe there is anything interfering at all. The computer is in the same room as the router as well.
    This is my first time dealing with wireless, so any info and help would be greatly appreciated. What are some good ways to get the speeds up? The computer I have is new as well, lol. I know that limits a lot of the help, but this is a pain in the butt. Thanks

    Set "SSID Broadcast" to "enabled". This will help your computer find and lock on to your router's signal.
    Poor wireless connections are often caused by radio interference from other 2.4 GHz devices. This includes wireless phones, wireless baby monitors, microwave ovens, wireless mice and keyboards, wireless speakers, and your neighbor's wireless network. In rare cases, Bluetooth devices can interfere. Even some 5+ GHz phones also use the 2.4 Ghz band. Unplug these devices, and see if that corrects your problem.
    In your router, try a different channel. There are 11 channels in the 2.4 GHz band. Usually channel 1, 6, or 11 works best. Check out your neighbors, and see what channel they are using. Because the channels overlap one another, try to stay at least +5 or -5 channels from your strongest neighbors. For example, if you have a strong neighbor on channel 9, try any channel 1 through 4.
    Also, try to locate the router about 4 to 6 feet above the floor, in an open area. Do not locate it behind your monitor or near other computer equipment or speakers. The antenna should be vertical.
    What encryption are you using?  WEP?  WPA?  WPA2?  Some systems work significantly better with one encryption system vs. another.  So try something different.  WEP is no longer recommended.  You should be using WPA or preferably WPA2.  Password (key) should not have any spaces in it.
    Windows XP requires a patch to run WPA2 (= PSK2 = WPA with AES ). Go to Microsoft Knowledge base, article ID=893357 and it will direct you to the patch.
    Sadly, the patch is not part of the automatic Windows XP updates, so lots of people are missing the patch.
    Hope this helps.

  • Need Help in improving logic for determining the range

    Hi guys,
    I need some help in my program logic to determine the range of value. My purpose of this program is to find which combinations have the lowest amplitude.
    In the attachment is a set of numbers.
    e.g. 10    0 is a combination.
    For each combination, I will need to draw a graph of the data acquired with that combination vs. gray level. There are 255 gray levels. Every 5 gray levels I will acquire a point, so I will have 52 points.
    Next, I will take the maximum value minus the minimum value, and this is the amplitude for the combination. I can do all of this, however it is not practical for me to do it this way all the way up to 360 360. There are around 1200+ combinations. It requires a long time since I need to interface all of this with my hardware.
    The graph of each combination may be an irregular shape. Do any of you have a logic to help me shorten the process and find the range of values (combinations) where the lowest amplitude may lie? Thanks!!
    Attachments:
    example.txt ‏11 KB

    Hi Johnsold,
    This is an example of my result. This is only a portion of it. The last column is the amplitude; I store all 52 points into an array and use the Array Min and Max function. Then I use max - min to get the amplitude. From the amplitude I cannot see any pattern. Can you please advise me on it? Thanks!!!

  • Need Help in Improving Performace of PLSQL Code

    Hello Gurus,
    I am very new to PL/SQL coding. I had to design code that takes values from 3 tables and fills an empty 4th table.
    Table 1: IDH (primary key), Table 2: IDH (primary key), and Table 3 where IDH exists but is not a primary key (meaning it has multiple values for each IDH).
    So my approach was i created a STORED PROCEDURE as below
    1. Create a cursor joining Table1 and Table 2
    2. Iterate over the Cursor, for Each IDH create one more cursor over Table 3 and Calculate some values.
    3. insert into the New Table.
    But this seems to take a long time (more than 5 hours)
    Can you please help me in optimizing this solutions ?
    Thanks,
    Ganesh

    BRAND table : IDH,BRAND,SUBBRAND -> IDH Primary key
    MARKET table : IDH,MARKET,VARIANT -> IDH Primary key
    MAT table : col1,IDH,col4,col5,col15 -> no primary key here * redundant table
    New table needs to be created as follows
    LEGEND table : IDH,BRAND,SUBBRAND,MARKET,VARIANT,CODE,EAN
    now its easy to get IDH,BRAND,SUBBRAND,MARKET,VARIANT into the LEGEND table (just a join result will give)
    but CODE is calculated as follows
    For an IDH in MAT
         if col4 contains 'OLDKEZ' then
              CODE := col5;
    EAN is calculated as follows
    For an IDH in MAT
         if col1 contains 'MARA' then
              EAN := col15;
    This I am accomplishing by using 2 cursors (nested), which takes a long time!!!
    Can you give any inputs ?

  • I Need Help With The Speed

    Hi,
    I'm having some trouble with my motherboard, which is an NVIDIA NFORCE 2 K7N2 DELTA, and the problem is that it's really slow. I have an AMD 3000+ and an ATI RADEON 9200 256MB. I've tried to overclock the computer but it burned my graphics cards twice, so if any of you out there could tell me how to speed it up I would be very thankful.
    Thanks.

    when you say slow, how slow is it?
    also, you should not post this thread here; post it at the following place so we can help you:
    https://forum-en.msi.com/index.php?boardid=14

  • Need help setting AGP Speed

    The motherboard is a K8T Neo-FIS2R with the latest BIOS 1.1. I am running Win XP Service Pack 1. The video card is a Radeon 9800 XT with the latest driver. In the BIOS of the motherboard, the AGP Mode is Auto and it will not allow me to change that. Fast Write is Enabled.
    The problem I'm having is that when I view the AGP Speed in the Advanced settings for the video card, the Current AGP Speed is set to Off and Fast Write is Off. When I change the setting to AGP Speed 8X and Fast Write Enabled, I have to reboot the system, and after the reboot the setting goes back to the original (AGP Speed is Off and Fast Write is Off).
    Why is it doing this? During installation of Windows XP, it never asked for any driver except for the Serial ATA. Do I have to install some driver that came on the CD with the motherboard? I went to view the CD but the CD is bad; the CD drive would not read it but reads other CDs just fine. I went to the MSI website and looked at drivers for this board and saw the VIA Chipset 4 in 1 Driver. Do I need to install this? If so, will it override the Serial ATA driver that I have to install before installing Win XP?

    Quote
    Originally posted by joshr45
    its a real bitch seeing your windows logon screen scroll down like a 486 isnt it
    If it scrolls that way, it's NOT because you run AGP x4 and NOT because fastwrite is disabled. That is only a driver problem.
    It's best to disable fastwrite anyway. It gives no extra performance and makes the whole system a little more unstable.
    There is also NO or very little (minimal) performance gain from AGP 4x to AGP 8x.
    Be sure to use the catalyst uninstaller from ati.com before installing the drivers again

  • Need help in improving performance of prorating quantities to stores for existing orders

    I have code written to allocate quantities to stores for an existing order. Suppose there is a supplier order with a quantity of 100 and this needs to be distributed among 4 stores which have a demand of 50, 40, 30 and 20. Since the total demand is not equal to the available quantity, the available quantity needs to be allocated to the stores using an algorithm.
    The algorithm allocates to the stores in small pieces of the inner size. The inner size is simply the
    quantity within the pack of packs, i.e. a pack has 4 pieces and each piece internally has 10 pieces;
    this 10 is called the inner size.
    While allocating, each store is first given a quantity of one inner size, and this looping continues
    until the available quantity is used up
    Ex:
    store1=10
    store2=10
    store3=10
    store4=10
    second time:
    store1=10(old)+10
    store2=10(old)+10
    store3=10(old)+10
    store4=10(old)+10--demand fulfilled
    third time
    store1=20(old)+10
    store2=20(old)+10
    -- available quantity is over and hence stopped.
    My code below-
    =================================================
    int prorate_allocation()
      char *function = "prorate_allocation";
      long t_cnt_st;
      int t_innersize;
      int   t_qty_ordered;
      int t_cnt_lp;
      bool t_complete;
      sql_cursor alloc_cursor;
      EXEC SQL DECLARE c_order CURSOR FOR -- cursor to get orders, item in that, inner size and available qty.
      SELECT oh.order_no,
      ol.item,
      isc.inner_pack_size,
      ol.qty_ordered
      FROM ABRL_ALC_CHG_TEMP_ORDHEAD oh,
      ordloc ol,
      item_supp_country isc
      WHERE oh.order_no=ol.order_no
      AND oh.supplier=isc.supplier
      and ol.item=isc.item
      AND     EXISTS (SELECT 1 FROM abrl_alc_chg_details aacd WHERE oh.order_no=aacd.order_no)
            AND     ol.qty_ordered>0;
      char   v_order_no[10];
      char v_item[25];
      double v_innersize;
      char   v_qty_ordered[12];
      char v_alloc_no[11];
      char v_location[10];
      char v_qty_allocated[12];
      int *store_quantities;
      bool *store_processed_flag;
      EXEC SQL OPEN c_order;
      if (SQL_ERROR_FOUND)
      sprintf(err_data,"CURSOR OPEN: cursor=c_order");
      strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
      WRITE_ERROR(SQLCODE,function,table,err_data);
      return(-1);
      EXEC SQL ALLOCATE :alloc_cursor;
      while(1)
      EXEC SQL FETCH c_order INTO :v_order_no,
      :v_item,
      :v_innersize,
      :v_qty_ordered;
      if (SQL_ERROR_FOUND)
      sprintf(err_data,"CURSOR FETCH: cursor=c_order");
      strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
      WRITE_ERROR(SQLCODE,function,table,err_data);
      return(-1);
      if (NO_DATA_FOUND) break;
      t_qty_ordered     =atoi(v_qty_ordered);
      t_innersize =(int)v_innersize;
      t_cnt_lp         = t_qty_ordered/t_innersize;
      t_complete =FALSE;
      EXEC SQL SELECT COUNT(*) INTO :t_cnt_st
      FROM abrl_alc_chg_ad ad,
      alloc_header ah
      WHERE ah.alloc_no=ad.alloc_no
      AND   ah.order_no=:v_order_no
      AND   ah.item=:v_item
      AND   ad.qty_allocated!=0;
      if SQL_ERROR_FOUND
                sprintf(err_data,"SELECT: ALLOC_DETAIL, count = %s\n",t_cnt_st);
                strcpy(table,"ALLOC_DETAIL");
                WRITE_ERROR(SQLCODE,function,table,err_data);
                return(-1);
      if (t_cnt_st>0)
      store_quantities=(int *) calloc(t_cnt_st,sizeof(int));
      store_processed_flag=(bool *) calloc(t_cnt_st,sizeof(bool));
      EXEC SQL EXECUTE
      BEGIN
      OPEN :alloc_cursor FOR SELECT ad.alloc_no,
      ad.to_loc,
      ad.qty_allocated
      FROM    alloc_header ah,
      abrl_alc_chg_ad ad
      WHERE   ah.alloc_no=ad.alloc_no
      AND     ah.item=:v_item
      AND     ah.order_no=:v_order_no
      order by ad.qty_allocated desc;
      END;
      END-EXEC;
      while (t_cnt_lp>0)
      EXEC SQL WHENEVER NOT FOUND DO break;
      for(int i=0;i<t_cnt_st;i++)
      EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
      :v_location,
      :v_qty_allocated;
      if (store_quantities[i]!=(int)v_qty_allocated)
      store_quantities[i]=store_quantities[i]+t_innersize;
      t_cnt_lp--;
      if (t_cnt_lp==0)
      EXEC SQL CLOSE :alloc_cursor;
      break;
      else
      if(store_processed_flag[i]==FALSE)
      store_processed_flag[i]=TRUE;
      t_cnt_st--;
      if (t_cnt_st==0)
      t_complete=TRUE;
      break;
      if (t_complete==TRUE && t_cnt_lp!=0)
      for (int i=0;i<t_cnt_st;i++)
      store_quantities[i]=store_quantities[i]+v_innersize;
      t_cnt_lp--;
      if (t_cnt_lp==0)
      EXEC SQL CLOSE :alloc_cursor;
      break;
      }/*END OF WHILE*/
      EXEC SQL EXECUTE
      BEGIN
      OPEN :alloc_cursor FOR SELECT ad.alloc_no,
      ad.to_loc,
      ad.qty_allocated
      FROM    alloc_header ah,
      abrl_alc_chg_ad ad
      WHERE   ah.alloc_no=ad.alloc_no
      AND     ah.item=:v_item
      AND     ah.order_no=:v_order_no
      order by ad.qty_allocated desc;
      END;
      END-EXEC;
      EXEC SQL WHENEVER NOT FOUND DO break;
      for (int i=0;i<t_cnt_st;i++)
      EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
      :v_location,
      :v_qty_allocated;
      EXEC SQL UPDATE abrl_alc_chg_ad
      SET qty_allocated=:store_quantities[i]
      WHERE to_loc=:v_location
      AND   alloc_no=:v_alloc_no;
      if SQL_ERROR_FOUND
      sprintf(err_data,"UPDATE: ALLOC_DETAIL, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
      strcpy(table,"ALLOC_DETAIL");
      WRITE_ERROR(SQLCODE,function,table,err_data);
      return(-1);
      EXEC SQL UPDATE ABRL_ALC_CHG_DETAILS
      SET PROCESSED='Y'
      WHERE LOCATION=:v_location
      AND   alloc_no=:v_alloc_no
      AND PROCESSED IN ('E','U');
      if SQL_ERROR_FOUND
      sprintf(err_data,"UPDATE: ABRL_ALC_CHG_DETAILS, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
      strcpy(table,"ABRL_ALC_CHG_DETAILS");
      WRITE_ERROR(SQLCODE,function,table,err_data);
      return(-1);
      EXEC SQL COMMIT;
      EXEC SQL CLOSE :alloc_cursor;
      free(store_quantities);
      free(store_processed_flag);
      }/*END OF IF*/
      }/*END OF OUTER WHILE LOOP*/
      EXEC SQL CLOSE c_order;
      if SQL_ERROR_FOUND
      sprintf(err_data,"CURSOR CLOSE: cursor = c_order");
      strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
      WRITE_ERROR(SQLCODE,function,table,err_data);
      return(-1);
    return(0);
    } /* end prorate_allocation*/


  • RE: Need help to improve performance!!

    Hi Experts,
    There is a standard SAP tcode FPREPT which is used to re-print a receipt. The query execution takes 5+ minutes.
    Can anybody suggest the best way to improve this, and help me with any SAP note available for the same?
    vishal

    Hi,
    Check this note
    Note 607651 - FPREPT/FPY1: Performance for receipt number assignment
    It is a old one for release 471 (FI-CA)
    What is your release ?
    Regards

  • Needed help to improve the performance of a select query?

    Hi,
    I have been preparing a report which involves data fetched from 4 to 5 different tables, and calculations have to be performed on some columns as well.
    I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
    cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
    select sca.branch_code "BRANCH",
    sca.cust_ac_no "ACCOUNT",
    to_char(p_applDate, 'YYYYMM') "YEARMONTH",
    sca.ccy "CURRENCY",
    sca.account_class "PRODUCT",
    sca.cust_no "CUSTOMER",
    sca.ac_desc "DESCRIPTION",
    null "LOW_BAL",
    null "HIGH_BAL",
    null "AVG_CR_BAL",
    null "AVG_DR_BAL",
    null "CR_DAYS",
    null "DR_DAYS",
    --null                                 "CR_TURNOVER",       
    --null                                 "DR_TURNOVER",       
    null "DR_OD_DAYS",
    (select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
    (case when (p_applDate >= sca.tod_limit_start_date and
    p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
    sca.tod_limit else 0 end) dd
    from getm_facility gf, sttm_cust_account_linkages scal
    where gf.line_code || gf.line_serial = scal.linked_ref_no
    and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
    --sc.credit_rating                      "CR_GRADE",        
    null "AVG_NET_BAL",
    null "UNAUTH_OD_AMT",
    sca.acy_blocked_amount "AMT_BLOCKED",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'D'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "DR_INTEREST",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'C'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "CR_INTEREST",
    (select sum(amt) from ictb_entries_history ieh
    where ieh.brn = sca.branch_code
    and ieh.acc = sca.cust_ac_no
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select product_code
    from ictm_product_definition ipd
    where ipd.product_code = ieh.prod
    and ipd.product_type = 'C')) "FEE_INCOME",
    sca.record_stat "ACC_STATUS",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
    (select 1 from dual
    where exists (select 1 from ictm_td_closure_renew itcr
    where itcr.brn = sca.branch_code
    and itcr.acc = sca.cust_ac_no
    and itcr.renewal_date = sysdate)
    or exists (select 1 from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
    (select maturity_date from ictm_acc ia
    where ia.brn = sca.branch_code
    and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
    sca.ac_stat_no_dr "DR_DISALLOWED",
    sca.ac_stat_no_cr "CR_DISALLOWED",
    sca.ac_stat_block "BLOCKED_ACC", -- Not Reqd
    sca.ac_stat_dormant "DORMANT_ACC",
    sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
    sca.ac_stat_frozen "FROZEN_ACC",
    sca.ac_open_date "ACC_OPENING_DT",
    sca.address1 "ADD_LINE_1",
    sca.address2 "ADD_LINE_2",
    sca.address3 "ADD_LINE_3",
    sca.address4 "ADD_LINE_4",
    sca.joint_ac_indicator "JOINT_ACC",
    sca.acy_avl_bal "CR_BAL",
    0 "DR_BAL",
    0 "CR_BAL_LCY", t
    0 "DR_BAL_LCY",
    null "YTD_CR_MOVEMENT",
    null "YTD_DR_MOVEMENT",
    null "YTD_CR_MOVEMENT_LCY",
    null "YTD_DR_MOVEMENT_LCY",
    null "MTD_CR_MOVEMENT",
    null "MTD_DR_MOVEMENT",
    null "MTD_CR_MOVEMENT_LCY",
    null "MTD_DR_MOVEMENT_LCY",
    'N' "BRANCH_TRFR", --New
    sca.provision_amount "PROVISION_AMT",
    sca.account_type "ACCOUNT_TYPE",
    nvl(sca.tod_limit, 0) "TOD_LIMIT",
    nvl(sca.sublimit, 0) "SUB_LIMIT",
    nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
    nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
    from sttm_cust_account sca, sttm_customer sc
    where sca.branch_code = p_branch_code
    and sca.cust_no = sc.customer_no
    and ( exists (select 1 from actb_daily_log adl
    where adl.ac_no = sca.cust_ac_no
    and adl.ac_branch = sca.branch_code
    and adl.trn_dt = p_applDate
    and adl.auth_stat = 'A')
    or exists (select 1 from catm_amount_blocks cab
    where cab.account = sca.cust_ac_no
    and cab.branch = sca.branch_code
    and cab.effective_date = p_applDate
    and cab.auth_stat = 'A')
    or exists (select 1 from ictm_td_closure_renew itcr
    where itcr.acc = sca.cust_ac_no
    and itcr.brn = sca.branch_code
    and itcr.renewal_date = p_applDate)
    or exists (select 1 from sttm_ac_stat_change sasc
    where sasc.cust_ac_no = sca.cust_ac_no
    and sasc.branch_code = sca.branch_code
    and sasc.status_change_date = p_applDate
    and sasc.auth_stat = 'A')
    or exists (select 1 from cstb_acc_brn_trfr_log cabtl
    where cabtl.branch_code = sca.branch_code
    and cabtl.cust_ac_no = sca.cust_ac_no
    and cabtl.process_status = 'S'
    and cabtl.process_date = p_applDate)
    or exists (select 1 from sttbs_provision_history sph
    where sph.branch_code = sca.branch_code
    and sph.cust_ac_no = sca.cust_ac_no
    and sph.esn_date = p_applDate)
    or exists (select 1 from sttms_cust_account_dormancy scad
    where scad.branch_code = sca.branch_code
    and scad.cust_ac_no = sca.cust_ac_no
    and scad.dormancy_start_dt = p_applDate)
    or sca.maker_dt_stamp = p_applDate
    or sca.status_since = p_applDate
    l_tb_acc_det ty_tb_acc_det_int;
    l_brnrec cvpks_utils.rec_brnlcy;
    l_acbr_lcy sttms_branch.branch_lcy%type;
    l_lcy_amount actbs_daily_log.lcy_amount%type;
    l_xrate number;
    l_dt_rec sttm_dates%rowtype;
    l_acc_rec sttm_cust_account%rowtype;
    l_acc_stat_row ty_r_acc_stat;

    I see it more like shown below (possibly with no inline selects).
    Try to get rid of the remaining inline selects (left as an exercise ;) )
    and rewrite the traditional joins as ANSI joins, as problems might arise when mixing the two syntaxes. I have to leave, so I don't have time to complete the query.
    select sca.branch_code "BRANCH",
           sca.cust_ac_no "ACCOUNT",
           to_char(p_applDate, 'YYYYMM') "YEARMONTH",
           sca.ccy "CURRENCY",
           sca.account_class "PRODUCT",
           sca.cust_no "CUSTOMER",
           sca.ac_desc "DESCRIPTION",
           null "LOW_BAL",
           null "HIGH_BAL",
           null "AVG_CR_BAL",
           null "AVG_DR_BAL",
           null "CR_DAYS",
           null "DR_DAYS",
    --     null "CR_TURNOVER",
    --     null "DR_TURNOVER",
           null "DR_OD_DAYS",
           w.dd "OD_LIMIT",
    --     sc.credit_rating "CR_GRADE",
           null "AVG_NET_BAL",
           null "UNAUTH_OD_AMT",
           sca.acy_blocked_amount "AMT_BLOCKED",
           x.dr_int "DR_INTEREST",
           x.cr_int "CR_INTEREST",
           y.fee_amt "FEE_INCOME",
           sca.record_stat "ACC_STATUS",
           case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                   and itd.multimode_tdoffset_acc is not null
                then 1
                else 0
           end "NEW_ACC_FOR_THE_MONTH",
           case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                   and itd.multimode_tdoffset_acc is not null
                then 1
                else 0
           end "NEW_ACC_FOR_NEW_CUST",
           (select 1 from dual
             where exists(select 1
                            from ictm_td_closure_renew itcr
                           where itcr.brn = sca.branch_code
                             and itcr.acc = sca.cust_ac_no
                             and itcr.renewal_date = sysdate
                or exists(select 1
                            from ictm_tdpayin_details itd
                           where itd.multimode_payopt = 'Y'
                             and itd.brn = sca.branch_code
                             and itd.acc = sca.cust_ac_no
                             and itd.multimode_offset_brn is not null
                             and itd.multimode_tdoffset_acc is not null
           ) "RENEWED_OR_ROLLOVER",
           m.maturity_date "MATURITY_DATE",
           sca.ac_stat_no_dr "DR_DISALLOWED",
           sca.ac_stat_no_cr "CR_DISALLOWED",
    --     sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
           sca.ac_stat_dormant "DORMANT_ACC",
           sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
           sca.ac_stat_frozen "FROZEN_ACC",
           sca.ac_open_date "ACC_OPENING_DT",
           sca.address1 "ADD_LINE_1",
           sca.address2 "ADD_LINE_2",
           sca.address3 "ADD_LINE_3",
           sca.address4 "ADD_LINE_4",
           sca.joint_ac_indicator "JOINT_ACC",
           sca.acy_avl_bal "CR_BAL",
           0 "DR_BAL",
           0 "CR_BAL_LCY", t
           0 "DR_BAL_LCY",
           null "YTD_CR_MOVEMENT",
           null "YTD_DR_MOVEMENT",
           null "YTD_CR_MOVEMENT_LCY",
           null "YTD_DR_MOVEMENT_LCY",
           null "MTD_CR_MOVEMENT",
           null "MTD_DR_MOVEMENT",
           null "MTD_CR_MOVEMENT_LCY",
           null "MTD_DR_MOVEMENT_LCY",
           'N' "BRANCH_TRFR", --New
           sca.provision_amount "PROVISION_AMT",
           sca.account_type "ACCOUNT_TYPE",
           nvl(sca.tod_limit, 0) "TOD_LIMIT",
           nvl(sca.sublimit, 0) "SUB_LIMIT",
           nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
           nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
      from sttm_cust_account sca,
           sttm_customer sc,
           (select sca.cust_ac_no
                   sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
                       case when p_applDate >= sca.tod_limit_start_date
                             and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
                            then sca.tod_limit else 0
                       end
                      ) dd
              from sttm_cust_account sca
                   getm_facility gf,
                   sttm_cust_account_linkages scal
             where gf.line_code || gf.line_serial = scal.linked_ref_no
               and cust_ac_no = sca.cust_ac_no
             group by sca.cust_ac_no
           ) w,
           (select acc,
                   brn,
                   sum(decode(drcr,'D',amt)) dr_int,
                   sum(decode(drcr,'C',amt)) cr_int
              from ictb_entries_history ieh
             where ent_dt between p_st_dt and p_ed_dt
               and drcr in ('C','D')
               and liqn = 'Y'
               and entry_passed = 'Y'
               and exists(select null
                            from ictm_pr_int ipi,
                                 ictm_rule_frm irf
                           where ipi.rule = irf.rule_id
                             and ipi.product_code = ieh.prod 
                             and irf.book_flag = 'B'
             group by acc,brn
           ) x,
           (select acc,
                   brn,
                   sum(amt) fee_amt
              from ictb_entries_history ieh
             where ieh.ent_dt between p_st_dt and p_ed_dt
               and exists(select product_code
                            from ictm_product_definition ipd
                           where ipd.product_code = ieh.prod
                             and ipd.product_type = 'C'
             group by acc,brn
           ) y,
           ictm_acc m,
           (select sca.cust_ac_no,
                   sca.branch_code
                   coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
                            nvl2(coalesce(t2.account,t2.account),'exists',null),
                            nvl2(coalesce(t3.acc,t3.brn),'exists',null),
                            nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
                            nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
                            nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
                            nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
                            decode(sca.maker_dt_stamp,p_applDate,'exists'),
                            decode(sca.status_since,p_applDate,'exists')
                           ) existence
              from sttm_cust_account sca
                   left outer join
                   (select ac_no,ac_branch
                      from actb_daily_log
                     where trn_dt = p_applDate
                       and auth_stat = 'A'
                   ) t1
                on (sca.cust_ac_no = t1.ac_no
               and  sca.branch_code = t1.ac_branch
                   left outer join
                   (select account,account
                      from catm_amount_blocks
                     where effective_date = p_applDate
                       and auth_stat = 'A'
                   ) t2
                on (sca.cust_ac_no = t2.account
               and  sca.branch_code = t2.branch
                   left outer join
                   (select acc,brn
                      from ictm_td_closure_renew itcr
                     where renewal_date = p_applDate
                   ) t3
                on (sca.cust_ac_no = t3.acc
               and  sca.branch_code = t3.brn
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttm_ac_stat_change
                     where status_change_date = p_applDate
                       and auth_stat = 'A'
                   ) t4
                on (sca.cust_ac_no = t4.cust_ac_no
               and  sca.branch_code = t4.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from cstb_acc_brn_trfr_log
                     where process_date = p_applDate
                       and process_status = 'S'
                   ) t5
                on (sca.cust_ac_no = t5.cust_ac_no
               and  sca.branch_code = t5.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttbs_provision_history
                     where esn_date = p_applDate
                   ) t6
                on (sca.cust_ac_no = t6.cust_ac_no
               and  sca.branch_code = t6.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttms_cust_account_dormancy
                     where dormancy_start_dt = p_applDate
                   ) t7
                on (sca.cust_ac_no = t7.cust_ac_no
               and  sca.branch_code = t7.branch_code
           ) z
    where sca.branch_code = p_branch_code
       and sca.cust_no = sc.customer_no
       and sca.cust_ac_no = w.cust_ac_no
       and sca.cust_ac_no = x.acc
       and sca.branch_code = x.brn
       and sca.cust_ac_no = y.acc
       and sca.branch_code = y.brn
       and sca.cust_ac_no = m.acc
       and sca.branch_code = m.brn
       and sca.cust_ac_no = z.sca.cust_ac_no
       and sca.branch_code = z.branch_code
       and z.existence is not null
    Regards
    Etbin

  • I need help in getting VZ to fix my DSL speed - they refused to fix it two times

    I live in E. Boston and opened a trouble ticket on 8-26-13 for slow internet speed. The ticket number was {edited for privacy}. Agreement was made for a tech to arrive on the 28th between 1 and 5 PM. Somebody showed up at my door on the 28th at 11:15 AM. He immediately said that he had already done some outside checking and there was a problem outside of my home. I showed him my slow connection speed and he left, saying he would be back later. I saw him on poles around my home and he called later saying he needed help from his office and would let me know the status by 3 PM or so. I never heard from him again. I continued to check the ticket status and it was open.
    On Friday, Aug. 30th, about 9:15AM, I received a call from a VZ telephone tech named Rich. He again did some outside checking and came into my home. I showed him my slow internet speed and he went back outside after telling me that he had no test equipment but he would go back outside to see what he could do and call his manager since he was not a VZ internet tech. He came back a little later and told me that he did all that he could do and asked me to run my tests again. I did so in front of the tech and the speed was at least 30% slower than it is supposed to be. The VZ telephone tech said he would call for help from his office to send a tech with proper test equipment and he left. A number of hours passed and the VZ telephone tech called and told me HIS MANAGER TOLD HIM TO CLOSE THE TICKET AND TO SEND THE MANAGER THE INFORMATION.
    I immediately asked for and got the name and office of the manager that did this and I have it. I will hold off posting it for a couple of days but then I will let everyone know how this Verizon manager performs as well as who he is. I wonder how his superiors will like that?
    Does anyone have any suggestions about who in Verizon cares enough to get my problem resolved? I have already filed a complaint with the AGO today. Verizon does this to keep their numbers from being reflective of what they really are, as well as knowing that customers do not want to again walk through all of the crap of opening a trouble ticket in India. I have been in telecom for 45 years and this is just disgraceful. It is about time VZ developed some integrity.

    Your issue has been escalated to a Verizon agent. Before the agent can begin assisting you, they will need to collect further information from you.Please go to your profile page for the forum, and look in the middle, right at the top where you will find an area titled "My Support Cases". You can reach your profile page by clicking on your name beside your post, or at the top left of this page underneath the title of the board.
    Under “My Support Cases” you will find a link to the private board where you and the agent may exchange information. This should be checked on a frequent basis as the agent may be waiting for information from you before they can proceed with any actions. Please keep all correspondence regarding your issue in the private support portal.

  • Need help: I have an iPad 2. After the 6.3.1 update I have no cameras or e-mail. I have already reset multiple times, but no improvement. Can anyone help?

    Need help: I have an iPad 2. After the 6.3.1 update I have no cameras or e-mail. I have already reset multiple times, but no improvement. Can anyone help?

    When you say reset, do you mean a reboot by holding both the power and home buttons until the apple logo appears, ignoring the red slider if it appears?

  • I need to improve speed....

    Gents,
    I have a Java app which is logging "messages" into another window (emulating the console with some clever functionality). In this window I'm using a JTextArea for displaying messages, and it seems that repainting this JTextArea is a bottleneck for performance. Do some of you have experience with this topic and ideas on how to improve the speed?
    Maybe use a simpler component than JTextArea? But I need copy & paste functionality in there.
    Cheers,
    Preben
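    A pattern that often helps here (a sketch, not from the original post): coalesce log messages in a buffer and flush them to the JTextArea on a Swing timer, and cap the document size, so the component repaints a few times per second instead of once per message. All names below (LogPanel, MAX_CHARS) are invented for the example.

         // Hedged sketch: batch appends to a JTextArea and trim old text.
         import javax.swing.*;
         import java.awt.BorderLayout;

         public class LogPanel extends JPanel {
              private static final int MAX_CHARS = 100_000;
              private final JTextArea area = new JTextArea();
              private final StringBuilder pending = new StringBuilder();

              public LogPanel() {
                   super(new BorderLayout());
                   area.setEditable(false);                 // copy & paste still works
                   add(new JScrollPane(area), BorderLayout.CENTER);
                   // flush buffered messages ~5 times per second on the EDT
                   new Timer(200, e -> flush()).start();
              }

              public synchronized void log(String msg) {    // may be called from any thread
                   pending.append(msg).append('\n');
              }

              private synchronized void flush() {
                   if (pending.length() == 0) return;
                   area.append(pending.toString());
                   pending.setLength(0);
                   int extra = area.getDocument().getLength() - MAX_CHARS;
                   if (extra > 0) {
                        area.replaceRange("", 0, extra);    // drop the oldest text
                   }
              }
         }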

    I feel the need... the need for speed! Sounds like my last 2 weeks! We developed filters for the database to clean some data, and the projected completion of the run with filters was 135 days for the first set (not good, especially since 17 different algorithms had to be applied). We've steadily brought it down, so now a run takes about 16 minutes.
    speed... Speed... SPEED... I feel the need!

Maybe you are looking for

  • Problem with scanning in Photoshop CS6 after upgrading to Mountain Lion

    After upgrading to latest version of Mountain Lion i've got a problem with scanning directly into Photoshop CS6. on the previous Lion version when I wanted to scan something, i just opened Photoshop -> File -> Import -> images from device then a new

  • ITunes 7.1 won't start, invalid Win32 application error message comes up

    I repeatedly tried to download iTunes 7.1 for my new iPod nano on my Windows XP, when the download completes and I try to run the program an error message comes up stating that iTunesupdate.exe is not a valid Win32 application...I have tried erasing

  • How do you change the size of the font in the menu bar running Maverick?

    On Maverick, how do you change the size of the menu bar font?

  • Degeneration

    This truly is an FCP question: "Since you are going through an S-Video cabel (YC-luminence and color) you will be losing a generation of video because: Digital-Analog-Digital. One of the beautiful things about staying in the digital realm is the abil

  • PDF Optimizer Inconsistent

    I am running Acrobat 8 Professional on a Mac with Snow Leopard. I am trying to reduce a 48 MB pdf (every page is an image). I am getting inconstant results when trying to reduce the file size. I added a few links to some of the pages, then use PDF Op