Official release of data modeler is very slow on Linux (64bit)

I was testing the beta version on Linux (64-bit) and it was fast (reverse engineering from a database, for example), but I found that the official release of the data modeler is slower than the beta version. Same JDK version, same 64-bit Linux distro. Is there a memory leak in the official release? Has anyone else observed such sluggish performance with the official release of the data modeler?
Thanks


Similar Messages

  • Internal Disk to Disk Data Transfer Speed Very Slow

    I have a G5 Xserve running Tiger with all updates applied that has recently started experiencing very slow Drive to Drive Data transfer speeds.
    When transferring data from one drive to another (internal to internal, internal to USB, internal to FW, USB to USB, or any other combination) we only get about 2GB / hr transfer speeds.
    I initially thought the internal drive was going bad. I tested the drive and found some minor header issues etc. that could be repaired, so I replaced the internal boot drive.
    I tested and immediately got the same issue.
    I also tried booting from a FW drive and I got the same issue.
    If I connect to the server over the ethernet network, I get what I would expect to be typical data transfer rates of about 20GB+ / hr. Much higher than the internal rates and I am copying data from the same internal drives so I really don't think the drive is the issue.
    I called AppleCare and discussed the issue with them. They said it sounded like a controller issue, so I purchased a replacement MLB from them. After the replacement, data transfer speeds jumped back to normal for about a day, maybe two.
    Now we are back to experiencing slow data transfer speeds internally ( 2GB / hr ) and normal transfer speeds ( 20GB+ / hr ) over the network.
    Any ideas on what might be causing the problem would be appreciated
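    For scale, the reported rates work out to well under a megabyte per second, which is how you can tell something is badly wrong. A quick conversion (Python sketch; assumes binary GB):

```python
def gb_per_hr_to_mb_per_s(gb_per_hr):
    # 1 GB = 1024 MB, 1 hr = 3600 s; decimal GB would give slightly lower figures
    return gb_per_hr * 1024 / 3600

print(round(gb_per_hr_to_mb_per_s(2), 2))   # 0.57 -- the "slow" internal rate
print(round(gb_per_hr_to_mb_per_s(20), 2))  # 5.69 -- the "normal" network rate
```

    Even the "normal" network rate is modest for local disks, but 0.57 MB/s is floppy-disk territory.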

    As suggested, do check for other I/O load on the spindles. And check for general system load.
    I don't know of a good built-in GUI I/O monitor here (particularly for Tiger Server), though there are iopending, DTrace, and Apple-provided performance scripts (http://support.apple.com/kb/HT1992) with Leopard and Leopard Server. top will show you busy processes.
    Also look for memory errors and memory constraints and check for anything interesting in the contents of the system logs.
    The next spot after the controllers (and it's usually my first "hardware" stop for these sorts of cases, and usually before swapping the motherboard) are the disks that are involved, and whatever widgets are in the PCI slots. Loose cables, bad cables, and spindle-swaps. Yes, disks can sometimes slow down like this, and that's not usually a Good Thing. I know you think this isn't the disks, but that's one of the remaining common hardware factors. And don't presume any SMART disk monitoring has predictive value; SMART can miss a number of these cases.
    (Sometimes you have to use the classic "field service" technique of swapping parts and of shutting down software pieces until the problem goes away. Then work from there.)
    And the other question is around how much time and effort should be spent on this Xserve G5 box; whether you're now in the market for a replacement G5 box or a newer Intel Xserve box as a more cost-effective solution.
    (How current and how reliable is your disk archive?)

  • Data Base is very slow

    Dear All,
    Certain queries on my database are very slow.
    One of the queries sometimes does not execute at all; it involves a big table of about 5 million records.
    Some Facts About my Database.
    OS: SUN Solaris
    DataBase: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    RAM:32GB
    Dedicated oracle server
    Processors:16
    DB Block Size:2048
    Large pool:150994944
    log_buffer:10485760
    shared_pool_size:150994944
    There are in total 21 production Database running on the same BOX.
    Previously my buffer cache hit ratio was 27%, so I recommended increasing DB_CACHE_SIZE from 101MB to 300MB and SGA_MAX_SIZE from 600MB to 800MB.
    As a result, the buffer cache hit ratio increased to 75%, but the queries still run slow.
    I even tried partitioning the big table; it didn't help.
    My question is: is the system overloaded, or will increasing db_cache_size help?
    Regards.

    By itself the buffer hit cache ratio is a meaningless statistic. It can in fact be a misleading indicator since it does not actually reflect application performance.
    Tune the query. Make sure it is running as well as it can.
    Then look at overall machine resources: average and peak cpu, memory, and IO loads.
    If spare resources exist, consider giving more resources to the more important databases on the system.
    Document any performance changes after each change. It is possible that the performance problem is latching, that is, shared pool access, and you may need space in the shared pool more than in the buffer cache. It depends on the application and user load.
    Why are you using a 2K database block size? I would think 4k or 8k would probably be better even for a true OLTP with almost all access by index.
    To get help on the query you will need to post it, the explain plan, and information on available indexes, table row counts, and perhaps column statistics for the indexed columns and filter conditions.
    HTH -- Mark D Powell --
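    The standard hit-ratio arithmetic shows why the number can mislead: extra, useless logical I/O inflates the ratio without making anything faster. A Python sketch with made-up counter values (mimicking V$SYSSTAT's "db block gets", "consistent gets", and "physical reads"):

```python
def hit_ratio(db_block_gets, consistent_gets, physical_reads):
    # Classic Oracle buffer cache hit ratio: 1 - physical reads / logical reads
    logical = db_block_gets + consistent_gets
    return 1 - physical_reads / logical

# Same disk work, ten times the logical I/O: the ratio "improves"
# from 27% to 93% while every query does strictly more work.
print(round(hit_ratio(1_000, 9_000, 7_300), 2))    # 0.27
print(round(hit_ratio(10_000, 90_000, 7_300), 2))  # 0.93
```

    So a rising ratio can mean the workload got less efficient, not that users got faster responses.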

  • Data Services Designer - Very Slow on VPN

    Hello,
    Any idea why Data Services Designer is very slow and frequently goes into a Not Responding state? I'm using this client tool to connect to the Data Services repository and servers via VPN.
    It takes a few minutes to load jobs or to save changes, and it sometimes hangs.
    I wanted to know if anyone is facing similar issues, and whether any workaround or setup change can eliminate these delays...
    Regards,
    Madan
    Edited by: Madan Mohan Reddy Zollu on Mar 12, 2010 9:24 AM

    Data Services Designer communicates with the repository (to store/retrieve objects) and with the job server (to execute jobs and fetch status/log files), so over a slow network connection, response time in the Designer can become problematic.
    One way to solve this is to use Citrix or Terminal Services, so that your Designer runs close to the database and only screen updates are sent over the slow connection. The Windows installation guide has a chapter documenting how to set up Designer in a (multi-user) Citrix environment.

  • Data load becomes very slow

    Hi, after a migration from version 5 to 6.5 the data load has become very slow. With V5 the data load took 1 hour; with 6.5 it takes about 3 hours. The calculation takes the same time. Any idea?

    Too many sub-VIs could not be found, so I cannot give you more than some advice. But I see that you run all your loops at full speed, and I do not think that is very wise. Insert the "Wait (ms)" function in all while loops, except the loops handling the DAQ functions, since those are controlled by an occurrence. In loops handling only user input you may set the wait time as high as 500 ms; in more important loops use a shorter time.
    Besides which, my opinion is that Express VIs, like Carthage, must be destroyed (deleted).
    (Sorry, no LabVIEW "brag list" so far)
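    The loop advice translates outside LabVIEW too: a polling loop that never yields will pin a CPU. A Python sketch of the same idea (`check_input` and the iteration cap are hypothetical stand-ins):

```python
import time

def poll(check_input, wait_ms=500, max_iterations=3):
    # Equivalent of dropping a "Wait (ms)" into a while loop: each pass
    # sleeps instead of spinning, so the loop no longer runs at full speed.
    results = []
    for _ in range(max_iterations):
        results.append(check_input())
        time.sleep(wait_ms / 1000.0)
    return results
```

    For a loop that only watches user input, a wait of up to 500 ms is imperceptible to the user; busier loops would use a shorter wait.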

  • Oracle VM 2.2.2 - TCP/IP data transfer  is very slow

    Hi, i've encountered a disturbing problem with OVM 2.2.2.
    My dom0 network setup (4 identical servers):
    eth0/eth1 (ixbe 10gbit) -> bond0 (mode=1) -> xenbr0 -> domU vif's
    Besides bonding setup, it's default OVM 2.2.2 installation.
    Problem description:
    TCP/IP data transfer speed:
    - between two dom0 hosts: 40-50MB/s
    - between two domU hosts within one dom0 host: 40-50MB/s
    - between dom0 and locally hosted domU: 40-50MB/s
    - between any single domU and anything outside its dom0 host: 55KB/s;
    something is definitely wrong here.
    domU network config:
    vif = ['bridge=xenbr0,mac=00:16:3E:46:9D:F1,type=netfront']
    vif_other_config = []
    I have a similar installation on Debian/Xen, and everything runs
    fine, i.e. I don't have any data transfer speed related issues.
    regards
    Robert

    There is also an issue with the ixgbe driver in the stock OVM2.2.2 kernel (bug:1297057 on MoS). We were getting abysmal results for receive traffic (measured in hundreds of kilobytes!!! per second at times) compared to transmit. It's not exactly the same as your problem, so don't blindly follow what I say below!!!
    ### "myserver01" is a PV domU on Oracle VM 2.2.2 server running stock kernel ###
    [root@myserver02 netperf]# ./netperf -l 60 -H myserver01 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver01.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec
    87380  16384   16384    60.23    1.46
    ### Repeat the test in the opposite direction, to show TX is fine from "myserver01" ###
    [root@myserver01 netperf]# ./netperf -l 60 -H myserver02 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver02.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec
    87380  16384   16384    60.01    2141.59
    In my case, a workaround as advised by Oracle Support is to run:
    ethtool -C eth0 rx-usecs 0
    ethtool -C eth1 rx-usecs 0
    against the slaves within your bond group. This will give you better performance (in my case, got up to ~1.2GBit/s), although there are some fixes coming out in the next kernel which get even better speeds (in my tests, ~2.2GBit/s):
    Edited by: user10786594 on 11/09/2011 02:22
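    Applying the same workaround to every slave in the bond can be scripted. The sketch below is a dry run: it only builds the command strings, reading the slave list in the format the kernel's bonding driver exposes it (/sys/class/net/bond0/bonding/slaves contains e.g. "eth0 eth1"):

```python
def rx_usecs_commands(slaves_line, rx_usecs=0):
    # One "ethtool -C <iface> rx-usecs N" per bond slave; nothing is executed,
    # so this needs neither root nor real hardware.
    return ["ethtool -C %s rx-usecs %d" % (iface, rx_usecs)
            for iface in slaves_line.split()]

print(rx_usecs_commands("eth0 eth1"))
# ['ethtool -C eth0 rx-usecs 0', 'ethtool -C eth1 rx-usecs 0']
```

    rx-usecs 0 disables interrupt coalescing on receive, trading CPU for latency; whether that trade is right for you depends on the workload.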

  • Master Data loading is very slow.

    Hi Experts,
    I have scheduled the Master Data Attribute process chain daily. The 0EMPLOYEE_ATTR InfoObject has only 487,315 records, yet it takes more than 12 hrs; a number of much bigger InfoObjects take only 10 minutes. 0EMPLOYEE attributes grow by only 5-10 records daily. Earlier it took 4-5 hours.
    Regards,
    Anand Mehrotra.

    Hi,
    You must have the following profiles assigned to the BWREMOTE or ALEREMOTE user, so add them. One of these two users is used in the background to extract the data from ECC, so add these profiles in BW.
    S_BI-WHM_RFC, S_BI-WHM_SPC, S_BI-WX_RFC
    And also check the following things.
    1.Connections from BW to ECC and ECC to BW in SM59
    2.Check Port,Partner Profiles,and Message Types in WE20 in ECC & BW.
    3.Check Dumps in ST22, and SM21.
    4. If IDocs are stuck, check the OLTP IDoc numbers in the RSMO screen (in BW) on the Details tab at the bottom. Take those IDoc numbers, go to ECC and check their status in WE05 or WE02; if there is an error, check the log, otherwise go to BD87 in ECC, enter the IDoc numbers, execute them manually, and refresh in RSMO.
    5. Check for stuck LUWs in SM58 with User Name = * (star), run it, select the stuck LUW, execute it manually, and check RSMO in BW.
    See in SDN
    Re: Loading error in the production  system
    Thanks
    Reddy

  • Data load is very slow

    Hi Experts,
    I am working on CRM Analytics. I am loading address data from extractor 0BP_DEF_ADDRESS_ATTR to the business partner, with 1.9 million (19 lakh) records. When I execute the DTP it takes 3 to 4 days to complete the load.
    Please suggest a solution so that my data load becomes faster.
    With Regards,
    Avenai

    Hi,
    Increase the number of parallel processes.
    To increase parallel processes: from the DTP menu, choose Goto -> "Settings for Batch Manager" -> increase the number of parallel processes (by default it is 3; increase it to 6).
    Also increase the data packet size in the DTP extraction tab.
    Do you have any routines in the transformations? If yes, try to debug the code to see where it is taking time, and fine-tune it with the help of an ABAP person.
    The option below may also be one of the reasons when using CRM data sources:
    the data source contains lots of fields that are not used or mapped in the transformation; try to hide those fields, or create a copy of your data source using the BWA1 transaction in the CRM system.
    Regards
    KP

  • My Macbook Pro (Early 2011 model) is VERY slow when I use it without the charger

    When it's not plugged in, my MacBook doesn't run at its normal speed; basically it lags. But when I plug it in to a power source, it works fine.
    I've tried checking the hardware, but the AHT doesn't work anymore.
    I'm not sure when it actually started, but I think this has been happening since I upgraded to Yosemite.

    Try SMC and NVRAM resets:
    https://support.apple.com/en-us/HT201295
    https://support.apple.com/en-us/HT204063
    If no success, make an Apple store genius bar appointment for a FREE evaluation.
    Ciao.

  • Java 5 is VERY slow on Linux Fedora core3

    Why is Java 10-15 times slower on Fedora Linux compared to Windows XP?
    I have two identical PCs (dual Pentium 3 1GHz, 1.5GB memory).
    Java version - JDK 1.5.0_03
    For testing, I run simple java program:
    /// Start LoadTest.java
    import java.util.*;

    public class LoadTest {
        public static void main(String[] args) {
            Date startDate = new Date();
            int repeat = 1000;
            HashMap map = new HashMap();
            for (int i = 0; i < repeat; i++) {
                //System.out.println(i);
                ArrayList list = new ArrayList();
                long lval = (new Date()).getTime() / startDate.getTime()
                        + ((new Date()).getTime() * startDate.getTime()) - startDate.getTime();
                String sval = Long.toString(lval);
                Long LValue = Long.valueOf(sval);
                map.put(LValue, sval);
                String keyset = map.keySet().toString();
                list.add(0, keyset);
                list.addAll(map.values());
                for (Iterator iter = map.keySet().iterator(); iter.hasNext(); ) {
                    Long lv = (Long) iter.next();
                    String sv = (String) map.get(lv);
                    list.add(sv);
                }
                keyset = map.keySet().toString();
                list.add(0, keyset);
            }
            Date endDate = new Date();
            String message = " LoadTest: " + repeat + " repetitions. Total Time: "
                    + (endDate.getTime() - startDate.getTime());
            System.out.println(message);
        }
    }
    /// End LoadTest.java
    It takes 300ms to run on Windows and 3900ms on Linux.
    Can somebody help me?
    Should I consider using Windows on the web server?
    Thanks.

    I know it's just a dirty, ugly test, but WHY does WinXP handle it better?
    Actually, I wrote this test after I got some results from my application.
    It uses JBoss 4.1/Tomcat 5 on JDK 1.5_03. I used JMeter to test it and got some strange results: testing throughput on plain JSP pages, such as the login page (without any EJB), Linux performs 10 times slower (for 100 requests, 24/min on Linux vs 240/min on Windows). Testing an EJB that retrieves datasets from the database, the difference is about 3 times. Linux is Fedora Core 3 (2.6.11).
    That's why I started testing Java itself. BTW, if I make this test a little "prettier" (no memory allocations in the loop, less use of ArrayLists and HashMaps), the performance difference drops to about 3 times or even less, and less still with Thread.sleep(1) in the loop.
    Now I have to figure out which platform to use for our web application, but I'm confused.
    BTW, which app server did you use? Maybe JBoss is not the best solution? The application should handle about 200 users, most of them almost simultaneously.
    Thanks

  • Large SGA issue-- insert data is very slow--Who can help me?

    I set sga_max_size to 10G and db_cache_size to 8G, but the value of db_cache_size shows as a negative number in OEM. I also found that inserting data was very slow, although I checked the OS and found no CPU or IO pressure.
    The OS is HP-UX B11.23 ia64.
    Oracle server 9.2.0.7
    Physical memory : 64G
    CPU: 8
    (oracle server and os are all 64-bit).
    If I decrease the SGA to 3G and db_cache_size to 2G, the same data insert is very fast and everything works well.
    So I guess some OS parameters may need to be set for using large memory.
    Does anyone know this issue, or have experience using a large SGA on HP-UX?
    Message was edited by:
    user548543

    Sounds like you might have a configuration issue on the OS side.
    Check that the kernel parameters are set as recommended in the installation guide.
    The first thing that came to mind after reading the problem description is that SHMMAX might be too low for that 10GB SGA, which would cause multiple shared memory segments to be created and thus explain the performance degradation you're experiencing.
    A quick way to check is to run "ipcs -m" and see whether there are multiple shm segments when the SGA is set to 10GB.
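    The "ipcs -m" check can be mechanized: count the shared memory segments owned by oracle. One large SGA should live in one (or very few) segments; many segments point at a too-low SHMMAX. A Python sketch over made-up sample output (real column layout varies by platform):

```python
SAMPLE = """\
IPC status from <running system>
T         ID     KEY        MODE        OWNER    GROUP  SEGSZ
m        101  0x2c0b7f44 --rw-r-----   oracle      dba  4294967296
m        102  0x2c0b7f45 --rw-r-----   oracle      dba  4294967296
m        103  0x2c0b7f46 --rw-r-----   oracle      dba  2147483648
"""

def count_oracle_segments(ipcs_output, owner="oracle"):
    # Data rows in "ipcs -m" output start with the facility letter "m".
    return sum(1 for line in ipcs_output.splitlines()
               if line.startswith("m") and owner in line.split())

print(count_oracle_segments(SAMPLE))  # 3 -> the SGA was split across segments
```

    A count greater than one for a single instance's SGA would support the SHMMAX theory.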

  • Time Capsule HDD is very slow

    My new Time Capsule's hard disk data transfer is very slow, taking a long time to copy even a single item. I connected my Time Capsule through an existing wireless network instead of creating a new network. Please suggest how I can make accessing or copying data faster.

    You are connected the wrong way.
    Plug into the main router via Ethernet with the TC in bridge mode.
    Set up wireless roaming with the current network, i.e. the same wireless name (SSID) and the same security settings (WPA2 AES = WPA2 Personal) and the same password.
    Or simply turn off wireless in the TC and use Ethernet for fast speeds.

  • Report running very slow.. taking too much time

    Dear Oracle Report experts,
    I have developed a report in Oracle Reports Builder 10g. When I run it from Reports Builder through the main menu, the data comes back very SLOWLY, taking 55 minutes.
    But if the same query is executed from SQL/PLSQL Developer it is very fast, finishing within 45 seconds.
    Please suggest any configuration or setting, if anyone has an idea.
    Thanks
    Muhammad Salim
    The query is below, generating its result in 48 seconds:
    select cns.consultant,
    sum(cns.nof_pat) noof_pat,
    sum(cns.opd_amnt) opd_amnt,
    sum(cns.discount_amnt) discount_amnt,
    sum(cns.net_amnt) net_amnt,
    sum(cns.dr_share) dr_share,
    sum(cns.hosp_share) hosp_share,
    sum(cns.net_dis) net_dis
    from
    (
    select rec.consultant,
    count(distinct rec.consultant) nof_pat,
    -- rec.receipt_date, bysalim
    pay_mode,
    rec.patient_mrno,rec.patient,
    service_name,rcpt_no,
    company,rec.docno,
    sum(distinct return_amnt) return_amnt,
    sum(distinct rec.opd_amnt) opd_amnt,
    sum(distinct dis.discount_amnt) discount_amnt,
    (sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant)) net_amnt,
    round((((sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant) ) *
    max(dr_per))/100),0) dr_share,
    round((((sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant) ) *
    max(100-dr_per))/100),0) hosp_share,
    count(distinct rec.consultant) net_dis
    from
    (
    select -- bokm_doc_dt receipt_date, bysalim
    bil_recept_no_a rcpt_no,
    fnc_org_sname(bokm_panel_comp_id) company,
    0 return_amnt,
    pr_mrno patient_mrno,pr_fname patient,
    bokm_doc_no docno,
    gcd_desc(bil_pay_mode_a) pay_mode,
    fnc_service_name(rslt_tst_code) service_name,
    dr_name consultant,
    pt_tst_rate opd_amnt,
    cons_share cons_share,
    (select max((nvl(rt_dr_share,0)*(100))/nvl(rt_amount,0))
    from hms_adm_dr_rt rt
    where dr.dr_id = rt.rt_dr_id
    and book.rslt_tst_code = rt.rt_scs_id) dr_per,
    dr_on_rent dr_rent,dr_share
    from hms_pat_pers pat,hms_lab_pat_mst pmst,hms_opd_book book,
    hms_pat_amnt amt,hms_adm_dr dr
    where pat.pr_mrno = pmst.bokm_mrno
    and pmst.bokm_mrno = book.rslt_mrno
    and pmst.bokm_doc_no = book.pt_pat_doc_no
    and pmst.bokm_mrno = amt.bil_mrnum_a
    and pmst.bokm_doc_no = amt.bil_docno_a
    and pmst.bokm_ref_conusltant_id = dr.dr_id
    and amt.bil_rcp_type_a = '075002'
    and pmst.bokm_pat_type in('PVT_OPD','CP_OPD')
    and amt.bil_void_a = 'N'
    and (pmst.bokm_user_dept_code != '039')
    and BOOK.CREATED_ON between '01-OCT-2011' and '17-OCT-2012'
    /* and (pat.pr_curr_cont_id = :P_CONT_ID or :P_CONT_ID = '000')
    and (pat.pr_curr_prvnc_id = :P_PRVNC_ID or :P_PRVNC_ID = '00')
    and (pat.pr_curr_city_id = :P_CITY_ID or :P_CITY_ID = '000')
    and (pat.pr_curr_area = :P_AREA_ID or :P_AREA_ID = '000')
    and (pat.pr_gender = :P_GENDER or :P_GENDER = 'A')
    and (pat.pr_marital_status = :P_MARITAL_STAT or :P_MARITAL_STAT = 'ALL')
    and (to_char(pmst.bokm_panel_comp_id) = :P_PANEL_COMP or :P_PANEL_COMP = 'ALL')
    and (pmst.bokm_ref_conusltant_id = :P_CONS or :P_CONS = 'ALL')
    and (decode(pmst.bokm_panel_comp_id,'1','PVT_IPD','CP_IPD') = :P_PAT_TYPE or :P_PAT_TYPE = 'ALL')
    &LPARA_RCPT_DT */
    ) rec,
    (
    select -- bokm_doc_dt receipt_date, bysalim
    pr_mrno patient_mrno,
    bokm_doc_no docno,
    nvl(bil_disc_amont_a,0) discount_amnt
    from hms_pat_pers pat,hms_lab_pat_mst pmst,hms_opd_book book,
    hms_pat_amnt amt
    where pat.pr_mrno = pmst.bokm_mrno
    and pmst.bokm_mrno = book.rslt_mrno
    and pmst.bokm_doc_no = book.pt_pat_doc_no
    and pmst.bokm_mrno = amt.bil_mrnum_a
    and pmst.bokm_doc_no = amt.bil_docno_a
    and amt.bil_rcp_type_a = '075001'
    and pmst.bokm_pat_type in('PVT_OPD','CP_OPD')
    and amt.bil_void_a = 'N'
    and (pmst.bokm_user_dept_code != '039')
    and BOOK.CREATED_ON between '01-OCT-2011' and '17-OCT-2012'
    /* and (pat.pr_curr_cont_id = :P_CONT_ID or :P_CONT_ID = '000')
    and (pat.pr_curr_prvnc_id = :P_PRVNC_ID or :P_PRVNC_ID = '00')
    and (pat.pr_curr_city_id = :P_CITY_ID or :P_CITY_ID = '000')
    and (pat.pr_curr_area = :P_AREA_ID or :P_AREA_ID = '000')
    and (pat.pr_gender = :P_GENDER or :P_GENDER = 'A')
    and (pat.pr_marital_status = :P_MARITAL_STAT or :P_MARITAL_STAT = 'ALL')
    and (to_char(pmst.bokm_panel_comp_id) = :P_PANEL_COMP or :P_PANEL_COMP = 'ALL')
    and (pmst.bokm_ref_conusltant_id = :P_CONS or :P_CONS = 'ALL')
    and (decode(pmst.bokm_panel_comp_id,'1','PVT_IPD','CP_IPD') = :P_PAT_TYPE or :P_PAT_TYPE = 'ALL')
    and BOOK.CREATED_ON between :P_RCPT_DTFR and :P_RCPT_DTTO
    -- and BOOK.CREATED_ON between :P_RCPT_DTFR and :P_RCPT_DTTO
    &LPARA_RCPT_DT */
    ) dis
    where rec.patient_mrno = dis.patient_mrno (+)
    and rec.docno = dis.docno (+)
    --and patient = 'SHAHMEER'
    group by rec.consultant, --rec.receipt_date, bysalim
    rec.patient_mrno,rec.patient,
    pay_mode,service_name,rec.docno,
    rcpt_no,company
    order by rcpt_no,rec.consultant
    ) cns
    group by cns.consultant
    order by 1
    Edited by: user6431550 on Nov 15, 2012 3:10 AM


  • SQL Developer Data Modeler notation

    Hi,
    I want to create a diagram for a database. While Data Modeler is very useful for generating DDL, I don't like that the only notation types you can choose from are Crow's Foot and Bachman. I would like to use the Min-Max ISO notation, but I don't know how; is there a plugin to allow this?
    Thanks!

    Wrong forum, please try to ask your question at the SQL Developer Data Modeler forum: SQL Developer Data Modeler

  • SQL Developer Data Modeler on Windows7?

    Hello,
    Are there plans to release a Data Modeler version that runs on Windows7? Is there an ETA?
    Thank you,
    Beatriz.

    Hi Beatriz,
    it should run under Windows 7. Can you report any problems?
    Thanks,
    Philip
