Database is very slow

Dear All,
A certain query on my database is very slow.
One of the queries sometimes does not finish at all; this query involves a big table of about 5 million records.
Some facts about my database:
OS: SUN Solaris
DataBase: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
RAM:32GB
Dedicated oracle server
Processors:16
DB block size: 2048 bytes
Large pool: 150994944 bytes (144 MB)
log_buffer: 10485760 bytes (10 MB)
shared_pool_size: 150994944 bytes (144 MB)
There are in total 21 production databases running on the same box.
Previously my buffer cache hit ratio was 27%,
so I recommended increasing DB_CACHE_SIZE from 101 MB to 300 MB
and SGA_MAX_SIZE from 600 MB to 800 MB.
As a result, the buffer cache hit ratio increased to 75%,
but the queries still run slow.
I even tried partitioning the big table; it didn't help.
My question is: is the system overloaded,
or will increasing db_cache_size further help?
Regards.

By itself the buffer cache hit ratio is a meaningless statistic. It can in fact be a misleading indicator, since it does not actually reflect application performance.
Tune the query. Make sure it is running as well as it can.
Then look at overall machine resources: average and peak CPU, memory, and IO loads.
If spare resources exist, then consider allocating more resources to the more important databases on the system.
Document any performance changes that occur after each change. It is possible that the database performance problem is latching, that is, shared pool access, and you might need space in the shared pool more than in the buffer cache. It depends on the application and user load.
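If you want a concrete data point on whether a bigger buffer cache would actually cut physical reads, 9.2 can estimate that itself. A minimal sketch follows, as an illustration only, assuming DB_CACHE_ADVICE is enabled on this instance and the DEFAULT pool is on the 2K block size:

-- Enable cache sizing advice if it is not already on (small CPU overhead).
ALTER SYSTEM SET db_cache_advice = ON;

-- After a representative workload has run, see the projected effect of
-- other cache sizes on physical reads.
SELECT size_for_estimate          AS cache_size_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name          = 'DEFAULT'
AND    block_size    = 2048        -- this database uses a 2K block size
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;

V$SHARED_POOL_ADVICE gives the same kind of projection for the shared pool if latching turns out to be the bottleneck.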
Why are you using a 2K database block size? I would think 4K or 8K would probably be better, even for a true OLTP system with almost all access by index.
To get help on the query you will need to post it, the explain plan, and information on available indexes, table row counts, and perhaps column statistics for the indexed columns and filter conditions.
HTH -- Mark D Powell --
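For reference, one common way to capture that plan and supporting detail on 9.2 is sketched below; BIG_TABLE, SOME_COLUMN, and the bind variable are placeholders for the real statement and objects:

-- Requires a PLAN_TABLE (created by ?/rdbms/admin/utlxplan.sql if missing).
EXPLAIN PLAN FOR
SELECT *                        -- placeholder: put the real slow query here
FROM   big_table
WHERE  some_column = :some_value;

SELECT * FROM TABLE(dbms_xplan.display);

-- Supporting detail reviewers will ask for: row counts and index layout.
SELECT table_name, num_rows, last_analyzed
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';

SELECT index_name, column_name, column_position
FROM   user_ind_columns
WHERE  table_name = 'BIG_TABLE'
ORDER  BY index_name, column_position;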

Similar Messages

  • Internal Disk to Disk Data Transfer Speed Very Slow

    I have a G5 Xserve running Tiger with all updates applied that has recently started experiencing very slow drive-to-drive data transfer speeds.
    When transferring data from one drive to another (internal to internal, internal to USB, internal to FW, USB to USB, or any other combination of the three) we are only getting about 2 GB/hr transfer speeds.
    I initially thought the internal drive was going bad. I tested the drive and found some minor header issues etc. that were able to be repaired, so I replaced the internal boot drive.
    I tested and immediately got the same issue.
    I also tried booting from a FW drive and I got the same issue.
    If I connect to the server over the Ethernet network, I get what I would expect to be typical data transfer rates of about 20 GB+/hr. That is much higher than the internal rates, and I am copying data from the same internal drives, so I really don't think the drive is the issue.
    I called AppleCare and discussed the issue with them. They said it sounded like a controller issue, so I purchased a replacement MLB from them. After the replacement, data transfer speeds jumped back to normal for about a day, maybe two.
    Now we are back to experiencing slow data transfer speeds internally (2 GB/hr) and normal transfer speeds (20 GB+/hr) over the network.
    Any ideas on what might be causing the problem would be appreciated.

    As suggested, do check for other I/O load on the spindles. And check for general system load.
    I don't know of a good built-in GUI I/O monitor here (and particularly for Tiger Server), though there are iopending and DTrace and Apple-provided performance scripts (http://support.apple.com/kb/HT1992) with Leopard and Leopard Server. top would show you busy processes.
    Also look for memory errors and memory constraints, and check for anything interesting in the contents of the system logs.
    The next spot after the controllers (and it's usually my first "hardware" stop for these sorts of cases, usually before swapping the motherboard) is the disks that are involved and whatever widgets are in the PCI slots. Loose cables, bad cables, and spindle swaps. Yes, disks can sometimes slow down like this, and that's not usually a Good Thing. I know you think this isn't the disks, but that's one of the remaining common hardware factors. And don't presume any SMART disk monitoring has predictive value; SMART can miss a number of these cases.
    (Sometimes you have to use the classic "field service" technique of swapping parts and of shutting down software pieces until the problem goes away. Then work from there.)
    And the other question is around how much time and effort should be spent on this Xserve G5 box; whether you're now in the market for a replacement G5 box or a newer Intel Xserve box as a more cost-effective solution.
    (How current and how reliable is your disk archive?)

  • Data Services Designer - Very Slow on VPN

    Hello,
    Any idea why Data Services Designer is very slow and often goes into a "Not Responding" state? I'm using this client tool to connect to the Data Services repository and servers via VPN.
    It takes a few minutes to load the jobs or to save changes. Sometimes it hangs.
    I wanted to know if anyone is facing similar issues, and about any workaround/setup changes to eliminate these delays...
    Regards,
    Madan
    Edited by: Madan Mohan Reddy Zollu on Mar 12, 2010 9:24 AM

    Data Services Designer communicates with the repository (to store/retrieve objects) and the job server (to execute jobs and get status/log files), so if there is a slow network connection, response time in the Designer can become problematic.
    One way to solve this is to use Citrix or Terminal Services so that your Designer runs close to the database and only screen updates are sent over the slow connection. In the Windows installation guide there is a chapter that documents how to set up Designer in a (multi-user) Citrix environment.

  • Data load becomes very slow

    Hi, after a migration from version 5 to 6.5 the data load has become very slow. With V5 the data load took 1 hour; with 6.5 it takes about 3 hours. The calculation takes the same time. Any idea?

    Too many subVIs could not be found, so I cannot give you more than some advice. But I see that you run all your loops at full speed. I do not think that is very wise. Insert the "Wait (ms)" function in all while loops, but not in the loops handling the DAQ functions, since those are controlled by an occurrence. In loops handling user input only, you may set the wait time as high as 500 ms. In more important loops use a shorter time.
    Besides which, my opinion is that Express VIs, like Carthage, must be destroyed (deleted).
    (Sorry, no LabVIEW "brag list" so far)

  • Official release of data modeler is very slow on Linux (64bit)

    I was testing the beta version on Linux (64-bit) and it was fast (reverse engineering from the database, for example), but I found that the official release of the data modeler is slower compared to the beta version. Same JDK version, same 64-bit Linux distro. Is there a memory leak in the official release? Has anyone else observed such sluggish performance with the official release of the data modeler?
    Thanks


  • Oracle VM 2.2.2 - TCP/IP data transfer  is very slow

    Hi, i've encountered a disturbing problem with OVM 2.2.2.
    My dom0 network setup (4 identical servers):
    eth0/eth1 (ixbe 10gbit) -> bond0 (mode=1) -> xenbr0 -> domU vif's
    Besides bonding setup, it's default OVM 2.2.2 installation.
    Problem description:
    TCP/IP data transfer speed:
    - between two dom0 hosts: 40-50MB/s
    - between two domU hosts within one dom0 host: 40-50MB/s
    - between dom0 and locally hosted domU: 40-50MB/s
    - between any single domU and anything outside its dom0 host: 55 KB/s.
    Something is definitely wrong here.
    domU network config:
    vif = ['bridge=xenbr0,mac=00:16:3E:46:9D:F1,type=netfront']
    vif_other_config = []
    I have a similar installation on Debian/Xen, and everything is running
    fine, i.e. I don't have any data-transfer-speed-related issues.
    regards
    Robert

    There is also an issue with the ixgbe driver in the stock OVM2.2.2 kernel (bug:1297057 on MoS). We were getting abysmal results for receive traffic (measured in hundreds of kilobytes!!! per second at times) compared to transmit. It's not exactly the same as your problem, so don't blindly follow what I say below!!!
    ### "myserver01" is a PV domU on Oracle VM 2.2.2 server running stock kernel ###
    [root@myserver02 netperf]# ./netperf -l 60 -H myserver01 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver01.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv Send Send
    Socket Socket Message Elapsed
    Size Size Size Time Throughput
    bytes bytes bytes secs. 10^6bits/sec
    87380 16384 16384 60.23 1.46
    ### Repeat the test in the opposite direction, to show TX is fine from "myserver01" ###
    [root@myserver01 netperf]# ./netperf -l 60 -H myserver02 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver02.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv Send Send
    Socket Socket Message Elapsed
    Size Size Size Time Throughput
    bytes bytes bytes secs. 10^6bits/sec
    87380 16384 16384 60.01 2141.59
    In my case, a workaround as advised by Oracle Support is to run:
    ethtool -C eth0 rx-usecs 0
    ethtool -C eth1 rx-usecs 0
    against the slaves within your bond group. This will give you better performance (in my case, got up to ~1.2GBit/s), although there are some fixes coming out in the next kernel which get even better speeds (in my tests, ~2.2GBit/s):
    Edited by: user10786594 on 11/09/2011 02:22

  • Master Data loading is very slow.

    Hi Experts,
    I have scheduled the master data attribute process chain daily. The 0EMPLOYEE_ATTR InfoObject has only 487,315 records, and it is taking more than 12 hours. There are InfoObjects that are much bigger and take only 10 minutes. 0EMPLOYEE_ATTR is increasing by only 5-10 records daily. Earlier it was taking 4-5 hours.
    Regards,
    Anand Mehrotra.

    Hi,
    The BWREMOTE or ALEREMOTE user must have the following profiles, so add them. One of these two users runs in the background to extract the data from ECC, so add these profiles in BW:
    S_BI-WHM_RFC, S_BI-WHM_SPC, S_BI-WX_RFC
    Also check the following things:
    1. Connections from BW to ECC and ECC to BW in SM59.
    2. Check the port, partner profiles, and message types in WE20 in ECC and BW.
    3. Check dumps in ST22 and SM21.
    4. If IDocs are stuck, look at the OLTP IDoc numbers on the RSMO screen in BW (details tab, at the bottom). Take the IDoc numbers, go to ECC and check their status in WE05 or WE02; if there is an error, check the log, otherwise go to BD87 in ECC, enter the IDoc numbers, execute them manually, and then check and refresh in RSMO.
    5. Check for LUWs stuck in SM58 (User Name = *), run it, look for stuck LUWs, select your LUW and execute it manually, then check in RSMO in BW.
    See in SDN
    Re: Loading error in the production  system
    Thanks
    Reddy

  • Data load is very slow

    Hi Experts,
    I am working on CRM Analytics. I am loading address data from extractor 0BP_DEF_ADDRESS_ATTR to the business partner, with 1.9 million (19 lakh) records. When I execute the DTP it takes 3 to 4 days to complete the load.
    Please provide me solution so that my data load will become fast.
    With Regards,
    Avenai

    Hi,
    Increase the number of parallel processes.
    To increase the parallel processes, from the DTP menu go to Goto -> "Settings for Batch Manager" and increase the number of parallel processes (by default it is 3; increase it to 6).
    Increase the data packet size in the DTP extraction tab.
    Do you have any routines used in the transformations? If yes, try to debug the code to see where it is taking time, and fine-tune the code with the help of an ABAP person.
    The option below may also be one of the reasons when using CRM data sources:
    The data source consists of lots of fields which are not used or mapped in the transformation; try to hide those fields, or create a copy of your data source using the BWA1 transaction in the CRM system.
    Regards
    KP

  • Large SGA issue-- insert data is very slow--Who can help me?

    I set sga_max_size to 10G and db_cache_size to 8G, but the value of db_cache_size shows as a negative number in OEM, and I also found that inserting data was very slow. I checked the OS and found no CPU consumption and no IO consumption.
    The OS is HP-UX B11.23 ia64.
    Oracle server 9.2.0.7
    Physical memory: 64G
    CPU: 8
    (Oracle server and OS are both 64-bit.)
    If I decrease the SGA to 3G and db_cache_size to 2G, the same data insert is very fast and everything is fine.
    So I guess there may be some OS parameters that need to be set for using LARGE memory.
    Does anyone know this issue, or have experience using a large SGA on HP-UX?
    Message was edited by:
    user548543

    Sounds like you might have a configuration issue on the o/s side.
    Check that the kernel parameters are set as recommended in the installation guide.
    The first thing that came to mind after reading the problem description was that you might have too low a SHMMAX for that 10GB SGA, which would cause multiple shared memory segments to be created and thus explain the performance degradation you're experiencing.
    A quick way to check if that's the case would be to run "ipcs -m" and see if there are multiple shm segments when the SGA is set to 10GB.

  • Time Capsule HDD is very slow

    My new Time Capsule's hard disk data transfer is very slow, and it takes a lot of time to copy a single item. I connected my Time Capsule through an existing wireless network instead of creating a new network. Please suggest how I can make it faster to access or copy data.

    You are connected the wrong way.
    Plug into the main router via Ethernet, with the TC in bridge mode.
    Set up the wireless for roaming with the current network, i.e. the same wireless name (SSID) and the same security settings (WPA2 AES = WPA2 Personal) and the same password.
    Or simply turn off wireless in the TC and use Ethernet for fast speeds.

  • Report running very slow.. taking too much time

    Dear Oracle Report experts,
    I have developed a report in Oracle Reports Builder 10g. When running it from Report Builder through the main menu, the data comes very slowly: it takes about 55 minutes.
    But if the same query is executed from SQL/PLSQL Developer it is very fast, finishing within 45 seconds.
    Please suggest any configuration or setting, if anyone has an idea.
    Thanks
    Muhammad Salim
    The query is below; it generates its result in 48 seconds.
    select cns.consultant,
    sum(cns.nof_pat) noof_pat,
    sum(cns.opd_amnt) opd_amnt,
    sum(cns.discount_amnt) discount_amnt,
    sum(cns.net_amnt) net_amnt,
    sum(cns.dr_share) dr_share,
    sum(cns.hosp_share) hosp_share,
    sum(cns.net_dis) net_dis
    from (
    select rec.consultant,
    count(distinct rec.consultant) nof_pat,
    -- rec.receipt_date, bysalim
    pay_mode,
    rec.patient_mrno,rec.patient,
    service_name,rcpt_no,
    company,rec.docno,
    sum(distinct return_amnt) return_amnt,
    sum(distinct rec.opd_amnt) opd_amnt,
    sum(distinct dis.discount_amnt) discount_amnt,
    (sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant)) net_amnt,
    round((((sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant) ) *
    max(dr_per))/100),0) dr_share,
    round((((sum(distinct nvl(rec.opd_amnt,0))-sum(distinct nvl(dis.discount_amnt,0))/count(rec.consultant) ) *
    max(100-dr_per))/100),0) hosp_share,
    count(distinct rec.consultant) net_dis
    from (
    select -- bokm_doc_dt receipt_date, bysalim
    bil_recept_no_a rcpt_no,
    fnc_org_sname(bokm_panel_comp_id) company,
    0 return_amnt,
    pr_mrno patient_mrno,pr_fname patient,
    bokm_doc_no docno,
    gcd_desc(bil_pay_mode_a) pay_mode,
    fnc_service_name(rslt_tst_code) service_name,
    dr_name consultant,
    pt_tst_rate opd_amnt,
    cons_share cons_share,
    (select max((nvl(rt_dr_share,0)*(100))/nvl(rt_amount,0))
    from hms_adm_dr_rt rt
    where dr.dr_id = rt.rt_dr_id
    and book.rslt_tst_code = rt.rt_scs_id) dr_per,
    dr_on_rent dr_rent,dr_share
    from hms_pat_pers pat,hms_lab_pat_mst pmst,hms_opd_book book,
    hms_pat_amnt amt,hms_adm_dr dr
    where pat.pr_mrno = pmst.bokm_mrno
    and pmst.bokm_mrno = book.rslt_mrno
    and pmst.bokm_doc_no = book.pt_pat_doc_no
    and pmst.bokm_mrno = amt.bil_mrnum_a
    and pmst.bokm_doc_no = amt.bil_docno_a
    and pmst.bokm_ref_conusltant_id = dr.dr_id
    and amt.bil_rcp_type_a = '075002'
    and pmst.bokm_pat_type in('PVT_OPD','CP_OPD')
    and amt.bil_void_a = 'N'
    and (pmst.bokm_user_dept_code != '039')
    and BOOK.CREATED_ON between '01-OCT-2011' and '17-OCT-2012'
    /* and (pat.pr_curr_cont_id = :P_CONT_ID or :P_CONT_ID = '000')
    and (pat.pr_curr_prvnc_id = :P_PRVNC_ID or :P_PRVNC_ID = '00')
    and (pat.pr_curr_city_id = :P_CITY_ID or :P_CITY_ID = '000')
    and (pat.pr_curr_area = :P_AREA_ID or :P_AREA_ID = '000')
    and (pat.pr_gender = :P_GENDER or :P_GENDER = 'A')
    and (pat.pr_marital_status = :P_MARITAL_STAT or :P_MARITAL_STAT = 'ALL')
    and (to_char(pmst.bokm_panel_comp_id) = :P_PANEL_COMP or :P_PANEL_COMP = 'ALL')
    and (pmst.bokm_ref_conusltant_id = :P_CONS or :P_CONS = 'ALL')
    and (decode(pmst.bokm_panel_comp_id,'1','PVT_IPD','CP_IPD') = :P_PAT_TYPE or :P_PAT_TYPE = 'ALL')
    &LPARA_RCPT_DT */
    ) rec,
    (select -- bokm_doc_dt receipt_date, bysalim
    pr_mrno patient_mrno,
    bokm_doc_no docno,
    nvl(bil_disc_amont_a,0) discount_amnt
    from hms_pat_pers pat,hms_lab_pat_mst pmst,hms_opd_book book,
    hms_pat_amnt amt
    where pat.pr_mrno = pmst.bokm_mrno
    and pmst.bokm_mrno = book.rslt_mrno
    and pmst.bokm_doc_no = book.pt_pat_doc_no
    and pmst.bokm_mrno = amt.bil_mrnum_a
    and pmst.bokm_doc_no = amt.bil_docno_a
    and amt.bil_rcp_type_a = '075001'
    and pmst.bokm_pat_type in('PVT_OPD','CP_OPD')
    and amt.bil_void_a = 'N'
    and (pmst.bokm_user_dept_code != '039')
    and BOOK.CREATED_ON between '01-OCT-2011' and '17-OCT-2012'
    /* and (pat.pr_curr_cont_id = :P_CONT_ID or :P_CONT_ID = '000')
    and (pat.pr_curr_prvnc_id = :P_PRVNC_ID or :P_PRVNC_ID = '00')
    and (pat.pr_curr_city_id = :P_CITY_ID or :P_CITY_ID = '000')
    and (pat.pr_curr_area = :P_AREA_ID or :P_AREA_ID = '000')
    and (pat.pr_gender = :P_GENDER or :P_GENDER = 'A')
    and (pat.pr_marital_status = :P_MARITAL_STAT or :P_MARITAL_STAT = 'ALL')
    and (to_char(pmst.bokm_panel_comp_id) = :P_PANEL_COMP or :P_PANEL_COMP = 'ALL')
    and (pmst.bokm_ref_conusltant_id = :P_CONS or :P_CONS = 'ALL')
    and (decode(pmst.bokm_panel_comp_id,'1','PVT_IPD','CP_IPD') = :P_PAT_TYPE or :P_PAT_TYPE = 'ALL')
    and BOOK.CREATED_ON between :P_RCPT_DTFR and :P_RCPT_DTTO
    -- and BOOK.CREATED_ON between :P_RCPT_DTFR and :P_RCPT_DTTO
    &LPARA_RCPT_DT */
    ) dis
    where rec.patient_mrno = dis.patient_mrno (+)
    and rec.docno = dis.docno (+)
    --and patient = 'SHAHMEER'
    group by rec.consultant, --rec.receipt_date, bysalim
    rec.patient_mrno,rec.patient,
    pay_mode,service_name,rec.docno,
    rcpt_no,company
    order by rcpt_no,rec.consultant
    ) cns
    group by cns.consultant
    order by 1
    Edited by: user6431550 on Nov 15, 2012 3:10 AM

  • My database is very slow

    Dear all,
    I have an Oracle RAC database that from time to time goes very slow, and I can't figure out why this happens or how to check it. When the DB load goes high, the application also restarts.
    At that time the load average goes very high, on only one node.

    user11876003 wrote:
    Dear all,
    I have an Oracle RAC database that from time to time goes very slow, and I can't figure out why this happens or how to check it. When the DB load goes high, the application also restarts.
    At that time the load average goes very high, on only one node.
    There is actually nothing in what you have told us to base a suggestion on. What's the DB version and O/S? The Oracle database won't restart an application just because it is slow; that you should take up with your application development team. For the database, depending on your DB version and license, take either a Statspack or an AWR snapshot pair covering not more than 30 minutes and post the report here.
    Aman....
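    For what it's worth, a rough sketch of taking the snapshots mentioned above, assuming Statspack is installed under PERFSTAT, or, for the AWR alternative, a 10g+ database licensed for the Diagnostics Pack:
    -- Statspack (works on 9i and later; run as PERFSTAT):
    EXEC statspack.snap;
    -- ... let roughly 30 minutes of the slow workload run ...
    EXEC statspack.snap;
    @?/rdbms/admin/spreport.sql    -- pick the two snapshot ids just taken
    -- AWR alternative (10g+, Diagnostics Pack licence required):
    EXEC dbms_workload_repository.create_snapshot;
    -- ... roughly 30 minutes later ...
    EXEC dbms_workload_repository.create_snapshot;
    @?/rdbms/admin/awrrpt.sql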

  • Data extraction very slow

    Hello All,
    I am trying to extract data from the init of 2lis_11_vaitm, but the extraction is very, very slow. The total data in the init setup table is only 200k records.
    One thing I noticed is that we changed the extractor to increase the number of fields to 225. All these fields were available in the BI Content delivered communication extract structure of 2lis_11_vaitm in ECC.
    Is it because of the large number of fields being extracted from ECC? 200k records are being extracted; it has been 38 hours and so far only 140k records are extracted.
    Could you please suggest how I can improve extraction performance?
    We will be moving this to production. I am afraid that the much larger number of records in production will take forever to extract.
    Thanks
    Shailini

    Yes, you are right: IO = input and output, and it generally refers to the data loading capability.
    BASIS will help to monitor the data loading capability.
    They will help to check the sizing document, prepared before system go-live, to check the "designed capability". The BASIS guys need to do some tests on data loading before the system goes live.
    And they will help to check the logs to find problems during data loading.
    In the BI 7 statistics you can find some information about that load. Discuss with the BASIS guys; they can help to analyze the problem even without the BI statistics.
    The system at hand does not load from 2lis_12_vcitm, but here is some information for your reference:
    1. Production Server: 26K records, load to 2LIS_03_BF, takes 58s in all
    2. Testing Server: 1.5 Million records, full load to 0FI_GL_10, Runtime 34m 12s

  • Slow response of my data base

    Hi Dears.
    I am working on a database project. I have 5 PCs and I have made one PC the database server machine. The network is peer to peer, so you could say there is one PC as the database server with 4 workstations accessing the database on the server machine. Initially the database response was very good, but after 4,000 records the response of the database became very slow (you could say it is hanging).
    What can I do to speed up the database response? I am worried about the slow response of the database.
    On the server machine (i.e. a PC) there is:
    256 MB RAM, 2.4 GHz processor
    Windows XP
    Oracle 8 Enterprise Edition
    and Developer 6i
    Please help me with suggestions for getting a good response.

    Couple of things:
    1) With 256 MB of RAM, it can barely be called a client machine, let alone a server machine. You seriously should have a minimum of 1 GB to get reasonable performance from the system.
    2) You are running Oracle 8. It's a really, really, REALLY old version. Any idea how much data you would store on this box? If your data is limited to under 4 GB, then you may look at Oracle XE for your project. But again, the machine's hardware resources will get in the way.
    3) It's not possible to guess the reasons for the performance degradation just like that. It may be due to a number of reasons. For example, in your case, the foremost reason is the resource crunch. Try increasing the resources, then use the box again for some time and give feedback on how it is working.
    Aman....

  • My performance is very slow when I run graphs. How do I increase the speed at which I can do other things while the data is being updated and displayed on the graphs?

    I am doing an acquisition and displaying the data on graphs. When I run the program it is slow. I think it is because I have the number of scans to read associated with my scan rate. It takes the number of seconds I want to display on the chart times the scan rate and feeds that into the number of samples to read at a time from the AI Read. The problem is that it stalls until the data points are acquired and displayed, so I cannot click or change values on the front panel until the updates occur on the graph. What can I do to help with this?

    On Fri, 15 Aug 2003 11:55:03 -0500 (CDT), HAL wrote:
    >My performance is very slow when I run graphs. How do I increase the
    >speed at which I can do other things while the data is being updated
    >and displayed on the graphs?
    >
    >I am doing an an aquisition and displaying the data on graphs. When I
    >run the program it is slow. I think because I have the number of
    >scans to read associated with my scan rate. It takes the number of
    >seconds I want to display on the chart times the scan rate and feeds
    >that into the number of samples to read at a time from the AI read.
    >The problem is that it stalls until the data points are aquired and
    >displayed so I cannot click or change values on the front panel until
    >the updates occur on the graph. What can I do to be able to help
    >this?
    It may also be your graphics card. LabVIEW can max out the CPU and your
    screen may not be refreshing very fast.
    --Ray
    "There are very few problems that cannot be solved by
    orders ending with 'or die.' " -Alistair J.R Young

Maybe you are looking for

  • COPA reporting with open and closed projects

    Dear All, I am designing a COPA solution for an infrastructure providing company using project systems and posting to/settling out of projects on a monthly basis. Projects run for long periods and continually incurr costs and earn revenue untill they

  • Is it possible to make an item not have any discount at all?

    This is the situation - A customer wants to turn off the ability for all users to give any discount for certain items. Rest of the items may have discounts based upon BP discount or any other price hierarchy discount. Users can also override this dis

  • I recently bought a WRT54G wireless router along with a W...

    I recently bought a WRT54G wireless router along with a WUSB54GC compact wireless adapter and set up both products as directed.I get a message saying connected to the access point but cannot find internet. While connecting to my internet directly,(us

  • Purchase Orders Summary, Currency Rate Field

    Recently we applied Patch INV.RUP 14, after that users notice that Purchase Order summary screen => Purchase Order Headers Block => Rate field, it shows the Foreign Currency Rate, while it used to show Local Currency rate before applying the patch. D

  • Program to activate the process chain in 3.5 version

    Hi Experts, Like we are activating the transfer structures (_TRANSTRUC_ACTIVATE_ALL), I am having some problems ... I need a program to activate the process chain in 3.5. Could anyone respond on this ASAP? Thanks KK