Application and Database Performance Issue?

Hi
I am designing tables. Can anyone suggest which is best for database and application performance?
1) One table with more columns, so the developer can work with a single query
2) Divide the table into two parts, so the developer works with two queries
3) Use table partitioning
Also, I would like to know the maximum number of records that can be stored in a table in 11g and 10g.
Regards

user9098698 wrote:
Hi
I am designing tables. Can anyone suggest which is best for database and application performance?
1) One table with more columns, so the developer can work with a single query
2) Divide the table into two parts, so the developer works with two queries -- This decision should come from normalizing your data to 3NF. Only after that is done should you consider de-normalizing it for performance, and then only after careful testing and consideration of other options.
3) Use table partitioning -- ???? (see the partitioning sketch just below this reply)
Also, I would like to know the maximum number of records that can be stored in a table in 11g and 10g.
Regards
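Since option 3 came up: a minimal sketch of what a range-partitioned table looks like. The table, columns and boundaries below are made up purely for illustration, partitioning requires the Partitioning option of Enterprise Edition, and it only helps when queries and maintenance can be confined to individual partitions:
CREATE TABLE orders (
  order_id    NUMBER       NOT NULL,
  order_date  DATE         NOT NULL,
  customer_id NUMBER,
  amount      NUMBER(12,2)
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2009 VALUES LESS THAN (TO_DATE('01-01-2010','DD-MM-YYYY')),
  PARTITION p_2010 VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);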

Similar Messages

  • Database performance issue (8.1.7.0)

    Hi,
    We have a tablespace "payin" in our database (8.1.7.0).
    This tablespace is the main tablespace of our database; it is dictionary managed and heavily accessed by user SQL statements.
    We are now facing a database performance issue during peak time (i.e. at the month end), when a number of users run several large reports.
    We have also increased the SGA sufficiently, based on the RAM size.
    This tablespace is heavily accessed for the reports.
    Now my questions are:
    Is this performance issue because the tablespace is dictionary managed instead of locally managed? When I monitor the different sessions through OEM, the number of hard parses is high for the connected users, whereas hard parses should actually be low.
    In Oracle 8.1.7.0, can we convert a dictionary-managed tablespace to a locally managed tablespace?
    By doing so, will the problem get somewhat resolved? Will it reduce the overhead on the dictionary tables and on the shared memory?
    If yes, what is the procedure to convert the tablespace from dictionary to locally managed?
    With Regards
    With Regards

    If your end users are just running reports against this tablespace, I don't think that the tablespace management (LM/DM) matters much here. You should be more concerned about the TEMP tablespace (for heavy sort operations) and your shared pool size (as you have seen hard parses go up).
    As already stated, get Statspack running and also try tracing user sessions with wait events. It might give you more clues.
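    Regarding the conversion question: from 8.1.6 onward an existing dictionary-managed tablespace can be migrated in place with DBMS_SPACE_ADMIN. A minimal sketch, assuming the tablespace really is named PAYIN and that this is rehearsed on a test copy first:
    EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('PAYIN');
    -- Verify the result
    SELECT tablespace_name, extent_management
    FROM   dba_tablespaces
    WHERE  tablespace_name = 'PAYIN';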

  • How to update this query and avoid performance issue?

    Hi, guys:
    I wonder how to update the following query to make it aware of weekend days. My boss wants the query to consider business days only. Below is just a portion of the query:
    select count(distinct cmv.invoicekey ) total ,'3' as type, 'VALID CALL DATE' as Category
    FROM cbwp_mv2 cmv
    where cmv.colresponse=1
    And Trunc(cmv.Invdate)  Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
    And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
    CBWP_MV2 is a materialized view created to tune the query. This query is written for a data warehouse application, and CBWP_MV2 is refreshed every evening. My boss wants the conditions in the query to consider only business days. For example, if (Trunc(Sysdate)-1)-39 falls on a weekend, I need to move the range start to the next business day; if (Trunc(Sysdate)-1)-37 falls on a weekend, I need to move the range end to the next business day. I should always keep the range within 3 business days, and if there is an overlap with a weekend, always push to later business days.
    Question: how do I implement this while avoiding a performance issue? I am afraid that if I use a function, it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam
    Edited by: lxiscas on Dec 18, 2012 7:55 AM
    Edited by: lxiscas on Dec 18, 2012 7:56 AM

    You are already using a function, since you're using TRUNC on invdate and whendate.
    If you have indexes on those columns, then they will not be used because of the TRUNC.
    Consider omitting the TRUNC or testing with Function Based Indexes.
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained in the links you find here: {message:id=9360003}
    Read them, they are more than worth it for now and future questions.
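    A hedged sketch of the weekend-shifting idea for the range start only (the whendate condition and the "push to later business days" rule would still need to be added); it reuses the table and column names from the question, and removing TRUNC from the column side lets an index on invdate be considered:
    WITH rng AS (
      SELECT CASE TO_CHAR(TRUNC(SYSDATE) - 1 - 39, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
               WHEN 'SAT' THEN TRUNC(SYSDATE) - 1 - 39 + 2  -- Saturday -> Monday
               WHEN 'SUN' THEN TRUNC(SYSDATE) - 1 - 39 + 1  -- Sunday   -> Monday
               ELSE TRUNC(SYSDATE) - 1 - 39
             END AS range_start
      FROM dual
    )
    SELECT COUNT(DISTINCT cmv.invoicekey) total, '3' AS type, 'VALID CALL DATE' AS category
    FROM   cbwp_mv2 cmv, rng
    WHERE  cmv.colresponse = 1
    AND    cmv.invdate >= rng.range_start
    AND    cmv.invdate <  rng.range_start + 3;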

  • What happened to PDF document 22040 – "PIX/ASA: Monitor and Troubleshoot Performance Issues"?

    Hi, does anyone know what happened to the following PDF notes from Cisco? The PDF file contains only 1 page, compared to the original notes in HTML format, which run to a few pages.
    If there is an alternative link for this document, please let me know. Thanks.
    Document ID: 22040
    PIX/ASA: Monitor and Troubleshoot Performance Issues
    http://www.cisco.com/image/gif/paws/22040/pixperformance.pdf <PDF Notes, but 1 page only?>
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a008009491c.shtml  < HTML Notes>

    Hi experts / Marcin,
    Can any one of you let me know about my question related to VPN?
    Jayesh

  • 8.1.7 database performance issue

    Hi Guys,
    I have an issue raised by a customer about database performance. The platform is HP-UX 11i and Oracle 8.1.7.
    There is a 3-tier architecture with D2K as the front end, one application/file server (Novell based) and a database server (8.1.7 on HP-UX 11i).
    I don't see any problems from a hardware and OS perspective. Oracle is using raw devices on two HP StorageWorks EVA 5000 arrays, with 3 HBAs for each EVA from the HP rp7410 server (4 x 875 MHz CPUs, 8 GB RAM, HP-UX 11i), through SAN switches.
    The system shows around 400 users. As per the DBA's feedback, the SGA is 1.8 GB, with total data file size around 135 GB (about 90 GB in terms of objects).
    How would the HP-UX kernel parameter dbc_max_pct affect Oracle operation?
    Can anyone guide me on how to trace the cause of the system slowness?
    Sam

    The DBA can run this query to see the read/write I/O activity against the datafiles; check that the I/O is well distributed across them. If it is not, you need to stripe them better.
    SQL> r
      1  select phyrds, phywrts, ceil(phyrds*100/(phyrds+phywrts)) ||
      2  '%' "%reads", ceil(phywrts*100/(phyrds+phywrts)) || '%'
      3  "%writes", d.name
      4  from v$datafile d, v$filestat f
      5* where d.file#=f.file# order by d.name
        PHYRDS    PHYWRTS %reads       %writes      NAME
       1214231     190474 87%          14%          /oracle/SIGEP/BANDEJA1/BANDEJA_2.dbf
         39854      32963 55%          46%          /oracle/SIGEP/BANDEJA1/BANDEJA_8.dbf
       2426805     279412 90%          11%          /oracle/SIGEP/BANDEJA2/BANDEJA_1.dbf
        178777      35009 84%          17%          /oracle/SIGEP/BANDEJA2/BANDEJA_3.dbf
         22530      16284 59%          42%          /oracle/SIGEP/BANDEJA2/BANDEJA_4.dbf
         48571      24085 67%          34%          /oracle/SIGEP/BANDEJA2/BANDEJA_5.dbf
         37902      35409 52%          49%          /oracle/SIGEP/BANDEJA2/BANDEJA_6.dbf
         52154      39680 57%          44%          /oracle/SIGEP/BANDEJA2/BANDEJA_7.dbf
       2858606      76561 98%          3%           /oracle/SIGEP/DATOS1/DATOS01.dbf
       2599916      70456 98%          3%           /oracle/SIGEP/DATOS1/DATOS02.dbf
          1294       1223 52%          49%          /oracle/SIGEP/DATOS1/MEDICION1.dbf
        PHYRDS    PHYWRTS %reads       %writes      NAME
          4873       1223 80%          21%          /oracle/SIGEP/DATOS1/PUNTOS1.dbf
       7115316     121538 99%          2%           /oracle/SIGEP/DATOS2/DATOS03.dbf
       1855688      51318 98%          3%           /oracle/SIGEP/DATOS2/DATOS04.dbf
       1440753      52197 97%          4%           /oracle/SIGEP/DATOS2/DATOS05.dbf
       1162709      39525 97%          4%           /oracle/SIGEP/DATOS2/DATOS06.dbf
       1160547      42985 97%          4%           /oracle/SIGEP/DATOS2/DATOS07.dbf
         19944       1223 95%          6%           /oracle/SIGEP/DATOS2/XDB1.dbf
       1042353     125964 90%          11%          /oracle/SIGEP/INDICES1/INDICES1.dbf
       1017340     116972 90%          11%          /oracle/SIGEP/INDICES1/INDICES2.dbf
       1126702     128896 90%          11%          /oracle/SIGEP/INDICES1/INDICES3.dbf
        493673      52913 91%          10%          /oracle/SIGEP/INDICES1/INDICES4.dbf
        PHYRDS    PHYWRTS %reads       %writes      NAME
         63487      28007 70%          31%          /oracle/SIGEP/INDICES1/INDICES5.dbf
         62568      29283 69%          32%          /oracle/SIGEP/INDICES1/INDICES6.dbf
         54510      25509 69%          32%          /oracle/SIGEP/INDICES1/INDICES7.dbf
         46607      27442 63%          38%          /oracle/SIGEP/INDICES1/INDICES8.dbf
         20409      13729 60%          41%          /oracle/SIGEP/INDICES1/MONITOR.dbf
        260582     255290 51%          50%          /oracle/SIGEP/MIC1/MIC1.dbf
        305669     765111 29%          72%          /oracle/SIGEP/MIC_IDX1/MIC_IDX1.dbf
          9136      34232 22%          79%          /oracle/SIGEP/SYSTEM1/DRSYS1.dbf
         95652      27735 78%          23%          /oracle/SIGEP/SYSTEM1/SYSTEM1.dbf
       2787738      21627 100%         1%           /oracle/SIGEP/SYSTEM1/USERS9_1.dbf
         24815     736947 4%           97%          /oracle/SIGEP/UNDO1/UNDO1.dbf
        PHYRDS    PHYWRTS %reads       %writes      NAME
         33128     678687 5%           96%          /oracle/SIGEP/UNDO1/UNDO2.dbf
         38029     859205 5%           96%          /oracle/SIGEP/UNDO1/UNDO3.dbf
         30881     796939 4%           97%          /oracle/SIGEP/UNDO1/UNDO4.dbf
         29158     710228 4%           97%          /oracle/SIGEP/UNDO1/UNDO5.dbf
         26750     674284 4%           97%          /oracle/SIGEP/UNDO1/UNDO6.dbf
         28553     703524 4%           97%          /oracle/SIGEP/UNDO1/UNDO7.dbf
         24657     664093 4%           97%          /oracle/SIGEP/UNDO1/UNDO8.dbf
    40 rows selected.
    Elapsed: 00:00:00.57
    SQL>
    Joel Pérez
    http://www.oracle.com/technology/experts
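    To follow up on the tracing question, a hedged sketch of enabling SQL trace for one slow session on 8.1.7 (the SID and SERIAL# values below are placeholders to be taken from v$session; the resulting trace file lands in user_dump_dest and can be formatted with tkprof):
    -- Find the session to trace ('APPUSER' is a placeholder filter)
    SELECT sid, serial#, username, program FROM v$session WHERE username = 'APPUSER';
    -- Enable timed statistics and SQL trace for that session (123/4567 are placeholders)
    ALTER SYSTEM SET timed_statistics = TRUE;
    EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 4567, TRUE);
    -- ...let the user reproduce the slowness, then switch tracing off...
    EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 4567, FALSE);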

  • CAT4900M and NetApp - Performance issue

    Hi,
    I'm struggling with a performance issue between our two NetApp FAS3170 devices.
    The setup is quite simple: each NetApp is connected via two TenGig interfaces to a CAT4900M. The 4900Ms are also connected to each other via two TenGig interfaces. Each pair of connections is bundled into a Layer 2 etherchannel, configured as a dot1q trunk. Mode is set to 'on' on both the 4900 and the NetApp. According to NetApp documentation, this configuration is supported. Across each etherchannel, VLANs 219 and 220 are allowed. Two partitions are configured on the NetApps, one being active in our primary datacenter and another in our secondary datacenter. Vlan219 and Vlan220 are configured for each of the two partitions, using HSRP for gateway redundancy.
    Neither the interfaces nor the etherchannels show any signs of misconfiguration. All links are up and the etherchannels are working as expected, well, almost. Nothing indicates packet loss, CRC errors, input/output queue drops or anything that would impact performance. Jumbo frames are not configured, although this has been discussed.
    The problem is that we're unable to achieve satisfactory performance when, for instance, performing a volume copy between the two NetApp partitions. Even though we have a theoretical bandwidth of 20 Gbps end-to-end, we never climb above 75-80 MBytes/s of actual transfer rate between the two NetApps. Performance-wise, it almost looks as if we're "scaled" down to a 1 Gig link. No QoS or other kind of rate limiting has been implemented on the 4900s, so from a network point of view the NetApps can go full throttle. The NetApp software has been updated, and the configurations for both the NetApps and the 4900s have been reviewed by NetApp engineers and given a "clean bill of health".
    The configuration for the 4900->NetApp etherchannel/interfaces is as follows:
    interface TenGigabitEthernet1/5
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface TenGigabitEthernet1/6
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface Port-channel2
    description *** Trunk Etherchannel DC1 ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    spanning-tree bpdufilter enable
    spanning-tree link-type point-to-point
    Configuration for 4900->4900 interfaces/etherchannel is as follows:
    interface TenGigabitEthernet1/1
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface TenGigabitEthernet1/2
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface Port-channel1
    description *** Site-to-Site trunk ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    spanning-tree link-type point-to-point
    Vlan10 used for mngt-purpose.
    Does anyone have similar experiences or suggestions as to why we're having these performance issues?
    Thanks
    /Ulrich
    Message was edited by: UHansen1976

    Hi,
    Thanks for your reply.
    I take it that you mean baseline performance between the two NetApps. Well, that's really out of my hands, as another department is responsible for the NetApps. I'm not aware of any performance baseline, nor have I seen any benchmark tests or anything that could give me a hint.
    Just as you suggest, I've gone through the switch setup systematically, basically starting with the physical layer and working my way up. So far, I've found nothing that would indicate a physical problem. The switchport/etherchannel setup has been verified by my peers and also verified by NetApp against the configuration on the NetApps, as well as the various best-practice documentation available. Furthermore, I haven't seen any signs of packet drops, CRC errors, massive retransmissions or anything like that, on either the switches or the NetApps.
    Recently we had a status meeting with our NetApp partner and it looks to me like they're pursuing the logical setup on the NetApps, as there are apparently a number of settings etc. that need adjustment. Also, we're waiting for NetApp tech support to comment on the traces, config dump etc. we've sent to them.
    /Ulrich

  • Application and Database on the same server

    I have a Java application and a DB2 database (SWDTEST) that reside on the same server. In the application I want to connect to the database. What would I use to do this? When the application resides on a client machine I use the "sun.jdbc.odbc.JdbcOdbcDriver" driver and can get a connection. The code looks like:
    try {
        // Load the JDBC-ODBC bridge driver class
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // Define the data source for the driver
        String wdURL = "jdbc:odbc:SWDTEST";
        String username = "UNTEST";
        String password = "PTEST";
        wdConnection = DriverManager.getConnection(wdURL, username, password);
        wdStatement = wdConnection.createStatement();
    } catch (ClassNotFoundException e) {
        // Class.forName fails if the driver class is not on the classpath
        System.out.println(e.toString());
    } catch (SQLException e) {
        System.out.println(e.toString());
    }
    But when I move the same application to the server and run it I get the following error message:
    java.sql.SQLException: [IBM][CLI Driver] SQL1013N The database alias name or database name "SWDTEST" could not be found. SQLSTATE=42705
    Do I need to set something on the server so that SWDTEST is a recognized database name or connect to it some other way?
    Thanks in advance.

    Does the server have ODBC installed on it? (Windows boxes will, unix will likely not.)
    Do you have an ODBC driver installed on the server? This has nothing to do with Java.
    Have you created a DSN on the server?

  • AIR application and database connectivity (using JAVA)

    Hi
    I am creating an AIR application and I want to connect to the database using Java database connectivity (JDBC).
    Can anybody give me some suggestions on how to do that?
    Please give me any reference for creating an AIR application with database connectivity to MySQL/Access.
    Thanks
    Sameer

    I did a lot of searching on Google and found that for AIR applications you either use web services (Java/PHP/.NET) or you use SQLite.
    I did not find any method for direct connectivity to the database using JDBC.
    If anyone has found direct connectivity to the database using Java, please reply.
    Thanks

  • For Patching full backup of application and database

    Hi
    We are using Oracle 11i (11.5.10.2) with database version 10.2.0.3 on Windows 2003.
    Every time I patch, I usually take a full backup of the database and application (DATA_TOP, DB_TOP, APPL_TOP, COMMON_TOP, ORA_TOP).
    But can I take only DATA_TOP, APPL_TOP, COMMON_TOP and ORA_TOP for patching (without DB_TOP)?
    Thanks
    Regards
    AUH

    Hi,
    But can I take only DATA_TOP, APPL_TOP, COMMON_TOP and ORA_TOP for patching (without DB_TOP)?
    For application patches, you do not need to take a backup of the database ORACLE_HOME. Back up this ORACLE_HOME only if you apply database patches (using opatch or OUI).
    Regards,
    Hussein

  • How to transfer an application and Database from Pre-Prod to Prod

    Hi,
    Could anyone please help by giving the method to transfer the application and database from pre-production to production?
    Thanks

    Depends on what version you are on. If it is 11.1.2, then you can use LCM to transfer the artifacts including data. In 11.1.1.X you can also use LCM, but you would have to export and then import the data itself.
    You could also use the Migration Wizard in EAS to migrate the artifacts and then export and import the data.
    Or you could go the file system route: first create the app and DB from EAS, stop both the source and target apps, and then do a file system copy to copy the artifacts.
    I've not tried it, but if this is a BSO cube, you could possibly back it up through the backup utilities in EAS, then restore it to the new server.

  • Database performance issue

    Hi all,
    I have a system with 64 GB RAM with Oracle 11g installed on it, running Solaris 10 as the operating system. I have assigned 16 GB to memory_target with dynamic memory management, but the performance of the system is not as expected. Can anyone guide me on the following issues?
    1. Which parameters should I take care of while tuning my database?
    2. In my application (Java), insertions into the database are taking more time.
    Can someone guide me to resolve the above issues?
    Regards
    harry

    I think you meant "log file sync" rather than "db file sync", is that right?
    Log file sync is a client event caused by too many and too frequent commits, which make the checkpoint and log writer processes do overtime. You may check whether the application code or ad hoc DML PL/SQL blocks have commits within a loop.
    You may rewrite such code to commit after every n iterations (rows) rather than committing after every iteration.
    Bad code (assume there are 5 million customers in the region NEWYORK):
    Begin
      For i in (select cust_id, cust_name, sales from customer where region='NEWYORK')
      Loop
        update today_sales set customer = i.cust_name, amount = i.sales where customer_id = i.cust_id;
        commit;
      End Loop;
    End;
    /
    You can rewrite it as:
    Declare
      v_commit_count Number := 0;
    Begin
      For i in (select cust_id, cust_name, sales from customer where region='NEWYORK')
      Loop
        v_commit_count := v_commit_count + 1;
        update today_sales set customer = i.cust_name, amount = i.sales where customer_id = i.cust_id;
        If (v_commit_count >= 10000) Then
          commit;
          v_commit_count := 0;
        End If;
      End Loop;
      commit;
    End;
    /
    This will reduce the log file sync waits.
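    A further hedged option, not in the reply above: assuming CUST_ID is unique in CUSTOMER, the same hypothetical update can be written as one set-based statement with a single commit, avoiding the loop and its per-row commits entirely:
    UPDATE today_sales ts
    SET    (customer, amount) = (SELECT c.cust_name, c.sales
                                 FROM   customer c
                                 WHERE  c.cust_id = ts.customer_id
                                 AND    c.region  = 'NEWYORK')
    WHERE  EXISTS (SELECT 1
                   FROM   customer c
                   WHERE  c.cust_id = ts.customer_id
                   AND    c.region  = 'NEWYORK');
    COMMIT;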

  • WHERE LIKE% and ASP Performance Issue

    Hi,
    I am facing an issue with my ASP application, which I use as a front-end web application connecting to a huge Oracle database.
    Basically I keep my queries within the ASP pages; one of them uses WHERE ... LIKE on more than one column.
    Example: I have Col1 and Col2, and I have created the following indexes:
    Index1 on Col1, Index2 on Col2 and Index3 on (Col1, Col2).
    From the ASP page I have Field1 and Field2 and would like to use LIKE on both fields, but the process takes so much time to return a result, not to mention the resources it uses.
    My ASP Query:
    sqlstr = "Select * From TABLE Where COL1 Like '"&field1&"%' And COL2 Like '"&field2&"%' ORDER BY Num ASC"
    Set Rs = Conn.Execute(Sqlstr)
    What should I use instead of this query to get the same result but much faster (optimized)?
    Thanks.

    If the ratio of data returned is appropriate for index access, the Oracle optimizer should choose to use it, but for further comment:
    a. I couldn't see your query in the output you provided.
    b. I need to know the data distribution: what is the ratio of the rows returned by your literals to the table's total rows? You can check it by taking a count of the indexed columns with a group-by query.
    c. I assume that your indexes are in VALID status and that you collected statistics with dbms_stats, cascading to the indexes; depending on the answer above, if your data is skewed you may additionally need histograms.
    d. If the LIKE pattern starts with '%', Oracle does not use the indexes; in that case the Text option is what you need to read about, as advised. For another smart idea on making "like '%xxxx'" use an index in Oracle, you may check http://oracle-unix.blogspot.com/2007/07/performance-tuning-how-to-make-like.html
    After you supply the query with the literals included and the data distribution, maybe as a last resort we can force index access with a hint and compare the statistics provided by the timing and autotrace options of SQL*Plus.
    ps: You may also produce a 10053 event trace to understand the optimizer's decision - http://tonguc.wordpress.com/2007/01/20/optimizer-debug-trace-event-10053-trace-file/
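    A small sketch for the trailing-wildcard case, with made-up object names (my_table, idx_col1_col2, num); binding the search values instead of concatenating them into the ASP string also avoids a hard parse for every search and removes the SQL injection risk:
    -- Composite index supporting the two search columns (names are hypothetical)
    CREATE INDEX idx_col1_col2 ON my_table (col1, col2);
    -- With a trailing wildcard only, a b-tree index range scan is possible
    SELECT *
    FROM   my_table
    WHERE  col1 LIKE :field1 || '%'
    AND    col2 LIKE :field2 || '%'
    ORDER  BY num;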

  • Can't access "Network" and other performance issues

    Hi all,
    I'm facing a catch-22 and am not sure what to do to get myself and my mac out of this downward spiral. Any help you can offer would be greatly appreciated (fyi I am admittedly not great with macs and probably don't maintain my MacBook Pro well enough).
    I've had a serious slowdown in performance starting a couple of weeks ago. The system moves at a glacial pace and most of the time is spent watching the spinning rainbow.
    To add to my issues, my ISP recently performed a system upgrade which requires me to change some settings under "Network" in my system preferences. Well, when I try to access "Network", the machine thinks for a while, then an error message appears telling me that the network preferences has shut down unexpectedly. When I hit "retry" the error message eventually reappears.
    So, because of the upgrade, I can't access the internet to download any repairs for the machine, and because of the problems with the machine, I can't modify the preferences to access the internet. Obviously a vicious cycle which someone of my expertise level is struggling to solve.
    Side notes
    - I've passed the 90-day phone support period so can't call for help.
    - I'm on my work computer now. Could I download some repair/diagnostic tool here, then run it on my machine at home?
    If anyone can throw me a lifeline I would be grateful!
    MacBook Pro 15"   Mac OS X (10.4.3)  

    OZ 99,
    For logic's sake, I'm going to take these out of order a bit:
    2) A "disk error" occurs when the "file system" (sometimes called the "disk directory") becomes damaged. This is data that is written to the HD, so yes, it could be considered a software error. Your file system is, basically, a map of your physical HD, and it indexes the location on the drive of all the other files. When it is damaged for whatever reason, your disk "forgets" where some amount of data lives. Because of this, the associated files become damaged, or "corrupt." If those files happen to be critical components of the OS, bad things can happen. At worst, the disk will become unmountable, and all of your files unrecoverable.
    1) Disk errors can be caused by several things. Sometimes, one or more "blocks" (let's call them physical locations on the disk) on your HD can become physically damaged. Whether this is because of a slight flaw in manufacturing, a scratch, magnetic particles that lose their "oomph," whatever, matters not. What is important is that some data is lost. Because the file system still believes there is data living in this location, it (the file system) is no longer reliable; it is damaged. While the initial loss of data could be considered hardware-related, the disk error is not. I'll come back to this.
    Another potential cause is some random error in the process of writing data to the disk. Again, this is a software problem, not a hardware problem. The most common cause for this occurs when your computer is shut down improperly, either a forced shutdown or a power loss. Journaling, which is the default for an OS X boot volume, goes a long way toward automatically fixing these types of disk errors, but it is not always a guarantee.
    If you have had your MBP for only a short period of time, it is not surprising that a disk error has occurred, and probably because of a bad block. Absolutely flawless drives are rare, and many computers ship with incipient disk errors. For this reason, many people like to format any new drive, even one in a new computer, right out of the box (I'll get to reasons why this is a good thing to do).
    3) Yes. Disk Utility can check or repair your file system. Any repairs must be made using Disk Utility while booted to the OS X install disk. Your HD can be "verified," however, while booted to the HD. Simply open Disk Utilty (in the "Utilities" folder), select your startup disk, then click "Verify" in the "First Aid" pane.
    4) Yes, you will have to reinstall all of your applications after formatting and reinstalling. Formatting erases everything on the volume or drive selected. Settings and data for all of those applications can be saved, however, then transferred back to the MBP after reinstalling OS X. Once the applications, themselves, have been reinstalled, you will be right back where you started. I can talk about making a comprehensive backup in another post, if you like.
    DISK UTILITY: In my first post, I recommended that you select your entire drive, then using the "Zero All Data" option. This process takes a considerably longer amount of time (as much as an hour and a half, depending on the size of your drive), but it has one big advantage. When this option is used on an entire physical drive (also called a "device"), it will scan for those pesky bad blocks, and "map out" any it finds. Since these bad spots on the disk will not be included in the new file system's list of "useable" locations, your chances of encountering another disk error in the near future is drastically reduced. So, even though a bad block could be considered a hardware error, management of them is handled by software.
    Scott

  • Oc4j and database connections issue (way too many created)

    I seem to be experiencing a serious issue every time someone connects to my application. New connections are made to the database instead of using existing connections (i.e. pooling doesn't seem to work). In my data-sources.xml, the pools are set to have a max-connections of 3, yet when I query the v$session view, there are additional sessions created every time the application is hit, which has sent it way over the max-connections parameter in the data-sources.xml file. If OC4J is pooling the connections, how can this happen? Is there any way of tracking where this is occurring or what could be causing this? (SQLNET.EXPIRE_TIME has been set, the database is 10g, OC4J is 101202 and runs on a Linux platform.)
    Thanks
    Paul

    Paul,
    According to the documentation for OC4J 10.1.2, if you use the value of the "ejb-location" attribute (from the "data-sources.xml" file) when doing a JNDI lookup of your datasource, then you should be using connection pooling.
    It's probably also a good idea not to define a "pooled-location" attribute in your "data-sources.xml" file.
    Also, you can get database connection information via the "Spy" [Web] application. Use the following URL:
    http://oc4jHost:oc4jPort/dmsoc4j/Spy
    Of-course you need to replace "oc4jHost" and "oc4jPort" with the appropriate values. For example if you open your Web browser on the same machine that OC4J is running on, and if you are using the default port, then the URL will be:
    http://localhost:8888/dmsoc4j/Spy
    Good Luck,
    Avi.
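    On the database side, a quick hedged check (not from the reply above) is to group v$session by connecting machine and program to see which client is opening the extra sessions; the TYPE filter excludes Oracle's own background processes:
    SELECT machine, program, COUNT(*) AS sessions
    FROM   v$session
    WHERE  type = 'USER'
    GROUP  BY machine, program
    ORDER  BY sessions DESC;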

  • Infoview Reports timing out, Freezing and other performance issues

    Hi all,
    I'm a new user and have been tasked with a project to try and get to the bottom of a problem our users have when using InfoView on our network. I have very little experience in this kind of thing, so this is a research-and-learn exercise for me. Below is what one of my colleagues has sent me to research:
    You may have noticed reports timing out, freezing etc.
    We need to recognise what's causing it and any solutions we can apply.
    It may be down to memory issues on local machines, the cost of queries on the databases, or even the Java versions used when editing or viewing Webi reports.
    Other considerations could be network issues - are the problems just site-specific?
    • Identify reports that are causing problems. Is it down to the queries used, and can these be optimised to run faster? Our DBA can assist with this using his query profiler (Oracle). Also, reports that have many tabs appear to take longer to open/edit - is this memory or Java?
    • Research the web for known issues/fixes.
    • Raise topics on forums such as BOB and the SAP forum, explaining the above. Can we find out what causes it, which Java versions we should be using, the recommended amount of memory (RAM), etc.?
    At the moment I'm busy getting a list of reports our users are having problems with, along with any error messages etc. In order to get help from the gurus on here I would obviously need to supply more information on our setup, so if anyone can help, just tell me what information you need and I will get it for you.

    Hi,
    Here's my best suggestion: use a monitoring tool like the Remote Support Component: www.service.sap.com/remote-supportability
    You can use this utility to diagnose your system and all aspects of its latency.
    regards,
    H
