How to improve class-loading performance for missing classes

Hi,
do you have any ideas how we can improve class-loading performance on WebLogic 12c running on JRockit? We spend about 20-30 ms per request in class-loading because of the way the Hibernate Criteria API works (it tries to load quite a lot of missing classes). The only thing I came up with was to invert the class-loading order via the WebLogic deployment descriptor for those missing classes (which are not real classes at all, but fragments of generated JPQL), and that does improve performance a bit.
Thanks
Dimo
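
For reference, this is roughly what the descriptor-based inversion looks like. A minimal sketch, assuming the application is deployed as a WAR; the descriptor elements are real WebLogic 12c elements, but the setup as a whole is an assumption, not something from the original post:

    <!-- WEB-INF/weblogic.xml -->
    <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
      <container-descriptor>
        <!-- resolve classes from the web app first, so a missing class
             fails fast instead of walking the full server classpath -->
        <prefer-web-inf-classes>true</prefer-web-inf-classes>
      </container-descriptor>
    </weblogic-web-app>

For an EAR, the more surgical equivalent is <prefer-application-packages> in META-INF/weblogic-application.xml, which inverts delegation only for the listed packages. Beyond that, caching the negative lookups on the application side (so the same ClassNotFoundException is not re-resolved on every request) is the usual complementary fix.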

Similar Messages

  • How to improve the load performance while using Datasources for the Invoice

    Hi All,
    How to improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking very long, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, the same amount of data loads within ~20 min.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srinivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package-size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS Note 557870: "Frequently asked questions on query performance".
    Also refer to these weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    and these performance docs on query:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Below is the FAQ on query performance from that OSS note:
    1. What kind of tools are available to monitor the overall query performance?
       - BW Statistics
       - BW Workload Analysis in ST03N (use Expert Mode!)
       - Content of table RSDDSTAT
    2. Do I have to do something to enable such tools?
       Yes, you need to turn on the BW Statistics:
       RSA1, choose Tools -> BW Statistics for InfoCubes
       (choose OLAP and WHM for your relevant cubes).
    3. What kind of tools are available to analyze a specific query in detail?
       - Transaction RSRT
       - Transaction RSRTRACE
    4. Do I have an overall query performance problem?
       Use ST03N -> BW System Load values to recognize the problem (you need to run ST03N in Expert Mode to get these values). Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    5. What can I do if the database proportion is high for all queries?
       Check:
       - whether the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
       - whether the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
       - whether buffers, I/O, CPU, and memory on the database server are exhausted
       - whether cube compression is used regularly
       - whether database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
       Check:
       - whether the CPUs on the application server are exhausted
       - whether the SAP R/3 memory setup is done properly (use TX ST02 to find bottlenecks)
       - whether the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
       Check whether most of your clients are connected via a WAN and the amount of data transferred is rather high.
    8. Where can I get specific runtime information for one query?
       Again you can use ST03N -> BW System Load. Depending on the time frame you select, you get historical or current data. To get to a specific query you need to drill down using the InfoCube name. Use 'Aggregation Query' to get more runtime information about a single query, and the tab 'All data' to get to the details (DB, OLAP, and frontend time, plus selected/transferred records, plus number of cells and formats).
    9. What kind of query performance problems can I recognize using the ST03N values for a specific query? (Use Details to get the runtime segments.)
       - high database runtime
       - high OLAP runtime
       - high frontend runtime
    10. What can I do if a query has a high database runtime?
       - Check whether an aggregate is suitable (use 'All data' to compare selected records to transferred records; a high ratio here is an indicator that the query could be improved with an aggregate).
       - Check whether the database statistics are up to date for the cube/aggregate; use the TX RSRV output (use the database check for statistics and indexes).
       - Check whether the read mode of the query is unfavourable (recommended: H).
    11. What can I do if a query has a high OLAP runtime?
       - Check whether a high number of cells is transferred to the OLAP engine (use 'All data' to get the value 'No. of Cells').
       - Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
       - Check whether a user exit is involved in the OLAP runtime.
       - Check whether large hierarchies are used and whether the entry hierarchy level is as deep as possible; this limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and the list-of-values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
       - Check whether a proper index on the inclusion table exists.
    12. What can I do if a query has a high frontend runtime?
       - Check whether a very high number of cells and formats is transferred to the frontend (use 'All data' to get the value 'No. of Cells'); this causes high network and frontend (processing) runtime.
       - Check whether the frontend PCs are within the recommendations (RAM, CPU MHz).
       - Check whether the bandwidth of the WAN connection is sufficient.
    Also, these threads may be helpful:
    how can i increase query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    Regards
    C.S.Ramesh
    [email protected]

  • How to improve the load performance

    Can anybody tell me how to improve the load performance?

    Hi,
    for all loads: improve your ABAP code in routines.
    for master data loads:
    - load master data attributes before the characteristic itself
    - switch number range buffering on for initial loads
    for transactional loads:
    - load all your master data IObjs prior to loading your cube / ODS
    - depending on the ratio of records loaded to records in the cube's F fact table, drop and recreate indexes (if the ratio is more than 40-50%)
    - switch on number range buffering for dimensions with a high number of records for initial loads
    - switch on number range buffering for master data IObjs which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item, ...)
    these recommendations are just some among others like system tuning, DB parameters...
    hope this helps...
    Olivier.

  • How to improve the OpenGL performance for AE

    I upgraded my display card from an Nvidia 8600GT to a GTX260+, hoping for better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the two cards with the Cinebench software, and the results are almost the same.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT while its OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL - it didn't exist at the time - but rather OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • How to Improve DSO loading performance

    Hello,
    I have a DSO with 3 InfoSources. This DSO is generic, i.e. based on generic DataSources. Daily we have a full upload (last 2 months of data). Initially it took around 55 mins to load the data, but nowadays it takes 2.5 hrs daily.
    Can you please tell me how I can improve the performance, in other words, how I can reduce the time?
    Please give some solution or document to resolve this.
    amit

    Hi,
    General tips you can try to improve the data load performance:
    1. If they are full loads, see whether you can make them delta loads.
    2. Check whether there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain, like deleting indexes/secondary indexes before loading etc.
    4. Check whether the system processes are free when this load is running.
    5. Try making the load as parallel as possible if the load is happening serially. Remove the PSA if not needed.
    6. Go to manage ODS -> activate -> activate in parallel -> increase the number of processes from there. For direct access, try TCode RSODSO_SETTINGS.
    7. Remove the BEx Reporting checkbox in the ODS if not required.
    Check the data packet sizing and also the number range buffering, PSA partition size, and upload sequence, i.e. always load master data first, perform the change run, and then the transaction data loads.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split to several less complex requests.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    ODS Query Performance  
    Thanks,
    JituK

  • How to improve this load performance

    Hi Experts,
    There is a load to an InfoObject which is full and has selections on sales document (4 ranges),
    AUART Sales Document Type (10) and PSTYV Sales document item category (12).
    The Z datasource is based on a view built on three tables: VBAK, VBAP and VBKD.
    What are the possibilities for improving the performance? Please help.
    Thanks in Advance,
    Nitya

    Why have you created a generic DataSource for the tables VBAP, VBAK, VBKD?
    You have the standard DataSource
    2LIS_11_VAITM -> VBAP, VBUP, VBAK, VBKD, VBAJP, T001, VBUK, PRPS.
    Make use of the standard DataSource; if any fields are needed beyond those provided by SAP, enhance the DataSource.
    A view is nothing but a combination of tables joined on their key fields; it will degrade the performance... try to check the view design once again...
    Are you running the InfoObject full load for the first time? Then it will take time...
    Try running a delta from the next load for the InfoObject...
    Regards
    KP

  • Improve data load performance using ABAP code

    Hi all,
    I want to improve my load performance using ABAP code; how do I do this? If I write ABAP code in SE38, how can I call it on the BW side? If you can give sample code to improve load performance, it will be useful. Please guide me.

    There are several points that can improve performance of your ABAP code:
    1. Avoid the SELECT...ENDSELECT construct and use SELECT ... INTO TABLE instead.
    2. Use a WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Use FOR ALL ENTRIES in your SELECT statement to retrieve all matching records in one shot.
    4. Avoid nested SELECTs and SELECT statements within LOOPs.
    5. Avoid INTO CORRESPONDING FIELDS OF; instead use INTO TABLE.
    6. Avoid SELECT * and select only the required fields from the table.
    7. Avoid executing the same SELECT multiple times in the program.
    8. Avoid nested loops when working with large internal tables.
    9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search.
    10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table and some fields are being manipulated.
    11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
    12. Use CASE instead of IF/ENDIF whenever possible.
    13. The runtime analysis transaction SE30 can be used to measure application performance.
    14. Transaction ST05 can be used to analyse the SQL trace and measure the performance of the SELECT statements in the program.
    15. Start routines can be used when transformation is needed at the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
    16. Always use a WHERE clause for a DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
    17. Always use IS INITIAL instead of comparing to '', because the initial value depends on the type (space for a character field, 0 for an integer).
    Hope it helps.
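
    To make tips 1, 2, 6 and 9 concrete, here is a minimal ABAP sketch; the table (MARA) and the selection values are illustrative, not from the original post:

    TYPES: BEGIN OF ty_mat,
             matnr TYPE mara-matnr,
             mtart TYPE mara-mtart,
           END OF ty_mat.
    DATA: lt_mat TYPE STANDARD TABLE OF ty_mat,
          ls_mat TYPE ty_mat.

    * Tips 1, 2, 6: one array fetch instead of SELECT...ENDSELECT,
    * restricted by a WHERE clause, selecting only the required fields.
    SELECT matnr mtart
      FROM mara
      INTO TABLE lt_mat
      WHERE mtart = 'FERT'.

    * Tip 9: sort once, then read with BINARY SEARCH.
    SORT lt_mat BY matnr.
    READ TABLE lt_mat INTO ls_mat
         WITH KEY matnr = '000000000000000042' BINARY SEARCH.
    IF sy-subrc = 0.
      WRITE: / ls_mat-matnr, ls_mat-mtart.
    ENDIF.

    The same SELECT could also take a FOR ALL ENTRIES IN <itab> clause (tip 3) when the keys come from another internal table; just make sure the driver table is not empty first, otherwise the WHERE restriction is ignored.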

  • Check data load performance for DSO

    Hi,
    Please can anyone provide the details on how to check the data load performance for a particular DSO,
    like how much time it took to load a particular number of records (e.g. 200,000) into the DSO from the R/3 system. The DSO data flow is in BW 3.x.
    Thanks,
    Manjunatha.

    Hi Manju,
    You can take help of BW statistics and its standard content.
    Regards,
    Rambabu

  • How to improve database link performance?

    Hello all,
    We use DB links to do DML operations on remote databases. For OLTP applications we are facing performance problems in transactions that depend on data in a remote database.
    For legal and business reasons we cannot store all the data locally.
    Could anybody suggest how to improve database link performance, or suggest methods/procedures/techniques to speed up OLTP applications that go against remote databases?
    Thanks
    Sky

    AQ is as reliable as Oracle-- the guarantees about delivery of queued messages are the same as the guarantees about committed transactions (i.e. ACID). AQ is designed for asynchronous operation, though. If you are batching transactions, it sounds like you are already doing some sort of asynchronous operations-- I've generally found AQ a lot easier to administer & maintain than rolling your own batching system.
    If you want to tune the Oracle side of things, you'll need to explain more about the system(s) involved here. Architecture, data flow, operations that involve the dblink, etc. If you're not comfortable posting that sort of information to a public forum, feel free to send me mail directly [email protected]
    As an aside, I'm interested in how you can legally pull data from the remote system to display to your users but that you can't legally cache that data in your system via replication. Sounds like an odd constraint.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to improve the query performance in to report level and designer level

    How do I improve query performance at the report level and at the designer (universe) level?
    Please explain in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, check your contexts very well to get the optimal performance out of the universe, and keep your joins on key fields; that will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • HOW TO USE A SINGLE PERFORM FOR VARIOUS TABLES ?

    perform test tables t_header.

    form test tables t_header.
      select konh~knumh
             konh~datab
             konh~datbi
             konp~kbetr
             konp~konwa
             konp~kpein
             konp~kmein
             konp~krech
        from konh inner join konp
          on konp~knumh = konh~knumh
        into table itabxxx         "any temporary internal table
        for all entries in t_header
        where konh~kschl = t_header-kschl
          and konh~knumh = t_header-knumh.
    endform.
    How can I use the above PERFORM for various internal tables of DIFFERENT LINE TYPES that all have the fields KSCHL and KNUMH?

    You can use a single PERFORM....
    Just see this example... hope this is what you are expecting:
    tables : pa0001.
    parameters : p_pernr like pa0001-pernr.
    data : itab1 like pa0001 occurs 0 with header line.
    data : itab2 like pa0002 occurs 0 with header line.
    perform get_data tables itab1 itab2.
    if not itab1[] is initial.
    loop at itab1.
    write :/ itab1-pernr.
    endloop.
    endif.
    if not itab2[] is initial.
    loop at itab2.
    write :/ itab2-pernr.
    endloop.
    endif.
    *&---------------------------------------------------------------------*
    *&      Form  get_data
    *&---------------------------------------------------------------------*
    *       text
    *----------------------------------------------------------------------*
    *      -->P_ITAB1  text
    *      -->P_ITAB2  text
    *----------------------------------------------------------------------*
    form get_data  tables   itab1 structure pa0001
                            itab2 structure pa0002.
    select * from pa0001 into table itab1 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    select * from pa0002 into table itab2 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    endform.                    " get_data
    Regards
    vasu
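
    If the line types really differ and only KSCHL/KNUMH are common, another option is a generically typed routine that picks the two fields out of any table via ASSIGN COMPONENT. A hedged sketch (all names illustrative, not from the posts above); it collects the keys into one lean table, against which the KONH/KONP join from the question can then run once with FOR ALL ENTRIES:

    types: begin of ty_key,
             kschl type konh-kschl,
             knumh type konh-knumh,
           end of ty_key.
    data: gt_keys type standard table of ty_key.

    form collect_keys using pt_any type standard table.
      data: ls_key type ty_key.
      field-symbols: <ls_line> type any,
                     <lv_comp> type any.
      loop at pt_any assigning <ls_line>.
        " pull the two shared fields out of the generic line
        assign component 'KSCHL' of structure <ls_line> to <lv_comp>.
        check sy-subrc = 0.
        ls_key-kschl = <lv_comp>.
        assign component 'KNUMH' of structure <ls_line> to <lv_comp>.
        check sy-subrc = 0.
        ls_key-knumh = <lv_comp>.
        append ls_key to gt_keys.
      endloop.
    endform.

    perform collect_keys using itab1[].
    perform collect_keys using itab2[].
    " ...then run a single SELECT ... FOR ALL ENTRIES IN gt_keys
    " (after checking that gt_keys is not empty).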

  • Extractor Designing to improve the Load performance.

    Hi all,
    I am extracting data from the MM application. For this I am using the LO extractor 2LIS_02_ITM, and I have enhanced it with 32 fields, which is hampering my data load performance.
    Could you please let me know how I can improve the data load performance?
    Do I need to create separate generic extractors instead of enhancing the LO extractor?
    The DSO also has many fields in it. Should I split it into 2 and create a MultiProvider for reporting?
    Regards
    KK

    Hello,
    my suggestion would be to create another generic DataSource for the logical set of fields required in BI.
    Then you can load them separately into different DSOs, and from there into a single InfoCube, or into two InfoCubes with a MultiProvider to report on them.
    Further you can check the below links:
    Extraction-Enhancement-Performance problem
    Increase dataload performance
    Dataload Performance
    Performance Enhancement for Custom Data Extractor
    Regards,
    Dhanya

  • How to Improve Report View performance

    Hi All, I have a Webi report which runs for about 3 minutes. But when I click View, the report takes about 21 seconds (on average) to open up. Any ideas on how to improve the report view performance? Does it have anything to do with server load? Are there any server settings to tweak to speed it up? Any ideas are appreciated.
    The requirement is that my web team has to strip off the Business Objects logo etc. (using the SDK) and display the report in my company's web page, so it is
    looking sort of ugly as the web page takes about 21 seconds just to display the report.
    Some Report statistics:
    Report size is about 90 MB, as it has about 300 k rows of data(which i am aggregating using formulas)
    Report has about 15 simple division formulas
    Report is in Drill Mode. There are about 5 drill filters
    Thanks,
    Kon

    Hi Larry,
    I'll assume you are scheduling this report and viewing the instance in ~21 seconds.  Is that correct?
    We definitely need some environment info to go along with this post.  Like Simone said, Product Version, Patch Level, and other OS, Hardware, App Server details would help as well.
    There are certain properties of a document that can slow down the rendering of a report but we generally have to look at the logs to determine what part of the report is taking the longest time to process.  Assuming this is an instance, I would be curious to know if it is quicker to come up if you immediately view it a second time?
    If you were to turn on a trace, you would see a number of lines like this:
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut
    2011/06/15 20:11:54.153|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut: 0
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:QTDP_StreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveMe_Serial: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveAll_Serial: 0.015
    The numbers at the end are how long the function took to run.  Generally the function gives us an idea of what the engine was doing.
    When evaluating performance issues, you can occasionally find a function in the logs that is taking long to run, and based on the function and module names, it can sometimes lead you to the reason it is taking longer than expected.
    Another good test might be to run a very basic report to see how long it takes to come up.  Even a report without a datasource would suffice as that will give you your baseline time on how long it takes to load the viewer, convert the WID file to XML and send it up through the application server to your browser.  If a test report takes 15 seconds to view, then you are really only looking at 6 seconds for this other report.
    Hope this helps and gets you started.  More environment info would help take it further.
    Thanks
    Jb

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. There are no routines in the update rules or transfer rules; direct mapping is done to the InfoObjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 77 lakh (7.7 million) records it is taking more than 10 hrs to load. Any possible solutions for improving the data load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    *+1 month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    *+3 months
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue...
    Please implement general performance improvement methods...
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
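
    For what it's worth, if the routine itself ever needs maintaining, here is a hedged, equivalent sketch of the same logic (assumed semantics: LOW = first day of next month, HIGH = last day of the month three months ahead), using a single month counter so the year roll-over is handled in one place. Variable names are illustrative; it reuses FIMA_END_OF_MONTH_DETERMINE from the original, and the READ TABLE / MODIFY handling of L_T_RANGE plus the SIGN/OPTION settings stay exactly as in the original routine:

      DATA: LV_MONTHS  TYPE I,
            LV_YYYY(4) TYPE N,
            LV_MM(2)   TYPE N,
            LV_C8(8)   TYPE C,
            LV_DATE    LIKE SY-DATUM,
            LV_DD(2)   TYPE N.

    * zero-based count of months since year 0
      LV_MONTHS = SY-DATUM+0(4) * 12 + SY-DATUM+4(2) - 1.

    * LOW: first day of next month
      LV_MONTHS = LV_MONTHS + 1.
      LV_YYYY = LV_MONTHS DIV 12.
      LV_MM   = LV_MONTHS MOD 12 + 1.
      CONCATENATE LV_YYYY LV_MM '01' INTO LV_C8.
      LV_DATE = LV_C8.
      L_T_RANGE-LOW = LV_DATE.

    * HIGH: last day of the month three months ahead
      LV_MONTHS = LV_MONTHS + 2.      " next month + 2 = current + 3
      LV_YYYY = LV_MONTHS DIV 12.
      LV_MM   = LV_MONTHS MOD 12 + 1.
      CONCATENATE LV_YYYY LV_MM '01' INTO LV_C8.
      LV_DATE = LV_C8.
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE          = LV_DATE
        IMPORTING
          E_DAYS_OF_MONTH = LV_DD.
      LV_DATE+6(2) = LV_DD.
      L_T_RANGE-HIGH = LV_DATE.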
