Avg read time exceeded on data volume

Dear experts,
We are currently running MaxDB 7.7.04.29 on Red Hat Linux 5.1 for our BW instance.
The data volumes, log volumes, and SAP binaries all reside on NetApp storage, configured on a
single LUN within a single FlexVol on a single filer.  I realize this is not the recommended
configuration, as both SAP and NetApp advise placing data, log, and binaries on separate LUNs,
and we intend to reconfigure accordingly.
That said, do you think this single-LUN configuration is the source of our I/O performance problems?
If not, what other areas should I investigate?
We are getting this warning in DBAN.prt, the output of the DB Analyzer:
W2  Avg read time 32.49 ms for 1378 reads on volume /sapdb/BWP/sapdata/DISKD0001
--Erick

Hi there!
From the DB point of view, there is no way to tell why the I/O performance is bad; the best the database can do is tell you that it is bad (or good).
Anyhow, the setup you describe has long been one of the 'usual suspects' for I/O issues, so yes, it is quite possible that your I/O problems are caused by it.
I'd propose implementing the planned change and checking the I/O performance again afterwards.
regards,
Lars

Similar Messages

  • Routine to read time dependent master data

    Hi Experts,
    I have a requirement where I have to read time-dependent master data.
    I need to write a field-level routine on "DEPTID" that reads from the 0EMPLOYEE master data with DATETO = 31.12.9999, so that I get the latest department ID for each employee.
    Below are the source fields mapped to the target InfoObjects in my transformation.
    EMPID      mapped to 0EMPLOYEE
    STARTDATE  mapped to 0DATEFROM
    ENDDATE    mapped to 0DATETO
    EMPID      mapped to DEPTID
    Could anyone please help with the ABAP code to fulfill the above requirement?
    I would appreciate a quick response.

    Cris, the approach given above will also give you only the latest department ID. I would suggest trying it in the development system with some test data to see whether it works.
    If you still feel that a routine has to be written, then map the end date and the department ID to DEPTID.
    The code should be something like this:
    IF source_fields-enddate = '99991231'.
      result = source_fields-deptid.
    ENDIF.
    Regards,
    AL
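
    If the lookup really has to read the 0EMPLOYEE master data directly, a minimal sketch of such a field routine is shown below. The table /BI0/QEMPLOYEE is the standard generated time-dependent attribute table of 0EMPLOYEE, but the attribute field /BIC/DEPTID, its type, and the assumption that EMPID already matches the 0EMPLOYEE key format are illustrative only and have to be adapted to your own InfoObjects.
    " Field routine sketch: read the latest (DATETO = 31.12.9999) department
    " for the employee in the current source record.
    DATA lv_deptid TYPE c LENGTH 8.   " adjust length/type to your DEPTID InfoObject
    SELECT SINGLE /bic/deptid
      FROM /bi0/qemployee
      INTO lv_deptid
      WHERE employee = source_fields-empid   " assumes EMPID is in 0EMPLOYEE key format
        AND objvers  = 'A'
        AND dateto   = '99991231'.
    IF sy-subrc = 0.
      result = lv_deptid.
    ENDIF.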

  • How does an InfoSpoke read time-dependent master data?

    Hello Experts !!
    How does an InfoSpoke read time-dependent master data?
    Which key date does it refer to?
    Can you please explain? I want to use this concept to write a master data lookup for the time-dependent attributes of 0MATERIAL.
    Thanks a lot!

    You can either specify the time period in the filtering area of the InfoSpoke, or you can implement the transformation BAdI OPENHUB_TRANSFORM to manipulate the data in whichever way suits your requirement. All time-dependent InfoObjects have DATEFROM and DATETO fields, which you can use to choose your date range accordingly.
    Hope this helps.

  • HT3275 I'm receiving an error message that reads "The backup disk image '/Volumes/Data/my name's iMAC.sparsebundle' is already in use."  The Time Capsule is not backing up any files.

    The Time Machine is not completing the backup to my Time Capsule.  I am receiving a message which reads "The backup disk image '/Volumes/Data/Mac Jet's iMAC.sparsebundle' is already in use."  I cannot locate the source of the problem or what steps need to be taken to get the Time Capsule operating.

    Hi, see if it shows here...
    In Finder's Menu, select Go menu>Go to Folder, and go to "/volumes". (no quotes)
    Volumes is where an alias to your hard drive ("/" at boot) is placed at startup, and where all the "mount points" for auxiliary drives are created for you to access them. This folder is normally hidden from view.
    Drives with an extra 1 on the end are a side effect of the system mounting a drive with the same name as one it already thinks exists. Try trashing the duplicates with a 1 or 2 on the end if there are no real files in them, and reboot.
    If it does contain data...
    http://support.apple.com/kb/TS2474

  • I have not been able to back up for 2 days.  Time Machine error message: "/Volumes/Data/David's MacBook Air.sparsebundle" is already in use.

    I have not been able to back up for 2 days.  Time Machine error message: "/Volumes/Data/David's MacBook Air.sparsebundle" is already in use.
    What now?
    Macbook Air.
    Was working fine for the past year until now.

    See more like this on the right: this is the most common error reported here since Mountain Lion came out.
    Or read C12 at http://pondini.org/TM/Troubleshooting.html
    Or just restart the whole network, in the right order from off: modem, then router/TC, then clients, with a 2-minute gap between each.

  • Time Capsule "Data" Volume won't Mount

    I've been using a Time Capsule (4th Gen/2TB) for both my rMBP Time Machine backups and as a NAS device. After upgrading to Mountain Lion, I've experienced some problems. For the past few weeks, my Time Machine backups have been frequently interrupted with an error message stating that my backup disk is non-journaled and non-extended, and to please choose an alternate disk.  Although unsettling, after pressing return - and upon connecting the power cord (as I opted only to back up when using the power cord) - Time Machine has always promptly resumed.  This has probably occurred daily.
    Today, however, I've been unable to actually mount the Data volume, although my network is functioning properly and I can access the Internet.  The Time Capsule status indicator is green, its icon appears in the Finder sidebar, and AirPort Utility recognizes it and displays its correct settings.  Upon trying to connect (via Finder), however, an error message states that the server (in this case, the Time Capsule, I guess) can't be located, and suggests I check my network settings and try again.  Needless to say, the settings appear fine.
    An admittedly brief search within this forum yielded no discussions concerning this specific problem, but I'm hoping the community's more knowledgeable members will be able to at least provide some helpful insights, if not a solution to this problem.
    The inability to back up my rMBP, access my sparsebundle, or manage my externally stored files is very disconcerting.  Any solutions or insights regarding this issue will be gratefully received.
    Michael Henke

    There is clearly a bug in Lion that was exacerbated in Mountain Lion, which Apple has yet to fess up to. This loss of connection happens, and there is no fix I can give you other than a reboot to get the network running again.
    I would strongly recommend against using the TC as a NAS; it is designed as a backup target for Time Machine.
    If you want a NAS, buy a true NAS; Synology and QNAP are the top products in that field. They have RAID and automatic backup, with units where you can access the hard disks and replace them. The TC is a sealed unit without any way to back itself up. It was never designed as a NAS.
    Since your TC is an older one, can you try running the 7.5.2 firmware? None of the TCs made in the last year or more can go back that far; they are all stuck on 7.6. The earlier Gen 4 did have 7.5.2, which I think has better stability.
    Now exactly what issues come up with ML I am not sure, but I would be interested to hear your experience.
    Please do a backup, particularly of your iTunes library, before you start fiddling.
    Is the TC the main router in the network, or is it bridged? I have a few suggestions for each type of setup, which may keep it running longer, but nothing is an absolute fix.
    Note also that the TC seems to be the main problem device, but the OS does still have issues; some changes were made to AFP security in Lion which have not worked very well.

  • Long time to load data from PSA to DSO -Sequential read RSBKDATA_V

    Hi ,
    It is taking a long time to load data from PSA to DSO. It is doing a sequential read on RSBKDATA_V, and the table contains no data.
    We are at SAPKW70105. It started yesterday. There have been no changes to the system parameters.
    Please advise.
    Thanks
    Nilesh

    Hi Nilesh,
    I guess the following SAP Note will help you in this situation:
    [1476842 - Performance: read RSBKDATA only when needed|https://websmp107.sap-ag.de/sap/support/notes/1476842]
    Just note that the Support Package reference in the note is wrong; the correction is also included in SAP_BW 701 SP 06.

  • HT3275 Time capsule message reads: The backup disc image "/Volumes/(my name)'s Time capsule/(My name)'s macbook pro sparsebundle" is already in use.

    My Time Machine won't back up. The message reads: The backup disk image "/volumes/(my name)'s time capsule/(my name)'s macbook pro.sparsebundle" is already in use. I'm not using anything else that I know of. Help!

    ATXMacMan wrote:
    I use OS X Server for maintaining Time Machine backups for several Macs
    Ah, that's a new wrinkle to me.  I haven't tested the Server product, but there are a number of differences.  As far as I can tell, the firmware update solved this for most, but not all, users of standard OS X.
    Are you still on Snow Leopard Server?  It's quite different from later versions, I understand.
    Have you tried selecting the base station, then Base Station > Restart via AirPort Utility's menu bar?  That still disconnects everyone, but might work by itself.
    Otherwise, sorry, but I'm out of my comfort zone. 
    Hopefully one of the networking gurus will have an idea or two.

  • Navigation attribute behaviour during time dependent master data read

    Hi All,
    My report is based on an InfoCube. There is one time-dependent master data characteristic, 0EMPLOYEE, in the cube.
    My report is run for a specific time period, for example at monthly and quarterly level.
    Many InfoObjects form part of the cube dimensions, and a few are navigation attributes.
    In the transformation there are some fields which need to be read from master data depending on a time characteristic, e.g. 0CALMONTH or 0CALDAY. This rule is mapped correctly, and the data is stored correctly in the cube for the specific time period.
    There are also some navigation attributes which read their data from the 0EMPLOYEE master data itself.
    My question is: will a navigation attribute read the latest record for the employee, or is it intelligent enough to read the right record for the specific time characteristic?
    With navigation attributes we don't have the option to specify a rule in the transformation, as we do for normal objects.
    What will the navigation attribute read: the latest record, or the specific record for the time period?
    Thanks & Regards,
    Anup

    Hi Anup,
    Let me give you a small example of how a time-dependent attribute works.
    Let us say we have 0COSTCENTER as a time-dependent attribute of 0CUSTOMER. In your transaction data you have loaded values of 0CUSTOMER, and in the query you have used the 0CUSTOMER_COSTCENTER attribute.
    Transaction data:
    Tran. no.   Customer   Amount
    1           123        500
    2           125        450
    3           126        900
    Master data:
    Customer   Cost Center   Valid From    Valid To
    123        COST1         1st Jan       15th Jan
    123        COST2         16th Jan      30th March
    123        COST3         31st March    30th June
    In the above example, the Valid From and Valid To values come from the source system, and for them you have a direct mapping in the transformation. The data will then reside in your system as shown above.
    When you use 20th Jan as the query key date, the cost center for customer 123 will be shown as COST2. This assignment of time-dependent data is done at query runtime only and has no impact on the underlying data (a lookup that reproduces this logic is sketched below).
    Regards,
    Durgesh.
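
    For completeness, here is a minimal ABAP sketch of that key-date lookup, in case you need the same logic in your own routine. The table /BI0/QCUSTOMER is the standard generated time-dependent attribute table of 0CUSTOMER and COSTCENTER its attribute field, but the customer number and key date are only example values, so adapt all names and values to your own objects.
    " Sketch: read the cost center valid for customer 123 on 20th Jan.
    DATA: lv_keydate    TYPE d VALUE '20120120',   " example key date (20th Jan)
          lv_costcenter TYPE /bi0/oicostcenter.
    SELECT SINGLE costcenter
      FROM /bi0/qcustomer
      INTO lv_costcenter
      WHERE customer = '0000000123'      " customer 123 (with leading zeros)
        AND objvers  = 'A'
        AND datefrom <= lv_keydate
        AND dateto   >= lv_keydate.
    " With the master data above, lv_costcenter now contains COST2.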

  • Maximum Time Exceed.

    Hi Guru's,
    I executed the MB5B report to find the opening and closing stock, providing the material, plant, company code, and posting date, but it took a long time to run and ended with 'Maximum time exceeded'. I also tried it in the background, but the job did not produce any spool output; the spool list is empty. A week ago the report ran fine, even when I checked a particular date. Please give me a solution.
    Thanks,
    Satish.M.R

    Dear Satish,
    Check SAP Note 25528 (Parameter rdisp/max_wprun_time) and also consult your Basis team.
    Regards,
    Syed Hussain.

  • How to measure and limit the data volume?

    Morning.
    I need a tool to measure and limit the data volume of my internet usage. My internet tariff allows a maximum of 5 GB of data volume per month. If my usage exceeds that amount, the bandwidth is reduced to only 64 kB/s, or the excess data volume must be paid for at an extraordinarily high price.
    Do you know of a tool that measures the data volume over a given time period and can alert me or limit the internet connection, for instance if the data volume used by the middle of the month has already exceeded half of the allowance for the entire month?
    Kind regards, vatolin

    You could generate a large amount of data and then use any SNMP viewer (BMC Dashboard, SolarWinds, Nagios, CiscoWorks, etc.) to see the throughput of the interfaces at peak. But why bother? Numerous research firms (Gartner etc.) have noted that Cisco is very precise about its stated throughputs.
    Regards
    Farrukh

  • ICMP and the concept of its messages (echo-reply, time-exceeded, etc.)

    If we have these types of ACLs:
    permit icmp any any echo-reply
    permit icmp any any time-exceeded
    permit icmp any any port-unreachable
    As we know, an echo-reply means that if I send an echo-request, I expect to receive an echo-reply back (i.e. an echo-reply is the response to an echo-request; to my knowledge an echo-reply cannot be initiated unless there is an echo-request - am I right?).
    1. Do all other types of ICMP messages rely on an echo-request as well (i.e. behave like an echo-reply), or are they independent?
    2. Does an ACL statement "deny icmp any any" deny all types of ICMP messages?

    Suppose I have R1 with this reflexive ACL:
    R1:
    ip access-list extended FILTER-IN
    permit icmp 10.0.0.0 0.0.0.255 any reflect GOODGUYS
    permit ip any any
    ip access-list extended FILTER-OUT
    deny udp any any eq snmp
    permit icmp any any time-exceeded
    permit icmp any any port-unreachable
    evaluate GOODGUYS
    deny icmp any any
    permit ip any any
    interface Ethernet0/1
    ip access-group FILTER-IN in
    ip access-group FILTER-OUT out
    The FILTER-IN list monitors packets as they enter the E0/1 interface. Matching traffic is captured and recorded in a temporary list called GOODGUYS.
    The FILTER-OUT list evaluates the entries stored in GOODGUYS and monitors traffic being delivered out of the E0/1 interface.
    Any TCP/IP traffic that originated from the 10.0.0.0 network is allowed to come back into the network.
    1. Will the traffic be filtered only on the basis of the ICMP protocol?
    2. How does the reflexive ACL check where a packet originated? Does it compare the destination IP address of a returned packet with the source IP address of a dispatched packet?
    3. What happens if I replace "permit icmp 10.0.0.0 0.0.0.255 any reflect GOODGUYS" with "permit ip 10.0.0.0 0.0.0.255 any reflect GOODGUYS"? Will that also permit ICMP packets?

  • Preconversion: Reducing Data Volume

    Hi,
    During the Unicode preconversion we have the strongly recommended step "Reducing Data Volume". If anyone has implemented this step, can you tell me how much time (on average) it takes to finish?
    Regards,
    Ravikanth

    Hi Ravikanth,
    This is an optional step. SAP recommends it because it can reduce the Unicode conversion runtime and downtime, since the amount of data in the tables is reduced.
    Also, data archiving is a separate project in itself, and the time for this activity depends on which tables you want to archive and how much data you want to archive. You can also reduce data yourself by deleting old background job logs, BDC logs, old spools, and old ABAP dump logs.

  • Unable to insert date and time when using date datatype

    Hi
    I am hitting a bit of a problem when using the DATE datatype. When trying to save a row to the table, it throws error ORA-01830 and complains that the date format picture ends before converting the entire input string. When I do the insert, I use the TO_DATE function with the format "dd-mon-yyyy hh24:mi:ss". Of course, when I remove the time element, everything is perfect.
    Checking SYSDATE, I noticed that the time element wasn't being displayed, so I used ALTER SESSION SET nls_date_format to set the date and time format I want to save to the table, which worked!
    Then, based on advice in a previous thread to fix the problem permanently, I used ALTER SYSTEM SET nls_date_format = "dd-mon-yyyy hh24:mi:ss" SCOPE=SPFILE; this showed that it was altered, and I can see the setting in EM. In SQL*Plus, I shut down the database and restarted it with STARTUP MOUNT and ALTER DATABASE OPEN, but selecting SYSDATE still shows the date as dd-mon-yy, and still no time! Checking EM, the nls_date_format setting is still shown as "dd-mon-yyyy hh24:mi:ss".
    So, my question is this: what am I doing wrong? Why can't I save date and time using DATE in Oracle 11g?
    Thanks

    You most certainly can save the time. A DATE column, by definition, stores both date and time. What you describe is a presentation problem, and setting nls_date_format at the system level as an init parameter is the weakest of all settings, as it is overridden in several other places.
    Without seeing the exact SQL that produced the error (not just your description of what you think you were doing), it is impossible to say for sure.
    However, I'd suggest you read http://edstevensdba.wordpress.com/2011/04/07/nls_date_format/

  • Function modules to read Time clusters B1 and B2 from PCL1 and PCL2

    Hi All
    Are there any function modules or macros to read time clusters B1 & B2?
    I want to read the time data in these clusters for reporting purposes.
    Regards,
    Rupesh Mhatre

    You can also call the FM HR_TIME_RESULTS_GET and retrieve the exact table you need from cluster B2, such as WPBP, ZE, SALDO, etc.
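    A minimal sketch of such a call is shown below; the parameter names (GET_PERNR, GET_PABRJ, GET_PABRP and the result table) are quoted from memory and are an assumption, so please verify the exact interface of HR_TIME_RESULTS_GET in SE37 before using it.
    " Sketch only: read the B2 time evaluation results for one employee/period.
    " The parameter names below are assumptions - check the FM interface in SE37.
    DATA int_time_results TYPE ptm_time_results.
    CALL FUNCTION 'HR_TIME_RESULTS_GET'
      EXPORTING
        get_pernr        = '00001234'    " personnel number (example)
        get_pabrj        = '2011'        " payroll year (example)
        get_pabrp        = '01'          " payroll period (example)
      TABLES
        get_time_results = int_time_results
      EXCEPTIONS
        OTHERS           = 1.
    " int_time_results then holds one entry per B2 cluster table (WPBP, ZE,
    " SALDO, ...), from which you can pick the table you need.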
    Otherwise, if you want to use the older FM, declare GET_TBUFF and GET_BUFFER_DIR with the structures below.
    DATA: BEGIN OF TBUFF OCCURS 5000.                           "XPMK014785
            INCLUDE STRUCTURE PCL1.
    DATA:   SGART(2),
          END OF TBUFF.
    DATA: BEGIN OF BUFFER_DIR OCCURS 2000,                      "XPMK014785
            SGART(2),
            CLIENT LIKE PCL1-CLIENT,
            RELID  LIKE PCL1-RELID,
            SRTFD  LIKE PCL1-SRTFD,
            NTABX  LIKE SY-TABIX,   "pointer to the current record
            OTABX  LIKE SY-TABIX,   "pointer to the old record (if present)
            NNUXT  LIKE PCL1-SRTF2, "number of continuation records, current record
            ONUXT  LIKE PCL1-SRTF2, "number of continuation records, old record
            OFSET(3) TYPE P,        "offset within an entry
          END OF BUFFER_DIR.
    INT_TIME_RESULTS should be of type PTM_TIME_RESULTS.
    Regards
    Ranganath
