IP SLA Request size (ARR data portion) UDP Jitter

Looking through the documentation for ip sla udp-jitter, Cisco says these are the default values:
"By default, ten packet frames (N), each with a payload size of 10 bytes (S), are generated every 10 ms (T), and the operation is repeated every 60 seconds (F)."
However, looking at show ip sla configuration on my router, I see different values:
ip sla 1001
udp-jitter 2.2.2.2 20000 source-ip 1.1.1.1
ip sla schedule 1001 life 3000 start-time now
sh ip sla configuration
Entry number: 1001
Owner:
Tag:
Operation timeout (milliseconds): 5000
Type of operation to perform: udp-jitter
Target address/Source address: 1.1.1.1/2.2.2.2
Target port/Source port: 20000/0
Type Of Service parameter: 0x0
Request size (ARR data portion): 32
Packet Interval (milliseconds)/Number of packets: 20/10
This is confusing to me. According to this, the operation is sending 10 frames of 32 bytes each, with 20 ms between frames. Is this correct? Are these the default parameters for UDP jitter? If so, they definitely do not match the data presented on cisco.com.
thanks,
Alex

Hi Alex,
For UDP jitter, the default request data size is 32 bytes; however, you can change it:
NMS-6500(config-ip-sla-jitter)#request-data-size ?
  <16-1500>  Number of bytes in payload
For ICMP echo, the default request data size is 28 bytes.
Thanks,
Afroz
[Do rate the useful post]
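For anyone who wants to try it, here is a minimal sketch of changing the payload size (the addresses and the 64-byte value are just examples, not defaults):

```
ip sla 1001
 udp-jitter 2.2.2.2 20000 source-ip 1.1.1.1
 request-data-size 64
ip sla schedule 1001 life 3000 start-time now
```

You can then confirm the change with "show ip sla configuration 1001" by looking at the "Request size (ARR data portion)" line.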

Similar Messages

  • Maximum package size for data packages was exceeded?

    Hi Experts,
    I am facing the problem "Maximum package size for data packages was exceeded" when I try to load. I even tried reducing the data packet size and changing the DTP settings in extraction to "Get All New Data Request by Request", but the same error still occurs. Can you please shed some light on this?
    Thanks,
    Krishna

    You can refer to the below OSS note:
    Note 1144332 - Consulting note: Message RSBK 250: Package size exceeded
    And other related notes: 352038, 417307
    Hope this helps.
    Murali

  • What's the maximum size of data a coherence cluster can hold?

    What's the maximum size of data a coherence cluster can hold before it starts noticing a degradation in performance?
    Assume a partitioned topology is used with only one backup for each partition.

    Hi,
    Coherence partitioned cache is designed for linear scalability, and it does this quite well. I don't see any reason for performance degradation as data size increases, given that you have enough cores and memory for processing the requests and managing the data.
    Cheers,
    _NJ

  • How to extract the size and date from a given file

    Hi,
    I want to extract the size and date of a file (it can be a video, audio or text file) whose location the user points to, but I am not sure how. Does Java have an API that can do this? If not, is there some other way of doing it? Can anyone help? Thanks in advance.

    Have a look at java.io.File, specifically
    public long lastModified()
    (and public long length() for the size in bytes). The format returned (I find) is nasty, so use java.util.Date (or java.sql.Date; they look the same on the surface to me) to format it.
    Cheers,
    Radish21
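    For what it's worth, a minimal sketch of this approach (java.io.File's length() and lastModified(), formatted via java.util.Date; the file name is just a placeholder):

```java
import java.io.File;
import java.util.Date;

public class FileInfo {
    public static void main(String[] args) {
        // The path is a placeholder; pass any file on the command line
        File f = new File(args.length > 0 ? args[0] : "example.txt");
        if (f.exists()) {
            // length() returns the size in bytes;
            // lastModified() returns milliseconds since the Unix epoch
            System.out.println("Size: " + f.length() + " bytes");
            System.out.println("Modified: " + new Date(f.lastModified()));
        } else {
            System.out.println("No such file: " + f);
        }
    }
}
```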

  • Variable Size Item Data in Purchase Requisition

    Hi
    Whenever a Purchase Requisition for a variable-size material is
    raised, how do I view, for reference, the variable-size item data from
    the bill of materials?
    Steps for reconstruction:
    1. Create a variable-size item bill of material.
    2. Run MRP against the variable-size material's dependent demand.
    3. View the purchase requisition for the variable-size item and take
    note of the item data.

    Thread closed

  • Error "cannot load request real time data targets" for new cube in BI 7.

    Hi All,
    We have recently upgraded our SCM system from 4.1 to SCM 7.0, which incorporated BI 7.0.
    I am using BI 7.0 for the first time and have the following issue:
    I created a new InfoCube and a flat-file DataSource, and successfully created the transformation and Data Transfer Process. Everything looked fine. I added the flat file, checked the preview, and could see data. But when I start the job to load data into the InfoCube, the following error is shown: "cannot load request real time data targets".
    I checked the cube type in the InfoCube settings; it shows as Standard. When I double-clicked on the error, the following message showed up:
    You are trying to load data into a real-time InfoCube using a DTP.
    This is only possible if the correct load settings have been defined for the InfoCube.
    Procedure
    In the object tree of the Data Warehousing Workbench, call Load Behavior of Real-Time InfoCube from the context menu of the InfoCube. Switch load behavior to Transactional InfoCube can be loaded; planning not allowed.
    I did not understand what this means or how to change the settings. Can someone advise and walk me through it?
    Thanks
    KV

    Hi Kverma,
    Real-time InfoCubes can be filled with data using two different methods: using the transaction for entering planning data, or using BI staging, in which case planning data cannot be loaded simultaneously. With a real-time cube you can select the update method as either:
    Real-Time Data Target Can Be Loaded With Data; Planning Not Allowed, or
    Real-Time Data Target Can Be Planned; Data Loading Not Allowed.
    You can change this behaviour by right-clicking the cube, selecting Change Real-Time Load Behaviour, and choosing the first option. You will then be able to load the data.
    Regards,
    Kams

  • Customizing tables not asking for Customizing Request while saving data

    Hi,
    I have some customizing tables in my development server (Delivery Class = 'C').
    These always used to ask for a Customizing Request whenever data was saved in them.
    Suddenly, I have noticed they are no more asking for any Customizing Request. I cross-checked in the Transport Organizer and confirmed that there are no customizing requests of mine which may be already holding any data entries of these tables.
    I wonder why this may be happening (I believe it to be some Basis configuration issue; I also asked my Basis guy, but he has no clue).
    Kindly suggest.
    Thanks,
    Z

    Thanks Navneet and Gautham.
    My problem is now solved. Let me summarize the problem and the solution now.
    -> The customizing tables suddenly stopped asking for a request while saving data.
        Somehow the settings had been reset; as Gautham pointed out, this is corrected in transaction SCC4.
    -> Most of the tables then worked fine, but a few still didn't ask for requests.
        Here I found that the developers had chosen "no, or user, recording routine" instead of "standard recording routine". For such tables, checking in transaction SE54 under Environment -> Maintenance Objects -> Change shows the transport category 'No Transport'. Regenerating the maintenance generator with "standard recording routine" fixes this, and the tables now ask for a customizing request.
    Thanks, both, for the quick response.

  • Listing File, path, sizes and dates

    Hi all
    Case: I have 6 machines on one network (some with 2 drives), another on another network (connected by vpn) and 3 external drives.
    These have all accumulated stuff, some of it repeated as I've upgraded and re-purposed machines but left directories behind as a safeguard.
    I need to rationalise the space and develop a more systematic approach to backup and archiving.
    My first step is to do an inventory of what's where, and my natural approach is to work with a database of file name, path, size, created date and last modified, to allow me to do some maths on archiving to DVD.
    Using find as follows
    find [Start Somewhere Directory] -print > [workspace path]/TestOutput.txt
    gives me a nice list of file names I can parse out in a database,
    but I don't get sizes and dates.
    Adding -ls produces that info, but it creates header lines for parent directories and adds permissions, inode and other info I don't need.
    I got to here
    find [Start Somewhere Directory] -type df -exec ls -lshk {} \; -print > [workspace path]/TestOutput.txt
    but still has the hierarchical output rather than flat paths.
    Am I doing this the hard way? Is there a tool that returns just what I'm looking for?
    Or what command will let me take just the relevant columns from ls into the output?
    Or can I extend find to add the size and date info to the output?
    Kind Regards
    Eric

    Eric
    I just so happened to have done something similar before!
    It relies on mdls so isn't exactly speedy, but produces a full path, size, modification date, modification time, creation date and creation time as a comma separated list. mdls is not exactly predictable as to which order you get its output, so basically you have to try first without any editing.
    Anyway, here it is:
    sudo find ~/testfolder -type f \! -name ".*" -exec echo 'mdls -name kMDItemFSSize -name kMDItemFSCreationDate -name kMDItemFSContentChangeDate "{}" | tr "\n" "," | sed "s%^\(/.*\) .*ChangeDate = \(....-..-..\) \(.*\) .CreationDate.= \(....-..-..\) \(.*\) ...00,.FSSize.= \(.*\),$%\"\1\",\6,\2,\3,\4,\5%"' \; | sh
    I'm sure it could be improved!
    You could also do it with AppleScript, since that can access the creation date easily.
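    If a small cross-platform script is acceptable, a Java sketch can also emit a flat CSV of path, size and modification date for every regular file under a root directory (the directory argument is a placeholder; this assumes a Java runtime is installed):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.stream.Stream;

public class Inventory {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        // One CSV row per regular file: "full path",size-in-bytes,last-modified
        try (Stream<Path> walk = Files.walk(root)) {
            walk.filter(Files::isRegularFile).forEach(p -> {
                try {
                    BasicFileAttributes a =
                        Files.readAttributes(p, BasicFileAttributes.class);
                    System.out.println("\"" + p.toAbsolutePath() + "\","
                        + a.size() + "," + a.lastModifiedTime());
                } catch (IOException e) {
                    System.err.println("skipped " + p + ": " + e.getMessage());
                }
            });
        }
    }
}
```

    The output imports directly into a database or spreadsheet, with no parent-directory headers and no permission columns to strip out.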

  • Is it possible to save a data portion starting from when a "save" button is pressed?

    Hi,
    The attached VI acquires data from a sensor. The sensor signal output can be reset to zero and the data saved to a file. However, the saved data comprises everything from when the VI starts running until the "stop" button is pressed. Is it possible to save only the data portion from when the "save" button is pressed until the "stop" button is pressed?
    Best regards,
    Ninjatovitch
    Solved!
    Go to Solution.
    Attachments:
    Sensor Signal Acquisition.vi ‏52 KB

    this should be in 8.2
    Harold Timmis
    [email protected]
    Orlando,Fl
    *Kudos always welcome
    Attachments:
    RandomNumberSubVI.vi ‏8 KB
    savefilewhenselected.vi ‏11 KB
    f.txt ‏1 KB

  • Strange error in requests returning huge data

    Hi
    Whenever a request returns a huge amount of data, we get the following error, which is annoying because there is no online source or documentation about it:
    Odbc driver returned an error (SQLFetchScroll).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46073] Operation 'stat()' on file '/orabi/OracleBIData/tmp/nQS_3898_715_34357490.TMP' failed with error: (75) ¾§ . (HY000)
    Any idea what might be wrong?
    Thanks

    The TMP folder is also the "working" directory for cache management (not exclusively). OBIEE tries to check whether the report can be run against a cache entry using the info in this file.
    Check whether MAX_ROWS_PER_CACHE_ENTRY and MAX_CACHE_ENTRY_SIZE are set correctly.
    Regards
    John
    http://obiee101.blogspot.com

  • Maximum package size for data packages was exceeded and Process terminated

    Hello Guru,
    When I execute the process chain, I get the message "Maximum package size for data packages was exceeded and process terminated". Can anybody help me with how to proceed in this case?
    Thanks & Regards,
    Suresh.

    Hi,
    When the load is not getting processed due to a huge volume of data, or too many records per data packet, please try the options below.
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to the PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome the issue.
    You can also try RSCUSTV* (where * is an integer) to change the data-load settings.
    To change the data package size for extraction, use transaction RSCUSTV6.
    To change the data package size for uploads from an R/3 system, set the value in R/3 Customizing: transaction SBIW -> General settings -> Maintain Control Parameters for Data Transfer (source-system specific).
    Hope this helps.
    Thanks,
    JituK

  • "Maximum package size for data packages was exceeded".

    Hi,
    We are getting the below error.
    "Maximum package size for data packages was exceeded".
    In our scenario we are loading the data product-key-wise (the product key is also a semantic key) into the DSO through a start routine.
    The logic in the start routine calculates the unique product counts per product key. Hence we are trying to
    group the product keys through semantic groups.
    Ex: In this example the product counts should be A = 1, B = 2, C = 1
      Product Key | Products
      A           | 1000100
      B           | 2000100
      C           | 3000100
      B           | 2000300
      C           | 3000100
    For some product keys the data is so huge that we cannot load it, and we get the error.
    Please suggest an alternate way to handle this through code or by introducing another flow.
    Regards,
    Barla

    Hi,
    We can solve the issue by opening up the system setting for the data package size. As below, we created two programs: one to open the system settings, and one to close them afterwards.
    1. Start program:
    data: z_roidocprms like table of roidocprms.
    data: wa like line of z_roidocprms.
    wa-slogsys = 'system_client'. wa-maxsize = '50000'. wa-statfrqu = '10'.
    wa-maxprocs = '6'. wa-maxlines = '50000'.
    insert wa into table z_roidocprms.
    modify roidocprms from table z_roidocprms.
    2. Close program:
    data: z_roidocprms like table of roidocprms.
    data: wa like line of z_roidocprms.
    wa-slogsys = 'system_client'. wa-maxsize = '50000'. wa-statfrqu = '10'.
    wa-maxprocs = '6'. wa-maxlines = '50000'.
    insert wa into table z_roidocprms.
    modify roidocprms from table z_roidocprms.
    For the data load, the InfoPackage settings have to be maintained as below.
    We created the process chain as follows:
    1. start program
    2. data load InfoPackage
    3. close program
    This might fix the problem.
    Regards,
    Polu.

  • No of requests in a Data Target

    Hi
      How/where can I see the count of requests loaded into a Data Provider in BI 7.0?
    Thanks

    Hi,
    RSICCONT is the table used to check the requests at the data-target level (Cube and DSO). Also, if you are not able to delete a request from the data targets in PRODUCTION (sometimes you will not have authorization for deleting requests), you can delete the request via the mentioned table.
    Regards
    Ram.

  • Repl. Partitioning BSO to ASO: increase of size of .dat file in temp-folder

    Hello,
    we are shifting data from a BSO Cube to an ASO cube via replicated partitioning. The partitioning takes about 50 minutes to execute.
    Size of .dat in metadata-folder: 8 mb
    Size of .dat in default-folder: 150 mb
    Size of .dat in temp-folder: 38 gb
    Does anyone have an explanation for the enormous size of the .dat file in temp-folder?
    Many thanks in advance!
    Michael

    I am doing the same BSO to ASO. My ess00001.dat in default is 1.9GB, in metadata it is 8.2MB, the OTL file in <db> is 18MB and the outline has about 10,000 members (rough guess). Our partition replication script looks like this:
    login <user> identified by <password> on <server>;
    spool on to <logfile>;
    refresh replicated partition <srcBSO_App>.<srcBSO_DB> to <tgtASO_App>.<tgtASO_db> at <server> updated data;
    Exit;
    I have a second process running in a task scheduler that is continuously updating the aggregates in the ASO cube. Perhaps that is cleaning out my temp .dat. The MaxL command it calls is:
    execute aggregate selection on database <tgtASO_App>.<tgtASO_db> based on query_data;
    Please check out the post I put on the other thread about how we run MaxL from a calc script and other thoughts on "round tripping" Planning-ASO-Planning. Another trick: Retrieve speed is dramatically improved by disabling and working around the @XREFs.

  • IP-sla udp-jitter / one-way delay no output

    Hi *,
    i have a question regarding "ip sla udp-jitter".
    On some connections I get output from "show ip sla stat" for the _one-way delay_;
    on other links I don't get any output. The configuration is always the same and the probes are running.
    NTP is configured, but in my opinion whether I get output for the _one-way delay_
    or not depends on the NTP root dispersion.
    Is there a maximum allowed time difference between the two routers?
    Here is one working and one non-working output from the same router, but with different peers:
    Not working:
    Latest operation return code: OK
    RTT Values:
    Number Of RTT: 100
    RTT Min/Avg/Max: 11/11/13 milliseconds
    Latency one-way time:
    Number of Latency one-way Samples: 0
    Source to Destination Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Destination to Source Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Working:
    Latest operation return code: OK
    RTT Values:
    Number Of RTT: 100
    RTT Min/Avg/Max: 12/13/14 milliseconds
    Latency one-way time:
    Number of Latency one-way Samples: 100
    Source to Destination Latency one way Min/Avg/Max: 6/7/8 milliseconds
    Destination to Source Latency one way Min/Avg/Max: 5/6/7 milliseconds
    I hope one of you can help me to find / fix the problem,
    Thanks in advance / Emanuel

    Hi everyone,
    I have the same doubt.
    I did an IP SLA configuration on an 1841 and a 7206VXR, and nothing shows in the one-way delay.
    ----------------------7206---------------------
    -ip sla monitor responder
    -ip sla monitor 1
    - type jitter dest-ipaddr 10.9.105.14 dest-port 16384 source-ipaddr 10.8.20.102  codec g711alaw
    - tos 184
    -ip sla monitor schedule 1 start-time now
    -ntp peer 10.9.105.14
    HOST)#show ip sla sta
    Round Trip Time (RTT) for       Index 1
            Latest RTT: 507 milliseconds
    Latest operation start time: 10:57:36.619 UTC Sun Oct 10 2010
    Latest operation return code: OK
    RTT Values:
            Number Of RTT: 1000
            RTT Min/Avg/Max: 125/507/846 milliseconds
    Latency one-way time:
            Number of Latency one-way Samples: 0
            Source to Destination Latency one way Min/Avg/Max: 0/0/0 milliseconds
            Destination to Source Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Jitter Time:
            Number of Jitter Samples: 999
            Source to Destination Jitter Min/Avg/Max: 1/1/6 milliseconds
            Destination to Source Jitter Min/Avg/Max: 1/5/23 milliseconds
    Packet Loss Values:
            Loss Source to Destination: 0           Loss Destination to Source: 0
            Out Of Sequence: 0      Tail Drop: 0    Packet Late Arrival: 0
    Voice Score Values:
            Calculated Planning Impairment Factor (ICPIF): 17
            Mean Opinion Score (MOS): 3.84
    Number of successes: 38
    Number of failures: 0
    Operation time to live: 1347 sec
    -------------------------1841-------------------------------
    -ip sla monitor responder
    -ip sla monitor 1
    - type jitter dest-ipaddr 10.8.20.102 dest-port 16384 source-ipaddr 10.9.105.14 codec g711alaw
    - tos 184
    -ip sla monitor schedule 1 start-time now
    -ntp peer 10.8.20.102
    3383)#show ip sla monitor statistic
    Round trip time (RTT)   Index 1
            Latest RTT: 614 ms
    Latest operation start time: 10:50:50.491 UTC Wed Oct 27 2010
    Latest operation return code: OK
    RTT Values
            Number Of RTT: 999
            RTT Min/Avg/Max: 347/614/867 ms
    Latency one-way time milliseconds
            Number of one-way Samples: 0
            Source to Destination one way Min/Avg/Max: 0/0/0 ms
            Destination to Source one way Min/Avg/Max: 0/0/0 ms
    Jitter time milliseconds
            Number of SD Jitter Samples: 997
            Number of DS Jitter Samples: 998
            Source to Destination Jitter Min/Avg/Max: 0/6/19 ms
            Destination to Source Jitter Min/Avg/Max: 0/1/3 ms
    Packet Loss Values
            Loss Source to Destination: 1           Loss Destination to Source: 0
            Out Of Sequence: 0      Tail Drop: 0    Packet Late Arrival: 0
    Voice Score Values
            Calculated Planning Impairment Factor (ICPIF): 20
    MOS score: 3.72
    Number of successes: 32
    Number of failures: 0
    Operation time to live: 1668 sec
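    For what it's worth, the one-way samples in both outputs above stay at 0 because the probe can only compute one-way latency when the sender's and the responder's clocks are synchronized closely enough, which is why NTP state matters here. A minimal sketch, with 10.0.0.1 standing in as a placeholder for a common NTP source, would be to point both routers at the same server and confirm synchronization before re-checking the statistics:

```
! On BOTH the sender and the responder:
ntp server 10.0.0.1
!
! Confirm "Clock is synchronized" on each box before judging the one-way numbers:
! show ntp status
```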
