Fragment problem

Hi
Our XDPs are rendered on our site as either an HTML form or a PDF form, depending on which the client wants.
We've started using simple fragments for the address page at the end of the form.
Say there are 30 different XDPs, all referencing the same fragment XDP, sitting on the server in the same directory.
When we made a change to the fragment, we ran into a problem: the rendered PDF form did not show the revised
fragment.
I've read that previewing the PDF in Designer would work, but that doesn't seem to be the case with a rendered PDF.
Also, would I have to open and preview all 30 XDPs for this to work?
If so, why use fragments if you have to open and upload every XDP?
thank you
Karl Nowak

Fragments are resolved at runtime. If your 30 forms are saved as XDPs and then rendered when the user requests them, the
fragments will be resolved and any changes to them will be picked up. If you saved the 30 forms as PDFs, then they are already rendered and the fragment has already been resolved (at the time of PDF creation).
Make sense?
Paul

Similar Messages

  • Tablespace fragmentation problem

    All,
    I am working in Oracle 9i. Developers are facing a problem: Oracle is throwing an ORA-1654 error even though there is enough space in the tablespaces. To me it seems to be a fragmentation problem. I found a query on a site and executed it in my environment.
    SQL> select substr(ts.name, 1, 10) TableSpace,
                to_char(f.file#, 990) "file #",
                tf.blocks blocks,
                sum(f.length) free,
                to_char(count(*), 9990) frags,
                max(f.length) bigst,
                to_char(min(f.length), 999990) smllst,
                round(avg(f.length)) avg,
                to_char(sum(decode(sign(f.length - 5), -1, f.length, 0)), 99990) dead
           from sys.fet$ f, sys.file$ tf, sys.ts$ ts
          where ts.ts# = f.ts#
            and ts.ts# = tf.ts#
          group by ts.name, f.file#, tf.blocks;
    TABLESPACE file BLOCKS FREE FRAGS BIGST SMLLST AVG DEAD
    GAP_ARC 7 15360 8239 108 6099 20 76 0
    GAP_BILD 8 256000 223804 3655 77909 5 61 0
    GAP_DATA 9 230400 48267 211 3937 5 229 0
    GAP_GEN 10 192000 86156 902 52178 1 96 3
    GAP_IMP 11 38400 37669 14 37399 10 2691 0
    GAP_INDEX 12 230400 96408 3557 506 1 27 5335
    GAP_INDEX 12 409600 96408 3557 506 1 27 5335
    GAP_INDEX 13 230400 56120 1611 495 2 35 2627
    GAP_INDEX 13 409600 56120 1611 495 2 35 2627
    GAP_ZNZL 14 1920 1919 50 515 10 38 0
    RBS 3 51200 30479 871 35 29 35 0
    RBS_BIG 4 76800 63929 157 43649 130 407 0
    SYSTEM 1 25600 25952 2 12976 12976 12976 0
    SYSTEM 2 25600 22728 20 11321 2 1136 12
    TEMP 5 64000 58279 252 25649 130 231 0
    TOOLS 6 6400 6384 1 6384 6384 6384 0
    16 rows selected.
    From the output it seems GAP_INDEX has a fragmentation problem. Can somebody explain what the output means, particularly the "DEAD" column?
    And also, how do I fix the fragmentation?
    Thanks for help.
    Regards,
    Rajeev

    I haven't taken the time to study the query in depth, but I think 'dead' is an attempt to identify free extents that are too small to be used.
    The original post leaves out some very important information, such as the type of tablespace space allocation in use: dictionary-managed vs. locally managed with auto-allocate or uniform extents.
    As one poster noted, if you use locally managed tablespaces with uniform extents, then free space fragmentation by definition cannot exist.
    With auto-allocate and dictionary management it can. Auto-allocate generally extends the time until a tablespace's free space becomes fragmented to the point of being unusable, and contains logic to reclaim space via reuse, but with the right combination of very small and very large objects the problem can still occur.
    Dictionary management is almost bound to lead to free space fragmentation on an active system, though adopting a proper extent sizing policy can help: initial = next with pctincrease = 0 for all objects in a tablespace, and the extent sizes of the larger objects being an even multiple of the small-object extent size.
    For auto-allocate, and especially dictionary management, when free space gets low the options are to re-create the objects to consolidate all free space into a few large contiguous chunks, or to add another file. Sometimes you can relocate an object whose extents can then be reused by other objects. With uniform extents, moving an object to another tablespace always creates fully reusable free space. Since objects should already be where you want them, the only real option to add free space is to add a file. Managing free space in a uniform-extent environment is therefore generally very straightforward.
    HTH -- Mark D Powell --
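    Mark's distinction between total free space and contiguous free space can be sketched with a toy first-fit allocator. This is a hypothetical simplification for illustration, not Oracle's actual space management: with variable extent sizes a request can fail even though the tablespace holds more than enough free space in total, while with uniform extents any free extent satisfies any (rounded-up) request.

```python
# Toy model of extent allocation. Hypothetical simplification,
# not Oracle's real space-management algorithm.

def allocate(free_extents, request):
    """First-fit: return the index of a free extent >= request, or None."""
    for i, size in enumerate(free_extents):
        if size >= request:
            return i
    return None

# Dictionary-managed style: variable-sized free extents left behind.
variable_free = [40, 60, 80]          # 180 blocks free in total
print(allocate(variable_free, 100))   # None: no single 100-block extent

# Uniform-extent style: requests are rounded up to the uniform size,
# so every free extent is reusable by every request.
UNIFORM = 64
uniform_free = [UNIFORM, UNIFORM, UNIFORM]
extents_needed = -(-100 // UNIFORM)           # ceiling division: 2 extents
print(extents_needed <= len(uniform_free))    # True: request can be satisfied
```

    The toy numbers are made up; the point is only that fragmentation is about the shape of the free space, not its amount.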

  • Fragmentation problem with Wlan Controler 4402 due to LWAPP header overhead

    Hello there,
    I have following situation.
    network---wlan controller4402----FW1---MPLS---FW2---Access points----networks
    I can see a very large number of fragmented packets arriving at FW2 and being dropped at some point. After those packets are dropped, the access points get disconnected.
    I wonder, is there any documentation, or can some of you give me a tip or trick, on how to set up the WLC 4402 to use TCP MSS, PMTU, or any other setting that keeps IP datagrams between the 4402 and the access points under 1300 bytes?
    Thanks a lot for any help
    Regards
    Milos

    Hello,
    Thanks for your response.
    I'm a bit confused reading the document you gave me. I don't see which part of it relates to preventing IP fragments between the controller and the APs.
    The problem is that the controller is located on one side of the network, and we have a firewall (non-Cisco), the MPLS network, another firewall, and then the APs.
    The far firewall across the MPLS receives a lot of fragments, which of course have to be reassembled for session inspection, so this leads to high CPU utilization and fragment drops.
    If we can instruct the controller to set the MSS or to participate in PMTU discovery, then we could try to prevent end hosts from ever creating TCP/UDP segments/datagrams too large to be sent via the LWAPP tunnel.
    Thanks for any advice
    Cheers
    Milos

  • Fragmentation problems with PSE 11

    I have just installed PSE & Premiere Elements 11 on Vista Ultimate. When I try to edit my photos they become fragmented and look like a jigsaw puzzle. This does not happen with PSE 10 on the same computer. Can anyone tell me why this occurs? Could it be a fault with the software?

    Thank you for your information about my problem with PSE 11. I have since tried the programme on 2 different computers and it works fine. Seems like the problem is with my computer although my old PSE 10 works okay. I'd appreciate any ideas about what could be wrong with my computer to cause my photos to break up into various segments which don't join together particularly when I move them into 'before and after horizontal' or '......vertical' and even when zooming them. ( HP Pavilion Elite) Vista Ultimate.
    Regards,
    Stervogel.

  • Fragmentation problems

    Hello everyone.
    I'm having problems transferring big packets on some devices.
    It seems that when I try to use packets bigger than 5128 bytes, some are not reassembled.
    I'm getting these errors:
    IP: s=10.222.176.1 (GigabitEthernet0/3), d=10.10.31.17, len 752, input feature, Access List(8), rtype 0, forus FALSE, sendself FALSE, mtu 0
    ICMP: time exceeded (reassembly) sent to 10.222.176.1 (dest was 10.10.31.17)
    show ip traffic didn't show any issues on the three Catalysts in the path:
    IP statistics:
    Rcvd: 78928204 total, 3464197 local destination
    0 format errors, 0 checksum errors, 1043585 bad hop count
    0 unknown protocol, 0 not a gateway
    0 security failures, 60 bad options, 9794 with options
    Opts: 68 end, 32 nop, 0 basic security, 60 loose source route
    9694 timestamp, 0 extended security, 40 record route
    0 stream ID, 0 strict source route, 0 alert, 0 cipso, 0 ump
    0 other
    Frags: 40274 reassembled, 1752 timeouts, 0 couldn't reassemble
    168815 fragmented, 14 couldn't fragment
    Any clues? What is this ACL 8, which is not configured on the device where I enabled debug ip packet?
    Vlad

    Hi Vlad,
    According to RFC792:
    If a host reassembling a fragmented datagram cannot complete the reassembly due to missing fragments within its time limit it discards the datagram, and it may send a time exceeded message.
    I would say that you are losing some packets (fragments) along the way...
    Paresh
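    The RFC 792 behaviour Paresh quotes can be sketched in a few lines: a receiver holds fragments until a time limit, then discards the incomplete datagram (and may send the "ICMP: time exceeded (reassembly)" seen in the debug output). The timeout below is shortened for illustration; real stacks typically use 30-60 seconds.

```python
# Toy model of fragment reassembly with a timeout (RFC 792 behaviour).
import time

REASSEMBLY_TIMEOUT = 0.2  # seconds; shortened for the example

def reassemble(fragments, total_len, deadline):
    """fragments: dict mapping byte offset -> bytes.
    Returns the datagram, or None if still incomplete at the deadline."""
    while time.monotonic() < deadline:
        if sum(len(b) for b in fragments.values()) >= total_len:
            return b"".join(b for _, b in sorted(fragments.items()))
        time.sleep(0.01)  # wait for more fragments to arrive
    return None           # incomplete: discard the datagram

# First fragment arrives, the second is lost along the way:
frags = {0: b"x" * 1480}
dgram = reassemble(frags, 2000, time.monotonic() + REASSEMBLY_TIMEOUT)
print(dgram is None)  # True: reassembly timed out, datagram discarded
```

    This matches the diagnosis: if any fragment of a datagram is dropped along the path, the whole datagram is lost at the receiver.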

  • Fragmentation Problem In L2MPLS

    hi
    In the case of L3 MPLS, CE packets without the DF bit set get fragmented, but in the case of L2 MPLS fragmentation never occurs. What is the reason for this?
    regards
    shivlu jain

    Hello Shivlu,
    >> but in case of l2mpls fragmentation never occurs. Whats the reason for this?
    In short:
    Neither the carried L2 PDU nor the carrying MPLS PDU provides a fragmentation mechanism.
    L2 MPLS services require MTU tuning on the data path: the MPLS MTU has to be big enough to hold the carried L2 PDU plus the whole MPLS label stack.
    As a comparison: if you put a 1518-byte Ethernet frame inside an L2TPv3 PDU, it will be fragmented while travelling across an IP-only network with a 1500-byte IP MTU.
    But in that case the IP fragmentation services are used.
    Hope to help
    Giuseppe
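    Giuseppe's L2TPv3 comparison can be worked through numerically. This is a sketch under stated assumptions (a 20-byte IPv4 header with no options, and a 4-byte L2TPv3 session header; the real header size depends on the configuration): every fragment except the last must carry a multiple of 8 bytes of payload.

```python
# Sketch of IPv4 fragmentation for an encapsulated L2 frame.
# Assumed sizes (hypothetical): 20-byte IPv4 header, no options,
# 4-byte L2TPv3 session header.

IP_HDR = 20
L2TPV3_HDR = 4

def fragment_sizes(payload_len, mtu=1500):
    """Split an IP payload into per-fragment payload sizes.
    Every fragment except the last carries a multiple of 8 bytes."""
    per_frag = (mtu - IP_HDR) // 8 * 8   # 1480 bytes for a 1500-byte MTU
    sizes = []
    while payload_len > per_frag:
        sizes.append(per_frag)
        payload_len -= per_frag
    sizes.append(payload_len)
    return sizes

# A 1518-byte Ethernet frame carried inside an L2TPv3-over-IP packet:
payload = L2TPV3_HDR + 1518           # 1522 bytes of IP payload
print(fragment_sizes(payload))        # [1480, 42] -> two fragments
```

    With MPLS there is no equivalent mechanism in the label stack, which is why the path MTU itself has to be raised instead.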

  • Disk fragmentation problem

    I noticed some applications are fast and others slow. I suspected fragmentation because I had been running with an almost full disk for some time. I have now deleted some large home movie files. I then ran IDefrag which tells me the disk is badly fragmented.
    So using Disk Utility and my firewire drive I erased a partition of the firewire drive did a restore to the firewire drive and can now boot off the firewire drive. I then ran IDefrag on the firewire drive and it gives me the same fragmentation pattern and files - I had expected it to have no fragmentation.
    I was hoping to be able to erase my computer's hard drive and then restore from the firewire drive and have no fragmentation, but if the firewire drive has the same fragmentation, this will leave me no better off.
    How do I get it restored to my drive with no fragmentation? I have plenty of space on the external drive.
    (BTW running IDefrag resulted in IDefrag hanging for hours and not progressing and showing up red in Force Quit, so it did not solve the issue - I placed a support call with them)

    Hi Paul, download Carbon Copy Cloner. Wipe your internal HDD and clone back. If iDefrag still finds any fragmentation, trash it.
    -mj
    [email protected]

  • Aironet 1200 fragmentation problems?

    We have a Linux (poptop) PPTP server, and after a while the server starts to discard GRE packets.
    Clients are connected through a Cisco AP, and I suspect that something is wrong with the MTU and/or MSS on it. With our other APs everything works just fine (Linksys, PheeNet, OvisLink)...
    Any ideas?
    Thanks in advance!

    System Software Version: IOS (tm) C1200 Software (C1200-K9W7-M)
    Product/Model Number: AIR-AP1231G-E-K9
    Top Assembly Serial Number: FHK0818J0XX
    System Software Filename: 1200-k9w7-tar.122-13.JA3
    System Software Version: 12.2(13)JA3
    Bootloader Version: 12.2(8)JA
    I think that this IOS version is not so old?

  • Two Nearly Identical Workstations--1 with 'not responding' problems on PS Startup

    We have two similar editing workstations on which we run Adobe CS3 products as follows:
    SYSTEM 1
    Gigabyte GA-P35-DQ6
    XFX PVT80GTHF9 GeForce 8800GTS 640MB
    Intel Core 2 Quad Q6600
    Mushkin Hp2-6400 Ddr2 4gb Kit
    Western Digital Caviar HD 500G|WD 7K 16M SATA2 WD5000AAKS
    SILVERSTONE TEMJIN SST-TJ06S-W Silver Aluminum
    Seasonic S12 Energy Plus SS- 650HT Power Supply
    LG GGW-H20L Blu-ray Burner
    HP LP3065 30" LCD Display
    Turtle Beach Montego DDL 7.1 Dolby Digital
    Pioneer DVR-112D
    NEC Beige 1.44MB 3.5" Floppy Drive Model FD1231H
    ZALMAN 9700 LED 110mm 2 Ball CPU Cooler
    Black Magic Design Blackmagic Design BINTSPRO Intensity Pro
    Contour Shuttle-Pro
    SYSTEM 2
    Gigabyte GA-P35-DQ6
    Quadro FX4600 768MB 384-bit GDDR3 PCI Express x16
    Intel Core 2 Quad Q6600
    Mushkin Hp2-6400 Ddr2 4gb Kit
    Western Digital Caviar WD1001FALS 1TB 7200 RPM 32MB Cache SATA Hard Drive
    SILVERSTONE TEMJIN SST-TJ06S-W Silver Aluminum
    Seasonic S12 Energy Plus SS- 650HT Power Supply
    HP LP3065 30" LCD Display
    AuzenTech Auzen X-Fi Prelude 7.1 7.1 Channels PCI Interface Sound Card
    Pioneer DVR-116D
    NEC Black 1.44MB 3.5" Floppy Drive Model FD1231H
    ZALMAN 9700 LED 110mm 2 Ball CPU Cooler
    Contour Shuttle-Pro
    Both systems are running Win XP SP2 and are behind 'walled gardens' on our LAN, a private sub-net of workstation computers with no internet access (we have office computers for browsing and e-mail).
    While the two PCs are very similar, note that sound and video cards are different. Also system 2 does not yet have a Blu-ray writer.
    System 1 works great with all of the Adobe apps.
    System 2 experiences lengthy 'hangs' upon launch of several Adobe apps, including PhotoShop, Acrobat and, to a lesser extent, Premiere. What exactly happens is upon launch, on system #2, PhotoShop loads to the workspace, but hangs just as the palettes are being created. Time to this point, 3 seconds. Then, 60 seconds passes. The application's status in Task Manager changes to Not Responding. More time passes. Then the application comes back to life and is responsive again.
    On system #1, PhotoShop is ready to use in 3 seconds following the command to launch. The 90-second delay/hang does not occur.
    Logical deduction would suggest that the possible culprits are the video or sound card, these being different.
    I also note that when recording and playing audio in other applications, like SoundForge and WinAMP, momentary pauses or gaps in the audio playback and recording occur. It takes about 30 minutes of listening to encounter one such hiccup. It was also discovered that samples were dropped during an hour-long studio recording session. There is something not quite right with system #2.
    I was curious if anyone found a correlation between the Auzentech sound card and similar behavior where the app hangs and stops responding. Or anyone with the FX4600 graphics card having this problem?
    What could PhotoShop, Acrobat and Premiere be doing during startup that causes the apps to stop responding for a while, then return to a responsive state? I have been going nuts for 4 months troubleshooting and trying to fix this problem, with no results. I've turned off nearly every Windows service, checked Adobe preferences between the two machines, and speed-tested the RAID 0 arrays and system drives; all seems well. In fact, the 2nd workstation has faster hard drives than the first, so maybe it is a problem with some exotic timing issue where the data is available to PhotoShop too soon and it gets confused and hangs up for a while? I'm at a loss!

    I don't like the way OpenType looks on screen. I know it has some advantages, like a larger character set, but the hinting, or lack thereof, makes low resolution imagers like screens and inkjet printers produce less than great output. And my investment in Adobe fonts is in the many thousands of dollars, purchased back around 1991. I prefer PostScript fonts by a wide margin over OTF and TTF.
    WRT audio card, I removed the drivers, and installed RC8, the current version, but that did not help. I did note something curious though: rebooting with no audio drivers shaved 30 seconds off the boot time! The blue 'loading your personal settings screen' that usually stays up for 35 seconds, was there for 5 seconds then right to the desktop. It was the fastest I've ever gotten to the desktop on this machine. Once I installed RC8 drivers, the loading time became long again.
    I can be certain there's no file fragmentation problems. The temp drive has about half a terabyte of free space on it, and in fact, of the 2TB of system storage, only about 10% is in use (new system).
    But the latency is really noticeable with Sony SoundForge 9. Start playback and the wait can be anywhere from 1-3 seconds before play begins!
    On the other workstation with the TurtleBeach Montego DDL, playback is instant when I hit Play. Never a delay that's perceivable. I just don't like the way the Auzentech works. It may even be responsible for the problems I had with Premiere CS4, which I gave up on in March after a concerted struggle to solve the playback issues there. But I seem to be not the only one suffering with that anomaly. Though my underlying feeling is that something is not right with this system's hardware combination. One or more peripherals is just not playing well with the software, I suspect.

  • Problem with voiceover recording

    When I try to do a voiceover on a completed movie, it abruptly stops and I get the message: "The disk responded slowly. It may have been interrupted by something, or it may not be fast enough for your movie." The movie is short, about 8 minutes, but quite complicated with much editing, many photos, special effects, transitions, music, etc. I have done many, many movies very successfully, but this is the first time I have tried voiceover.
    iMovie Help suggests a disk fragmentation problem. Although I have about 50GB free on my hard drive, would defragging help? Or, could the problem be something else?

    GEH,
    Another option is to record your voice over outside of iMovie.
    The downside is that you can't see picture as you record. But I used this method with a short documentary that I had a written script for and it worked quite well.
    I used Audio Recorder* to record the voice over doing multiple "takes" of each section. I then brought the tracks into GarageBand for editing - putting the beginning of take 1 together with the end of take 4 etc.
    I next exported the Voice Over in a several short pieces which I brought into and positioned in iMovie.
    Might be overkill for what you are doing, but I thought I'd mention it.
    Matt
    * I used the mic on my iSight camera which can't be used directly with GarageBand so I needed the intermediate step of using Audio Recorder.
    You can find audio recorder here:
    http://www.versiontracker.com/dyn/moreinfo/macosx/17392

  • Memory increase problem on Solaris

    We need your help in understanding and solving a memory problem in one of our products running on Solaris. The server's memory keeps growing and there is a fear of a server crash.
    We've run our product with Purify and libumem to identify memory leaks. Neither product reported any leak, so we are confident that there are no memory leaks in the product. Still, the memory increase is seen on Unix systems. We've done extensive analysis on this subject and gathered some key points regarding memory management on Unix.
    1) Memory doesn't shrink on Unix systems. This is because libc retains released memory and reuses it for further memory requirements.
    2) Based on our application's memory requirements there can be a stabilization point at which the application can serve all memory requests.
    After reaching this point the server's memory doesn't increase, but the fear is that it may exceed the hardware resource constraints.
    3) A rapid increase in memory can be caused by a memory fragmentation problem. There are environment variables like the Small Block Allocator (_M_SBA_OPTS) and Arena (_M_ARENA_OPTS) on HP-UX machines which can be used to address fragmentation issues. We've performed some tests using these variables and found a significant improvement in memory usage. This confirms that there are no memory leaks. These variables are specific to HP-UX, and we are unaware of similar variables or mechanisms on the Sun Solaris platform. Please recommend some environment variables or mechanism on the Solaris platform to solve the fragmentation problem.
    We are performing some tests to observe memory stabilization over a period of time.
    The memory bloat follows a pattern: memory increases in chunks of 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 MB.
    For example, if the memory increase is 64 MB, it stabilizes for some time and the next increase is 128 MB at once.
    Then it stabilizes for double the time consumed earlier for 64 MB, and after that it increases by 256 MB.
    So the memory increase and the stabilization time are doubling each time. Please validate our analysis and kindly suggest what can be done on Unix platforms to solve this memory increase problem.
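    The pattern described above, where each growth step is twice the previous one, is exactly what a geometric arena-growth policy produces; many allocators expand their heap this way to amortize expansion cost. A hypothetical model, not Solaris malloc's actual algorithm:

```python
# Toy model of an allocator that grows its arena geometrically.
# Hypothetical illustration; real allocator behaviour differs in detail.

def growth_steps(demand_mb, initial_mb=2):
    """Return the sequence of expansions made while serving `demand_mb`."""
    steps, capacity, step = [], 0, initial_mb
    while capacity < demand_mb:
        steps.append(step)
        capacity += step
        step *= 2          # each expansion doubles the previous one
    return steps

print(growth_steps(1000))  # [2, 4, 8, 16, 32, 64, 128, 256, 512]
```

    Under such a policy the observed chunk sizes and the doubling stabilization times are expected behaviour rather than a leak, which is consistent with Purify and libumem reporting nothing.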

    Thanks MaximKartashev for your inputs.
    Ours is a multithreaded application. We have been testing it with Hoard for the last three days. The observation so far is that Hoard's performance is high, but the bloat pattern is not avoided.
    Even with Hoard, the memory bloat follows the same pattern: memory increases in chunks of 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 MB.
    We are searching for other ways to control this pattern and stabilize the application by making it reuse the fragmented memory completely.
    We have also tried the Solaris libgc memory allocator (http://developers.sun.com/solaris/articles/libgc.html). It worked well initially and a memory drop was seen. But under heavy load it crashed the application with a 'Can't allocate header' error, so it can't be used for heavy load.
    We are trying the Boehm-Demers-Weiser garbage collector (http://www.hpl.hp.com/personal/Hans_Boehm/gc/). Does anyone know whether it controls memory fragmentation?

  • How to solve the problem of "promotion failed"

    Hello,
    I am new to JVM tuning. Could anybody help me and explain how to solve a "promotion failed" problem like the following? (Java 1.6 and the JVM options: -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxPermSize=256m):
    [GC 96800.304: [ParNew (promotion failed): 287552K->287552K(287552K), 0.8694459 secs]96801.174: [CMS: 5229568K->457193K(5971968K), 19.7176579 secs] 5499429K->457193K(6259520K), [CMS Perm : 66540K->66458K(110904K)], 20.5878599 secs] [Times: user=22.50 sys=0.03, real=20.59 secs]
    before: 96755.515: [GC 96755.515: [ParNew: 287552K->31936K(287552K), 0.1900583 secs] 5491052K->5243813K(6259520K), 0.1905515 secs] [Times: user=2.21 sys=0.00, real=0.19 secs]
    after: 96885.774: [GC 96885.774: [ParNew: 255484K->31936K(287552K), 0.0808239 secs] 712678K->490703K(6259520K), 0.0812894 secs] [Times: user=0.97 sys=0.01, real=0.08 secs]
    my questions:
    1. Why did the promotion fail when the old generation appears to have enough space (5229568 + 287552 = 5517120 < 5971968)? Must "promotion failed" be handled at all costs?
    2. [CMS: 5229568K->457193K(5971968K), 19.7176579 secs]: is this a full GC, and is the time of 19.7176579 really pause (stop-the-world) time?
    3. [CMS Perm : 66540K->66458K(110904K)], 20.5878599 secs]: why is the permanent space collected too? Did it take 20.5878599 - 19.7176579 = 0.870202 seconds? This action appears to be unnecessary (66540K->66458K); is it possible to disable it?
    4. How do I solve the problem? I read a post about fixing such a problem with -XX:SurvivorRatio=4 -XX:CMSInitiatingOccupancyFraction=75, but this did not work in my case.
    Could this problem be due to fragmentation of the old generation space? Could reducing the PLAB size to improve reuse of small chunks, e.g. -XX:OldPLABSize=16, help in my case? What is the default value of OldPLABSize in Java 6? Are there any side effects of reducing OldPLABSize?
    thanks a lot for any help!
    Tian

    It looks like your old generation has become fragmented and the old space is almost full, or maybe a promoted object is huge and there is not enough contiguous space. Try the following:
    1. Increase the size of the old generation.
    2. If increasing the old generation doesn't work, you may try the G1 algorithm; it may reduce the fragmentation problem in the old generation.
    Thanks
    Amrit

  • What is  tablespace fragmentation?

    hello all,
    Please give me more detail about tablespace fragmentation. I want to know when tablespace fragmentation occurs. Thanks to all in advance for responding.

    First, thanks to you and everyone for responding, and for this wonderful link, but I have a doubt I want to ask about. When a table is dropped or truncated, or a row of a table is deleted, free extents become available for other requests. If a request is made for an extent allocation that cannot be satisfied from one free contiguous extent, even though the total free space in the tablespace may be significantly larger than the requested size, is this a fragmentation problem?

  • What is Fragmentation, how to avoid this?

    Dear All,
    Please help me to get the complete documents on fragmentation and the solutions(remedies) to fragmentation in Oracle 9i and 10g.
    Thanks,
    Mahipal Reddy

    > So, how can i avoid this fragmentation?
    There is no such fragmentation on Oracle 10G when tablespaces are created as locally managed.
    In basic terms, you will only get an extent allocation failing when the tablespace is between 99.9% and 100% full. This is unlike Oracle v7, for example, where there could be 20% free tablespace, but it is unusable.
    In other words - there are no tablespace fragmentation problems with 10G.
    Table data block fragmentation? There is not really fragmentation. There are simply used data blocks below the high water usage mark that can still be used for new rows. There are used data blocks with used space above the high water that cannot be used for new rows. The PCTUSED and PCTFREE parameters govern these high and low water marks.
    The important factors here are the data block size, the sizes of the rows, and the sizes of PCTFREE and PCTUSED. An additional factor could be how bulk inserts are handled, for example (as direct-path inserts or not).
    Having a table that has allocated for example 1GB of space, and is only using 800MB of that space, is not an error or a problem or a sign of fragmentation. It is to be expected. And that 200MB space not used at the moment, will be used for new inserts or updates - depending on whether those data blocks are on the free list for that table or not.
    I suggest that you read about Oracle space management in the [url http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#i8531]Oracle® Database Concepts guide.
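    The PCTFREE/PCTUSED behaviour described above can be sketched as a small state machine per block. This is a hypothetical simplification of Oracle's free-list logic, with made-up threshold values, only to show how the two thresholds interact:

```python
# Toy model of the PCTFREE / PCTUSED thresholds for one data block.
# Hypothetical simplification, not Oracle's actual implementation.

BLOCK_SIZE = 8192
PCTFREE = 10    # stop accepting inserts once free space drops below 10%
PCTUSED = 40    # rejoin the free list once usage drops below 40%

def update_free_list(used_bytes, on_free_list):
    """Return whether the block is on the free list after this update."""
    used_pct = 100 * used_bytes / BLOCK_SIZE
    if on_free_list and used_pct >= 100 - PCTFREE:
        return False   # filled past PCTFREE: leaves the free list
    if not on_free_list and used_pct < PCTUSED:
        return True    # deletes dropped usage below PCTUSED: rejoins
    return on_free_list

# A block filled to 95% leaves the free list...
print(update_free_list(int(BLOCK_SIZE * 0.95), True))   # False
# ...stays off it at 50% used (still above PCTUSED)...
print(update_free_list(int(BLOCK_SIZE * 0.50), False))  # False
# ...and only rejoins once deletes bring usage under PCTUSED:
print(update_free_list(int(BLOCK_SIZE * 0.30), False))  # True
```

    The gap between the two thresholds is the "used but not fragmented" space the answer describes: blocks between PCTUSED and 100-PCTFREE hold reusable room without being an error or a sign of fragmentation.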

  • 6500 problem

    We are having a problem nailing down someone who is burying a 6509 with a Sup720. It gets to the point where we can't even do anything with the management interface. The high process at the time of the problem is IP Input. The thing is, none of the interfaces seem to be running very high, so we can't really pin it on anything. If it's hitting the supervisor, it must mean the traffic is not being hardware switched for some reason (broadcasts?), but we have no idea why. There are very few ACLs on the box. We are running code 12.2.18SXD1. We have tried looking at the NetFlow cache, but that isn't really giving us much so far. Does anybody have any ideas on how to pin this down? Also, on the new Sup720s, can you still use the scheduler allocate command to give you more manageability when things like this happen, and what would you set it to?

    In cases where extremely high network load presents itself on the interfaces, it is possible that other tasks will not be able to run. By default, the Cisco IOS allocates 5% of the CPU time to the lower priority tasks.
    I recommend capturing traffic (e.g. using span ports) to learn more about the kind of problem you have.
    You can also issue the command
    show ip traffic | include assembled
    If the counters are high and/or increasing rapidly, you have most likely a fragmentation problem.
    To answer your second question:
    Default values for the global command "scheduler allocate" are:
    scheduler allocate 4000 200
    Where 4000 is the maximum number of microseconds to allocate to fast switching any single network interrupt context, and 200 is the minimum guaranteed number of microseconds to allocate to process level tasks while network interrupts are masked.
    While the default settings are largely adequate for most steady-state operation, you may find that modifications are necessary to ensure console access during periods of extreme network stress. Be careful, however.
