COMPSUMM.BOX Slow to Process Larger SUM Files

Hey Everyone,
We have a large SCCM 2007 environment (over 2000 sites, 80,000+ clients) and the Central Site can't seem to keep up with the influx of messages coming into COMPSUMM.BOX.  We can move all of the files out and feed them back in slowly, but even if we do nothing but clear out the files, the inbox gets backlogged again.
This inbox receives a combination of SVF and SUM files.  The SVF files are small and process without any issues; the problem is the SUM files.  SUM files of about 500KB or less appear to process OK and don't hold up the overall inbox processing too much, but when the 2MB, 3MB, 4MB and 5MB SUM files come in, the inbox comes to a crawl.  Each of these larger files takes a significant amount of time to process (15 minutes or more), and while one is processing, more arrive; by the end of the day we can end up with as many as 300,000 files in this folder.  Eventually it processes a lot of them and occasionally even gets caught up (down to only 10,000 files), but in busier periods it needs help from us moving some of the files out.  Also, the large SUM files seem to process at a higher priority, leaving other items sitting there not moving while the system works on the larger files.  What process generates the larger files flowing through here, and how often?
My question is: is there anything we can do to reduce the amount of data flowing through this specific inbox so it can better keep up with the load?  Can we change the processing priority so that the larger SUM files don't automatically jump to the front of the queue?  I know we can change the schedule for the "Site System Status Summarizer", but there doesn't seem to be an equivalent schedule for the "Component Status Summarizer".  I don't want to turn off replication of these messages from the child primary sites, but we don't want to be dealing with a constant backlog either.
Any suggestions are much appreciated.
Thanks!
-Jeff

Hi Garth,
I wasn't saying that the larger files process first because they are larger, but rather that SUM files appear to process at a higher priority than SVF files in this inbox.  Once one of the large SUM files hits its turn in the queue, compsumm.box starts growing.  There were 88,000 files in here when I came in this morning, and it still had files from the 17th that hadn't processed.  I moved out all the large files (1000+ of them) and all of the other files in the inbox processed.  Move the 1000 back in... the backlog begins again.  We can have 300 SVF files come in along with one SUM file, and the SUM file will begin processing immediately while the SVF files don't appear to process until the SUM file is done.  I was thinking more along the lines that the content of the SUM files may be prioritized higher than other message types.
By 2000 Sites, yes CM07 Primary and Secondary Sites.  19 Primary and 2286 Secondary to be exact.
No errors in COMPSUMM.LOG.  Just very busy processing.
HINV = Every 4 Days
SINV = Every 7 Days
SWM = Every 7 Days
Heartbeat = Every 7 Days
Discovery/Inventory isn't an issue... no DDR or MIF backlogs.  We don't run AD System Discovery, only Group Discovery, which runs on the tier-two primary sites once a month and is staggered by region.
CPU is steady at about 50% with SMSEXEC using about 30%.
Memory is steady at around 10GB of the 36GB Total
SQL is interesting... SQL is a cluster on a dedicated server with 48GB of physical memory, running SQL Server 2008 SP3 64-bit.  SQL is configured to use up to 42GB of memory, leaving 6GB for the OS.  In Task Manager, sqlservr.exe shows it is using about 800-900MB most of the time.  However, the status bar in Task Manager consistently shows Physical Memory: 98%.  I ran TASKLIST and exported the results to Excel, but when I total everything there it only comes to about 2.1GB.  Hmm...  Running RAMMap.exe, it shows 43GB of memory allocated as AWE, yet AWE is NOT enabled in SQL.  From another Google search this appears to be something others have seen as well, but I'm not finding any good solution, or any really clear indication that this is actually a problem as opposed to just how Win2K8 manages memory.  I don't like seeing the server at 98% memory usage, though.  Going to continue to look at this and feed the info back to the team.
I have not performed the DBCC commands on the SQL database (yet) but will do so.
Thanks!
-Jeff
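
For anyone who ends up scripting the manual workaround described above (moving the oversized SUM files out of COMPSUMM.BOX and trickling them back in), here is a minimal sketch of that idea in Java. The inbox path, the holding-folder name, the 500 KB threshold and the batch size are assumptions for illustration only, not SCCM defaults; test something like this carefully before pointing it at a production inbox.

// Minimal sketch: park oversized *.SUM files from COMPSUMM.BOX in a holding
// folder, then trickle a small batch back per run. Paths, threshold and batch
// size below are illustrative assumptions, not SCCM defaults.
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CompsummThrottle {
    static final Path INBOX   = Paths.get("D:\\SMS\\inboxes\\compsumm.box"); // assumed path
    static final Path HOLDING = Paths.get("D:\\SMS\\compsumm.holding");      // assumed path
    static final long MAX_BYTES = 500 * 1024;   // park SUM files larger than ~500 KB
    static final int  BATCH     = 50;           // how many to feed back per run

    public static void main(String[] args) throws IOException {
        Files.createDirectories(HOLDING);
        // 1) Move oversized SUM files out so the small SVF/SUM files keep flowing.
        try (Stream<Path> files = Files.list(INBOX)) {
            files.filter(p -> p.getFileName().toString().toLowerCase().endsWith(".sum"))
                 .filter(p -> { try { return Files.size(p) > MAX_BYTES; } catch (IOException e) { return false; } })
                 .forEach(p -> move(p, HOLDING.resolve(p.getFileName())));
        }
        // 2) Feed a small batch back in so the summarizer still gets the data eventually.
        try (Stream<Path> parked = Files.list(HOLDING)) {
            parked.limit(BATCH)
                  .forEach(p -> move(p, INBOX.resolve(p.getFileName())));
        }
    }

    static void move(Path from, Path to) {
        try { Files.move(from, to, StandardCopyOption.REPLACE_EXISTING); }
        catch (IOException e) { System.err.println("Skipping " + from + ": " + e); }
    }
}

Run on a schedule (e.g. every 30 minutes), this simply automates the "move out, trickle back" routine described in the posts above; it does not change how the Component Status Summarizer itself prioritizes files.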

Similar Messages

  • Processing large xml file (500mb)? break into small part? load into jtree

    hi,
    i'm doing an assignment to process a large xml file (500mb) and
    load it into a JTree using Java.
    can someone advise me on the algorithm to do this?
    how can i load a 500mb xml file into a JTree without the system hanging?
    how do i break up my file and do the loading?

    1. Is the file schema-based binary XML?
    2. The limits are dependent on the storage model and character set.
    3. For all non-XML content the current limit is 4 GBytes (where that is bytes, not characters). So for character content in an AL32UTF8 database the limit is 2GB.
    4. For XML content stored as a CLOB the limit is the same as for character data (2GB/4GB), dependent on the database character set.
    5. For schema-based XML content stored in object-relational storage the limit is determined by the complexity and structures defined in the XML Schema.
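
    Back on the original question of loading a 500 MB XML file into a JTree without hanging: the usual approach is to stream the document with StAX instead of building a DOM, and to materialize tree nodes only down to a shallow depth (or lazily on expand). Below is a minimal sketch under those assumptions; the file name "huge.xml" and the depth limit are placeholders, not values from the thread.

    // Sketch: stream a huge XML file with StAX and build JTree nodes only down to
    // a fixed depth, so the whole document is never held in memory at once.
    import java.io.FileInputStream;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class BigXmlToTree {
        static final int MAX_DEPTH = 3; // only materialize elements this deep (illustrative)

        public static void main(String[] args) throws Exception {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("huge.xml"));
            DefaultMutableTreeNode root = new DefaultMutableTreeNode("document");
            Deque<DefaultMutableTreeNode> stack = new ArrayDeque<>();
            stack.push(root);
            int depth = 0;
            while (r.hasNext()) {
                int event = r.next();
                if (event == XMLStreamConstants.START_ELEMENT) {
                    depth++;
                    if (depth <= MAX_DEPTH) {
                        DefaultMutableTreeNode node = new DefaultMutableTreeNode(r.getLocalName());
                        stack.peek().add(node);
                        stack.push(node);
                    }
                } else if (event == XMLStreamConstants.END_ELEMENT) {
                    if (depth <= MAX_DEPTH) {
                        stack.pop();
                    }
                    depth--;
                }
            }
            r.close();
            JFrame f = new JFrame("Big XML outline");
            f.add(new JScrollPane(new JTree(root)));
            f.setSize(400, 600);
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        }
    }

    For truly arbitrary depth you would instead attach a TreeWillExpandListener and re-scan (or index) the file to populate children on demand, but the streaming parse above is the part that keeps a 500 MB file from exhausting the heap.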

  • How to process large data files in XI  ?  100 MB files ?

    Hi All
    At present we have the following scenario: it is File to IDoc.  The problem is the size of the file.
    We need to transfer a 100MB file to the SAP R/3 system.  How can we process this much data?
    Adv thanx and regards
    Rakesh

    Hi,
    In general, an extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient except in the case of large messages (>1MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
    Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator)
    Allocate 4 kB per 1kB of message size in the asynchronous case or 9 kB per 1kB message size in the synchronous case
    Example: asynchronous concurrent processing of 10 messages with a size of 1MB requires 70 MB of memory:
    (3 MB + 4 * 1 MB) * 10 = 70 MB
    With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kBytes per 1 kByte of message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32Bit operating system, there is an upper boundary of approximately 1.5 to 2 GByte per process, limiting the respective largest message size.
    please check these links..
    /community [original link is broken]:///people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Input Flat File Size Determination
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    data packet size  - load from flat file
    How to upload a file of very huge size on to server.
    Please let me know whether your problem is solved or not.
    Regards
    Chilla..

  • How to process Large Image Files (JP2 220MB+)?

    All,
    I'm relatively new to Java Advanced Imaging, so I need a little help. I've been working on a thesis that involves converting digital terrain data into X3D scenes for future use in military training and applications. Part of this work involves processing large imagery data to texture the previously mentioned terrain data. I have an image slicer that can handle rather large files (200MB+ jpeg files). But it can't seem to process jpeg 2000 data. Below is an excerpt from my code.
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public void testSlicer() {
        String fname = "file.jp2";
        // A JPEG 2000 reader is only returned here if a plugin (e.g. JAI Image I/O Tools) is installed
        Iterator<ImageReader> readers = ImageIO.getImageReadersByFormatName("jpeg2000");
        ImageReader imageReader = readers.next();
        try {
            ImageInputStream imageInputStream = ImageIO.createImageInputStream(new File(fname));
            imageReader.setInput(imageInputStream, true);
            ImageReadParam imageReadParam = imageReader.getDefaultReadParam();
            // Only read a portion of the file
            imageReadParam.setSourceRegion(new Rectangle(0, 0, 1000, 1000));
            // Subsample every 4th pixel in both directions
            imageReadParam.setSourceSubsampling(4, 4, 0, 0);
            BufferedImage destBImage = imageReader.read(0, imageReadParam);
        } catch (IOException ex) {
            System.out.println("IO Exception: " + ex);
        }
    }
    The images I am trying to read are in excess of 30000 pixels by 30000 pixels (15m resolution at 5 degrees latitude and 6 degrees longitude). I continually get an OutOfMemoryError, though I am pumping up the heap size to 16000MB when using the command line.
    Any help would be greatly appreciated.
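
    One general pattern that keeps memory bounded with imagery this large (assuming a JPEG 2000 reader such as the one from JAI Image I/O Tools is actually registered) is to decode one source region at a time and write each tile out before reading the next, rather than ever holding the full raster. A minimal sketch; the 2048-pixel tile edge is an illustrative choice, not a requirement.

    // Sketch: walk a huge image tile by tile via ImageReadParam.setSourceRegion,
    // so only one tile's pixels are ever decoded into memory at once.
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class TiledReadSketch {
        static final int TILE = 2048; // tile edge in pixels (illustrative)

        public static void sliceTiles(File source) throws IOException {
            ImageReader reader = ImageIO.getImageReadersByFormatName("jpeg2000").next();
            try (ImageInputStream in = ImageIO.createImageInputStream(source)) {
                reader.setInput(in, true);
                int width = reader.getWidth(0);
                int height = reader.getHeight(0);
                for (int y = 0; y < height; y += TILE) {
                    for (int x = 0; x < width; x += TILE) {
                        ImageReadParam param = reader.getDefaultReadParam();
                        param.setSourceRegion(new Rectangle(
                                x, y, Math.min(TILE, width - x), Math.min(TILE, height - y)));
                        BufferedImage tile = reader.read(0, param);
                        // ...write the tile out (e.g. ImageIO.write) instead of keeping it in memory...
                    }
                }
            } finally {
                reader.dispose();
            }
        }
    }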


  • Process Large XML files

    Hi,
    I am working on a File-File scenario and I have around 300 MB XML files coming in.
    My interface could successfully process smaller files, but it is not able to handle the larger ones.
    Could the group please point me in the right direction?
    Thanks,
    Nandini

    Nandini,
    Check out this thread...
    File Adpater: Size of your processed messages
    Other than parameter and hardware settings, use Java mapping rather than message/XSLT mapping... Java mapping is much faster compared to any other sort of mapping...
    Nilesh

  • Slow starting server, large Hypersonic file

    I'm having problems with the Hypersonic database component of PDF Generator. The JMS related records grow rapidly with each converted document, which leads to a very large localDB.script. The default location of this file is here:
    C:\Adobe\LiveCycle\jboss\server\all\data\hypersonic\localDB.script
    This file seems to keep a history of JMS messages relating to completed conversion jobs, and in a site which has a particularly high conversion volume, this file had reached in excess of 120MB and was adding 8-10 minutes to the server start up time as the hypersonic JMS channels were created (and a fair bit to shutdown time as the tables are written back out).
    Manually editing this file back to zero records gets the server booting rapidly again, but doing this requires the service to be shut down.
    I'd like to make sure that this table is kept as slim as possible when the server is running, so that in the event of a system restart (e.g. updates, back-ups, etc.) services can be back up as quickly as possible.
    Has anyone had this problem, and if so what did you do to fix it?

    Sorry for the delay.
    What I was fishing for is your brand/kind of hardware. You are speaking about systems with a lot of CPU power, and you are implying that the issue could be CPU-related.
    It would not be the first time I have heard of an OS, hardware or Oracle performance problem on systems with a lot of resources, in this case a lot of CPUs. Most of the time it is a problem introduced by the overhead involved in coordinating processes (for instance, CPU cycles and CPU parallel management).
    Besides the steps you have taken to request info via Oracle support, you could try to see what happens if you force the processes to be handled by only a few CPUs. I probably have less knowledge about such nice systems as you are maintaining, but look in the area of CPU_COUNT and the PARALLEL database parameters. Another area you could investigate is "what is the OS doing?". After all, it could be that you have created hotspots on your hard disks. Try simple investigative steps like using "iostat", "sar" or "vmstat".
    If your system is Linux, AIX or Sun, try installing the freely available IBM tool "nmon". Running this "top" alternative will already give you a lot of insight into the responsiveness of your system.
    Anyway, I came across http://searchsap.techtarget.com/searchSAP/downloads/chapter-february5.pdf#search=%22relinking%20single%20task%20exp%20imp%22 which is also a good read and may give you some ideas. Also have a look at the good Oracle manual about Oracle Parallel Server principles, nowadays called RAC. Even if you don't use RAC, it gives you a good insight into how to deal with parallelism and multiple-CPU systems.
    Grz
    Marco
    Message was edited by:
    mgralike

  • How would java compare to Perl for processing large txt files

    I write a lot of scripts in Perl for processing txt files with large amounts of data in them (>1 MB or >100K lines of data).
    Stuff like searching for strings, deleting chunks of the file, replacing strings, extracting strings and lines, etc.
    The reason I used Perl is that I was under the impression it would be the fastest for the job, but I never considered Java. Now I am trying to get up to speed in Java, so I want to use these small jobs as practice.
    Thanks ... J

    Mod (compiled) Perl is faster than Java, I think. Also, Perl has been optimized for this sort of thing. Plus, if you're creating a lot of small to medium-size scripts, as you mentioned, Perl will be a lot easier to support for you and whoever comes after you. Stick with what you know.
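
    For reference, the kind of job described above (scan a large text file line by line, drop some lines, rewrite others) looks roughly like this in Java when streamed, so the whole file never sits in memory. A minimal sketch; the file names and the two patterns are placeholders, not anything from the thread.

    // Sketch: stream a large text file line by line, skipping unwanted lines and
    // rewriting matches, roughly what a typical Perl filter script would do.
    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Pattern;

    public class TextFilter {
        public static void main(String[] args) throws IOException {
            Pattern drop    = Pattern.compile("^#");        // skip comment lines (placeholder rule)
            Pattern replace = Pattern.compile("\\bfoo\\b"); // replace whole-word "foo" (placeholder rule)

            try (BufferedReader in = Files.newBufferedReader(Paths.get("in.txt"), StandardCharsets.UTF_8);
                 BufferedWriter out = Files.newBufferedWriter(Paths.get("out.txt"), StandardCharsets.UTF_8)) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (drop.matcher(line).find()) {
                        continue;                            // delete this chunk of the file
                    }
                    out.write(replace.matcher(line).replaceAll("bar"));
                    out.newLine();
                }
            }
        }
    }

    Whether this ends up faster than the equivalent Perl depends mostly on I/O and regex complexity, but it is a reasonable practice exercise for exactly the tasks listed above.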

  • Processing large files on Mac OS X Lion

    Hi All,
    I need to process large files (few GB) from a measurement. The data files contain lists of measured events. I process them event by event and the result is relatively small and does not occupy much memory. The problem I am facing is that Lion "thinks" that I want to use the large data files later again and puts them into cache (inactive memory). The inactive memory is growing during the reading of the datafiles up to a point where the whole memory is full (8GB on MacBook Pro mid 2010) and it starts swapping a lot. That of course slows down the computer considerably including the process that reads the data.
    If I run "purge" command in Terminal, the inactive memory is cleared and it starts to be more responsive again. The question is: is there any way how to prevent Lion to start pushing running programs from memory into the swap on cost of useless harddrive cache?
    Thanks for suggestions.

    It's been a while, but I recall using the "dd" command ("man dd" for info) to copy specific portions of data from one disk, device or file to another (in 512-byte increments).  You might be able to use it in a script to fetch parts of your larger file as you need them, and dd can read from and write to standard input/output, so it's easy to get data and store it in a temporary container like a file or even a variable.
    Otherwise, if you can afford it, and you might with 8 GB of RAM, you could try disabling swapping (paging to disk) altogether and see if that helps...
    To disable paging, run the following command (in one line) in Terminal and reboot:
    sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    To re-enable paging, run the following command (in one line) in Terminal:
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    Hope this helps!

  • Upload and Process large files

    We have a SharePoint 2013 on-premises installation and a business application that provides an option to copy local files to a UNC path, with some processing logic applied before copying them into a SharePoint library. The current implementation is:
    1. The user opens the application and clicks the "Web Upload" link in the left navigation. This opens a \Layouts custom page to select the upload file and its properties.
    2. The user specifies the file details and chooses a web Zip file from his local machine.
    3. The Web Upload page submit action will
         a. call a WCF service to copy the Zip file from the local machine to a preconfigured UNC path
         b. create a list item to store its properties along with the UNC path details
    4. A timer job executes at a periodic interval to
         a. query the list for items that are NOT processed and find the path of the ZIP file folder
         b. unzip the selected file
         c. loop over the unzipped file content and push it into the SharePoint library
         d. update the list item in the "Manual Upload List"
    Can someone suggest a different design approach that can manage the large file outside of the SharePoint context? Something like:
       1. Some option to initiate the file copy from the user's local machine to the UNC path when he submits the layouts page
       2. Instead of timer jobs, external services that grab data from the UNC path and process it at periodic intervals to push it into SharePoint.

    Hi,
    According to your post, my understanding is that you want to upload and process files for SharePoint 2013 server.
    The following suggestions are for your reference:
    1. We can create a service to process the uploaded file and copy the files to the UNC folder.
    2. Create an upload-file visual web part and call the file-processing service.
    Thanks,
    Dennis Guo
    TechNet Community Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]

  • How can I reprint a large # of files in adobe acrobat XI since there's no batch processing feature?

    How can I reprint a large # of files in adobe acrobat XI since there's no batch processing feature?

    One of the available commands in the Action Wizard is Print (under the More
    Tools sub-section). You create a new Action, add that command and then run
    it on your files to print them all.

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process Large files using ICO. I am on PI 7.3 and I am using new feature of PI 7.3, to split the input file into chunks.
    And I know that we can not use mapping while using Chunk Mode.
    While trying I noticed below points:
    1) I created the Data Type, Message Type and Interfaces in ESR and used them in my scenario (no mapping was defined); the sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (.tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So, please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    While trying I noticed below points:
    1) I had Created Data Type, Message Type and Interfces in ESR and used the same in my scenario (No mapping was defined)Sender and receiver DT were same.
    Result: Scenario did not work. It created only one Chunk file (.tmp file) and terminated.
    2) I used Dummy Interface in my scenario and it worked Fine.
    So, Please confirm if we should always USE DUMMY Interfaces in Scenario while using Chunk mode in PI 7.3 Or Is there something that I am missing.
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to the chunk mode in File Adapter
    As per the above screenshots, the split never considers the payload; it's just a binary split. So the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably doing content conversion; that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Process large file using BPEL

    My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to the Oracle database. I can't use the adapter's debatching feature because of the header and detail record validation requirement. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2GB, etc.) as per the performance tuning specified in the Oracle BPEL user guide. We are using a 4-CPU, 4GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process takes a lot of time, while the other transient processes perform as expected.
    Following are the statistics for the Receive activity per BPEL process:
    500KB: 40 sec
    3MB: 1 hour
    Because we have 5 BPEL processes, a lot of time is wasted in the Receive activity.
    I didn't try 10 MB so far because of the poor performance figures for the 3 MB file.
    Does anyone have any idea how to improve the performance of the initial Receive activity of a BPEL process?
    Thanks
    -Simanchal

    I believe the limit in SOA Suite is 7MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
    SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory; when any transformation occurs it can increase this message by 3 to 10 times. If you are writing to a database, why can't you process the rows one by one?
    If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200MB, so this can be an option as well, but it may require debatching.
    cheers
    James

  • In large PDF files my cursor moves around the page at warp speed. It does not seem to make any difference if I slow it down with Sys Prefs. This is not only in preview but also other programs to open PDFs.

    In large PDF files the cursor moves at warp speed with a mind of its own; I've tried slowing it down in System Preferences with no luck. On small files cursor action is normal. This happens in Safari, Preview and other programs that use PDF files. Sorry to say, this does not seem to be a new problem with 7.#; I've had it with previous versions of the operating system. Right now I'm using an Apple wireless mouse and keyboard, but the problem does not go away with the USB keyboard and mouse either.

    Please read this whole message before doing anything.
    This procedure is a test, not a solution. Don’t be disappointed when you find that nothing has changed after you complete it.
    Step 1
    The purpose of this step is to determine whether the problem is localized to your user account.
    Enable guest logins* and log in as Guest. Don't use the Safari-only “Guest User” login created by “Find My Mac.”
    While logged in as Guest, you won’t have access to any of your personal files or settings. Applications will behave as if you were running them for the first time. Don’t be alarmed by this; it’s normal. If you need any passwords or other personal data in order to complete the test, memorize, print, or write them down before you begin.
    Test while logged in as Guest. Same problem?
    After testing, log out of the guest account and, in your own account, disable it if you wish. Any files you created in the guest account will be deleted automatically when you log out of it.
    *Note: If you’ve activated “Find My Mac” or FileVault, then you can’t enable the Guest account. The “Guest User” login created by “Find My Mac” is not the same. Create a new account in which to test, and delete it, including its home folder, after testing.
    Step 2
    The purpose of this step is to determine whether the problem is caused by third-party system modifications that load automatically at startup or login, by a peripheral device, by a font conflict, or by corruption of the file system or of certain system caches.
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards, if applicable. Start up in safe mode and log in to the account with the problem. You must hold down the shift key twice: once when you boot, and again when you log in.
    Note: If FileVault is enabled, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to boot and run than normal, with limited graphics performance, and some things won’t work at all, including sound output and Wi-Fi on certain models. The next normal boot may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    Test while in safe mode. Same problem?
    After testing, reboot as usual (not in safe mode) and verify that you still have the problem. Post the results of Steps 1 and 2.

  • Large .WAV file getting slowed/transposed down!

    Hello all! HELP!
    Why is it that I drag and drop a .WAV file into Garageband (it is a LARGE .WAV file at 779 MB) and it loads just fine and then plays the file DOWN a half-step lower??? So it slows down the whole file, in essence.
    Any ideas?
    MRuckles

    http://www.bulletsandbones.com/GB/GBFAQ.html#tooslow
    (Let the page FULLY load. The link to your answer is at the top of your screen)

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
    I have a file which is around 72MB with more than 100,000 (1 lakh) records. XI is able to pick up the file if it has up to 30,000 records. If the file has more than 30,000 records, XI picks up the file (and deletes it once picked up), but I don't see any information under SXMB_MONI, neither error nor successful nor processing. It simply picks up and ignores the file. If I process these records separately, it works.
    How can I process this file? Why is it simply ignoring the file? How can I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
    XI picks up the file based on its maximum processing limit as well as the memory and resource consumption of the XI server.
    Processing a 72 MB file is on the higher side. It increases the memory utilization of the XI server, and that may fail to process at the maximum point.
    You should divide the file into small chunks and allow multiple instances to run (see the sketch after this reply). It will be faster and will not create any problems.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File Limit -- please refer to SAP note: 821267 chapter 14
    File Limit
    Thanks
    swarup
    Edited by: Swarup Sawant on Jun 26, 2008 7:02 AM
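
    To illustrate the "divide the file into small chunks" advice above, outside of XI itself: a minimal splitter that writes every N records to a new file, which the file adapter can then pick up one piece at a time. The input file name, chunk size and output naming scheme are assumptions for illustration only.

    // Sketch: split a large flat file into pieces of RECORDS_PER_CHUNK lines each,
    // so each piece stays well inside what the integration server handles comfortably.
    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FileSplitter {
        static final int RECORDS_PER_CHUNK = 25_000; // illustrative chunk size

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = Files.newBufferedReader(Paths.get("big_input.txt"), StandardCharsets.UTF_8)) {
                String line;
                int count = 0;
                int part = 0;
                BufferedWriter out = null;
                while ((line = in.readLine()) != null) {
                    // Start a new chunk file every RECORDS_PER_CHUNK lines
                    if (count % RECORDS_PER_CHUNK == 0) {
                        if (out != null) out.close();
                        out = Files.newBufferedWriter(
                                Paths.get(String.format("chunk_%03d.txt", part++)), StandardCharsets.UTF_8);
                    }
                    out.write(line);
                    out.newLine();
                    count++;
                }
                if (out != null) out.close();
            }
        }
    }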
