ASR9k CGNAT ISM scalability and performance design

Hi.
I need some performance and scalability numbers and limitations for the ASR9k ISM card doing NAT44. The documentation says "at least 10 Gbps full-duplex bandwidth throughput". Some other presentations mention 15 Gbps, but I think that is probably a pps limitation rather than a bandwidth figure.
So my questions are:
1) How many pps can one ISM card handle? I found a figure of around 1.5 Mpps somewhere; is this correct?
2) Is this limit separate for inbound and outbound traffic, or is it a cumulative value?
3) I read somewhere that two ServiceApp pairs must be used if we want to use the full potential of the ISM card. Does the limit in question 1 refer to one ServiceApp pair or to two pairs? How many pps should each ServiceApp pair handle?
4) How will we know when the ISM can't handle any more traffic? Will the "Inside to outside drops system limit reached" and "Inside to outside drops resource depletion" counters start to increase?
Thanks

Hi Omadon,
1) It would be about 5.5 - 6 Mpps per ISM. It may vary a little based on packet length and other parameters such as logging.
2) It is a cumulative value (I2O and O2I traffic combined).
3) Yes, you would need two ServiceApp pairs for maximum throughput. With one ServiceApp pair, you would achieve 50% of the throughput.
4) There will be drops at internal ports. They will not be reflected in any XR "show" CLI output.
regards,
Somnath.
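
For a rough feel of how a per-ISM packet-rate limit relates to the 10 Gbps and 15 Gbps figures quoted above, here is a back-of-the-envelope sketch. The ~6 Mpps value comes from the reply above; the average packet sizes are illustrative assumptions, not Cisco figures.

// Illustrative only: convert an assumed per-ISM packet rate into aggregate
// bandwidth for a few assumed average packet sizes.
public class IsmThroughputEstimate {
    public static void main(String[] args) {
        double pps = 6_000_000d;                 // ~6 Mpps per ISM (cumulative I2O + O2I), per the reply above
        int[] avgPacketBytes = {200, 256, 512};  // assumed average packet sizes in bytes
        for (int bytes : avgPacketBytes) {
            double gbps = pps * bytes * 8 / 1e9; // packets/s * bits per packet -> Gbps
            System.out.printf("avg %d-byte packets -> ~%.1f Gbps aggregate%n", bytes, gbps);
        }
    }
}

At a ~250-byte average packet size this works out to roughly 12 Gbps aggregate, so both the "at least 10 Gbps" and the ~15 Gbps figures can be consistent with the same pps limit depending on the traffic mix.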

Similar Messages

  • Custom Portal Service Scalability and Performance

    I want to know whether custom portal services are scalable enough to handle 300 concurrent users.
    We are developing custom portal services that consume a web service to send requests to an endpoint and get responses.
    These portal services will be accessed by portal components.
    We are expecting around 300 concurrent users.
    So are the custom portal services scalable? Does SAP provide support for this kind of architecture?
    thanks in advance

    Hi,
    the portal can support several thousand concurrent users. You only have to do proper sizing (SAPS) and set up your landscape correctly.
    As for the custom portal service: of course it is scalable, but the performance depends on your coding skills and on what the service will do.
    AFAIK SAP offers support for custom code running on the platform, but not for errors made by the custom code. There is (was, will be) a service from SAP where they check your code for known issues.
    br,
    Tobias

  • How to make OBPM 10.3g solutions highly scalable and performant?

    Hi experts,
    I would like to request some advice on how to make Oracle BPM 10.3g based solutions highly scalable. In our organisation, we are rolling out a few processes (3-5) on OBPM 10.3g, with a couple of hundred concurrent users on each. Going forward, however, we expect at least 50 processes, with concurrent users on each process reaching up to 500-1000. We understand that each process could be packaged as a separate OBPM project, targeted to a separate OBPM engine, where each engine itself is targeted to a separate WebLogic server cluster. Is this the correct approach?
    Apart from the above, are there any additional factors/approaches that we must consider while implementing our solution?
    Is there any documentation that discusses how to improve the performance/scalability of OBPM 10.3g solutions?
    We would highly appreciate any pointers.. Many thanks.
    Brgds,
    Amit

    LoadRunner is one of the more popular means of performance testing BPM....
    [http://www8.hp.com/us/en/software/software-product.html?compURI=tcm:245-935779&pageTitle=loadrunner-software]

  • Oracle Express Tips/Whitepapers on Design and Performance ?

    Hi all,
    I was wondering if someone could please provide me with some information on where to find articles / white papers / tips on Oracle Express design and performance tuning, etc.
    Thanks for your help in advance.
    RN

    Following this link I think you can find what you are looking for.
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showNOT?p_id=131949.1&p_showHeader=1&p_showHelp=0
    Bye Luigi

  • Important!! Improve the life and performance of the battery.

    Reduce the operating temperature and increase battery life
    The battery in your notebook PC is designed to provide the necessary amount of energy for the processor while maintaining HP high safety standards. As a result, the battery may not charge or may stop providing power to the notebook when the battery temperature exceeds the specified, design safety level.
    If the battery life appears shorter than normal, the battery stops charging before it is 99%-100% full, and the battery appears warmer than usual, then the battery has most likely reached its designed "no charge" safety state. The battery will no longer charge until the temperature condition is corrected.
    Try one of the following methods to correct the battery temperature:
    When charging the battery, do not use applications that require large amounts of system resources, such as graphics- or memory-intensive applications or heavy, extended hard drive usage.
    Turn off your notebook and remove the battery to allow it to return to a safe operating temperature.
    Make sure the notebook PC is operating on a hard surface. Using the Notebook PC on a bed or sofa may block the vents causing the notebook PC to heat up and shut down.
    By taking these steps, the battery will return to its normal operating temperature range and continue to charge and discharge as designed.
    Calibrating the battery while PC not in use
    Recalibrating the battery requires a cycle of a complete charge and a complete discharge. To recalibrate the battery while the PC is not in use, complete the following steps.
    The recalibration may take 1-5 hours depending on the age of the battery and the configuration of the notebook PC you own. The PC should not be used while you perform the following steps. Completing all the following steps will also calibrate the battery so that the power meter readings are accurate.
    Shut down the notebook PC
    Connect the AC Adapter to the notebook PC and to an electrical socket.
    Charge the Notebook PC until the Battery Charge light is Green. This indicates the battery is completely charged.
    Press and release the Power Button to start the computer.
    Press the F8 key several times when the HP Logo displays.
    When the Windows Advanced Startup Menu displays, select the Startup in Safe Mode option.
    Remove the AC power adapter from the notebook PC.
    Allow the battery to discharge completely until the notebook PC turns off.
    The battery is now calibrated and the battery level reading on the power meter is now accurate.
    If you are not using the notebook regularly, unplug the AC adapter and shut down the notebook. Following these practices will improve the life and performance of the battery. Here is a quick list of dos and don'ts for the care of your Li-Ion batteries:
    Do's
    When you receive a new Notebook or Tablet PC, leave the battery to fully charge overnight.
    Condition a new battery by using it until it is fully discharged, and then re-charge it fully. Doing this once a month will help to accurately calibrate your battery.
    Always ensure the battery is recharged as soon as possible after it becomes fully discharged. A battery will be permanently damaged if left for an extended length of time in a fully discharged state.
    Remember that a Lithium-Ion battery will slowly deteriorate; a new battery will always perform better than one that is 6-months old.
    Remember that the battery half-life is rated for a certain total number of charge/discharge cycles (see your User Manual or Quick Start Guide for the rating). For example, a battery that is rated for 3 hours and 500 charge/discharge cycles, will still be considered as within specification, even if it only lasts for 1 hour 45 minutes after 500 charge/discharge cycles.
    Heat is the worst enemy of a battery. Allow plenty of air to circulate around the Notebook/Tablet PC, so that the battery is kept as cool as possible when charging and also when in use. If provided, use the integrated 'legs' under the Notebook to raise the notebook and improve air circulation.
    Remove the battery if storing for several months (the battery should be at approximately 50% charge or higher).
    If you use a NoteBus or if charging your Notebooks or Tablet PCs in a confined space, allow for adequate ventilation in order to keep the batteries as cool as possible.
    Don'ts
    Do Not - Expose the battery to excessive heat or cold (i.e. outside the range of 10-35 degrees Centigrade ambient).
    Do Not - Store the battery in a fully charged state (store batteries with about 50% charge).
    Do Not - Allow a nearly flat battery to be unused for more than a month or so. The battery will slowly discharge until it becomes fully discharged and this will permanently damage the battery cells.
    Do Not - Charge your Notebook/Tablet PC inside a carry case - the battery may overheat.
    Do Not - Charge your Notebook/Tablet PC when stacked on top of each other - the battery may overheat.
    Remember: Your battery is slowly degrading all the time, even if it is not used. Keeping your battery as cool as possible will slow down this degradation considerably.
    For more information please visit the following links:
    How to Improve the Performance of the Battery
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01297640&cc=us&lc=en&dlc=en
    10 Tips to make your Laptop Battery last longer
    http://labnol.blogspot.com/2006/03/10-tips-to-make-your-laptop-battery.html
    Disclaimer: By clicking on the link above, you will be leaving HP.com to visit a web site that is not maintained by HP and where the HP privacy policy does not apply. This link is provided to you for convenience and does not serve as an endorsement by HP of any information or contacts that you may find on this non-HP site.

    I hope the above article will help you guys..

  • Business Intelligence and performance point problem

    Hi Everyone,
    Please, does anyone know why creating a dashboard from a SharePoint list is such a hassle? I have configured PerformancePoint Services, a secure store, and a business intelligence website that has a data connection library and a PerformancePoint content list. Unfortunately, any time I try to use Dashboard Designer to create a new data source and specify the site settings (using the unattended account method), I can't save it. It gives me an error claiming that I either don't have permissions or the data source doesn't exist.
    A search around the web for answers doesn't seem to solve my problem, as it suggests the problem is associated with many things which I'm not clear about. Could someone who is good at this at least tell me the critical things to check and the clearest way to set this up?
    Thanks,
    Dominic

    Hi Dominic,
    According to your error "I either don't have permissions or the data source doesn't exist", it should be related to the unattended account permissions.
    Once the unattended service account has been configured, you must grant that account access to your data sources:
    For SQL Server data, the account must have a SQL logon with db_datareader permissions on each database that you want to access.
    For SQL Server Analysis Services data, the account must have read access to the cube or an appropriate portion of the cube, depending on your needs.
    For Excel Services data, the account must have access to the Excel workbook in a SharePoint document library.
    For data in a SharePoint list, the account must have read access to the list.
    Reference:
    https://technet.microsoft.com/en-us/library/ee836145.aspx
    Also you can have a look at the blog:
    http://www.chrismcnulty.net/blog/Lists/Posts/Post.aspx?ID=74
    https://yossidahan.wordpress.com/2012/08/14/cant-get-ssas-databases-to-appear-in-performance-point-dashboard-designer-check-you-adomd-net-version/
    Best Regards,
    Eric
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business.  Money is tight.  I have some very large Excel files (~200MB) that I need to sort and perform logic operations on.  I currently use a MacBookPro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro.  Some of the operations take half an hour to perform.  How much faster should I expect these operations to happen on a new MacPro?  Is there a significant speed advantage in the 6 core vs 4 core?  Practically speaking, what are the features I should look at and what is the speed bump I should expect if I go to 32GB or 64GB?  Related to this I am using a 32 bit version of Excel.  Is there a 64 bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Layer 3 to the Access Layer and MPLS Design Considerations

    Hi,
    We are about to install a new network consisting of Cat 4500s with Sup7E at the Access Layer, with Nexus 7000 at the Distribution and Core layers.
    We have 14 floors with at least three 4500s on each floor. Within the office block where the Access Layer and Distribution Layer reside we need to support secure borderless networking using 802.1x to place users from different parts of the business into segregated networks at layer 3.
    All switches will have the feature sets to support MPLS/ VRF / OSPF / EIGRP / BGP etc.
    We quickly dismissed the idea of using VRF-Lite due to the sheer number of VLANs we would need to manage and maintain; the point-to-point links alone, just to get one additional VRF on each floor, required far too many VLANs.
    As a result we are now considering deploying MPLS. The obvious benefits include scalability and manageability, and the fact that all switch-to-switch links can now be routed instead of having to use SVIs.
    My query is one of design surrounding MPLS and how this maps onto an enterprise network with a routed access layer. Do the Cat 4500s become the CEs and take part in MPLS / BGP and label distribution, or does the BGP peering and label distribution only occur between the Distribution - Core - Distribution layers, mapping to the PE - P - PE topology of an ISP environment, while the access layer simply uses the IGP (OSPF in this case) to learn routes?
    Any help would be greatly appreciated.
    Chris.

    Hi Andy,
    Thanks for your response.
    I have been doing a little more research and it seems the Cat 4500s do not support MPLS! Nor does Cisco have any plans to support it on this platform. I find this a little ridiculous considering the level at which Cisco is pitching this platform. With the Sup 7E only VRF-Lite is supported, with plans to support EVN (which still uses trunk links for logical separation).
    So it looks like we are going to have to go back to the drawing board.
    (perhaps we should have gone HP or Juniper!)
    Chris.

  • Which one is the best way to collect config and performance details in azure

    Hi,
    I want to collect both configuration and performance information for the cloud service, virtual machines, and web roles. I am going to collect all these details using Java, so please suggest which is the best way:
    1) REST API
    2) Azure SDK for Java
    Regards,
    Rathidevi

    Hi,
    There are four main tasks to use Azure Diagnostics:
    Setup WAD
    Configuring data collection
    Instrumenting your code
    Viewing data
    The original Azure SDK 1.0 included functionality to collect diagnostics and store them in Azure storage, collectively known as Azure Diagnostics (WAD). This software, built upon the Event Tracing for Windows (ETW) framework, fulfills two design requirements introduced by the Azure scale-out architecture:
    Save diagnostic data that would be lost during a reimaging of the instance.
    Provide a central repository for diagnostics from multiple instances.
    After including Azure Diagnostics in the role (ServiceConfiguration.cscfg and ServiceDefinition.csdef), WAD collects diagnostic data from all the instances of that particular role. The diagnostic data can be used for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. Transfers to an Azure storage account for persistence can either be scheduled or on-demand.
    To know more about Azure Diagnostics, please refer to the article below (Section: Designing More Supportable Azure Services > Azure Diagnostics):
    https://msdn.microsoft.com/en-us/library/azure/hh771389.aspx?f=255&MSPPError=-2147217396
    https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    https://msdn.microsoft.com/en-us/library/azure/gg433048.aspx
    Hope this helps !
    Regards,
    Sowmya
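
    As a very rough illustration of the REST option mentioned in the question, here is a sketch that polls a metrics-style endpoint from plain Java. The URL, token handling, and headers below are placeholders (assumptions), not real Azure endpoints; the Azure SDK for Java wraps this kind of call for you and is usually the easier route.

    // Illustrative sketch only: a plain-Java HTTP GET against a hypothetical
    // management/metrics endpoint. Replace the placeholder URL and token with
    // whatever your chosen API actually requires.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestPollSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://management.example.com/subscriptions/SUB_ID/metrics"); // placeholder URL
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Bearer " + System.getenv("ACCESS_TOKEN")); // placeholder auth
            conn.setRequestProperty("Accept", "application/json");

            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) body.append(line);
                System.out.println("HTTP " + conn.getResponseCode() + ": " + body);
            }
        }
    }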

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am relative novice to Java, I have procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding the performance of using char[] arrays versus the String and StringBuilder classes with their charAt() methods.
    I read a file one line/record at a time and then process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) using String.charAt(i) is several times (5+ times?) slower than referring to a char of an array by index.
    My question: is this a correct observation regarding the charAt() versus char[] performance difference, or am I doing something wrong in my use of the String class? (A minimal timing sketch appears at the end of this thread.)
    What is the best way (performance-wise) to process character strings in Java if I need to process them one character at a time?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
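    A minimal sketch of that kind of byte-array delimiter scan (illustrative only, not the author's framework; the 'XyZ' delimiter and the names here are just examples):
    // Scan a byte buffer for the first occurrence of a multi-byte record
    // delimiter, the way a physical-record boundary might be located.
    public class DelimiterScanSketch {
        // Returns the index of the first occurrence of 'delimiter' within the
        // first 'length' bytes of 'buffer', or -1 if it is not present.
        static int indexOf(byte[] buffer, int length, byte[] delimiter) {
            outer:
            for (int i = 0; i <= length - delimiter.length; i++) {
                for (int j = 0; j < delimiter.length; j++) {
                    if (buffer[i + j] != delimiter[j]) continue outer;
                }
                return i;
            }
            return -1;
        }
        public static void main(String[] args) {
            byte[] buffer = "field1,field2XyZfield3,field4XyZ".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
            byte[] delim = "XyZ".getBytes(java.nio.charset.StandardCharsets.US_ASCII); // multi-character delimiter
            System.out.println("first delimiter at byte offset " + indexOf(buffer, buffer.length, delim)); // prints 13
        }
    }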
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist let alone do any about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues at the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if and ONLY if that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
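    As an aside on the charAt() versus char[] question that opened this thread, here is the minimal timing sketch referred to above (a quick-and-dirty loop, not a proper benchmark; results vary with JVM warm-up). The length() call is hoisted out of the loop, as discussed earlier in the thread.
    public class CharAccessSketch {
        public static void main(String[] args) {
            // Build a 10-million-character test string.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10_000_000; i++) sb.append((char) ('a' + (i % 26)));
            String s = sb.toString();
            char[] a = s.toCharArray();

            long t0 = System.nanoTime();
            long sum1 = 0;
            int len = s.length();                       // hoisted out of the loop
            for (int i = 0; i < len; i++) sum1 += s.charAt(i);
            long t1 = System.nanoTime();

            long sum2 = 0;
            for (int i = 0; i < a.length; i++) sum2 += a[i];
            long t2 = System.nanoTime();

            System.out.printf("charAt: %d ms, char[]: %d ms (checksums %d / %d)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum1, sum2);
        }
    }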

  • GUI Performance/design  of new Dimension()

    My twofold question is performance/design related.
    When creating a GUI there seems to be a need for a lot of creation of Objects like
    blahblah.setPreferredSize(new Dimension(500,500));
    My first question is: is there a high performance cost if, say, in a particular GUI you do twenty different object creations of new Dimension?
    If so, what is the best way to handle that in design? Should one object be created, and then just set the value on that one object and reuse it? I am sure there are multiple ways, but what is the best performance and best design tactic to handle this?
    Thanks

    >
    My twofold question is performance/design related. When creating a GUI there seems to be a need for a lot of creation of Objects like blahblah.setPreferredSize(new Dimension(500,500)); My first question is: is there a high performance cost if, say, in a particular GUI you do twenty different object creations of new Dimension()?
    >
    Twenty Dimension objects is nothing. Too small to worry about.
    >
    If so, what is the best way to handle that in design? Should one object be created, and then just set the value on that one object and reuse it? I am sure there are multiple ways, but what is the best performance and best design tactic to handle this. Thanks
    >
    For the sake of exercise, you can reuse a Dimension object. Methods like setPreferredSize copy data out of the Dimension rather than copying the reference itself, so there are no consistency issues here:
    Dimension sz = new Dimension();
    sz.setSize(w1, h1);
    comp1.setPreferredSize(sz);
    sz.setSize(w2, h2);
    comp2.setPreferredSize(sz);

  • IBook for Graphic and Web Design

    Hello,
    I'm thinking of buying my sister's iBook (since she doesn't really use it). I was talking to a mac rep about using the iBook for Graphic and Web Design purposes and I wanted to get other people's opinions on how Adobe Creative Suite and Dreamweaver have performed in other people's iBooks.
    Thanks!

    Running a lot of applications simultaneously, or a few high-processing applications such as Adobe's, is what takes up RAM. OS X uses available hard drive space to make up for memory (RAM) deficiencies, which will "slow down" a system's ability to manage the files. So if you like to have multiple applications open at the same time, buy more RAM. Personally, 1 GB ought to be the minimum for today's systems.
    By contrast, a computer's processor, including the logic board, video board, etc., is the engine of the system. The iBook has a small engine. So no matter how much RAM you feed it, it's still going to operate like a small engine. If you want more processing power, you would need to purchase a more powerful notebook (e.g. PowerBook) or a more processor-oriented machine such as the Intel iMac or Mac Pro.

  • Robohelp scalability and speed

    I am encountering scalability/speed issues with a 4000-file RoboHelp 6 project. This has 2000 HTML files and 2000 images. This appears to be the result of RoboHelp performing a number of background file scanning/parsing threads that prevent the user interface from refreshing in a timely manner. The problem is most acute at start-up, when it checks every HTML file in the project - the user interface only becomes responsive after about 90 seconds. There are also problems at other times, particularly after saving a file that has multiple hyperlinks. At the current authoring rate I am adding about 1000 HTML files per year. Will RoboHelp be usable 12 months from now?
    As a test I created an artificial project containing 10,000 empty HTML files and it was totally unusable. Does RoboHelp 7 have the same scalability and speed issues?

    This has happened before for me too - I just recreated the folders, imported the topics back into them, and all was well. This happened after a CPD rebuild as well.
    I considered it a good thing - I am convinced that CPD bloat and the attempt to reconcile file and folder locations was slowing down the program significantly. The CPD just needed to be rebuilt. Of course, I have a large project, and it took a few hours to get back to running speed, but yours is about twice the size, so you will be down for a while.
    If I were you, I wouldn't necessarily consider returning to the backed up dataset right away. You'll be back to the same performance issues. In your shoes, I would suck it up - just rebuild what needs to be rebuilt, and move on. And keep an eye on your CPD file size occasionally. I find that if the file size increases significantly all of a sudden, it's probably time to rebuild it.
    You may want to consider Peter's suggestion (reminiscence?) of breaking up your project into usable chunks and merging them.
    That's my two cents, take it for what it's worth.

  • Difference Between Web application design and Report Designer

    Hi,
    I have searched in the forum and Googled it, but I am still not clear on the difference between Web Application Designer and Report Designer. Why, especially, do we go for WAD? And what is not possible in Report Designer that we can do in WAD? Please could someone explain; that would be a great help to me. I have a lot of confusion about this issue.
    thanks,
    Gal

    Hi,
    The Report Designer is entirely for formatting your query and results. Here you have the flexibility to design the results of your query (one or more) in your own way.
    The WAD is for displaying various web items in one shot for a single query or multiple queries. Here you have the flexibility to do XML coding on the web items, but not as much formatting flexibility as you have in Report Designer.
    However, Report Designer will have more performance issues than WAD. So, generally, it helps when you need extreme formatting of your reports.
    Hope this clears your doubt.
    Regards,
    Srinivas.

  • Web design reporter and Report designer is not working in SAP BI

    Hello,
    I am facing a Java RFC error in the Web Application Designer and Report Designer tools in the BEx application. For more details please check the attachment. Can anyone help me with this issue?
    Please check the my system details in the following
    SAP ECC 6.0 EHP 6 with ABAP stack
    SAP BI 740 with ABAP stack
    BO 4.1
    Thanks & regards,
    Surendra

    Hi,
    Check with the basis team whether they have configured the RFC connection properly or not.
    Refer below also:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b0a5216a-349c-2a10-9baf-9d4797349f6a?QuickLink=index&…
    Thanks.
