Best practice for optimizing processing of big XMLs?

All,
What is the best practice when dealing with large XML files ... (say a couple of MBs)?
Instead of having to read the file from the file system every time a static method is run, what would be the best way for the program to read the file once and then keep it in memory, so that the next time it would not have to read and parse it all over again?
Currently my code just reads the file in the static method, like so:
public static String doOperation(String path, ...) throws Exception {
    try {
        String masterFile = path + "configfile.xml";
        Document theFile = (Document) getDocument(masterFile);
        Element root = theFile.getDocumentElement();
        NodeList nl = root.getChildNodes();
        // ... operations on file
Optimization tips and tricks most appreciated :-)
Thanks,
David

The best practice for multi-megabyte XML files is not to have them at all.
However if you must, presumably you don't need all of the information in your XML, repeatedly. Or do you? If you need a little bit of it here, then another little bit of it there, then yet another little bit of it later, then you shouldn't have stored your data in one big XML.
Sorry if that sounds unhelpful, but I'm having trouble imagining a scenario when you need all the data in an XML document repeatedly. Perhaps you could expand on your design?
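That said, if every call really does need the whole document and the file never changes while the JVM is up, the usual trick is to parse once and cache the result in a static field. A minimal sketch (class and method names invented; note that DOM trees are not guaranteed thread-safe, so treat the cached tree as read-only):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public final class ConfigCache {
    private static Document cached;   // parsed once, then reused by every caller

    // synchronized so two threads can't both trigger a parse on first use
    public static synchronized Document get(String path) throws Exception {
        if (cached == null) {
            cached = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(path + "configfile.xml"));
        }
        return cached;
    }
}

Your doOperation(...) would then call ConfigCache.get(path) instead of getDocument(masterFile).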
PC²

Similar Messages

  • What are the Best Practices for Optimizing Images in InDesign Files

Is there a best practice for using images in InDesign to optimize the document before converting to a PDF? Specifically, what I'm asking is: will the PDF file compress better if the images are cropped prior to placing them in InDesign? I'd like to know the answer both for creating PDF files for printing using images that are 300dpi and for creating PDF files for online delivery using images that are 72dpi. I have an employee who insists images need to be cropped to actual dimensions before placing them in the InDesign document. I've never done it that way and believe that her recommended process is way too time-consuming and leaves you with no leeway to tweak your page design, since the images are tightly cropped.

    As for absolute cropping, I agree with your stance. Until the layout is fixed, preserving your ability to easily manipulate photo size and positioning is key.
    Some clever image management methods have been described in the discussion forums, and one that appealed most to me was the use of duplicate linked image folders. Having a high-res (CMYK) folder and a low-res (RGB) folder to switch between for different output enables you to use both to your advantage. Use the low-res images for layout, for internal proofing, and for EPUB/online PDF/HTML output. Then it's simply a quick switch to the high-res image folder for print purposes. You can easily prepare the alternate collection of images with a Photoshop batch convert script or with the Photoshop Image Processor. Save your presets!

  • Best Practices for Optimizing CPU and Memory

    It would be really helpful if someone would summarize the best way to set things up between Concert Level, Set Level, and Patch level, along with using aliases, to make sense of the most efficient ways to set up a new concert in MainStage. I have spent quite a bit of time messing with it all, and have a pretty good understanding, but a summarized document and "rules" would be very helpful.
    As an example, I have two guitars that plug into separate inputs in my hardware interface. But I want to use the same channel strip for both (with plug-ins for effects and lead sounds). So, I have some patches that select Input 3 for one guitar, and Input 4 for the other. Am I duplicating all of my plug-ins and effects by doing this, or is there a more efficient method?
    I have come across similar "puzzles" when using MainStage, and can always make things work, but am never quite sure what is most efficient. Has anyone seen such a document or best practices guide? Or can you just share your learnings here?

    As far as I know, the only load on the CPU is the active patch. All the others are 'bypassed' according to the manual. My own experience backs up this claim. When I experiment with sounds - several synths with complex FX plugins - I usually put a large CPU meter on my layout to monitor the load. The load seems to be fairly similar whatever channel strips & synths I have. (NB normally I only have one synth in each set. If I have several synths in one set, that does change the load.)
    FX including Amp Designer don't put as much load on the processor as synths as far as I can see.
    Amp settings are a bit of a nuisance as you can't call up Amp Designer settings from a controller yet. That means if you need more than one amp setting, you will need more than one instance of Amp Designer. In that case, you might as well load it in each set's channel strip. You shouldn't notice any gain in CPU load.
    Pedal Board: if you have a consistent set of pedals that you move between all the time, then put the board in an Aux channel. That way, you will only have to set up the controls once. My GP guitar pedal board layout is Wah, Overdrive, Delay, Chorus & Flange. All have Bypass buttons allocated & some have extra controls set e.g. Chorus & Flange parameters & Delay times. The Overdrive usually has a fairly comprehensive set of controls depending on the pedal I choose.
    This allows me to change the sounds quite a lot with a minimum of controls.
Do you put your sound into a guitar amp or a PA system? I put my rig into a PA from the computer. I have found that I get the best, most 'natural' electric sound by not using Amp Designer, but by using Channel EQ instead. I have an EQ patch that sounds very close to a Peterson clean channel (Peterson amps are an English-made transistor amp that sounds rather like a Mesa Boogie 20 watt valve amp). When I add FX to that, I can get a very wide range of amp sounds very easily. It might be worthwhile exploring EQ settings.

  • Best practices for batch processing without SSIS

    Hi,
The gist of this question is: in general, how should a C# client insert/update batches of records using stored procedures? The ideas I can think of are:
    1) create 1 SP with a parameter of type XML, and pass say 100 records at a time, on 1 thread.  The SP reads the XML as a table and does a single INSERT.
2) create 1 SP with many parameters, that inserts 1 record.  I can either build a big block of EXEC statements for say 100 records at a time, or call the SP one at a time, on 1 thread.  Obviously this seems the slowest.
    3) Parallel processing version of either of the above: Pass 100 records at a time via XML parameter, big block of EXEC statements, or 1 at a time, and use PLINQ to make multiple connections to the database.
The records will be fairly wide (a substantial row size).
    Which scenario is likely to be fastest and avoid lock contention?
    (We are doing batch processing and there is not a current SSIS infrastructure, so it's manual: fetch data, call web services, update batches.  I need a batch strategy that doesn't involve SSIS - yet).
    Thanks.

    The "streaming" option you mention in your linked thread sounds interesting, is that a way to input millions of rows at once?  Are they not staged into the tempdb?
    The entire TVP is stored in tempdb before the query/proc is executed.  The advantage of the streaming method is that it eliminates the need to load the entire TVP into memory on either the client or server.  The rowset is streamed to the server
    and SQL Server uses the insert bulk method is to store it in tempdb.  Below is an example C# console app that streams 10M rows as a TVP.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections;
using System.Collections.Generic;
using Microsoft.SqlServer.Server;

namespace ConsoleApplication1
{
    class Program
    {
        static string connectionString = @"Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI;";

        static void Main(string[] args)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.usp_tvp_test", connection))
            {
                // A Structured parameter whose value is an IEnumerable<SqlDataRecord>
                // is streamed to the server rather than buffered in client memory
                command.Parameters.Add("@tvp", SqlDbType.Structured).Value = new Class1();
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();
                command.ExecuteNonQuery();
                connection.Close();
            }
        }
    }

    class Class1 : IEnumerable<SqlDataRecord>
    {
        private SqlMetaData[] metaData = new SqlMetaData[1] { new SqlMetaData("col1", SqlDbType.Int) };

        public IEnumerator<SqlDataRecord> GetEnumerator()
        {
            // Yield one row at a time; nothing forces the full 10M rows into memory
            for (int i = 0; i < 10000000; ++i)
            {
                var record = new SqlDataRecord(metaData);
                record.SetInt32(0, i);
                yield return record;
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }
}
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Best practice for reducing the number of XML files in an IDML file for translation?

Our engineering team is looking for ways to reduce the number of XML files produced when a (relatively) simple 2-page INDD file is saved out as IDML for translation.

IDML contains quite a few XML files, but I suspect you're only interested in the Stories folder if you're working on a translation. The way to do that is... to reduce the number of stories. If it's a two-pager, chances are that you have a whole bunch of unthreaded text frames. Thread them in logical reading order. This will help the translator(s) as well: with frames threaded in logical reading order, they don't have to work out how to read the document in the same order as the target audience will.

Best Practices for Accessing the Configuration Data Modelled as an XML File in OSB

    Hi,
I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
One of the easiest ways is described in:
Re: OSB: What is best practice for reading configuration information
Another could be:
Uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
I need expert answers to the following.
1] I have an .xsd file representing the configuration data. The structure of the XSD is:
<FrameworkConfig>
  <Config type="common" key="someKey">propertyvalue</Config>
</FrameworkConfig>
2] As my project moves from one environment to another, the property value will change according to the environment...
For Dev:
<FrameworkConfig>
  <Config type="common" key="someKey">propertyvalue_Dev</Config>
</FrameworkConfig>
For Stage:
<FrameworkConfig>
  <Config type="common" key="someKey">propertyvalue_Stage</Config>
</FrameworkConfig>
3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
OSB Project Folder
|--- Dev
|      |-- Dev_Config_file.xml
|--- Stage
|      |-- Stage_Config_file.xml
|--- Prod
|      |-- Prod_Config_file.xml
4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
5] I also need to look up/model a value that specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running, i.e. some construct that acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key/value pairs for the Dev environment.
6] This thread, Re: OSB: What is best practice for reading configuration information,
suggests designing a web application that serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating an extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
Hope I am clear... I really appreciate your comments and suggestions.
Sushil
Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
Check this link for details on $inbound/ctx:transport : http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
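For example, assuming the URI comes back as /service/stage/service1 and your XQuery engine supports it, an expression like fn:tokenize($inbound/ctx:transport/ctx:uri/text(), '/')[3] should yield "stage" (the function choice and the index are illustrative; adjust them to your actual URI layout).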

  • Best Practice for the Vendor Consignment Process

Hi,
Does anybody have the best practices for the vendor consignment process?
Please send me the document.
Please explain the consignment process in SAP.
Thanks & Regards,
Kumar Rayudu

    Hi Kumar,
In order to have consignment in SAP you need master data such as the material master, vendor master and a purchase info record of consignment type. You have to enter item category K when you enter the PO. The goods receipt posted into vendor consignment stock will be non-valuated.
1. The initial step starts with raising a purchase order for the consignment item.
2. The vendor receives the purchase order.
3. GR happens for the consignment material.
4. Stocks are received and placed under consignment stock.
5. Whenever we issue to production, or if we transfer post (using movement type 411) from consignment to own stock, the liability occurs.
6. Finally comes the settlement using MRKO: you settle the amount for the goods consumed during a specific period.
    regards
    Anand.C

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake was explaining something about not using a CLOB for storing the XML, and that "it will break your application."
We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
  CREATE TABLE "INVOICE_DOC"
   (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
     "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
     "VERSION" VARCHAR2(256) NOT NULL ENABLE,
     "STATUS" VARCHAR2(256),
     "STATE" VARCHAR2(256),
     "USER_ID" VARCHAR2(256),
     "APP_ID" VARCHAR2(256),
     "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
     "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
      CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
             REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
   ) SEGMENT CREATION IMMEDIATE
  INITRANS 20
  TABLESPACE "####_####_DATA"
  XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB (
    TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
    NOCACHE LOGGING
    STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
  XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
What is a best practice for this type of table?  Yes, we intend to register the schema against an XSD.
    Any help/advice would be appreciated.
-abe

    Hi,
I suggest you read this paper: Oracle XML DB: Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
Regarding "a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that it will break your application": I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though it is still supported for backward compatibility).
The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of the BASICFILE CLOB.
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
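In the DDL above, that would mean replacing the STORE AS BASICFILE CLOB clause with something along the lines of XMLTYPE COLUMN "DOC" STORE AS SECUREFILE BINARY XML (the exact clause depends on your version and requirements, so check the paper first).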
BTW... you may want to post here next time, in the dedicated forum: {forum:id=34}
Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably seen at OOW too.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • JSF - Best Practice For Using Managed Bean

I want to discuss the best practice for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
- In the book Core JavaServer Faces, the authors mention that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
In the case of a page bound to a result set, the bean usually holds a java.util.List for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. If you query inside getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result-set changes in setXXX, since the changes may not be final; instead, you want to handle them in an action listener (like save(actionEvent)). A sketch of this pattern follows below.
I would be glad to see your thoughts on this.
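To make the "query in getXXX, persist in an action listener" pattern concrete, here is roughly what I mean (hypothetical sketch; CustomerDao and Customer are invented names, and the bean is registered as session-scoped in faces-config.xml):

import java.util.List;
import javax.faces.event.ActionEvent;

public class CustomerListBean {
    private List customers;        // cached query result
    private boolean stale = true;  // set by any action that changes the data

    public List getCustomers() {   // bound as #{customerListBean.customers}
        if (stale) {               // re-query only when marked stale,
            customers = new CustomerDao().findAll();  // not on every EL evaluation
            stale = false;
        }
        return customers;
    }

    public void save(ActionEvent e) {          // action listener persists edits
        new CustomerDao().saveAll(customers);
        stale = true;                          // force a refresh on next display
    }
}

This keeps the session bean's convenience while avoiding both the "never refreshed" problem and a database hit on every getter call.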
    --- Request Bean ---
A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent plus a list of child records from the database, and you also allow the user to edit the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I give Parent request scope, the setChildren method is never called when I submit the form. I am not sure if this is specific to ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
I believe JSF doesn't update the bindings for all component attributes; it only updates the input components' value bindings. Someone please verify this is true.
In many cases I found a request bean very hard to work with if there are lots of updates (I had lots of trouble updating the binding value for rendered attributes).
However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.

  • Best Practice for a Print Server

What is the best practice for having a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
Hardware
At the moment we have one server, a 2GHz dual G5 with 4GB RAM and an Xserve RAID. The server is also our main Open Directory server, with 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac Mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg

What are best practices for packaging and deploying J2EE apps to iAS?

We've been running a set of J2EE applications on a pair of iAS SP1b servers for about a year and it has been quite stable.
Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). We've also noticed, very occasionally, that old versions of classes sometimes find their way onto our machines.
What is considered to be best practice in terms of packaging and deployment, specifically:
1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ JSP files. Are people out there using this, or are they scripting it with a build tool such as Ant?
2) Deploying an existing application to multiple iAS instances. Are you unregistering the old application and then re-registering the new application? Are you shutting down iAS while doing the deployment?
3) Deploying ear files can take 5 to 10 minutes; is this normal?
4) In a clustered scenario where HTTPSession is shared, what are the consequences of deployments for data stored in the session?
thanks in advance for your replies
    Owen

You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make it clear where your application is loading its files from.
If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage collecting in your iAS registry.
You can do several things to resolve this. The most complete solution is to reinstall the application server. This guarantees a clean LDAP registry. Of course, you then have to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef, and finally remove the older entries from NameTrans.
1) Best practices for deployment depend on your particular environmental needs. Many people utilize Ant as a build tool. In later versions of the application server, complete Ant scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS-specific targets as well as general J2EE targets. There are iAS-specific targets that can be utilized with the 1.3 version. Specialized build targets are not required, however, to deploy to iAS.
Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, not recreating an existing application with new GUIDs, and not removing registered components, you may skip the undeploy phase.
If you shut down the KJS processes during deployment, you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in the session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice for loading config params for web services in BEA

    Hello all.
I have deployed a web service using a Java class as the back end.
I want to read in config values (like init-params for servlets in web.xml). What is the best practice for doing this in the BEA framework? I am not sure how to use the web.xml file in the WAR file, since I do not know the name of the underlying servlet.
    Any useful pointers will be very much appreciated.
    Thank you.
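P.S. The only portable fallback I can think of so far is loading a properties file from the classpath, roughly like this (config.properties is just an invented name, packaged under WEB-INF/classes in the WAR):

import java.io.InputStream;
import java.util.Properties;

public class ServiceConfig {
    private static final Properties PROPS = new Properties();

    static {
        try {
            // looks for config.properties anywhere on the service's classpath
            InputStream in = ServiceConfig.class.getResourceAsStream("/config.properties");
            if (in != null) {
                PROPS.load(in);
                in.close();
            }
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

But I'd rather use whatever mechanism the framework itself provides, if there is one.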

It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation, then it should be secured.
The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
You can enforce rules at your network layer to allow access to the app server only from the Gateway.
When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
- The next BPEL developer in your project may not be aware of the security extensions.
- Centralizing security enforcement keeps development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
I have a question. I am in the process of deploying SCCM 2012 R2... I was in the process of deploying the Software Update Point on SCCM with one of the existing WSUS servers installed on a separate server from SCCM.
A debate has started with a colleague who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS would download the updates from Microsoft, and SCCM would work as a downstream server to fetch updates from the WSUS server.
But in my view it is recommended to install the WSUS server on the same server where SCCM is installed... actually, it is recommended to install WSUS on a site system, and you can use the same SCCM server to deploy WSUS.
Please advise me on the best practices for deploying SCCM and WSUS... what does Microsoft say: should WSUS be installed on the same SCCM server, or should WSUS be on a separate server from the SCCM server?
Awaiting your advice ASAP :)
Regards, Owais

    Hi Don,
thanks for the information, another quick one...
is the above-mentioned configuration I did correct in terms of planning and best practices?
I agree with Jorgen: it's OK to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
One thing to note is that if WSUS ever becomes "corrupt", it can be difficult to repair, and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
Sometimes WSUS goes wrong (not because of ConfigMgr).
Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS (this is not a recommendation for a CAS, just something to be aware of).
    Don
    (Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
    This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!)

  • Best practice for partitioning 300 GB disk

    Hi,
I would like to seek advice on how I should partition a 300 GB disk on Solaris 8.x, and what the optimal size for each of the partitions would be.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export system that was mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files in that directory that were created went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
* You should have another partition as big as your system RAM and assign that partition as a system/core dump area for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.

  • Best practice for simply invoking a web service

    Hello,
We have web services deployed and accessible as WSDL documents in the SOA service manager/UDDI product. What is the best practice for simply calling a web service method without regard to whether it is deployed on Tomcat, WebSphere, Oracle, etc.?
I'd just like to create a Java client to make a web service call, and JAX-RPC has me confused because you seem to have to have custody of the web service implementation code to call a service. Can someone point me in the right direction?
    Thanks,
    Sean

Thanks. Here is my WSDL; how would you say this is encoded?
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions
         name="OracleProcess"
         targetNamespace="http://xmlns.oracle.com/OracleProcess"
         xmlns="http://schemas.xmlsoap.org/wsdl/"
         xmlns:tns="http://xmlns.oracle.com/OracleProcess"
         xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
         xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
         xmlns:client="http://xmlns.oracle.com/OracleProcess"
        >
        <types>
            <schema attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://xmlns.oracle.com/OracleProcess"
                 xmlns="http://www.w3.org/2001/XMLSchema">
                <element name="OracleProcessProcessRequest">
                    <complexType>
                        <sequence>
                            <element name="input" type="string"/>
                            <element name="input2" type="string"/>
                        </sequence>
                    </complexType>
                </element>
                <element name="OracleProcessProcessResponse">
                    <complexType>
                        <sequence>
                            <element name="result" type="string"/>
                        </sequence>
                    </complexType>
                </element>
            </schema>
        </types>
        <message name="OracleProcessRequestMessage">
            <part name="payload" element="tns:OracleProcessProcessRequest"/>
        </message>
        <message name="OracleProcessResponseMessage">
            <part name="payload" element="tns:OracleProcessProcessResponse"/>
        </message>
        <portType name="OracleProcess">
            <operation name="process">
                <input message="tns:OracleProcessRequestMessage"/>
                <output message="tns:OracleProcessResponseMessage"/>
            </operation>
        </portType>
        <binding name="OracleProcessBinding" type="tns:OracleProcess">
            <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
            <operation name="process">
                <soap:operation style="document" soapAction="process"/>
                <input>
                    <soap:body use="literal"/>
                </input>
                <output>
                    <soap:body use="literal"/>
                </output>
            </operation>
        </binding>
        <service name="OracleProcess">
            <port name="OracleProcessPort" binding="tns:OracleProcessBinding">
                <soap:address location="http://st4s:9700/orabpel/default/OracleProcess/1.0"/>
            </port>
        </service>
      <plnk:partnerLinkType name="OracleProcess">
        <plnk:role name="OracleProcessProvider">
          <plnk:portType name="tns:OracleProcess"/>
        </plnk:role>
      </plnk:partnerLinkType>
    </definitions>
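From the binding section this is document/literal. For what it's worth, a stub-free call against a WSDL like this can be sketched with the later JAX-WS Dispatch API, which works on raw XML payloads so you never need the service implementation code (the payload below is hand-built from the schema above, and the ?wsdl suffix is an assumption about how the endpoint exposes its WSDL):

import java.io.StringReader;
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class OracleProcessClient {
    public static void main(String[] args) throws Exception {
        String ns = "http://xmlns.oracle.com/OracleProcess";
        // Service and port QNames taken from the WSDL above
        Service service = Service.create(
                new URL("http://st4s:9700/orabpel/default/OracleProcess/1.0?wsdl"),
                new QName(ns, "OracleProcess"));
        Dispatch<Source> dispatch = service.createDispatch(
                new QName(ns, "OracleProcessPort"), Source.class, Service.Mode.PAYLOAD);

        // Hand-built payload matching OracleProcessProcessRequest in the schema
        String request =
                "<OracleProcessProcessRequest xmlns='" + ns + "'>"
              + "<input>hello</input><input2>world</input2>"
              + "</OracleProcessProcessRequest>";

        Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
        // Print the raw response payload to stdout
        TransformerFactory.newInstance().newTransformer()
                .transform(response, new StreamResult(System.out));
    }
}

Since it only ever touches the XML on the wire, the same client works whatever container the service happens to be deployed on.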
