Server design (multithreading, serialization, and performance)

(I'm not asking for anyone to design my software for me, I'm just looking for a response along the lines of "That's called XYZ server design, look for books on this topic" sort of thing.)
Summary:
I have a Server application (S) that accepts connections from many Clients (C). The clients request pieces of a large internal data structure, *"Data"* (D). The clients are totally passive with respect to Data, they only read (from the Server) and do not initiate any modification.
D is really a structure of structures: it's hashtables of hashtables, with objects that hold other hashtables and vectors, etc. etc., down a few levels. The clients don't read the entire structure, just parts of it.
The Server is multi-threaded, with threads handling client communications, and a very important thread that modifies Data by processing messages from an external source. I call this part of the software the Message Processor (P). These messages are what drives manipulation of the data structure.
[**CLICK HERE FOR A DIAGRAM**|http://imgur.com/sb5ZU]
There are a couple of design questions I'm wondering about:
The data structure D is a shared resource between the Client threads and the Message Processor thread within the Server, with the Client threads only reading from the data structure (and writing over TCP/IP), and the Message Processor both reading and modifying it.
Right now I am using locks to lock the structure when a client requests data, so that the processor cannot modify the data while it is being serialized.
I also lock the data structure when a message is received and the structure has to be modified by P, to prevent the structure from being serialized while it is being modified.
My question is: is this the only design pattern I can use in this situation? It looks like the only ways to improve performance are to (see the sketch after this list):
a) make sure I only lock when necessary (to prevent data corruption or inconsistency)
b) lock the data for as short a time as possible
c) make sure the parts of the data structure being sent to clients are serialized as fast as possible (write my own writeObject/readObject methods)
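For reference, the pattern above (many concurrent readers, one writer) maps naturally onto a read/write lock, which lets several client threads serialize at once while still giving the Message Processor exclusive access when it modifies D. A minimal sketch using java.util.concurrent.locks.ReentrantReadWriteLock; the DataStore class, the String key, and the serializeSubtree helper are invented purely for illustration:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Hypothetical wrapper around the shared structure D. */
public class DataStore {
    private final Map<String, Object> data = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Called by client threads: many readers may hold the read lock at once. */
    public byte[] readPart(String key) {
        lock.readLock().lock();
        try {
            return serializeSubtree(data.get(key));
        } finally {
            lock.readLock().unlock();
        }
    }

    /** Called by the Message Processor: the write lock is exclusive. */
    public void applyMessage(String key, Object newValue) {
        lock.writeLock().lock();
        try {
            data.put(key, newValue);
        } finally {
            lock.writeLock().unlock();
        }
    }

    private byte[] serializeSubtree(Object subtree) {
        // Placeholder: real code would serialize only the requested part of D.
        return subtree == null ? new byte[0] : subtree.toString().getBytes();
    }
}

With a plain synchronized block only one client can be served at a time; a read/write lock removes that restriction while keeping the single writer exclusive.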
Any insight is appreciated, the shorter and more candid, the better. Don't be afraid to say I'm in over my head and should read a few books by author so-and-so, that's a good starting point :)

jta23 wrote:
(I'm not asking for anyone to design my software for me, I'm just looking for a response along the lines of "That's called XYZ server design, look for books on this topic" sort of thing.)
Summary:
I have a Server application (S) that accepts connections from many Clients (C). The clients request pieces of a large internal data structure, *"Data"* (D). The clients are totally passive with respect to Data, they only read (from the Server) and do not initiate any modification.
Are you using Servlets? That should facilitate the development of your server immensely (e.g., maintains sessions, handles multi-threading, implements HTTP out of the box, dozens of additional frameworks available, etc.)
D is really a structure of structures: it's hashtables of hashtables, with objects that hold other hashtables and vectors, etc. etc., down a few levels. The clients don't read the entire structure, just parts of it.
You can get away with using Map of Map or Map of List or whatever level of nesting you want. Generally, however, it is better to implement a canonical and rich domain model. [http://www.eaipatterns.com/CanonicalDataModel.html]. [http://www.substanceofcode.com/2007/01/17/from-anemic-to-rich-domain-model/].
The Server is multi-threaded, with threads handling client communications, and a very important thread that modifies Data by processing messages from an external source. I call this part of the software the Message Processor (P). These messages are what drives manipulation of the data structure.
[**CLICK HERE FOR A DIAGRAM**|http://imgur.com/sb5ZU]
There are a couple of design questions I'm wondering about:
The data structure D is a shared resource between the Client threads and the Message Processor thread within the Server, with the Client threads only reading from the data structure (and writing over TCP/IP), and the Message Processor both reading and modifying it.
Right now I am using locks to lock the structure when a client requests data, so that the processor cannot modify the data while it is being serialized.
I also lock the data structure when a message is received and the structure has to be modified by P, to prevent the structure from being serialized while it is being modified.
My question is, is this the only design pattern I can use in this situation? It looks like the only way to improve performance is to
a) make sure I only lock when necessary (to prevent data corruption or inconsistency)
This can easily be handled by a Servlet. I think the best way to do this would be to create a Singleton [www.javacoffeebreak.com/articles/designpatterns/index.html]. Be careful, however: Singletons are like global variables and can easily be abused (a minimal sketch follows below). If you do not want a singleton, create a lock table in your database; the RDBMS will handle synchronization for you, and it is an elegant solution. You can perform a similar feat with a filesystem lock that you create. It is up to you whether the lock lives in the JVM, in the database, or in the filesystem.
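If you do keep the lock inside the JVM, here is a minimal sketch of the enum-singleton idiom; SharedData and its methods are invented names, and the single coarse lock is deliberately simple:

/** Hypothetical process-wide holder for the shared structure and its lock. */
public enum SharedData {
    INSTANCE;

    private final java.util.Map<String, Object> data = new java.util.HashMap<>();
    private final Object lock = new Object();

    /** Client threads call this; they only ever read. */
    public Object read(String key) {
        synchronized (lock) {
            return data.get(key);
        }
    }

    /** The Message Processor calls this when an external message arrives. */
    public void update(String key, Object value) {
        synchronized (lock) {
            data.put(key, value);
        }
    }
}

The enum guarantees exactly one instance per JVM without the double-checked-locking pitfalls of a hand-rolled singleton.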
b) lock the data for as short a time as possible
Write an efficient method to insert, update, or delete the data. If you are dealing with a large amount of data, consider using a native tool like Oracle's SQL*Loader or vendor-specific JDBC syntax. If you need to support multiple types of databases, use bulk JDBC operations (a sketch follows below).
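A minimal sketch of the portable bulk-JDBC approach; the data_rows table and its two columns are hypothetical, and batching simply lets the driver ship many rows per round trip:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BulkLoader {
    /** Inserts rows in batches of 1000 inside a single transaction. */
    public void insertAll(Connection conn, Iterable<String[]> rows) throws SQLException {
        String sql = "INSERT INTO data_rows (id, payload) VALUES (?, ?)"; // hypothetical table
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending % 1000 == 0) {
                    ps.executeBatch(); // flush periodically to bound memory use
                }
            }
            ps.executeBatch();         // flush the remainder
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}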
c) make sure the parts of the data structure being sent to clients are serialized as fast as possible (write my own writeObject/readObject methods)
Take a look at JBoss Serialization; it is much more compact than Java's. Or do some experimenting: JSON is normally much more compact than XML, and it can be read by a JavaScript client to facilitate any Ajax you might want to use for some flash and sizzle.
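If you stay with standard Java serialization, here is a minimal sketch of custom writeObject/readObject methods that write a compact, hand-rolled format for just the fields a client needs; the DataSlice class and its entries field are invented for illustration:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class DataSlice implements Serializable {
    private static final long serialVersionUID = 1L;

    // Marked transient so the default machinery does not serialize the whole map.
    private transient Map<String, String> entries = new HashMap<>();

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(entries.size());                 // compact, hand-rolled wire format
        for (Map.Entry<String, String> e : entries.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        int n = in.readInt();
        entries = new HashMap<>(n);
        for (int i = 0; i < n; i++) {
            entries.put(in.readUTF(), in.readUTF());
        }
    }
}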
Any insight is appreciated, the shorter and more candid, the better. Don't be afraid to say I'm in over my head and should read a few books by author so-and-so, that's a good starting point :)
No, take it in bite-sized pieces. Start with the server. Then work on the client. Play around with your locking strategy. Optimize your update of the data. Don't do everything at once.
- Saish

Similar Messages

  • Forums server resetting TCP connections and performance numbers

    I have a packetsniffer running on my network because I was testing some
    issues with the webservice interface when I noticed something strange.
    The server accepted a request and started processing it, then after 72
    seconds it suddenly closed the connection. My client obviously retried,
    and the server immediately closed the connection again. I have attached
    the TCP dump summary as "Server connection resets.txt" to this message.
    Later on I captured the TCP dump of a server timeout. Or maybe not
    really a timeout, but after 630 seconds waiting for a list of all forums
    I decided it was enough. The TCP dump summary is attached as "Server
    connection timeout.txt". If necessary I have the full dump, but I would
    have to sanitize it.
    This kill after 630 seconds is the worst case I have seen so far. On
    average a call to the webservice API to download the list of all
    communities takes approximately 110 seconds until the first response
    bytes arrive. A full login cycle (request to forum, request to login
    service, request to forum, request to API) consistently takes 17 seconds.
    Jochem
    Jochem van Dieten
    http://jochem.vandieten.net/

  • Essbase 64bit VM Server and performance with calcs

    Hello,
    I have an issue with my Essbase server running on a Windows 2003 Server 64-bit VM and performance with calcs. I would like to apply the changes described in the Essbase DBAG on page 827 for 64-bit, related to agentthreads and serverthreads. Has anyone experienced this before? Did changing the values in the essbase.cfg file work?
    Thanks,
    Azmat Bhatti

    Hi John,
    I have spoken to our VM admin and he has told me that he has given our servers high performance on the SAN storage. The issue is that we see quite high page faults in Task Manager when the calcs are running. We have 9 Planning/Essbase apps running. I am also playing with the cache settings in the essbase.cfg file.
    This is what i have right now:
    CALCLOCKBLOCKHIGH 1500
    CALCLOCKBLOCKDEFAULT 1000
    CALCLOCKBLOCKLOW 100
    CALCCACHEHIGH 1000000
    CALCCACHEDEFAULT 300000
    CALCCACHELOW 200000
    Any suggestions ?
    Thanks,
    Azmat Bhatti

  • Analyzing design and performance activities of infocube

    Hi
    I have one InfoCube (ZICSAST) in the SD application area. Its model was very bad: they put all the characteristics into this InfoCube, and it uses all 13 custom dimensions.
    So I need to analyze the performance and design aspects of this InfoCube.
    What should I analyze, for example line item dimensions, etc.?
    So please let me know what things I can analyze for this InfoCube.
    Regards
    mohammed
    xeca

    Hi,
    This is Thilak. There are many aspects to designing a cube and improving its performance. When designing a cube, we have to consider two things:
    1. Reducing the number of dimension tables, and
    2. Reducing the number of records.
    Note:
    1. If the relation between the master data tables is 1:m or 1:1, then assign a single SID table to a single dimension table.
    2. If the relation is m:m, then assign multiple SID tables to a single dimension table.
    Performance:
    1. When you are designing a cube, first consider whether there is any reporting requirement based on calendar year/month or fiscal year/period. If there is, then partition the cube.
    2. If you are going to assign a single SID table to a single dimension table, then make that dimension table a "LINE ITEM DIMENSION TABLE".

  • Oracle Express Tips/Whitepapers on Design and Performance ?

    Hi all,
    I was wondering if someone could please provide me with some information to
    find
    Articles / White Papers / Tips on Oracle Express, design and performance
    tuning etc.,
    Thanks for your help in advance.
    RN
    null

    Following this link I think you can find what you are looking for.
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showNOT?p_id=131949.1&p_showHeader=1&p_showHelp=0
    Bye Luigi

  • Important!! Improve the life and performance of the battery.

    Reduce the operating temperature and increase battery life
    The battery in your notebook PC is designed to provide the necessary amount of energy for the processor while maintaining HP high safety standards. As a result, the battery may not charge or may stop providing power to the notebook when the battery temperature exceeds the specified, design safety level.
    If the battery life appears shorter than normal, the battery stops charging before it is 99%-100% full, and the battery appears warmer than usual, then the battery has most likely reached its designed "no charge" safety state. The battery will no longer charge until the temperature condition is corrected.
    Try one of the following methods to correct the battery temperature:
    When charging the battery, do not use applications that require large amounts of system resources, such as graphics- or memory-intensive applications, or heavy and extended hard drive usage.
    Turn off your notebook and remove the battery to allow it to return to a safe operating temperature.
    Make sure the notebook PC is operating on a hard surface. Using the Notebook PC on a bed or sofa may block the vents causing the notebook PC to heat up and shut down.
    By taking these steps, the battery will return to its normal operating temperature range and continue to charge and discharge as designed.
    Calibrating the battery while PC not in use
    Recalibrating the battery requires a cycle of a complete charge and a complete discharge. To recalibrate the battery while the PC is not in use, complete the following steps.
    The recalibration may take 1-5 hours depending on the age of the battery and the configuration of the notebook PC you own. The PC should not be used while you perform the following steps. Completing all the following steps will also calibrate the battery so that the power meter readings are accurate.
    Shut down the notebook PC
    Connect the AC Adapter to the notebook PC and to an electrical socket.
    Charge the Notebook PC until the Battery Charge light is Green. This indicates the battery is completely charged.
    Press and release the Power Button to start the computer.
    Press the F8 key several times when the HP Logo displays.
    When the Windows Advanced Startup Menu displays, select the Startup in Safe Mode option.
    Remove the AC power adapter from the notebook PC.
    Allow the battery to discharge completely until the notebook PC turns off.
    The battery is now calibrated and the battery level reading on the power meter is now accurate.
    If you are not using the notebook regularly, unplug the AC adapter and shut down the notebook. Following these practices will improve the life and performance of the battery. Here is a quick list of do's and don'ts for the care of your Li-Ion batteries:
    Do's
    When you receive a new Notebook or Tablet PC, leave the battery to fully charge overnight.
    Condition a new battery by using it until it is fully discharged, and then re-charge it fully. Doing this once a month will help to accurately calibrate your battery.
    Always ensure the battery is recharged as soon as possible after it becomes fully discharged. A battery will be permanently damaged if left for an extended length of time in a fully discharged state.
    Remember that a Lithium-Ion battery will slowly deteriorate; a new battery will always perform better than one that is 6-months old.
    Remember that the battery half-life is rated for a certain total number of charge/discharge cycles (see your User Manual or Quick Start Guide for the rating). For example, a battery that is rated for 3 hours and 500 charge/discharge cycles, will still be considered as within specification, even if it only lasts for 1 hour 45 minutes after 500 charge/discharge cycles.
    Heat is the worst enemy of a battery. Allow plenty of air to circulate around the Notebook/Tablet PC, so that the battery is kept as cool as possible when charging and also when in use. If provided, use the integrated 'legs' under the Notebook to raise the notebook and improve air circulation.
    Remove the battery if storing for several months (the battery should be at approximately 50% charge or higher).
    If you use a NoteBus or if charging your Notebooks or Tablet PCs in a confined space, allow for adequate ventilation in order to keep the batteries as cool as possible.
    Don'ts
    Do Not - Expose the battery to excessive heat or cold (i.e. outside the range of 10-35 degrees Centigrade ambient).
    Do Not - Store the battery in a fully charged state (store batteries with about 50% charge).
    Do Not - Allow a nearly flat battery to be unused for more than a month or so. The battery will slowly discharge until it becomes fully discharged and this will permanently damage the battery cells.
    Do Not - Charge your Notebook/Tablet PC inside a carry case - the battery may overheat.
    Do Not - Charge your Notebook/Tablet PC when stacked on top of each other - the battery may overheat.
    Remember: Your battery is slowly degrading all the time, even if it is not used. Keeping your battery as cool as possible will slow down this degradation considerably.
    For more information please visit the following links:
    How to Improve the Performance of the Battery
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01297640&cc=us&lc=en&dlc=en
    10 Tips to make your Laptop Battery last longer
    http://labnol.blogspot.com/2006/03/10-tips-to-make-your-laptop-battery.html
    Disclaimer: By clicking on the link above, you will be leaving HP.com to visit a web site that is not maintained by HP and where the HP privacy policy does not apply. This link is provided to you for convenience and does not serve as an endorsement by HP of any information or contacts that you may find on this non-HP site.

    I hope the above article will help you guys..

  • Business Intelligence and performance point problem

    Hi Everyone,
    Please does anyone know why creating a dashboard from a SharePoint list is such a hassle? I have configured PerformancePoint Services, a Secure Store, and a business intelligence website that has a data connection library and a PerformancePoint content list. Unfortunately, any time I try to use Dashboard Designer to create a new data source and specify the site settings (using the unattended account method), I can't save it. It gives me an error claiming that I either don't have permissions or the data source doesn't exist.
    A search around the web for answers doesn't seem to solve my problem, as it suggests that the problem is associated with many things which I'm not clear about. Please could someone who is good at this at least tell me the critical things to check and the clearest way to set this up.
    Thanks,
    Dominic

    Hi Dominic,
    According to your error " I either don't have permissions or the data source doesn't exist", it should be related to the unattended account permission.
    Once the unattended service account has been configured, you must grant that account access to your data sources:
    For SQL Server data, the account must have a SQL logon with db_datareader permissions on each database that you want to access.
    For SQL Server Analysis Services data, the account must have read access to the cube or an appropriate portion of the cube, depending on your needs.
    For Excel Services data, the account must have access to the Excel workbook in a SharePoint document library.
    For data in a SharePoint list, the account must have read access to the list.
    Reference:
    https://technet.microsoft.com/en-us/library/ee836145.aspx
    Also you can have a look at the blog:
    http://www.chrismcnulty.net/blog/Lists/Posts/Post.aspx?ID=74
    https://yossidahan.wordpress.com/2012/08/14/cant-get-ssas-databases-to-appear-in-performance-point-dashboard-designer-check-you-adomd-net-version/
    Best Regards,
    Eric
    TechNet Community Support

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file, one line/record at a time, and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance
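    For reference, a minimal sketch of the two loop styles being compared; the comma-counting body is just a stand-in for the real per-character processing, and (as the reply below notes) hoisting length() out of the loop matters for the charAt() variant:

    public class CharScan {
        /** Counts commas using String.charAt(i). */
        static int countViaCharAt(String line) {
            int n = line.length();             // hoist length() out of the loop
            int count = 0;
            for (int i = 0; i < n; i++) {
                if (line.charAt(i) == ',') count++;
            }
            return count;
        }

        /** Counts commas by copying to a char[] first. */
        static int countViaArray(String line) {
            char[] chars = line.toCharArray(); // one extra copy, then plain array indexing
            int count = 0;
            for (char c : chars) {
                if (c == ',') count++;
            }
            return count;
        }

        public static void main(String[] args) {
            String sample = "a,b,\"c,d\",e";
            System.out.println(countViaCharAt(sample) + " " + countViaArray(sample));
        }
    }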

    >
    Once I took that String.length() call out of the 'for loop' and used an integer length local variable, as you have in your code, the performance is very close between the char array and String.charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean' records of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters, and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
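    For illustration only (this is not the poster's framework), a minimal sketch of splitting a byte buffer into physical records on a set of multi-byte delimiters; real code would use RandomAccessFile and double-buffering rather than a single in-memory array:

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class DelimiterScanner {
        /** Splits a buffer into "physical records" on any of the given delimiter byte sequences. */
        static List<byte[]> split(byte[] buffer, byte[][] delimiters) {
            List<byte[]> records = new ArrayList<>();
            int start = 0;
            int i = 0;
            while (i < buffer.length) {
                byte[] hit = matchAt(buffer, i, delimiters);
                if (hit != null) {
                    records.add(Arrays.copyOfRange(buffer, start, i));
                    i += hit.length;
                    start = i;
                } else {
                    i++;
                }
            }
            if (start < buffer.length) {
                records.add(Arrays.copyOfRange(buffer, start, buffer.length));
            }
            return records;
        }

        /** Returns the delimiter byte array that matches at position i, or null. */
        private static byte[] matchAt(byte[] buffer, int i, byte[][] delimiters) {
            outer:
            for (byte[] d : delimiters) {
                if (i + d.length > buffer.length) continue;
                for (int j = 0; j < d.length; j++) {
                    if (buffer[i + j] != d[j]) continue outer;
                }
                return d;
            }
            return null;
        }

        public static void main(String[] args) {
            byte[] data = "a,b\r\nc,d\ne,f".getBytes(StandardCharsets.US_ASCII);
            byte[][] delims = {
                "\r\n".getBytes(StandardCharsets.US_ASCII),
                "\n".getBytes(StandardCharsets.US_ASCII)
            };
            for (byte[] rec : split(data, delims)) {
                System.out.println(new String(rec, StandardCharsets.US_ASCII));
            }
        }
    }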
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata; I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest versions of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus array is way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if and ONLY if that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
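    A minimal sketch of that interface-contract idea; RecordParser and NaiveCsvParser are invented names, and the point is only that the naive implementation can later be swapped for a faster one without touching the rest of the framework:

    /** Contract for anything that can turn one physical record into fields. */
    public interface RecordParser {
        String[] parse(String record);
    }

    /** Simplest possible implementation; swap in a smarter one later if profiling demands it. */
    class NaiveCsvParser implements RecordParser {
        @Override
        public String[] parse(String record) {
            return record.split(",", -1); // does not handle quoted commas; placeholder only
        }
    }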

  • Forms and Reports: Automated Test tools - functionality AND performance

    All,
    I'm looking to get a few leads on automated test tools that may be used to validate Oracle Forms and Reports (see my software configuration below). I'm looking for tools that can automate both functional and performance tests. By this I mean:
    Functional Testing:
    * Use of shortcut keys
    * Navigation between fields
    * Screen organisation (field locations)
    * Exercise forms validation (bad input values)
    * Provide values to forms and simulate user commit, and go and verify database state is as expected
    Performance Testing:
    * carry out tests for fixed user load
    * carry out tests for scaled step increase in user load
    * automated collection of log files and metrics during test
    So far I have:
    http://www.neotys.com/
    Thanks in advance for your response.
    Mathew Butler
    Configuration:
    Red Hat Enterprise Linux x86-64 architecture v4.5 64 bit
    Oracle Application Server 10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle Developer Suite (Oracle Forms and Reports) V10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle JInitiator 1.3.1.17 or later
    Microsoft Internet Explorer 6

    Are there any tools for doing this activity, like Oracle-recommended tools?
    Your question is unclear. As IK mentioned, the only tool you need is a new version of Oracle Forms/Reports. Open your v10 modules in a v11 Builder and select Save; you now have a v11 module. Doing a "Compile All PL/SQL" before saving is a good idea, but not required. The Builders and utilities provided with the version 11 installation are the only supported tools for upgrading your application. If you are trying to convert many Forms files in a scripted manner, you can use the Forms compiler in a script. Generating new "X" files will also update the source modules (fmb, mmb, pll). See MyOracleSupport Note 955143.1.
    Also included in the installation is the Forms Migration Assistant. Although it is more useful to people coming from older versions, it can also be used to move from v10 to v11. It allows you to select more than one file at a time. Documentation for this utility can be found in the Forms Upgrade Guide.
    Using the Oracle Forms Migration Assistant

  • Oracle 10g RAC design with ASM and OCFS

    Hi all,
    I have a question about a proposed Oracle 10g Release 2 RAC design for a 2 node cluster.
    ASM can store database files but not Oracle binaries, the OCR, or the voting disk, and OCFS version 1 does not support a shared Oracle Home. We plan to use OCFS version 2 with ASM version 2 on Red Hat Enterprise Linux Server 4 with Oracle 10g Release 2 (10.2.0.1).
    With OCFS v2, a shared Oracle Home and shared OCR and voting disk are supported. My question is: does the following proposed architecture make sense for OCFS v2 with ASM v2 on Red Hat Linux 4?
    Oracle 10g Release 2 on Red Hat Enterprise Linux Server 4:
    OCFS V2:
    - shared Oracle home and binaries
    - shared OCR and vdisk files
    - CRS software shared OCFS v2 filesystem
    - spfile
    - controlfiles
    - tnsnames.ora
    ASM v2 with ASMLib v2:
    Proposed ASM disk groups:
    - data_dg for application data
    - backupdg for flashback and archivelogs
    - undo_rac1dg ASM diskgroup for undo tablespace for racnode1
    - undo_rac2dg ASM diskgroup for undo tablespace for racnode2
    - redo_rac1dg ASM diskgroup to hold redo logs for racnode1
    - redo_rac2dg ASM diskgroup to hold redo logs for racnode2
    - temp1dg temp tablespace for racnode1
    - temp2dg temp tablespace for racnode2
    Does this sound like a good initial design?
    Ben Prusinski, Senior DBA

    OK Tim, thanks for the advice.
    I think NetBackup can be integrated with RMAN, but I don't want to lose time on this (political).
    To summarize:
    ORACLE_HOME and CRS_HOME on each node (RAID1 and NTFS)
    Shared storage:
    Disk1 and disk 2: RAID1: - Raw partition 1 for OCR
    - Raw partition 2 for VotingDisk
    - OCFS for FLASH_RECOVERY_AREA
    Disk3, disk4 and disk5: RAID 0 - Raw with ASM redundancy normal 1 diskgroup for database files.
    This is a running project here, will start testing the design on VMware and then go for production setup.
    Regards

  • MTS with batch management, serialization and Handling unit

    Hello All,
    I am testing a scenario for MTS with batch management, serialization and Handling unit for discrete manufacturing.
    Everything worked fine till I created the Handling unit for the finished product.
    The production order has a quantity of 3 EA.
    It has three serial numbers 1, 2 and 3. (serial numbers can be displayed from order->Header->serial numbers)
    I created one Handling unit for production order quantity of 3 EA.
    I tried to do a goods receipt for the production order using transaction COWBHUWE.
    I get the following error when I try to post the GR:
    Only 0 serial numbers entered instead of 3
    Message no. IO304
    Diagnosis
    There is a serial number obligation, so the number of serial numbers must equal the number of serial numbers in the material document.
    You can post the operation only if you entered the correct number of serial numbers previously.
    System Response
    Depending on the context in which the error arises, the system continues processing, or the required function cannot be performed.
    Procedure
    You have the following options, for example:
    Check that the serial numbers are entered fully.
    If necessary, display an error log.
    If necessary, contact your system administrator.
    What did I miss?
    How to fix this problem?
    Please help.
    Thanks in advance
    George

    I added the serialization procedure HUSL to the serial number profile and it fixed the problem.

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time when I try to open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM), the first time, it takes 10 minutes to open the cube. From the next run onwards, it opens up quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cubes, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh, it takes 10-odd minutes to open the cube on an end user's system. From the next time onwards, it opens up really fast, within 10 secs. After a cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solutions/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200MB) that I need to sort and perform logic operations on. I currently use a MacBook Pro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro. Some of the operations take half an hour to perform. How much faster should I expect these operations to happen on a new MacPro? Is there a significant speed advantage in the 6-core vs 4-core? Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32GB or 64GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Sql server partition parent table and reference not partition child table

     
    Hi,
    I have two tables in SQL Server 2008 R2, a Parent and a Child table.
    The Parent has a datetime column and is partitioned monthly; the Child table just references the Parent table using a foreign key relation.
    Is there any problem with the non-partitioned child table referring to a partitioned parent table?
    Thanks,
    Areef

    The tables will need to be offline for the operation. "Offline" here means that you wrap the entire operation in a transaction. Ideally, this transaction would:
    1) Drop the foreign key.
    2) Use ALTER TABLE SWITCH to drop the old data.
    3) Use ALTER PARTITION FUNCTION to drop the old empty partition.
    4) Use ALTER PARTITION FUNCTION to add a new empty partition.
    5) Reapply the foreign keys WITH CHECK.
    All but the last operation are metadata-only operations (provided that you do them right). To perform the last operation, SQL Server must scan the child table and verify that all keys are present in the parent table. This can take some time for larger tables.
    During the transaction, SQL Server holds Sch-M locks on the tables, which means they are entirely inaccessible, even for queries running with NOLOCK.
    You can avoid the scan by applying the foreign key constraint WITH NOCHECK, but this can have an impact on query plans, as SQL Server will not consider the constraint as trusted.
    An alternative which should not be entirely dismissed is to use partitioned views instead. With partitioned views, the foreign keys are not an issue, because each partition is a pair of tables with its own local foreign key.
    As for the second question: it appears to be completely pointless to partition the parent, but not the child table. Or does the child table only have rows for a smaller set of the rows in the parent?
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Server side includes: outdated and not supported by host?

    I've been using SSI's for ages with no problem from any host. Recently I signed up with a new web hosting company, one of the more established ones in the Seattle area. They do not support SSI's and claim that using them is not safe and is not a modern approach to web design. They suggested I use a more advanced programming language to create my sites, such as ASP.NET (something I have no desire to do).
    Are server side includes outdated and no longer the way to do things? I find that hard to believe.
    -Jesse

    Are server side includes outdated and no longer the way to do things? I find that hard to believe.
    That language comes from Microsoft, who seem to think so, along with Windows-based hosting providers offering ASP.NET.
    IIS no longer comes with SSI installed by default. It has to be configured by the hosting provider.
    http://blogs.iis.net/robert_mcmurray/archive/2010/12/28/iis-notes-on-server-side-includes-ssi-syntax-kb-203064-revisited.aspx
    However, classic SSIs (in the sense of including static content from one page inside another) are still one of the most powerful, safe and flexible tools for managing repetitive page elements in any web page. Not to mention easy.
    ASP.NET has its own way of doing the equivalent of SSIs - and lots more beside but much of which does not interest web designers at this stage.
    http://stackoverflow.com/questions/894720/asp-net-equivalent-of-server-side-includes
    http://searchsoftwarequality.techtarget.com/answer/Alternatives-to-server-side-includes-for-ASPNET
