Very Large Business Application (VLBA)

CVLBA Research Center
The Center for Very Large Business Applications (CVLBA) is a joint research initiative of the Otto-von-Guericke University Magdeburg and the Technische Universität München, initiated by SAP in 2006. The decision by SAP AG to fund research with these two university partners stems from the cooperation experience and the knowledge transfer gained by jointly running the SAP University Alliances Project. Fifteen researchers in the CVLBA teams at Otto-von-Guericke University Magdeburg and Technische Universität München focus their research on bridging the gap between ERP research and ERP development.
Defining Very Large Business Applications
The first step in exploring issues of VLBA was to establish a common working definition to be used by all partners. According to this definition, a VLBA is essentially characterized – in contrast to a single business application – by its strategic importance for an organization, by the absence of spatial, organizational, cultural or technical restrictions, and by its capability to be implemented by application systems or system landscapes.
The goal of the VLBA foundation is a sound definition that conceives of a VLBA both as an application (instantiation level; VLBA in the narrower sense) and as a research framework (meta level; VLBA in the broader sense). You can find more details in the text attached in this thread. We are looking forward to your feedback on this highly relevant topic, in particular regarding sustainability, legacy systems and cultural diversity!

Definition of Very Large Business Application
The first step in exploring issues of a Very Large Business Application (VLBA) was to establish a common working definition to be used by all partners. A VLBA is a business application that has strategic importance within a constituted organization. Significant features of a VLBA are:
(1) A VLBA supports one or more processes, at least one of which is a business process. A VLBA is therefore directly effective for business success, and applying a VLBA makes the organization strategically dependent on it, because changing or replacing the system entails substantial financial, organizational and personnel-related costs.
(2) A VLBA does not have any spatial, organizational, cultural or technical limits.
(3) VLBAs can be implemented through application systems as well as through system landscapes. What matters is that they support a (universal organizational) business process.
Far-reaching automation of internal processes should be achieved through the application of state-of-the-art technologies. Supply Chain Management (SCM) and Customer Relationship Management (CRM) systems are instances of this kind of software, insofar as they fulfill all the defined requirements. VLBAs are similar to a Business Information System in that they can support several Business Application Fields; in that case, they are based on several types of Business Application Systems.
VLBA as a Field of Research
Along with the first definition, the concept of 'VLBA' was classified and integrated into the existing conceptual world of business informatics using UML. This allows distinguishing 'VLBA' from other related topics and clearly defines its scope in the scientific world of business informatics. In particular, a VLBA can be regarded as a special system landscape on the one hand and as a research area on the other hand. Present-day heterogeneous, organically grown system landscapes, as they are usually found in business practice, suffer from the symptom of spaghetti integration. It therefore seems practical to raise principles of software engineering to the level of system landscapes and to establish a design theory in the sense of System Landscape Engineering.
Authors: Lars Krüger (CVLBA Magdeburg) & Bastian Grabski (CVLBA Magdeburg)

Similar Messages

  • Large JNI application

    Hi: I have a very large Fortran application, compiled as a shared object file, which I am trying to run from Java via the JNI. It contains some very large array declarations. If I reduce the size of the arrays, it works fine. But if I increase them to the size I need, it bombs out at the System.load() statement, with the message "Failed to map segment from shared object: Cannot allocate memory".
    I can also run the shared object application from a small Fortran main program. When I do that, I get the same problem. I can fix this by increasing the stack size, with the command "ulimit -s 20480".
    However, this solution does not seem to work with Java. I have tried issuing the "ulimit -s 20480" command before starting the JVM, but it doesn't help. I have also tried changing the -Xms -Xmx and -Xss options before starting the JVM, but no joy.
    The Fortran application is compiled with g77 on a Redhat 9 system. I am using Java 1.4.2.
    Any suggestions would be appreciated.
    Cheers................Neil

    Just run it as a separate application.
    Use sockets, files or streams to communicate with it.
    Doing this will make it easier to debug both apps, and if there are bugs on the Fortran side they won't take down your entire application.
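    A minimal sketch of that approach, kept 1.4-friendly since you're on Java 1.4.2 (the executable name and its line-oriented protocol are made up; a wrapper script that starts the Fortran side can also issue the "ulimit -s" you need):

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;

    public class FortranBridge {
        public static void main(String[] args) throws Exception {
            // Launch the Fortran program in its own process, so its large
            // static arrays live outside the JVM's address space.
            Process p = Runtime.getRuntime().exec("./fortran_app");
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
            BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
            out.write("COMPUTE 42"); // hypothetical request understood by the Fortran side
            out.newLine();
            out.flush();
            System.out.println("reply: " + in.readLine());
            out.close();
            in.close();
            p.waitFor();
        }
    }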

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50GB). Out of the box we got about a 25% performance increase as compared to the HotSpot JVM (great work guys). Once the server starts up, almost all of the objects will be stored in the old generation and a smaller number will be stored in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using hugePages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50GB of RAM for both the max and min. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes including singlepar, which also had negative effects (currently using the default, which optimizes for throughput).
    I used the JRMC to profile the operation and here were the results that I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC Pause time average 600ms
    GC Pause time max 2.5 sec
    It had to do 4 young generation collects, which were very fast, and then 2 old generation collects, which were each about 2.5s (the entire operation takes 45s).
    For the long old generation collects, about 50% of the time was spent in mark and 50% in sweep. At sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: Although 50GB is committed it is fluctuating between 32GB and 20GB of heap usage. To give you an idea of what is stored in the heap about 50% of the heap is char[] and another 20% are int[] and long[].
    My question is are there any other flags that I could try that might help improve performance or is there anything I should be looking at closer in JRMC to help tune this application. Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?
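    For reference, a single launch line that combines only the options already mentioned in this thread (heap sizes from the post; the jar name is a placeholder, and exact flag spellings should be verified against the JRockit documentation for your release):

        java -Xms50g -Xmx50g -XXaggressive -XX:+UseCallProfiling -jar inmemorydb.jar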

  • Business Application Guru Max Dolgicer Speaks at Business Technology Summit

    Max Dolgicer has more than 25 years of management and technical experience in development and support of business applications, software products and systems internals. An internationally recognized expert, Max is Technical Director and principal at International System Group (ISG) Inc., a leading consulting firm that specializes in design, development and integration of large-scale distributed applications using leading-edge middleware technologies. Max is coming to Business Technology Summit 2010 (www.btsummit.com) to speak about all things SOA, on 12 November at the NIHANS Convention Center in Bangalore. At the summit, Max covers the following sessions:
    * Managing the SOA Evolution: once a company has completed initial SOA projects, the number of deployed services increases such that the key question no longer is how to build services, but rather how to efficiently govern the development and operation of services on an enterprise scale. The focus of SOA shifts to reusability, securing how a growing number of clients access the services, and assuring that Service Level Agreements (SLAs) are met, to name just a few issues. At this point companies run the danger that a "free for all" environment proliferates, and the benefits of SOA cannot be realized. The key is to introduce SOA governance before services spin out of control. Managing the evolution of SOA into the cloud with the correct governance is the next challenge. This keynote will address: typical categories of SOA projects, how SOA Maturity Models and governance relate, and how SOA governance needs to be extended when we move applications into the cloud
    * A ROI Calculator for SOA: let the Numbers Do the Talking: there are many pro and very few con arguments from an engineering perspective that make us believe that SOA is a superior approach for most application development and integration projects. However, nowadays we typically won't get away with brilliant technical arguments to justify the transition to SOA. In most cases we will have to convince the CFO that there is a positive bottom line result. This presentation outlines a ROI model for application development based on service reusability in a SOA. It describes how the cost effect of reuse can be calculated during the development and the maintenance cycle of a portfolio of service oriented business applications. The model is based on metrics that have been widely accepted throughout the IT industry. The model will then be illustrated by a project where multiple business applications have been developed within a SOA that employs a foundation of reusable services. This presentation will show an overview of a project that is used as an example, a popular ROI model that is the basis for the ROI calculation, and the application of the model to determine concrete monetary savings.
    * Defining a SOA Roadmap Based on SOA Maturity Model: once a company has completed initial SOA projects, the number of deployed services increases and the key question no longer is how to build services, but rather how to efficiently manage the development and operation of services on an enterprise scale. What is needed is a concise roadmap that guides the evolution of SOA such that IT can deliver the right value at the right time to the business. This roadmap has to address multiple dimensions of IT: architecture, development processes, applications, information, etc. This presentation will outline a model against which the degree of service oriented maturity of an organization can be assessed, and a process (i.e. the roadmap) for assessing the current and desired degree of service maturity of an organization and for developing a plan for how to get to the target state. This presentation will show: what SOA Maturity Models exist today?, walkthrough of the levels and key elements of each level, developing a custom SOA Roadmap and project example for mapping a Maturity Model to a Roadmap.
    * Service Oriented Integration (SOI): doing Integration the Right Way: IT managers have been under increasing pressure to migrate a portfolio of independent “stovepipe” applications to an integrated set of business services that can be aligned with changing business requirements and support new business processes faster and with reduced cost. Today, corporations have to choose from a number of integration products (e.g. Enterprise Service Buses) that have quite different capabilities, never mind different architectures and standards. This seminar starts with a comparison of SOA and event based architectures and then outlines the key issues and guidelines that architects should consider when defining an integration architecture based on services. The key point of the seminar is a case study that illustrates how SOA concepts have been applied in a real project. It explains the key architectural and design decisions that produced an integration architecture and a set of services that were reused beyond one particular project. This presentation will show: drivers for Service Oriented Integration (SOI), comparing SOA to Event-Driven Architecture (EDA), how to evolve from Enterprise Application Integration (EAI) to SOA/EDA to SOI, and applying SOI in a project example.
    Max is a contributing editor for Application Development Trends magazine and a recognized instructor who presents extensively at major industry conferences including Gartner's Web Service and Application Integration conferences, Sys-Con's Web Services, XMLOne, XMLDevCon, JavaDevCon, e-Business Integration, Java Expo, Component Development, GIGA's Middleware Choices, and Comdex.
    Follow the summit on Twitter, here: http://twitter.com/btsummit and LinkedIn: http://events.linkedin.com/Business-Technology-Summit-2010/pub/331907
    Saltmarch Media
    E: [email protected]
    Ph: +91 80 4005 1000


  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business.  Money is tight.  I have some very large Excel files (~200MB) that I need to sort and perform logic operations on.  I currently use a MacBook Pro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore Mac Pro.  Some of the operations take half an hour to perform.  How much faster should I expect these operations to be on a new Mac Pro?  Is there a significant speed advantage in the 6-core vs 4-core?  Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32GB or 64GB?  Related to this, I am using a 32-bit version of Excel.  Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Help Using a webservice to report back very large data

    Hi All,
    I am in the process of creating a web service to report back lots of data according to some input params; I am using Axis2 to test the SOAP message.
    Now the problem: the web service will be collecting data from the live database, which is attached to a web application. It has high usage and I don't want to affect their service.
    Do you think it will affect their service, and would we be better off reporting against a backup database or something?
    Also, a client can request data on a certain user via just a plain string ID in the WSDL file (the data returned will be a very large user history). Now I was thinking: would it be best to keep this as a single-user input, or create an array of strings with a max length (set in an XSD property)? See the sketch below.
    My idea was that someone might just implement another client with a single ID and get data for each user, so the request could be called
    Customer * number of users
    times. Or will the array XSD allow them to report on, say, 10 at a time?
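    A rough sketch of the bounded-batch idea as a JAX-WS style endpoint (all names are illustrative, and the 10-ID limit is meant to mirror a maxOccurs constraint in the XSD):

    import java.util.Collections;
    import java.util.List;
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService
    public class UserHistoryService {

        // Keep this in sync with maxOccurs in the schema.
        private static final int MAX_IDS_PER_CALL = 10;

        @WebMethod
        public List<String> getUserHistory(List<String> userIds) {
            if (userIds == null || userIds.isEmpty() || userIds.size() > MAX_IDS_PER_CALL) {
                throw new IllegalArgumentException(
                        "Between 1 and " + MAX_IDS_PER_CALL + " user IDs per request");
            }
            // Query a reporting copy/backup database here rather than the
            // live one, so heavy history pulls cannot slow the web application.
            return fetchFromReportingDb(userIds);
        }

        private List<String> fetchFromReportingDb(List<String> userIds) {
            return Collections.emptyList(); // placeholder for real data access
        }
    }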

    Hi Steve
    It seems like you have a lot of different versions of Crystal Reports and Business Objects products and you are getting them mixed up a bit. It can be confusing.
    So basically you want to access Crystal reports via the BO SDK and you want the reports to connect to web services.
    1. Creating the Report
    Since you have BOE XI R2 my guess is that you have a copy of Crystal Reports XI R2 or R1.
    Create the report with XI because it comes with a special XML Web Services data source driver. My guess is that you already have the web service created.
    2. Publish the report to BOE XI.
    Using the report designer publish the report to BOE (Save As).
    3. Write code to view the report.
    If you need to change the datasource at runtime then you will want to use the Report Application Server (RAS) SDK to do that. If you are only changing the data source to move from Dev/QA/Production then you may not want to do this task at runtime. In the CMC you should be able to change the data source for migration purposes. If you truly need to do it at runtime then you want to use RAS.
    Here's a sample to get you started.
    http://diamond.businessobjects.com/node/6197
    Rob's blog - http://diamond.businessobjects.com/robhorne

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site can we visit, or what phone number can we call, to suggest creating a VLDB OCE certification?
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the realities of being a DBA totally change when you have a VERY LARGE DATABASE.
    I could guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • I support a very large school district currently running Firefox 3.6. What will happen at end of life date? We're in the middle of online testing this week.

    I run the test center for a very large school district with over 120k students. We've got a current deployed base of 54k client machines using Firefox 3.6. We haven't upgraded for multiple reasons, the most important of which are removing the possibility of students using In Private Browsing, and dealing with plugin updates for the non digital natives (read: dumber-than-a-bag-of-hammers users) that make up the majority of the client base.
    We're testing ESR now, but just found out that end of life for 3.6 is tomorrow, 4/24. We are currently in the middle of statewide online testing. The question is, what will happen tomorrow when the browser goes end of life. The ESR wiki mentions that "an update to the current version of Desktop Firefox will be offered through the Application Update Service"
    So the main question is: are my students/teachers going to get a popup telling them they have to update the browser if we have updates already turned off? If so, can I turn it off remotely using SCCM? Because it will cause all kinds of havoc.
    Please advise asap, and thanks in advance.

    We had to do some serious gymnastics to remove at least most of the ability to use IPB. We removed it from the gui, but unfortunately, if they know the hotkey, they can still bring it up. Security has some serious headaches with this, as by law they have to be able to track where students go, and going with private browsing removes their ability to do forensic work they're required to be able to do. Not a very well thought out feature from Mozilla in my opinion, but it is what it is. Successive versions have made it even more difficult to remove even the gui portion.
    We do plan to release ESR due to the aforementioned security issues, but testing has been slow.
    But thanks for the reply. I think we can turn off the updates if it isn't already done.
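    In case it helps others: the usual way to hard-disable updates in that era of Firefox was an autoconfig file pushed to each install; a minimal sketch (pref names are from memory and should be verified against your deployed version):

        // mozilla.cfg - first line must be a comment
        lockPref("app.update.enabled", false);  // no update checks or prompts
        lockPref("app.update.auto", false);     // no silent auto-installs

    The cfg file is referenced from defaults/pref/local-settings.js, and both files can be pushed to clients with SCCM.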

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence is space-consuming. Neither does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • Can iCloud be used to synchronize a very large Aperture library across machines effectively?

    Just purchased a new 27" iMac (3.5 GHz i7 with 8 GB and 3 TB fusion drive) for my home office to provide support.  Use a 15" MBPro (Retina) 90% of the time.  Have a number of Aperture libraries/files varying from 10 to 70 GB that are rapidly growing.  Have copied them to the iMac using a Thunderbolt cable starting the MBP in target mode. 
    While this works, I can see problems keeping the files in sync.  Thought briefly of putting the files in Dropbox, but when I tried that with a small test file the load time was unacceptable, so I can imagine it really wouldn't be practical when the files get north of 100 GB.  What about iCloud?  There doesn't appear to be a way to do this, but I wonder if that's an option.
    What are the rest of you doing when you need access to very large files across multiple machines?
    David Voran

    Hi David,
    dvoran wrote:
    Don't you have similar issues when the libraries exceed several thousand images? If not what's your secret to image management.
    No, I don't.
    It's an open secret: database maintenance requires steady application of naming conventions, tagging, and backing-up.  With the digitization of records, losing records by mis-filing is no longer possible.  But proper, consistent labeling is all the more important, because every database functions as its own index -- and is only as useful as the index is uniform and holds content that is meaningful.
    I use one, single, personal Library.  It is my master index of every digital photo I've recorded.
    I import every shoot into its own Project.
    I name my Projects with a verbal identifier, a date, and a location.
    I apply a metadata pre-set to all the files I import.  This metadata includes my contact inf. and my copyright.
    I re-name all the files I import.  The file name includes the date, the Project's verbal identifier and location, and the original file name given by the camera that recorded the data.
    I assign a location to all the Images in each Project (easy, since "Project" = shoot; I just use the "Assign Location" button on the Project Inf. dialog).
    I _always_ apply a keyword specifying the genre of the picture.  The genres I use are "Still-life; Portrait; Family; Friends; People; Rural; Urban; Birds; Insects; Flowers; Flora (not Flowers); Fauna; Test Shots; and Misc."  I give myself ready access to these by assigning them to a Keyword Button Set, which shows in the Control Bar.
    That's the core part.  Should be "do-able".  (Search the forum for my naming conventions, if interested.)  Of course, there is much more, but the above should allow you to find most of your Images (you have assigned when, where, why, and what genre to every Image).  The additional steps include using Color Labels, Project Descriptions, keywords, and a meaningful Folder structure.  NB: set up your Library to help YOU.  For example, I don't sell stock images, and so I have no need for anyone else's keyword list.  I created my own, and use the keywords that I think I will think of when I am searching for an Image.
    One thing I found very helpful was separating my "input and storage" structure from my "output" structure.  All digicam files get put in Projects by shoot, and stay there.  I use Folders and Albums to group my outputs.  This works for me because my outputs come from many inputs (my inputs and outputs have a many-to-many relationship).  What works for you will depend on what you do with the picture data you record with your cameras.  (Note that "Project" is a misleading term for the core storage group in Aperture.  In my system they are shoots, and all my Images are stored by shoot.  For each output project I have (small "p"), I create a Folder in Aperture, and put Albums, populated with the Images I need, in the Folder.  When these projects are done, I move the whole Folder into another Folder, called "Completed".)
    Sorry to be windy.  I don't have time right now for concision.
    HTH,
    --Kirby.

  • Lightroom Linked to Business Application

    July 15, 2011
    Hello:
    I'm interested in a business application that links to Lightroom 2.7; the operating system is Windows XP.  The application would link Photoshop files to cost, sale price, customers, etc.  Hopefully, a thumbnail of the photo could be included with the file name.
    At this time, I've used Excel and done some expansion into Access.  The set-up is very cumbersome and time consuming.  Can I link between Excel and the Lightroom database?
    Help with this effort would be appreciated.
    Scott

    March 7, 2012
    Dear Hal:
    I thank you for your suggestion. 
    My original reading of your reply was that I should begin programming. Re-reading your reply revealed that this is not the case. Also, programming would take me away from photography.
    My intent is for my ledger and bills to include thumbnails of the photos, which are sold as greeting cards or prints.  It would be nice to have the photos entered automatically, instead of manually.
    I thank you.
    Scott

  • Import very large csv files into SAP

    Hi
    We're not using PI, but have middleware called Trading Networks. Our design is fixed (not my decision): we may not upload files to the Application Server and import them from there. Instead, the design dictates that we write RFCs, and Trading Networks calls the RFC per interface with the very large file sent as a table of strings. This takes 14 minutes to import into a plain SAP Z-table from which we'll integrate. As a test we uploaded the file to the Application Server and integrated it into the Z-table from there; this took 4 minutes. However, our architect is not impressed, arguing that we would stretch the available Application Server to its limits.
    I want to propose that the large file be split into e.g. 4 parts at Trading Networks level, calling 4 threads of the RFC, which could reduce integration time to e.g. 3 minutes 30 seconds. Does anyone have suggestions in this regard, especially a proposed, working, elegant solution for integrating large files in our current environment? This will form the foundation of our project.
    Thank you and best regards,
    Adrian

    Zip compression can be tried. The RFC would receive a zip stream, which can be decompressed using CL_ABAP_ZIP.
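    Since Trading Networks is Java-based middleware, the splitting proposal might look roughly like this on that side (the chunk count and the RFC call are placeholders, not the actual Trading Networks API):

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class CsvSplitter {
        // Split the CSV lines into N roughly equal chunks and hand each
        // chunk to its own worker, which would invoke the Z-RFC.
        public static void sendInParallel(List<String> csvLines, int chunks)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(chunks);
            int size = (csvLines.size() + chunks - 1) / chunks;
            for (int i = 0; i < csvLines.size(); i += size) {
                final List<String> part =
                        csvLines.subList(i, Math.min(i + size, csvLines.size()));
                pool.submit(() -> callRfc(part)); // placeholder for the RFC call
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }

        private static void callRfc(List<String> tableOfStrings) {
            // The middleware would pass this chunk to the RFC as a table of strings.
        }
    }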

  • Very large bdump file sizes, how to solve?

    Hi gurus,
    I currently keep finding that my disk space is not enough. After checking, it is oraclexe/admin/bdump: there is currently 3.2G in it, while my database is very small, holding only about 10MB of data.
    It didn't happen before, only recently.
    I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
    I am running an APEX application with XE; the application works well and we didn't see anything wrong, only the bdump files are very big.
    Any tip to solve this? Thanks.
    Here is my alert_xe.log file content:
    Thu Jun 03 16:15:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:15:48 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5452
    Thu Jun 03 16:15:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:16:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:20:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:21:50 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:25:56 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:26:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:30:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:31:19 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:00 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:46 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1312
    Thu Jun 03 16:36:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:37:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:41:51 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:42:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:46:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:47:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:51:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:52:35 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:56:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:10 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=3428
    Thu Jun 03 16:57:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:48 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:07:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:08:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:41 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:21 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:34 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5912
    Thu Jun 03 17:17:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:18:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:22:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:23:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:27:39 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:28:02 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:32:42 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:33:07 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:37:45 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:38:40 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1660
    Thu Jun 03 17:38:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:39:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:42:54 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=31, OS id=6116
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
    Thu Jun 03 17:43:38 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=32, OS id=2792
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
    Thu Jun 03 17:43:44 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:06 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:47 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=33, OS id=3492
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
    to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5684K exceeds notification threshold (2048K)
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5681K exceeds notification threshold (2048K)
    Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:48:47 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:49:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:53:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:54:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Fri Jun 04 07:46:55 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
    Fri Jun 04 07:46:55 2010
    Starting ORACLE instance (normal)
    Fri Jun 04 07:47:06 2010
    LICENSE_MAX_SESSION = 100
    LICENSE_SESSIONS_WARNING = 80
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =33
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 200
    sessions = 300
    license_max_sessions = 100
    license_sessions_warning = 80
    sga_max_size = 838860800
    __shared_pool_size = 260046848
    shared_pool_size = 209715200
    __large_pool_size = 25165824
    __java_pool_size = 4194304
    __streams_pool_size = 8388608
    spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
    sga_target = 734003200
    control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
    __db_cache_size = 432013312
    compatible = 10.2.0.1.0
    db_recovery_file_dest = D:\
    db_recovery_file_dest_size= 5368709120
    undo_management = AUTO
    undo_tablespace = UNDO
    remote_login_passwordfile= EXCLUSIVE
    dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
    shared_servers = 10
    job_queue_processes = 1000
    audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
    background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
    user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
    core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
    db_name = XE
    open_cursors = 300
    os_authent_prefix =
    pga_aggregate_target = 209715200
    PMON started with pid=2, OS id=3044
    MMAN started with pid=4, OS id=3052
    DBW0 started with pid=5, OS id=3196
    LGWR started with pid=6, OS id=3200
    CKPT started with pid=7, OS id=3204
    SMON started with pid=8, OS id=3208
    RECO started with pid=9, OS id=3212
    CJQ0 started with pid=10, OS id=3216
    MMON started with pid=11, OS id=3220
    MMNL started with pid=12, OS id=3224
    Fri Jun 04 07:47:31 2010
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 10 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    PSP0 started with pid=3, OS id=3048
    Fri Jun 04 07:47:41 2010
    alter database mount exclusive
    Fri Jun 04 07:47:54 2010
    Setting recovery target incarnation to 2
    Fri Jun 04 07:47:56 2010
    Successful mount of redo thread 1, with mount id 2601933156
    Fri Jun 04 07:47:56 2010
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Fri Jun 04 07:47:57 2010
    alter database open
    Fri Jun 04 07:48:00 2010
    Beginning crash recovery of 1 threads
    Fri Jun 04 07:48:01 2010
    Started redo scan
    Fri Jun 04 07:48:03 2010
    Completed redo scan
    16441 redo blocks read, 442 data blocks need recovery
    Fri Jun 04 07:48:04 2010
    Started redo application at
    Thread 1: logseq 1575, block 48102
    Fri Jun 04 07:48:05 2010
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
    Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:48:07 2010
    Completed redo application
    Fri Jun 04 07:48:07 2010
    Completed crash recovery at
    Thread 1: logseq 1575, block 64543, scn 27413940
    442 data blocks read, 442 data blocks written, 16441 redo blocks read
    Fri Jun 04 07:48:09 2010
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=25, OS id=3288
    ARC1 started with pid=26, OS id=3292
    Fri Jun 04 07:48:10 2010
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Thread 1 advanced to log sequence 1576
    Thread 1 opened at log sequence 1576
    Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Successful open of redo thread 1
    Fri Jun 04 07:48:13 2010
    ARC0: STARTING ARCH PROCESSES
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no FAL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no SRL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC0: Becoming the heartbeat ARCH
    Fri Jun 04 07:48:13 2010
    SMON: enabling cache recovery
    ARC2 started with pid=27, OS id=3580
    Fri Jun 04 07:48:17 2010
    db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Fri Jun 04 07:48:31 2010
    Successfully onlined Undo Tablespace 1.
    Fri Jun 04 07:48:31 2010
    SMON: enabling tx recovery
    Fri Jun 04 07:48:31 2010
    Database Characterset is AL32UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=28, OS id=2412
    Fri Jun 04 07:48:51 2010
    Completed: alter database open
    Fri Jun 04 07:49:22 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:32 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:54:10 2010
    Shutting down archive processes
    Fri Jun 04 07:54:15 2010
    ARCH shutting down
    ARC2: Archival stopped
    Fri Jun 04 07:54:53 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:55:08 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:56:25 2010
    Starting control autobackup
    Fri Jun 04 07:56:27 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    Fri Jun 04 07:56:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
    ORA-27093: unable to delete directory
    Fri Jun 04 07:56:29 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_03
    ORA-27093: unable to delete directory
    Control autobackup written to DISK device
    handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
    Fri Jun 04 07:56:38 2010
    Thread 1 advanced to log sequence 1577
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:56:56 2010
    Thread 1 cannot allocate new log, sequence 1578
    Checkpoint not complete
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Thread 1 advanced to log sequence 1578
    Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Fri Jun 04 07:57:04 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2208K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Fri Jun 04 07:59:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:59:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
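
    The "Thread 1 cannot allocate new log ... Checkpoint not complete" messages in this log usually indicate that the online redo log groups are too small or too few for the write load, so a log switch has to wait for an unfinished checkpoint. A minimal SQL*Plus sketch of the common remedy; the group numbers and the 100M size are illustrative assumptions, and because XE uses Oracle-managed files no file names are needed:

    -- Run as SYSDBA: check the current online log groups and sizes.
    SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;

    -- Add larger groups so checkpoints can complete before the log wraps.
    ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 100M;
    ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 100M;

    -- Once the old, smaller groups go INACTIVE they can be dropped:
    -- ALTER DATABASE DROP LOGFILE GROUP 1;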

    Hi Gurus,
    there's an ORA-00600 error in a big .trc file, part of which is shown below; the file is more than 45 MB in size:
    xe_mmon_4424.trc
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
    Fri Jun 04 17:03:22 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
    Instance name: xe
    Redo thread mounted by this instance: 1
    Oracle process number: 11
    Windows thread id: 4424, image: ORACLE.EXE (MMON)
    *** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
    *** SESSION ID:(284.23) 2010-06-04 17:03:22.265
    *** 2010-06-04 17:03:22.265
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Current SQL statement for this session:
    BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    41982E80       418  package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80       552  package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80       305  package body SYS.DBMS_HA_ALERTS_PRVT
    419501A0         1  anonymous block
    ----- Call Stack Trace -----
    calling              call     entry                argument values in hex
    location             type     point                (? means dubious value)
    _ksedst+38           CALLrel  _ksedst1+0           0 1
    _ksedmp+898          CALLrel  _ksedst+0            0
    _ksfdmp+14           CALLrel  _ksedmp+0            3
    _kgerinv+140         CALLreg  00000000             8EF0A38 3
    _kgeasnmierr+19      CALLrel  _kgerinv+0           8EF0A38 6610020 3672F70 0 6538808
    _kjhn_post_ha_alert0+2909
                         CALLrel  _kgeasnmierr+0       8EF0A38 6610020 3672F70 0
    __PGOSF57__kjhn_post_ha_alert_plsql+438
                         CALLrel  _kjhn_post_ha_alert0+0
                                                       88 B21C4D0 B21C4D8 B21C4E0 B21C4E8 B21C4F0 B21C4F8 B21C500 B21C50C 0 FFFFFFFF 0 0 0 6
    _spefcmpa+415        CALLreg  00000000
    _spefmccallstd+147   CALLrel  _spefcmpa+0          65395B8 16 B21C5AC 653906C 0
    _pextproc+58         CALLrel  _spefmccallstd+0     6539874 6539760 6539628 65395B8 0
    __PGOSF302__peftrusted+115
                         CALLrel  _pextproc+0
    _psdexsp+192         CALLreg  00000000             6539874
    _rpiswu2+426         CALLreg  00000000             6539510
    _psdextp+567         CALLrel  _rpiswu2+0           41543288 0 65394F0 2 6539528 0 65394D0 0 2CD9E68 0 6539510 0
    _pefccal+452         CALLreg  00000000
    _pefcal+174          CALLrel  _pefccal+0           6539874
    _pevm_FCAL+128       CALLrel  _pefcal+0
    _pfrinstr_FCAL+55    CALLrel  _pevm_FCAL+0         AF74F48 3DFB92B8
    _pfrrun_no_tool+56   CALL???  00000000             AF74F48 3DFBB728 AF74F84
    _pfrrun+781          CALLrel  _pfrrun_no_tool+0    AF74F48 3DFBB28C AF74F84
    _plsql_run+738       CALLrel  _pfrrun+0            AF74F48
    _peicnt+247          CALLrel  _plsql_run+0         AF74F48 1 0
    _kkxexe+413          CALLrel  _peicnt+0
    _opiexe+5529         CALLrel  _kkxexe+0            AF7737C
    _kpoal8+2165         CALLrel  _opiexe+0            49 3 653A4FC
    _opiodr+1099         CALLreg  00000000             5E 0 653CBAC
    _kpoodr+483          CALLrel  _opiodr+0
    _xupirtrc+1434       CALLreg  00000000             67384BC 5E 653CBAC 0 653CCBC
    _upirtrc+61          CALLrel  _xupirtrc+0          67384BC 5E 653CBAC 653CCBC 653D990 60FEF8B8 653E194 6736CD8 1 0 0
    _kpurcsc+100         CALLrel  _upirtrc+0           67384BC 5E 653CBAC 653CCBC 653D990 60FEF8B8 653E194 6736CD8 1 0 0
    _kpuexecv8+2815      CALLrel  _kpurcsc+0
    _kpuexec+2106        CALLrel  _kpuexecv8+0         673AE10 6736C4C 6736CD8 0 0 653EDE8
    _OCIStmtExecute+29   CALLrel  _kpuexec+0           673AE10 6736C4C 673AEC4 1 0 0 0 0 0
    _kjhn_mmon_action+526
                         CALLrel  _OCIStmtExecute+0    673AE10 6736C4C 673AEC4 1 0 0 0 0
    _kjhn_check_ha_resources+140
                         CALLrel  _kjhn_mmon_action+0  653EFCC 3E
    _kebm_ronce_dispatcher+630
                         CALL???  00000000
    _kebm_ronce_execute+12
                         CALLrel  _kebm_ronce_dispatcher+0
    _ksbcti+788          CALLreg  00000000             0 0
    _ksbabs+659          CALLrel  _ksbcti+0
    _kebm_mmon_main+386  CALLrel  _ksbabs+0            3C5DCB8
    _ksbrdp+747          CALLreg  00000000             3C5DCB8
    _opirip+674          CALLrel  _ksbrdp+0
    _opidrv+857          CALLrel  _opirip+0            32 4 653FEBC
    _sou2o+45            CALLrel  _opidrv+0            32 4 653FEBC
    _opimai_real+227     CALLrel  _sou2o+0             653FEB0 32 4 653FEBC
    _opimai+92           CALLrel  _opimai_real+0       3 653FEE8
    _BackgroundThreadStart@4+422
                         CALLrel  _opimai+0
    7C80B726             CALLreg  00000000
    --------------------- Binary Stack Dump ---------------------
    ========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
    Dump of memory from 0x065386DC to 0x065386EC
    65386D0 065386EC [..S.]
    65386E0 0040467B 00000000 00000001 [{F@.........]
    ========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
    Dump of memory from 0x065386EC to 0x065387AC
    65386E0 065387AC [..S.]
    65386F0 00403073 00000000 53532E49 20464658 [s0@.....I.SSXFF ]
    6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
    6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
    6538720 00000000 00000000 00000000 00000000 [................]
    Repeat 1 times
    6538740 00000000 00000000 00000000 00000017 [................]
    6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
    6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
    6538770 00000000 00000000 00000001 00000000 [................]
    6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
    6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
    65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
    ========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
    and the file keeps growing. I have already deleted a lot of it, but, as I noted:
    time     size
    15:23    795 MB
    16:55    959 MB
    17:01    970 MB
    17:19    990 MB
    Any solution for that?
    Thanks!!
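
    An ORA-00600 is an internal error, so the root cause of [kjhn_post_ha_alert0-862] is ultimately a matter for Oracle Support or a patch. As a stop-gap, though, the runaway trace file can be capped. A hedged SQL*Plus sketch, assuming the instance runs on an spfile (the XE default) and an arbitrary 50M limit:

    -- Limit how large any single trace file (not the alert log) may grow.
    ALTER SYSTEM SET max_dump_file_size = '50M' SCOPE = BOTH;

    -- Confirm the new setting:
    SHOW PARAMETER max_dump_file_size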

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maximum size of a message queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Please, how can I resolve this error?

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters
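
    Mapping that documented cause/action onto the pfile gives two mutually exclusive styles. A hedged init.ora sketch with placeholder values, not a verified fix for this system:

    # Either: no VLM, keep the new-style cache parameters.
    use_indirect_data_buffers=false
    # db_cache_size / db_keep_cache_size / db_nK_cache_size allowed here

    # Or: VLM enabled, with the cache sized only via the old-style parameter.
    # use_indirect_data_buffers=true
    # db_block_buffers=262144      # buffer count in blocks, placeholder

    Since neither parameter is visible in the pfile shown above, a reasonable first step is to retry with a cleaned pfile that omits the auto-tuned double-underscore (__*) entries, which a pfile copied from an spfile carries along.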

  • T3/RMI packet size very large

    In order to determine our bandwidth requirements, we recently placed a
    sniffer on our network to analyze our message packets. We've noticed that
    our packets are very large, and it appears that much of the overhead is in
    RMI or T3.
    Here are some sample numbers for similar messages between a single client
    and our WLS server:
    T3: 3500 bytes
    T3 w/ HTTP tunneling: 5500 bytes
    IIOP: 1250 bytes (using VisiBroker ORB talking to Smalltalk ORB)
    As you can see, the T3 packet size is 65% larger than the same packet sent
    via CORBA/IIOP. It also appears that with RMI, all of the full class names
    and variable names are being passed along within the packet. Are we
    missing something, or is this an understood fact? Is there anything we can
    do to fix this problem? As it stands, the bandwidth requirements to
    support the larger T3 packet size are astronomical, and this would not be
    feasible in a production environment. Does anyone know the typical
    percentage overhead increase per packet? It appears to be about 400%.
    Our WLS environment is described below.
    Edwin Marcial
    Continental Power Exchange
    Weblogic Environment
    WLS Server
    WLS 4.51 w/ Service Pack 7
    NativeIO = true
    ExecuteThreadCount = 40
    readTimeoutMillis=5000
    readTimeoutMillisSSL=10000
    Dell Pentium III 600 w/ 512 MB memory
    JavaSoft 1.2.2
    -ms128 -mx350
    WLS Client
    Java Application
    t3s and https (using WLS RMI)
    JavaSoft 1.1.7b
    typically Pentium 200 MHz or better w/ 64MB or more

    I think you are kind of stuck with this. RMI is a heavyweight protocol in
    comparison to IIOP. If the message sizes really bother you that much, you
    may want to look into an EJB implementation that maps RMI to IIOP, such as
    the Inprise Application Server, which sits atop the VisiBroker ORB.
    -paul
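
    Paul's point about RMI's weight can be seen without a sniffer: default Java serialization writes a class descriptor (full class name, field names and types) into every fresh ObjectOutputStream, which is exactly the metadata the original post noticed on the wire. A self-contained sketch that measures it; the message class and its fields are made-up stand-ins, not anything from the CPX application:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class SerializedSizeDemo {

        // Hypothetical message type; its long class and field names end up
        // in the serialized stream as part of the class descriptor.
        static class PowerExchangeTradeMessage implements Serializable {
            private static final long serialVersionUID = 1L;
            String instrumentIdentifier = "NG-HH-2000-01";
            double negotiatedPrice = 2.315;
            int contractQuantity = 10;
        }

        static int serializedSize(Serializable obj) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return bytes.size();
        }

        public static void main(String[] args) throws IOException {
            // The useful payload here is a few dozen bytes; the rest of the
            // serialized form is stream header plus class metadata.
            System.out.println("serialized size: "
                    + serializedSize(new PowerExchangeTradeMessage())
                    + " bytes");
        }
    }

    Implementing writeObject/readObject or Externalizable on the message classes trims the per-object field data, but the class descriptor itself still travels once per stream, so the relative overhead shrinks as messages get larger or streams are reused.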
