Architectural approach

Hi Java gurus,
I'm designing a sales tracking application which has a GUI client (not web based) that will connect to a SQL Server database. I've written the GUI in Swing and that's fine and dandy. My question is how I should approach developing the middle tier. Should I do a straight SQL connection on the client side (using a db access thread), or should I connect it to some middleware, e.g. connect to a JBoss or Resin server which will handle the connection / SQL requests to the db server? If so, how should I approach this? Use a DAO or a session facade? Some insight here would be great! Thanks again.

> hi all,
> thanks for the rapid response. The reason I wrote it in Swing is that I use a JDesktop frame and then have a JMenuBar etc., so the GUI is quite complex. Not something that can be easily developed in HTML, DHTML etc. (you can imagine the code on that!!).
Actually, I've seen browser-based JSP UIs that are very complex and attractive. Unless your users are demanding drag & drop, a browser might be an alternative. But you seem fixated on your Swing app, so that's fine.
> the number of users would be a min of 15.
A min of 15 is no problem. What about the max?
> originally i coded the sql queries and connection on the client, through a separate thread, so that at least the requests do not block the paint methods of the gui (so you don't get that funny dragging on your windows!). I've compiled the application as an EXE file, and this exe is on a shared network drive. Users just have a link to the shortcut. I guess my main question is, regardless of whether i use a session facade or a dao, how will i get my gui to retrieve the results?
The UI Listener class that handles the query request will instantiate the DAO, call its finder method, and work with the List of objects that comes back. It's all in memory, so there are no worries there.
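To make that concrete, here is a minimal sketch of the Listener-to-DAO flow. The `Customer` and `CustomerDao` names are made up for illustration, and the in-memory DAO stands in for a real JDBC-backed implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Plain value object the UI works with instead of a ResultSet.
class Customer {
    private final int id;
    private final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
    int getId() { return id; }
    String getName() { return name; }
}

// Finder interface the UI listener codes against.
interface CustomerDao {
    List<Customer> findByRegion(String region);
}

// Stand-in implementation; the real one would run a SQL query here.
class InMemoryCustomerDao implements CustomerDao {
    public List<Customer> findByRegion(String region) {
        List<Customer> result = new ArrayList<>();
        if ("EMEA".equals(region)) {
            result.add(new Customer(1, "Acme Ltd"));
            result.add(new Customer(2, "Globex GmbH"));
        }
        return result;
    }
}

public class QueryListenerSketch {
    // What the Swing ActionListener would do on a query event:
    // instantiate the DAO, call its finder, hand the List to the table model.
    static List<Customer> onQuery(String region) {
        CustomerDao dao = new InMemoryCustomerDao();
        return dao.findByRegion(region);
    }

    public static void main(String[] args) {
        System.out.println(onQuery("EMEA").size()); // prints 2
    }
}
```

In a real Swing app you would run `onQuery` off the event thread (your separate db access thread) and only touch the GUI with the returned List.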
> i've developed a jsp, servlet, ejb application before, and normally i would use a servlet to query the ejbs, based on a jsp request. But with a downloaded swing based client i'm not quite sure.
Your Swing client is going to have to make a remote request to get results back.
If you talk directly to an EJB, you'll do the JNDI lookup in the Listener that handles the query event to get the EJB remote interface, make the call, and deal with the data structure of objects that comes back. Your UI won't deal with ResultSets or anything like that anymore; it'll just be a data structure or object that will then be displayed.
The DAO will be hidden behind the session facade in this case. All the database-specific stuff will be in the DAO and out of the UI. The session bean can handle any transactional needs you have, too. Connections are pooled. Much more scalable.
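As a rough sketch of that layering (all names here are invented; in a real deployment `SalesFacade` would be the session bean's remote interface obtained via a JNDI lookup, and `OrderDao` would issue real SQL on a pooled connection):

```java
import java.util.Arrays;
import java.util.List;

// Coarse-grained interface the Swing client sees; in EJB terms, the
// session facade's remote interface.
interface SalesFacade {
    List<String> findOrdersForCustomer(int customerId);
}

// All database-specific code lives here, hidden from the UI.
class OrderDao {
    List<String> selectOrderNumbers(int customerId) {
        // stand-in for a JDBC query against the orders table
        return Arrays.asList("ORD-1001", "ORD-1002");
    }
}

// The facade delegates to the DAO; a session bean would also
// demarcate the transaction around this call.
class SalesFacadeImpl implements SalesFacade {
    private final OrderDao dao = new OrderDao();
    public List<String> findOrdersForCustomer(int customerId) {
        return dao.selectOrderNumbers(customerId);
    }
}

public class FacadeSketch {
    public static void main(String[] args) {
        // real client code would do a JNDI lookup here instead of "new"
        SalesFacade facade = new SalesFacadeImpl();
        System.out.println(facade.findOrdersForCustomer(42));
    }
}
```

The point of the split is that the Swing client only ever depends on `SalesFacade`; you can swap the DAO's SQL or the database vendor without touching the UI.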

Similar Messages

  • Best architectural approach ?

    Hi,
    We are in the process of creating a PoC to integrate WebLogic Web Services, Oracle Coherence and Oracle Composite Database. We have followed the below approach to integrate them...
    1. Created a CompositeView in Oracle Composite by connecting to three different databases / flat files ( Oracle, MySQL & XML ).
    2. Designed a Distributed Cache using Oracle Coherence on top of Oracle Composite ( with DB Cache Store implemented as read only ).
    3. Have a WebLogic Web Service ( implemented using JAX-WS ) which queries Oracle Coherence for data.
    Currently we have a single method exposed in our web service that takes an orderId as input and returns a complex datatype (a plain Java Bean) with one row of data. In the DB cache store, we have implemented only the load() method to load each row from Oracle Composite in case there is a cache miss. Being a PoC, we have stuck to the basics of Coherence and web services. Do you have any better approach to implementing what we are trying out currently?
    Thanks
    Karthik
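    The read-through behaviour described in step 3 above (load() called only on a cache miss) can be sketched roughly like this, with plain Maps standing in for the Coherence cache and the Composite view; all names and structure here are illustrative only, not the Coherence API:

```java
import java.util.HashMap;
import java.util.Map;

public class ReadThroughSketch {
    // stand-in for the Oracle Composite backing store
    static final Map<Integer, String> backingStore = new HashMap<>();
    // stand-in for the Coherence distributed cache
    static final Map<Integer, String> cache = new HashMap<>();
    static int loads = 0;  // counts trips to the backing store

    // what a CacheStore-style load() would do: fetch one row by key
    static String load(Integer orderId) {
        loads++;
        return backingStore.get(orderId);
    }

    // read-through get: consult the cache first, call load() only on a miss
    static String get(Integer orderId) {
        return cache.computeIfAbsent(orderId, ReadThroughSketch::load);
    }

    public static void main(String[] args) {
        backingStore.put(7, "order-7");
        get(7);
        get(7);  // second call is served from the cache
        System.out.println(loads);  // prints 1
    }
}
```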


  • Best data update approach

    I am currently trying to decide on the best architecture approach for the synchronisation of data between a CRM system and SAP.  The CRM system is already built and will be producing files as part of nightly batch containing updates that are required for synchronisation to SAP.  Examples of these updates are:
    - New Customers
    - Updates to existing customer information (e.g. date of birth)
    - Regional Structure Updates
    and so on….
    We do not have a middleware platform, nor can we process the updates asynchronously throughout the day using messages, so we must use a batch approach where the CRM creates a file containing the changes and then we load and update SAP each night.
    My question is: should we get the CRM system to format the changes as IDocs or just create a custom flat file containing the changes?
    - If we use IDocs, I assume I can then reuse standard SAP functionality to make the updates to the SAP DB (and I assume that it would be able to tell the difference between a create and an update). However, the downside is that there will be extra complexity in creating correctly formatted IDocs in the CRM system. A follow-on question would be: am I correct in assuming that a single IDoc can contain updates for many different records? E.g. in the case of the batch to update customer information, would I need an IDoc for each update or just 1 mega IDoc?
    - If we use a flat file format then we will need to create a custom ABAP program to read in the file and call the appropriate methods to either locate and update records or create new ones.
    Any guidance would be good, particularly on the relative advantages of using IDOCs over flat files for this type of batch processing.

    Hi Richard,
    Welcome to SDN.
    I would suggest using IDocs for the interface between SAP and CRM (and vice versa).
    IDoc technology has excellent error handling and the capability to reprocess errors case by case.
    It also has good error analysis and testing tools.
    Regards,
    Ferry Lianto

  • Many-to-many question

    Just asking for some help with some search terms, as I know that this topic has been argued out somewhere. I just haven't been able to find it:
    If I'm setting up a cross-reference/junction table in order to fulfill a many-to-many requirement, should I create a technical key as a primary key with a unique index on the combined foreign keys? Or should I make the combination of the foreign keys the primary key?
    Our newest standards force a technical key on all tables, and one of the developers is asking why that's necessary on a cross-reference table. I've searched on "cross reference table primary key" and "junction table primary key". Most answers suggesting a technical key cite database performance issues that it can solve (though I haven't yet found that suggested for Oracle).
    --=Chuck

    >
    If I'm setting up a cross-reference/junction table in order to fulfill a many-to-many requirement, should I create a technical key as a primary key with a unique index on the combined foreign keys? Or should I make the combination of the foreign keys the primary key?
    >
    Assuming that each of your foreign keys is a surrogate key and not an actual data value, I have to agree with Dan on this one.
    The two primary attributes that a key value should have are: 1) it should be unique and 2) it should have no inherent meaning.
    By 'no inherent meaning' I mean it should NOT be a data value or have any meaning whatsoever attached to its value. If the value were to change it would have no impact at all on the actual data though naturally the value has to be changed the same everywhere it is used.
    A classic example in the U.S. is social security number. It should NOT be used as a key even though it might be considered a 'natural' key.
    The value is a data item that has meaning and is subject to change if for no other reason than typographical error. It is the problems involved in making changes to data items used as keys that surrogate keys are intended to prevent.
    A surrogate key (even a sequence) has no inherent meaning. If a value of 2 is being used in a system and, unknown to you, someone changes the value to a 3 everywhere in your system that it is used you will never notice it.
    I have seen no standard suggesting that multi-column surrogate keys are inappropriate. Your intersect table can use the combination of the foreign keys as the primary key and fulfill the contract of using surrogate keys.
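    As an illustrative sketch (table and column names invented), a junction table whose primary key is the combination of the two foreign keys looks like this; the primary key itself enforces uniqueness of the pair:

```sql
-- hypothetical many-to-many between AUTHOR and BOOK
CREATE TABLE author_book (
  author_id NUMBER NOT NULL REFERENCES author (id),
  book_id   NUMBER NOT NULL REFERENCES book (id),
  CONSTRAINT author_book_pk PRIMARY KEY (author_id, book_id)
);
-- the alternative under a "technical key on all tables" standard would add
-- a single-column surrogate PK plus a UNIQUE constraint on (author_id, book_id)
```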
    Sven suggested this
    >
    For example you later discover that you want a detail table for your joined table. Then the detail table would need only one column as a foreign key constraint, instead of two (which can have major performance implications).
    >
    If it was desired to meet that future requirement by using one column, you could easily do so by adding a new column specifically for that purpose without any impact whatsoever on the existing system.
    As for this possible architecture change
    >
    Or you add some third column to include into the uniqueness logic.
    >
    Well, that requirement is a fundamental change to your existing one. The existing requirement is that the combination of the two foreign keys should be unique. It is not clear if Sven was suggesting that that requirement is no longer valid, or rather that there is now an ADDITIONAL requirement that those two values combined with a third should also be unique.
    If you need to implement uniqueness for the two values you currently have as well as for those combined with a third value that would be better implemented by creating yet another INTERSECT table specifically for that purpose and leave the existing intersect table in place. That architecture approach expands the data model without disrupting the existing one.
    The main design goal, in my opinion, should always be to produce a design which most easily meets current requirements without foreclosing the ability to meet unknown future requirements.

  • Paypal integration with Oracle 11g??

    I am trying to run the PayPal integration sample application under 11g and am running into an issue when it tries to access the sandbox URL.
    p_api_url is passed into the module setting things up and is:
    https://api-3t.sandbox.paypal.com/nvp
    l_http_req := utl_http.begin_request(p_api_url, 'POST');
    The error I am getting back is:
    ORA-29273: HTTP request failed
    Cause: The UTL_HTTP package failed to execute the HTTP request.
    So trying to do a simple POST to the PayPal URL is failing. Why?
    Thank you,
    Tony Miller
    LuvMuffin Software

    There is no out-of-the-box integration between the cache and the database, and there never will be, because it is highly dependent not only on the database type, but also on the table structure and the complexity of the objects and SQL queries you need for retrieving the entities from the database. The first step would be to use events, as perfectly described in the blog entry mentioned below.
    But that's just a tiny bit of the actual work; one has to build a separate set of objects reacting to events and doing the actual data extract, and do it reliably: rolling back extracted data, reconnecting upon database failures, and allowing partial updates of the entities in the cache in order to maintain high performance in the case of huge extract batches.
    There exists a generic architectural approach & solution made specifically to leverage Oracle database features in this regard, having all these features; I've built it for one of our OLTP apps.

  • Multiple Root elements in the XSD Schema

    Hi Dear MDM gurus,
    I have manually created an XSD Schema and I can import it in the Syndicator without any problem.
    I have 3 different Root elements in the schema that I want to use and map to the corresponding table of this type.
    In my Syndicator screen, in the Root dropdown box, I can see all three of the Root elements that I have in the schema.
    If I select the first Root element - everything goes fine and I can do my mapping, save the map, etc.
    The problem is that if I select the second or third root element to do the mapping for them, the Syndicator does not show the structure in the Item Mapping after that.
    I tried moving the second root element of the schema to make it first, and it works with it, but then the other two do not appear and I have the same issue.
    Does MDM support only one Root element in the Schema? If that's the case, why does it show all three of them in the dropdown?
    Here is an example:
    1. If I have them in this order in my XSD
              <xs:element name="ManufacturerGroupNumber" type="ManufacturerGroupNumbers"/>
              <xs:element name="SupplierGroupNumber" type="SupplierGroupNumbers"/>
              <xs:element name="SupplierLocationNumber" type="Suppliers"/>
    I can only see the structure when I select the "ManufacturerGroupNumber".
    2. If I have them in the Schema in this order
              <xs:element name="SupplierLocationNumber" type="Suppliers"/>
              <xs:element name="SupplierGroupNumber" type="SupplierGroupNumbers"/>
              <xs:element name="ManufacturerGroupNumber" type="ManufacturerGroupNumbers"/>
    Again I can only see the structure when I select the first one, "SupplierLocationNumber", and I can only do the mapping for it.
    Any help would be appreciated.
    Thanks in advance,
    Boris

    Hello Satish,
    Thank you for your quick response.
    I read some of the architectural approaches and XML specifications; depending on your design, you may have only one Root element or, in rare cases, several Root elements. In my case, I was advised by our PI experts to use multiple Root elements, which gives me the following:
    Advantages:
    • The reusability of the schema definition is available for all types and all elements defined in the global namespace.
    Disadvantages:
    • Since there are many global elements, there are many valid XML documents.
    I initially had the schema as you suggested, but they didn't like it in PI and advised me to change it to have multiple roots.
    What I'm trying to figure out is whether there is a bug in MDM that does not allow using the rest of the root elements, so I can open an OSS message.
    Thanks,
    Boris

  • Multi-platform File Adapter

    We are currently looking at the requirements for several new processes involving use of the File Adapter functionality. Our production BPM environment is RedHat Linux 4, while this particular legacy system runs on Windows 2000. The legacy system is already configured to read and write XML files into Windows file shares. Some integration processes should kick off when XML files are written to the file shares. Other processes will write XML files back into the file shares. I am looking for recommendations as to the best architectural approach to dealing with this multi-platform problem within the BPEL PM. As I see it, we have a few options:
    1.     Utilize FTP server functionality rather than direct file access to read and write the files in a platform-independent manner.
    2.     Use some other technology to bridge between the platform-specific file directories and something less dependent on platform. For example, pick up the files from the Windows directory and write them to an AQ queue. Then feed the BPEL process from the queue.
    3.     Run BPM on multiple platforms and allow the Windows instance to handle Windows file drops while the Linux instance handles Linux file drops. Obviously there is a cost penalty here as well as complexity during deployment.
    Any thoughts or experiences are welcome.
    Thank you.
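    Option 2 above (bridging the file share to a queue) might be sketched like this; the directory poller and in-memory queue are illustrative stand-ins for the real Windows share and AQ:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayDeque;
import java.util.Queue;

public class FileDropBridge {
    // stand-in for AQ; a real bridge would enqueue to the database queue
    static Queue<String> queue = new ArrayDeque<>();

    // one polling pass: enqueue every *.xml drop, then consume the file
    static int poll(Path dropDir) throws IOException {
        int moved = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dropDir, "*.xml")) {
            for (Path file : files) {
                queue.add(new String(Files.readAllBytes(file)));  // enqueue payload
                Files.delete(file);                               // mark drop as consumed
                moved++;
            }
        }
        return moved;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("drop");
        Files.write(dir.resolve("order1.xml"), "<order id=\"1\"/>".getBytes());
        System.out.println(poll(dir));  // prints 1
    }
}
```

    The BPEL process would then be fed from the queue, independent of which platform produced the files.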

    Have you looked into relative paths? That would solve the issue of different OS path names.
    Also, you might want to consider the deployment descriptors you can use when compiling; you can set environment-specific variables like paths in there.

  • OWB bugs, missing functionality and the future of OWB

    I'm working with OWB for some time now and there are a lot of rough edges to discover. Functionality and stability leave a lot to be desired. Here's a small and incomplete list of things that annoy me:
    Some annoying OWB bugs (OWB 10g 10.1.0.2.0):
    - The debugger doesn't display the output parameters of procedures called in pre-mapping processes (displays nothing, treats values as NULL). The mapping itself works fine though.
    - When calling self-made functions within an expression, OWB precedes the function call with a constant "Functions." which prevents the function from being executed and results in an error message
    - Occasionally OWB cannot open mappings and displays an error message (null pointer exception). In this case the mapping cannot be opened anymore.
    - Occasionally when executing mappings OWB doesn't remember changes in mappings even when the changes were committed and deployed
    - When using aggregators in mappings OWB scrambles the order of the output attributes
    - The deployment of mappings sometimes doesn't work. After n retries it works without having changed anything in the mapping
    - When recreating an external table directly after dropping the table OWB recreates the external table but always displays both an error message and a success message.
    - In Key Lookups the screen always gets garbled when selecting an attribute as a join condition
    - Usage of constants results in aborts in the debugger
    - When you reconcile a table used in a key lookup the lookup condition sometimes changes. OWB seems to remember only the position of the lookup condition attribute but not the name.
    - In the process of validating a mapping often changes in the mapping get lost and errors occur like 'Internal Errors' or 'Null Pointer Exceptions'.
    - When you save the definition of external tables OWB always adds 2 whitespace columns to the beginning of all the lines following 'ORGANISATION EXTERNAL'. If you save a lot of external table definitions you get files with hundreds of leading whitespaces.
    Poor or missing functionality:
    - No logging on the level of single records possible. I'd like the possibility to see the status of each single record in each operator like using 'verbose data' in PowerCenter
    - The order of the attributes cannot be changed. This really pisses me off, especially since operators like the aggregator scramble the order of attributes.
    - No variables in expressions possible
    - Almost unusable lookup functionality (no cascading lookups, no lookup overrides, no unconnected lookups, only equal condition in key lookups)
    - No SQL overrides in sources possible
    - No mapplets, shared containers or any kind of reusable transformations
    - No overview functionality for mappings. Often it's very hard to find a leftover operator in a big mapping.
    - No copy function for attributes
    - Printing functionality is completely useless
    - No documentation functionality for mappings (reports)
    - Debugger itself needs debugging
    - It's very difficult to mark connections between attributes of different operations. It's almost impossible to mark a group of connections without marking connections you don't want to mark.
    I really wonder which of the above bugs and missing functionality 'Paris' will address. From what I read about 'Paris', not many, if any at all. If Oracle really wants to be a competitor (with regard to functionality) to Informatica, IBM/Ascential etc., they have a whole lot of work to do, or they should purchase Informatica or another of the leading ETL tool vendors.
    What do you think about OWB? Will it be a competitor for the leading ETL tools, or just a cheap database add-on that becomes widely used like SAP BW, not for reasons of technology or functionality but because it's cheap?
    Looking forward to your opinions.
    Jörg Menker

    Thanks to you two for entertaining my thoughts so far. Let me respond to you latest comments.
    > Okay, lets not argue which one is better.. when a tool is there .. then there are some reasons to be there... But the points raised by Jorg and me are really very annoying.
    Overall I agree with both yours and Jorg's points (and I did not think it was an argument... merely sharing our observations with each other (;^)
    The OWB tool is not as mature as Informatica. However, Informatica has no foothold in the database engine itself and as I mentioned earlier, is still "on the outside looking in..." The efficiency and power of set-based activity versus row-based activity is substantial.
    Looking at it from another way lets take a look at Microstrategy as a way of observing a technical strategy for product development. Microstrategy focused on the internals (the engine) and developed it into the "heavy-lifting" tool in the industry. It did this primarily by leveraging the power of the backend...the database and the hosting server. For sheer brute force, it was champion of the day. It was less concerned with the pretty presentation and more concerned with getting the data out of the back-end so the user didn't have to sit there for a day and wait. Now they have begun to focus on the presentation part.
    Likewise this seems to be the strategy that Oracle has used for OWB. It is designed around the database engine and leverages the power of the database to do its work. Informatica (probably because it needs to be all things to all people) has tended to view the technical offerings of the database engine as a secondary consideration in its architectural approach and has probably been forced to do so more now that Oracle has put themselves in direct competition with Informatica. To do otherwise would make their product too complex to maintain and more vendor-specific.
    > I am into the third data warehousing/data migration project and my previous two have been on Informatica (3 years on it).
    I respect your experience and your opinions... you are not a first timer. The tasks we have both had to solve and how we solved them with these tools are not necessarily the same. Could be similar in instances; could be quite different.
    > So the general tendency is to evaluate the tool and try to see how things that were needed to be done in my previous projects can be done with this tool. I am afraid to say .. I am still not sure how these can be implemented in OWB. The points raised by us are probably the fall out of this deficiency.
    One observation that I would make is that in my experience, calls to the procedural language in the database engine have tended to perform very poorly with Informatica. Informatica's scripting language is weak. Therefore, if you do not have direct usability of a good, strong procedural language to tackle some complicated tasks, then you will be in a pickle when the solution is not well suited to a relational-based approach. Informatica wants you to do most things outside of the database (in the map primarily). It is how you implement the transformation logic. OWB is built entirely around the relational, procedural, and ETL components in the Oracle database engine. That is what the tool is all about.
    > If cost is the major factor for deciding a tool then OWB stands far ahead...
    Depends entirely on the client and the situation. I have implemented solutions for large companies and small companies. I don't use a table saw to cut cake and I don't use a penknife to fell trees. Right tool for the right job.
    > ...thats what most managers do .. without even looking how in turn by selecting such a tool they make the life tough for the developers.
    Been there many times. Few non-technical managers understand the process of tool evaluation and selection and the value a good process adds to the project. Nor do they understand the implications of making a bad choice (cost, productivity, maintainability).
    > The functionality of OWB stands way below Informatica.
    If you are primarily a GUI-based implementer that is true. However, I have often found that when I have been brought in to fix performance problems with Informatica implementations, the primary problem is usually the way the developer implemented it. Too often I have found that the developer understands how to implement logic in the GUI component (the Designer/Maps and Sessions) with a complete lack of understanding of how all this activity will impact load performance (they don't understand how the database engine works). For example, a strong feature in Informatica is the ability to override the default SQL statement generated by Informatica. This was a smart design decision on Informatica's part. I have frequently had to go into the "code" and fix bad joins, split up complex operations, and rip out convoluted logic to get the maps to perform within a reasonable load window. Too often these developers are only viewing the problem through the "window" of the tool. They are not stepping back and looking at the problem in the context of the overall architecture. In part Informatica forces them to do this. Another possible factor is they probably don't know better.
    "One tool...one solution"
    Microstrategy until recently had been suffering from that same condition (not allowing the developer to create the actual query). OWB engineers need to rethink their strategy on overriding the SQL.
    > The functionality of OWB stands way below Informatica.
    In some ways yes. If you do a head-to-head comparison of the GUI then yes. In other ways OWB is better (Informatica does not measure up when you compare it with all of the architectural features that the Oracle database engine offers). They need to fix the bugs and annoyances though.
    > .. but even the GUI of Informatica is better than OWB and gives the developer some satisfaction of working in it.
    Believe me, I feel your pain. On the other hand, I have suffered from Informatica bugs. Ever do a port from one database engine to another just to have it convert everything into multi-byte? Ever have it re-define your maps to parallel processing threads when you didn't ask it to?
    > Looking at the technical side of things I can give you one fine example ... there is no function in Oracle doing to_integer (to_number is there) but Informatica does that ...
    Hmm-m-m... sorry, I don't get the point.
    > The style of ETL approach of Informatica is far more appealing.
    I find it unnecessarily over-engineered.
    > OWB has two advantages : It is basically free of cost and it has a big brother in Oracle.
    When you are another "Microsoft", you can throw your weight around. The message for Informatica is "don't bite the hand that feeds you." Bad decisions at the top.
    Regards,
    Dan Phillips

  • Preloader and Document Class BIG question (yeap please help)

    Hi,
    I know that this is a question posted many, many times, but after searching the net, reading a lot of books and searching this forum too, I can't come up with a solution. If I knew for sure there was no way to create something like this, I would just go back to the old methods; but isn't the point of Adobe with AS3 to encourage the use of OOP principles?
    The problem:
    I have a single fla file (AS3) with a single frame on the timeline, the frame that is there when you create the file with Flash. In the library I have different symbols, which for simplicity are only jpg images (BitmapData), checked for export for ActionScript and exported on frame one. An external .as file called DocumentClass is of course my Document Class.
    This is all that I want to do with the fla; the goal is to create, animate etc. only with AS3 in external classes, no timeline script. I don't want to load external files, XML, or anything else in this movie. I just want a single swf after compilation, no additional files.
    Ok, how do I create a preloader that will take care of starting the logic after the whole swf is loaded and at the same time shows the user a percentage or a load bar or something that is not the blank screen while the swf is downloading? I want to do this without another swf that loads this swf, or timeline scripts, or placing all the content on the second frame and then gotoAndStop to the third frame. All these are not solutions but cheap tricks that go against all the OOP principles that I keep reading about in books and hearing from guru programmers.
    The big question is:
    Is it possible to create a preloader when you use a document class with your fla? And if yes, how?
    I know that the Document Class is not instantiated if it's not fully loaded. If that's true, when will the document class be fully loaded? Maybe after the whole movie is loaded? And if that's true, it will never show a percentage bar "while" the movie is loading. And if that's true, WHY use a document class anyway?
    Thank you for reading this and I really look forward to getting some answers.

    I am pretty sure you cannot do a self-preloader with one frame and all the objects in the library. I guess the key here is the one-frame design. The screen refreshes (renders) only when all the scripts in the frame have executed; this is a very important thing to understand about how Flash works. Yes, you can force a screen refresh with the updateAfterEvent() method, but it is attached to a handful of events only (MouseEvent and TimerEvent) and, again, all this functionality is available only after the first frame's scripts have executed. Thus, it seems like the only way to create a preloader from within the SWF is to use multiple frames and set library objects to load in a later (not the first) frame.
    quote:
    And if that's true WHY use a document class anyway?
    Well, a preloader is the last thing that would be on my mind in terms of using AS3's ability to link a DocumentClass to the top movie. This feature allows for very sophisticated architectural approaches. It has no special connection to preloaders as opposed to any other feature a developer wants to implement. Neither does it depend on or negate the timeline. As a matter of fact, although I love one-frame applications, I find on numerous occasions that my application would be more efficient if I used several (at least two) frames.
    gotoAndStop is not deprecated. It is a valid MovieClip class method. After all, having only one frame doesn't mean not having frames at all; there is one already. Frames are the fundament of Flash. AS3 did introduce frameless entities like Sprite, etc., but that doesn't mean that frames are going anywhere.
    I would agree that timeline code is inferior to classed/packaged (read: better organized) code but, still, how is it not OOP? A frame is an Object, right? Why is using the timeline cheap and not a solution?
    On a side note, I see too many times how some authors (and managers) push their agenda (or closed-mindedness) onto their audience with no real substantiation. Claiming that the timeline in Flash is not a valid architectural decision from an OOP standpoint is totally wrong. As wrong as strict adherence to design patterns. I don't think there is a sharply defined "right" or "wrong" in programming. One finds the best optimal solution. The goal is to create something that works fine. Unless, of course, the process is the goal; but very few of us can afford focusing on the process.

  • When oracle invalidates result_cache results without any changes in objects

    Hi all!
    On our production servers we have a simple function with result_cache, like this:
    create or replace function f_rc(p_id number) return number result_cache
    is
      ret number;
    begin
      select t.val into ret from rc_table t where t.id=p_id;
      return ret;
    exception
      when no_data_found then
         return null;
    end;
    /
    And its results are frequently invalidated without any changes to the table or
    the function. I found only two cases in which Oracle invalidates result_cache
    results without any changes to the table:
    1. "select for update" from this table followed by a commit;
    2. deletion of unrelated rows from a parent table when there is an unindexed
    foreign key with "on delete cascade".
    I tested it on 11.2.0.1 and 11.2.0.3, on Solaris x64 and Windows. Test
    cases: http://www.xt-r.com/2012/07/when-oracle-invalidates-resultcache.html
    But neither of them can be the cause of our situation: we have no
    unindexed FKs, and even if I lock all rows with "select for update", it
    still does not stop invalidating.
    In what other cases does this happen? Am I right that Oracle does not
    track actual changes, but only the taking of locks and commits?
    Best regards,
    Sayan Malakshinov
    http://xt-r.com

    Hmm.. Do you mean our situation, or the test cases with "select for update" and the FK as well?
    I'm not sure it is a bug; maybe it's an architectural approach to simplify things and reduce CPU load?
    Best regards,
    Sayan Malakshinov
    http://xt-r.com

  • Use of application module pool and ADF Business Components

    Hi to all;
    Let's suppose a web application with about 10 CRUD forms and 15 to 20 reports or forms that just query and show data.
    All the advantages of using an application module pool are clear to me.
    But for those reports..... just read-only and forward-only data?
    I was wondering if it would be more effective and lightweight to just take a JNDI JDBC connection, query the data and show it.
    This imaginary application would use the application module pool for the 10 CRUD web forms and, on the other hand, JNDI data sources for the reports.
    What is your opinion about having these two architectural approaches working together in one application?
    Many thanks;
    Marcos Ortega
    Brazil;

    Hi Deepak;
    BC4J in my opinion is great, and I am proud to share this opinion with all of you;
    As a matter of fact, I posted this thread to help me better understand the BC4J architecture.
    I think the main point of my doubt is...
    Is the application module pool's life cycle extra work when the job is just to read and show data?
    Perhaps a document about stateful and/or stateless application service release would help me;
    IMHO, cached data must be discarded for reports most of the time; since we always want to query the database directly, the view object's clearCache() method would be called for reports.
    I think it's different when we are talking about subsequent requests, when we need to span the session, view and entity states.
    Thanks in advance;

  • ANN: Agile Infrastructure

    The industry speaks positively about agile methods for software development
    (http://www.agilealliance.org) but hasn't yet applied those principles to
    the folks who do networking and infrastructure. It is overdue and I will
    take the first shot at some things that can make them more agile:
    1.. Keep network and infrastructure architecture to a minimum yet
    sufficient
    Develop organization-wide standards using an agile Enterprise Architecture
    approach for each area of the infrastructure including the network, data
    center, desktops, etc. Develop the future state of your enterprise
    architecture by making sure that you have only one of a particular type. For
    example, today you may have Windows NT and a 100Base T network, where
    tomorrow you may be running PDAs on 802.11b.
    2.. Maintain centralized control with decentralized operations
    Centralized control allows one to control costs, architecture and
    deployment standards. Decentralized operations simply mean that it does not
    matter where your IT people are located. This works in remote locations and
    outsourcing arrangements as well. Support personnel should always be placed
    as close as possible to the end customer.
    3.. Keep the mainframe holy and worship it daily
    In the age of distributed computing, disciplines are more important than
    ever. Some non-agile organizations have tried to migrate the mainframe
    discipline to client/server and browser based paradigms and have failed
    miserably. The disciplines found on the mainframe need to be customized and
    streamlined. The vast majority of mainframers grew up with useful methods
    for capacity planning, disaster recovery and had extensive change
    management. Today we need these time-proven practices more than ever, without,
    of course, the bureaucracy.
    4.. Measure everything as you cannot manage what you do not measure
    In the days of the mainframe, they could measure everything related to
    their infrastructure and could tell you their system availability and other
    system qualities. Today, we do not collect these metrics under the guise
    that we are too busy. If an organization calculates its uptime and holds
    people accountable, the staff will figure out a way to run the systems more
    efficiently.
    5.. All production systems are equal in the eyes of architects
    The vast majority of enterprises today have mainframes, pc's, servers,
    pda's and so on and have taken non-agile approaches by organizing support
    groups around them. An agile organization will refer to these groups as
    simply "technical support" and make sure its staff is cross-trained on as
    many platforms as each can handle. Separating support along technologies
    results in inefficiencies, political problems, poor communications and piss
    poor morale. Reorganize based on maximizing efficiency and utilization not
    technology.
    6.. Worship users and give them praise
    The number one problem with failed projects is the lack of communication.
    Being agile requires one to prefer human interaction over processes. If your
    IT folks would rather string cable or play with vi, then failure is practically
    wired in. In many shops the communications issue is even more systemic, in that
    they cannot adequately communicate with their own peers. In the days of
    the mainframe, it was obvious who did what to whom. In distributed
    architectures, everything is spread across disparate tiers, technologies and
    even locations. The team should use agile methods and process that instill
    communication between IT and customers as well as amongst the various IT
    silos. This should be incorporated as part of the job description for all IT
    personnel.
    7.. Spread joy to people in foreign lands
    Many wise dinosaurs from the days when mainframes were king, sat in their
    lofty ivory towers meditating on how the world should be. The only time they
    would interact with common business folk was when the help desk would summon
    them with an unusual problem. In other words, being reactionary was the status
    quo. Today's economy requires IT to walk with the great unwashed and
    communicate with their users. Simply put, IT needs to schmooze, sell and stand on
    their soapbox selling their wares. This is the first step in real
    reengineering.
    With this thought in mind, I have created a new Yahoo Group to discuss
    agile methods in the networking and infrastructure discipline. Check out
    http://groups.yahoo.com/group/agileinfrastructure
    James McGovern
    Co-author of the book: Java Web Services Architecture
    http://www.amazon.com/exec/obidos/ASIN/1558609008/chiltownworldwid/
    http://www.webservicesarchitecture.com


  • Search in database in all the tables

    Hi friends,
    I want to search for the following text in all the tables of a database:
    Paris-Murex BO for BNL- Panorama(Roma) / BNL-Panorama for Paris-Murex BO(Roma)
    The above text appears on our client's site and is retrieved from the database. I know the database instance name, but I don't know which table this value (text) is coming from.
    Please help, it's urgent!!!

    This is a depressingly common request. Why are there so many shonky undocumented applications out there?
    Anyway, the solution for a one-off exercise is this brute force approach. It's not pretty and the performance will be Teh Suck, but it will find the string. If you think that the string might be in more than one column you should remove the EXIT and let the thing grind on for as long as it takes.
    set serveroutput on
    declare
        n number;
    begin
        for r in ( select owner, table_name, column_name
                   from all_tab_columns
                   where owner not in ('SYS', 'SYSTEM')
                   and   data_type in ('CHAR', 'VARCHAR2')
                   and   data_length >= 43 )
        loop
            dbms_output.put_line('checking!!'||r.owner||'.'||r.table_name||'.'||r.column_name);
            begin
                execute immediate 'select 1 from '||r.owner||'.'||r.table_name
                              ||' where '||r.column_name||' like ''%Paris-Murex BO for BNL- Panorama(Roma) / BNL-Panorama for Paris-Murex BO(Roma)%'''
                              ||' and rownum = 1'
                into n;
                dbms_output.put_line('found it!!'||r.owner||'.'||r.table_name||'.'||r.column_name);
                exit;
            exception
                when no_data_found
                then
                    null;
            end;
        end loop;
    end;
    /
    If you have CLOBs and BLOBs you want to check as well then you'll need to run through a second query using CONTAINS(). Implementing that is left as an exercise for the reader.
    Note that if you want to do this on a regular basis then you need a completely different, more architectural approach.
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
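    As a side note on the snippet above: because the search string is inlined as a SQL literal, a single quote in it would break the generated statement. Below is a minimal sketch of the same per-column probe construction in Java with quote escaping; the class and method names are illustrative, not from the thread.

```java
// Sketch: build the probe query used in the brute-force search above,
// one statement per (owner, table, column). All names are illustrative.
public final class ColumnProbe {

    // Double up single quotes so the literal stays valid SQL.
    // (LIKE wildcards '%' and '_' in the needle are NOT handled here.)
    static String escapeQuotes(String s) {
        return s.replace("'", "''");
    }

    /** Returns the SQL that checks one column for the search string. */
    public static String probeSql(String owner, String table,
                                  String column, String needle) {
        return "select 1 from " + owner + "." + table
             + " where " + column + " like '%" + escapeQuotes(needle) + "%'"
             + " and rownum = 1";
    }

    public static void main(String[] args) {
        // Prints: select 1 from APP.ORDERS where DESCRIPTION like '%O''Brien%' and rownum = 1
        System.out.println(probeSql("APP", "ORDERS", "DESCRIPTION", "O'Brien"));
    }
}
```

    Against a live schema, the same loop over ALL_TAB_COLUMNS would feed probeSql() through JDBC; better still, bind the search string as a parameter and drop the escaping entirely.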

  • Skillset required for ESOA

    Hi Experts,
    Could someone please tell me what is the Technical Skillset required to get into ESOA? I am an experienced NetWeaver Consultant with experience in XI and EP Implementations and Development Experience in core ABAP & WebDynpro Java.
    I am not a core Java/WebServices person. Do I qualify to get into ESOA? What other specific skillsets do I need to acquire to get into ESOA architecting/ consulting?
    Any suggestions would be appreciated.
    Thanks,
    Shobhit

    I think you have all the skills for ESOA, except that you did not mention your architectural skills.
    SOA is an architectural principle for building agile IT services that enable business functions using IT.
    You have to acquire the skills of an architectural approach to addressing business problems using all the relevant principles, technologies, frameworks and products. In ESOA, SAP applies all of these to its stream of products and turns the SOA concepts into realization.
    I highly recommend that you learn about architecture frameworks such as TOGAF and E2AF, and start thinking in a product-independent way about building services to fulfill business needs, using top-down, bottom-up and meet-in-the-middle tactics. Then apply that product-independent approach (PIA) to a product-dependent approach (PDA), then apply that to SAP's ESOA tools, and you become an ESOA expert.
    Please don't buy everything you have seen in SAP practice professionals' approach to being an ESOA guy. Want to feel it? Apply for an ESOA architect role at SAP; they will insist on knowledge of not only SAP's ESOA tools but also a solid knowledge of TOGAF, E2AF, etc., and the skill to think outside the box.
    Thanks

  • Writing a common business services Interface - J2EE?

    Hi Guys,
    Thanks for the previous help. If possible, I would
    like to know some architectural approaches for the following design
    question:
    We are basically building an enterprise application on J2EE Platform.
    The database tier underlying the application is accessed by more than
    one portal. Some of the portals are external to the organization, but share
    the data stored in the database.
    What would be the best approach to address this problem? The basic idea
    is to write a re-usable common business service component in J2EE
    application and provide access to database at Application Server level?
    How can I make use of J2EE services/components such as JMS, EJB's to
    effectively allow other systems/portals to access database via J2EE
    container? Is XML a viable solution?
    thanks
    Ramesh

    Hi Ramesh,
    "Ramesh Ankam" <[email protected]> wrote in message
    news:[email protected]..
    > Thanks for the reply! So this means that the application or external portal
    > has to make a remote rmi:// call to access the session bean and invoke
    > business methods?
    Yes, if you want secure communications. You may also consider
    making your application Web Services enabled. Anyway, you need
    a clearly defined business layer. Otherwise your data won't be treated
    in a uniform way and you will end up with a maintenance burden.
    > Is JMS overkill for this requirement?
    It really depends on what you should get as a result.
    If you need messaging, JMS is a good thing.
    Regards,
    Slava Imeshev
    RA
    Slava Imeshev wrote:
    Hi Ramesh,
    You need to write a set of Stateless Session Beans representing your
    business logic, and to define security properly. You should never
    expose the database (entity EJBs) to layers other than your business
    layer.
    Regards,
    Slava Imeshev
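    The layering Slava describes, where every portal calls a common business facade and never touches entity objects directly, can be sketched in plain Java. This is a minimal stand-in, not a real EJB: annotations, RMI remoting and persistence are omitted, and every name below is made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The "common business service" contract: portals depend only on this
// interface and the DTO, never on the persistence objects behind it.
// In a real deployment this would be a Stateless Session Bean facade.
interface OrderService {
    List<OrderDto> findOrdersByCustomer(String customerId);
}

// Immutable DTO handed across the tier boundary instead of an entity.
final class OrderDto {
    final String id;
    final double total;
    OrderDto(String id, double total) { this.id = id; this.total = total; }
}

// In-memory stand-in for the EJB/DAO-backed implementation.
class InMemoryOrderService implements OrderService {
    private final Map<String, List<OrderDto>> byCustomer = new ConcurrentHashMap<>();

    void addOrder(String customerId, OrderDto order) {
        byCustomer.computeIfAbsent(customerId, k -> new ArrayList<>()).add(order);
    }

    @Override
    public List<OrderDto> findOrdersByCustomer(String customerId) {
        // Defensive copy: callers never share the facade's internal state.
        return new ArrayList<>(byCustomer.getOrDefault(customerId, List.of()));
    }
}

class Demo {
    public static void main(String[] args) {
        InMemoryOrderService svc = new InMemoryOrderService();
        svc.addOrder("cust-1", new OrderDto("ord-1", 99.50));
        System.out.println(svc.findOrdersByCustomer("cust-1").size()); // prints 1
    }
}
```

    A container-managed Stateless Session Bean would implement the same interface, with the in-memory map replaced by DAO or entity-bean access, so every portal (internal or external) sees one uniform business layer.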
