Best practices for WC State, Code, Attribute

Currently, our process for assigning WC Codes is to assign them at the position level once an employee is joined to the position.  The result is a lot of one-off efforts outside of our regular workflow.  Is it possible to assign these attributes at the job key level, so that any time a position is created it inherits that information from the job key?

Dear Ryan,         
It is not possible in the standard system to add a tab to PPOME/PPOSE, as per the notes below:
327614    TA PPOME customer-specific tab pages
483345    Transaction PPOME Tab for standard infotypes is missing
410869    PPOME: Infotypes in the detailed area
385019    PPOME Detail including SAP standard infotypes
I have also pasted in the relevant paragraph from note 385019:
Tab pages for additional SAP standard infotypes are not available in the standard system and CANNOT be generated as described in note 327614 (transaction PPOME customer-specific tab pages).
All the infotypes predefined for PPOME/PPOSE can be found in table T77OCTABUS; it is only necessary to activate or deactivate them.
Best regards
Sarah

Similar Messages

  • Cairngorm 3 - Best practice for component states

    Hi,
    I have a small question.
    I have some component states (e.g. application state, some component states, etc.) in my application.
    I'm not sure if it is best practice to save component states in Presentation Models, since they hold the current state only. So I created an ApplicationState class, which holds all possible application states. My problem is that I'm not sure where this "State" class should be stored: in the application directory, or in a separate "states" directory?
    Well, I'm not sure.
    Thanks.

  • JSR 168 best practice for saving inter-portlet state

    The portlet specification doesn't yet cover inter-portlet communication. Until
    it does, what is the best practice for saving state so that multiple portlets
    can use the same data?

    The portlet session is layered on top of the HTTP session except for
    attribute scoping. So, all portlets in a webapp share the same
    HTTP/portlet session for a given client.
    All APPLICATION_SCOPEd portlet session attributes are automatically
    available to other portlets. With PORTLET_SCOPEd attributes, portlet
    containers namespace attribute names, so it would be hard to get
    attributes by name (but still possible).
    Subbu
    Chris Jennings said the following on 10/14/2003 09:17 AM:
    If I set an attribute in a PortletSession, though, it isn't visible from another
    PortletSession... right? Please tell me that's wrong ;-) If that's right, how
    can I have portlet B get to something set by portlet A?
    Subbu Allamaraju <[email protected]> wrote:
    Chris,
    For sharing transient state, the only possible mechanism in V1.0 is to
    use sessions.
    Subbu
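    To make the APPLICATION_SCOPE point above concrete, here is a minimal sketch (portlet class names and the attribute name are hypothetical, not from this thread) of one JSR 168 portlet publishing a value that another portlet in the same webapp reads back:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.PortletSession;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    // Portlet A: publishes a value visible to every portlet in the same webapp.
    public class PublisherPortlet extends GenericPortlet {
        @Override
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            PortletSession session = request.getPortletSession();
            // APPLICATION_SCOPE attributes are not namespaced per portlet window,
            // so other portlets can read them by name.
            session.setAttribute("selectedCustomerId", "42", PortletSession.APPLICATION_SCOPE);
            response.setContentType("text/html");
            response.getWriter().println("Published customer id");
        }
    }

    // Portlet B: reads the value published by Portlet A.
    class ConsumerPortlet extends GenericPortlet {
        @Override
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            PortletSession session = request.getPortletSession();
            Object id = session.getAttribute("selectedCustomerId", PortletSession.APPLICATION_SCOPE);
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("Customer id from Portlet A: " + id);
        }
    }

    With PORTLET_SCOPE instead, the container prefixes the attribute name per portlet window, which is why cross-portlet lookups by name become awkward, as noted above.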

  • What is the best practice for changing view states?

    I have a component with two Pie Charts that display
    percentages at two specific dates (think start and end values).
    But, I have three views: Start Value only, End Value only, or show
    Both. I am using a ToggleButtonBar to control the display. What is
    the best practice for changing this kind of view state? Right now
    (since this code was inherited), the view states are changed in an
    ActionScript function which sets the visible and includeInLayout
    properties on each Pie Chart based on the selectedIndex of the
    ToggleButtonBar, but, this just doesn't seem like the best way to
    do this - not very dynamic. I'd like to be able to change the state
    based on the name of the selectedItem, in case the order of the
    ToggleButtons changes, and since I am storing the name of the
    selectedItem for future reference.
    Would using States be better? If so, what would be the best
    way to implement this?
    Thanks.

    I would stick with non-states, as I have always heard that
    states are more for smaller components that need to change under
    certain conditions, like a login screen that changes if the user
    needs to register.
    That said, if the UI of what you are dealing with is not
    overly complex, and if it will not become overly complex, maybe
    states is the way to go.
    Looking at your code, I don't think you'll save much in terms
    of lines of code.

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every xml session I could. There was one where a Mr. Drake was explaining something about not using clob
    as an attribute to storing the xml and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (    "INVOICE_ID" NUMBER NOT NULL ENABLE,
            "DOC" "SYS"."XMLTYPE" NOT NULL ENABLE,
            "VERSION" VARCHAR2(256) NOT NULL ENABLE,
            "STATUS" VARCHAR2(256),
            "STATE" VARCHAR2(256),
            "USER_ID" VARCHAR2(256),
            "APP_ID" VARCHAR2(256),
            "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
            "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
            CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
      INITRANS 20
      TABLESPACE "####_####_DATA"
      XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB (
          TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
          NOCACHE LOGGING
          STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
      XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend to register the schema against an XSD.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though it is still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of the BASICFILE CLOB.
    Schema-based Binary XML is also available; it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post in the dedicated forum next time: {forum:id=34}
    Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up and defining Java projects.  For example, what are Tracks, Software Components, Development Components, etc., and when do you define them?  All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see).  We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model, we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as "sources" in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) provides versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find a lot of support in SDN for the above concepts, with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further, see Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky because you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in the action listener (like a save(actionEvent)).
    I would be glad to see your thoughts on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent with a list of child records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent to request scope, the setChildren method is never called when I submit the form. I am not sure if this is just ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input component value binding. Someone please verify whether this is true.
    In many cases, I found a request bean very hard to work with if there are lots of updates (I had lots of trouble updating the binding value for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
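    As a rough sketch of the session-scoped, lazy-refresh pattern discussed in the question (class, method, and property names are hypothetical, and registration in faces-config.xml or via annotations is omitted), the getter queries lazily and caches until a save marks the cache stale, so repeated EL references to the property in one render don't each hit the database:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.faces.event.ActionEvent;

    // Session-scoped managed bean for a database-driven list page.
    public class CustomerListBean implements Serializable {

        private List<String> customers;   // cached query result
        private boolean stale = true;     // true whenever the data may have changed

        // Bound in the page as #{customerListBean.customers}.
        public List<String> getCustomers() {
            if (stale || customers == null) {
                customers = queryCustomers();   // hypothetical database query
                stale = false;
            }
            return customers;
        }

        // Action listener: persist pending edits here, then invalidate the cache
        // so the next render re-queries instead of showing stale data.
        public void save(ActionEvent event) {
            // ... persist changes ...
            stale = true;
        }

        private List<String> queryCustomers() {
            // placeholder for the real JDBC/ORM call
            return new ArrayList<String>();
        }
    }

    Whether this belongs in session or request scope is exactly the trade-off debated above; the same lazy getter also works in a request-scoped bean, at the cost of one query per request.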

  • Best practice for opening/closing JDBC conection

    I've written a program that accesses a database, and I'd like to know the best practice for opening and closing that connection. For example should I use a try{} finally {} block,
    try {
        //load driver
        //create connection
        //create statement object
        //sql statement to execute
        //execute statement
    } finally {
        //close connection
    }
    Or should I split the code into separate methods, maybe an init() method that loads the driver and makes the connection, an execute() method that creates the statement and executes it, and finally a cleanUp() method that closes the connection?
    So, which would be the best way to do this?

    Hallo,
    your idea seems OK to me. However, there are a couple of points to consider:
    1. Do you just want to execute one SQL query? Or will your program execute several? If the latter case is possible, then you do not want to close your connection between queries. Opening and closing a connection to a database takes 'a lot of time', relatively speaking. In this case, it is better to save the connection and reuse it every time that you need it.
    2. Do not forget to close the statements and result sets that you create or get back from JDBC. Depending on the database (e.g. Oracle) that you are using, you can run out of cursors, and your application will stop.
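    A minimal sketch of the approach described above (driver URL, table, and column names are placeholders): keep one connection open for the life of the program, and close each statement and result set as soon as you are done with it, which with Java 7+ is most easily done with try-with-resources:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CustomerLookup {

        // Reuse one connection for several queries instead of opening/closing per query.
        private final Connection connection;

        public CustomerLookup(String url, String user, String password) throws SQLException {
            this.connection = DriverManager.getConnection(url, user, password);
        }

        // try-with-resources closes the statement and result set even if an exception
        // is thrown, which avoids leaking cursors.
        public String findName(int customerId) throws SQLException {
            String sql = "SELECT name FROM customer WHERE id = ?";   // hypothetical table/column
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setInt(1, customerId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }

        // Close the shared connection once, when the program is finished with the database.
        public void close() throws SQLException {
            connection.close();
        }
    }

    Splitting the work into init/execute/cleanUp methods, as suggested in the question, amounts to the same structure; the important part is that the close calls sit in a finally block (or try-with-resources) so they run even when a query fails.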

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).  Currently, what I do is select * into a #temp table from the join
    of the daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • In the Begining it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice not a mandate. Using Java in a procedural way is certainly not ideal but given that it is existing code I would look more into whether is well written procedural code rather than looking at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement using whatever means seem expedient in terms of other business constraints to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
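    As a rough illustration of the table-to-class mapping the question describes (class and column names are hypothetical; the reply above rightly notes the data model should be driven by business need rather than mirroring tables one-for-one), a small value class built from a JDBC ResultSet row might look like:

    import java.math.BigDecimal;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical mapping of one ERP "table" row to an immutable Java value object.
    public final class InvoiceLine {
        private final String invoiceNumber;
        private final int lineNumber;
        private final BigDecimal amount;

        public InvoiceLine(String invoiceNumber, int lineNumber, BigDecimal amount) {
            this.invoiceNumber = invoiceNumber;
            this.lineNumber = lineNumber;
            this.amount = amount;
        }

        // Factory method: maps the columns of the current ResultSet row to attributes.
        public static InvoiceLine fromRow(ResultSet rs) throws SQLException {
            return new InvoiceLine(
                    rs.getString("INVOICE_NO"),   // hypothetical column names
                    rs.getInt("LINE_NO"),
                    rs.getBigDecimal("AMOUNT"));
        }

        public String getInvoiceNumber() { return invoiceNumber; }
        public int getLineNumber()       { return lineNumber; }
        public BigDecimal getAmount()    { return amount; }
    }

    Generating such classes automatically is what object-relational mapping tools do from table metadata; whether that is worthwhile here depends on how stable the ERP schema is.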

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like
    catch(Exception e){
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
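    For the narrower "trap exceptions in custom code" question, a sketch of a defensive wrapper filter is below (names hypothetical; this is not a fix for the stack trace above, where the failure happens during POF deserialization of the filter, before any evaluate logic runs, so the missing com.edmunds.vehicle.Style$PublicationState class on the storage members' classpath still has to be addressed):

    import com.tangosol.util.Filter;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Delegates to another Filter and logs instead of letting a RuntimeException
    // escape into the cache service thread.
    public class SafeFilter implements Filter {

        private static final Logger LOG = Logger.getLogger(SafeFilter.class.getName());

        private final Filter delegate;

        public SafeFilter(Filter delegate) {
            this.delegate = delegate;
        }

        public boolean evaluate(Object value) {
            try {
                return delegate.evaluate(value);
            } catch (RuntimeException e) {
                // Exclude the entry rather than propagating an unhandled exception.
                LOG.log(Level.WARNING, "Filter evaluation failed; excluding entry", e);
                return false;
            }
        }
    }

    To run on storage members such a wrapper would also need to be made POF/ExternalizableLite serializable, which is omitted here for brevity.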

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, and the production versions of the apps will overlay the development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is the best practice for this sort of thing? We've considered a couple of options:
    1.) Find a way to clone the database (RMAN or something else), that will leave the existing APEX environment intact? If such is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loath.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 5 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The issue is that you absolutely are going to need to version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which version of the backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, QA, integration, staging, production, etc.
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • What’s the best practice for this scenario?

    Hi,
    My users want the ability to change the WHERE and/or ORDER BY clause at runtime. They may define user preferences on each screen (which is bound to a view object). They want to see the same records based on the WHERE/ORDER BY defined on their last visit. That is why I keep the user's preferences and load the screen based on them, using:
    View.setWhereClause(...);
    View.setOrderByClause(...);
    View.executeQuery();
    This works well when only one user is working with the application, but performance drops when more than one user is working with it.
    What are the points to increase the performance and what is the best practice for this scenario?
    Thanks for your help in advance.

    Sung,
    I am talking about only 2 users in my testing. I am sure I missed something but could not figure out what.
    This page is my custom query page, including a tag to instantiate the app module in stateful mode at the top (<jbo:ApplicationModule..>), a tag to instantiate a data source (<jbo:Datasource...>), a release tag at the bottom (<jbo:ReleasePageResources..>), and some Java code in the middle (body). The Java code constructs the query statement and then fires the query to set up the view object based on that statement, using the methods above.
    So, I am facing very slow performance when two clients load this page at the same time. It looks like the entire application locks for others while one client loads this page and fires the query. I realized the bottleneck is where executeQuery() is executing.
    What do you think?
    Thanks in advance for your comments.
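    One point worth checking, not stated in the thread: if the WHERE clause is rebuilt with literal values for every user, each preference combination produces a distinct SQL statement. A sketch using a bind parameter instead (view object name is hypothetical; the placeholder syntax depends on the application module's SQL binding style):

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewObject;

    public class PreferenceQuery {

        // Applies the user's saved preferences with a positional bind parameter,
        // so the statement text stays the same across users and executions.
        public static void applyPreferences(ApplicationModule am,
                                            String statusPreference,
                                            String orderByPreference) {
            ViewObject vo = am.findViewObject("InvoiceView");   // hypothetical view object
            vo.setWhereClause("STATUS = :0");                    // or "STATUS = ?" for JDBC-style binding
            vo.setWhereClauseParam(0, statusPreference);
            // ORDER BY text coming from user preferences should be validated against a whitelist.
            vo.setOrderByClause(orderByPreference);
            vo.executeQuery();
        }
    }

    This does not remove the contention from sharing a stateful application module across concurrent JSP requests, which may be the real cause of the lock-ups described above, but it keeps the per-user query cheap to re-execute.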

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have a 50 MB circuit going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006 and at the 3rd site we are trying to connect to a catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but we have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using dot1q encapsulation, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. It works, but is this not creating a large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link and forming a neighbor relationship between the main and remote sites for the L3 routing to work.
    After this you should be able to reach the remote site hosts.
    HTH-Cheers,
    Swaroop
