Best Practice on Dependencies

We are new to Tidal (6.1) and create job or job group dependencies as follows:
the default is to use Match Occurrence;
Last Occurrence is only used for specific jobs that are demanded ad hoc.
This gives us problems when we manipulate jobs: suddenly we have non-matching occurrence numbers for jobs in the same job group, and dependencies fail to work as intended.
In many cases this could be solved by using Last Occurrence dependencies. On the other hand, we might want more than one occurrence of the job group scheduled for the same day, and then Last Occurrence won't work.
What is best practice?

1. Match Occurrence: use when the dependent job should wait on the same occurrence number of the job it depends on.
2. Last Occurrence: use when the dependent job should not be tied to a particular occurrence and should simply depend on the most recent occurrence of the predecessor job.
3. First Occurrence: use when you always want to depend on the first occurrence of the predecessor job.
The choice among these depends on your needs. Generally it matters when you schedule jobs with dependencies, or when output from the predecessor job is fed into the dependent job.

Similar Messages

  • Best practice for dependencies that require manual download?

    Hi,
    what is the right thing to do when creating a package that requires the user to manually download something? Specifically, I am creating a package that installs the LeapMotion software (a .deb file). As it requires a developer account to download, it has to be downloaded by the user and put somewhere. I've seen in the ttf-ms-win8 package that it just puts the file names into the source array and expects the user to put the files in place. But when I do the same, I get an error about the missing .deb file when running `makepkg --source`.
    My PKGBUILD can be found here.
    As this is my first attempt in creating a package for Arch, any other comments on my PKGBUILD are very welcome as well.

    Just a couple of tips for you. First, you don't need this anymore:
    cd "${srcdir}"
    All the package, build and prepare functions begin in ${srcdir}. Second, I'd recommend that you move to sha2 sums instead of md5. Third, it's really good practice to quote all your variables, including ${srcdir} and ${pkgdir}. Fourth, ${pkgname} should be lowercase. And finally, use `install` instead of `mv` for the package function.
    Your way of defining the source variable is perfectly fine, so other than the changes I would make above, it looks great.
    All the best,
    -HG

  • SAP PI conceptual best practice for synchronous scenarios

    Hi,
    Apologies for the length of this post but I'm sure this is an area most of you have thought about in your journey with SAP PI.
    We have recently upgraded our SAP PI system from 7.0 to 7.1 and I'd like to document best practice guidelines for our internal development team to follow.
    I'd be grateful for any feedback related to my thoughts below which may help to consolidate my knowledge to date.
    Prior to the upgrade we had implemented a number of synchronous and asynchronous scenarios using SAP PI as the hub at runtime, using the Integration Directory configuration.
    No interfaces to date are exposed directly from our backend systems using transaction SOAMANAGER.
    Our asynchronous scenarios operate through the SAP PI hub at runtime, which builds in resilience and harnesses the benefits of the queue-based approach.
    My queries relate to the implementation of synchronous scenarios where there is no mapping or routing requirement. Perhaps it's best that I outline my experience/thoughts on the 3 options and summarise my queries/concerns that people may be able to advise upon afterwards.
    1) Use SAP PI Integration Directory. I appreciate that going through SAP PI at runtime is not necessary and adds latency to the process, but the monitoring capability in transaction SXMB_MONI provides full access for audit purposes, and we have implemented alerting running hourly so all process errors are raised and handled accordingly. In our SAP PI Production system we have a full record of sync messages, while these don't show in the backend system as we don't have propagation turned on. When we first looked at this, the reduction in speed seemed to be outweighed by the quality of the monitoring/alerting, given that none of the processes are particularly intensive or require instant responses. We have some inbound interfaces called by two sender systems, so we have the overhead of maintaining the Integration Repository/Directory design/configuration twice for these systems, but the nice thing is that SXMB_MONI shows which system sent the message. Extra work, but seemingly for improved visibility of the process. I'm not suggesting this is the correct long-term approach; it simply states where we are currently.
    2) Use the Advanced Adapter Engine. I've heard mixed reviews about this functionality; there are obvious improvements in speed by avoiding the ABAP stack on the SAP PI server at runtime, but some people have complained about the lack of SXMB_MONI support. I don't know if this is still the case as we're at SAP PI 7.1 EHP1, but I plan to test and evaluate once Basis have set up the pre-requisite RFC etc.
    3) Use the backend system's SOAP runtime and SOAMANAGER. Using this option I can still model inbound interfaces in SAP PI but expose them using transaction SOAMANAGER in the backend ABAP system. [I would have tested out the direct P2P connection option but our backend systems are still at Netweaver 7.0 and this option is not supported until 7.1, so that's out for now.] The clear benefit of exposing the service directly from the backend system is obviously performance, which in some of our planned processes would be desirable. My understanding is that the logging/tracing options in SOAMANAGER have to be switched on while you investigate, so there is no automatic recording of interface detail for retrospective review.
    Queries:
    I have the feeling that there is no clear-cut answer as to which of the options above to select; the decision should be based upon the requirements.
    I'm curious to understand SAP's intention with these options:
    - For synchronous scenarios, is it assumed that the client should always handle errors, so the lack of monitoring is less of a concern and option 3 is desirable when no mapping/routing is required?
    - Not only does option 3 offer the best performance, but the generated WSDL, once built, is ready for any further system to implement, thereby offering the maximum benefit of SOA. Should we therefore always use option 3 whenever possible?
    - Is it intended that the AAE runtime should be used when available, but only for asynchronous scenarios or those requiring SAP PI functionality like mapping/routing, and that otherwise customers should use option 3? I accept there are some areas of functionality not yet supported with the AAE, so that would be another factor.
    Thanks for any advice, it is much appreciated.
    Alan
    Edited by: Alan Cecchini on Aug 19, 2010 11:48 AM
    Edited by: Alan Cecchini on Aug 19, 2010 11:50 AM
    Edited by: Alan Cecchini on Aug 20, 2010 12:11 PM

    Hi Aaron,
    I was hoping for a better, more concrete answer to my questions.
    I've had discussion with a number of experienced SAP developers and read many articles.
    There is no definitive paper that sets out the best approach here but I have gleaned the following key points:
    - Make interfaces asynchronous whenever possible to reduce system dependencies and improve the user experience (e.g. by eliminating wait times when they are not essential, such as by sending them an email with confirmation details rather than waiting for the server to respond)
    - It is the responsibility of the client to handle errors in synchronous scenarios, hence the monitoring lost with point-to-point services, compared to the detailed information in transaction SXMB_MONI for PI services, is not such a big issue. You can always turn on monitoring in SOAMANAGER to trace errors if need be.
    - The choice of integration technique varies considerably by release level (for PI and Netweaver), so the system landscape will be a significant factor. For example, we have some systems on Netweaver 7.0 and others on 7.1. As you need 7.1 for direct-connection PI services, we'd rather wait until all systems are at the higher level than have mixed usage in our landscape - it is already complex enough.
    - We've not tried the AAE option in a Production scenario yet, but this is only really important for high-volume interfaces, which is not a concern at the moment. Obviously cumulative performance may become an issue in time, so we plan to start looking at AAE soon.
    Hope these comments may be useful.
    Alan

  • Best Practices for File Organization/Project Explorer

    So we are finally getting SCC at my organization to manage our LabVIEW development, and that is good! 
    Now, we are starting in on discussions about how we should organize our files on disk and how we should use the Project Explorer. When I started here about 3 years ago, I wasn't very familiar with the project explorer, so I read the article at http://zone.ni.com/devzone/cda/tut/p/id/7197. Two of the main things I took away from that article are:
    1. Organize Files in a logical manner on disk. Whatever that is, it is not a flat file structure.
    2. The top level VI should be separate from other source code. Preferably, it should reside in the application folder.
    Push Back Against These Recommendations
    Before I was hired, most, if not all, LabVIEW development was done using a flat file structure, and the top-level VI lived with the source code. Since we didn't have proper SCC, each individual organized files as he saw fit. So I started using the Project Explorer (not even its use is totally accepted right now) and I began following recommendations 1 and 2 above. I didn't always follow #1 very strictly, but I have been working towards it, and I have always followed #2 religiously.
    Since we are starting these discussions on how we should organize files on disk I'm starting to get some push back to following these two recommendations.
    The arguments I get in favor of using a flat file structure are that you always know where every file is, including the top-level VI. It is also argued that it is a lot of effort to organize and search for VIs when they all reside in different folders. I think the fear is that by getting "clever" and organizing our files in such a manner we'll make things complicated and somehow shoot ourselves in the foot.
    The argument I get against separating the top level VI from the rest of the source code is that it:
    (a) It won't be clear where it is (as if it were buried within hundreds of VIs). However, it is argued, you can just put a "!" in front of the file name and then it is always at the top of the flat file structure.
    (b) An extension of argument (a) is that things either look or seem messy when VIs (including the top-level VI) don't live in a sub-folder and are just hanging out with the Project Explorer file.
    (c) I think there may be some fear of breaking the VI by moving it and altering its dependencies.
    Convincing Others its Good to Follow These Recommendations
    So, if I want to follow NI's recommendations, I need to come up with reasons why we should follow them. I should also state that I care about following these recommendations because it's what NI recommends. They've been around the block a few times and I'm sure there are good reasons why these are best practices. However, I don't think I've given a very compelling case for why these recommendations should be followed.
    So I'll tell you all what I think good reasons are for these recommendations and perhaps I can get some feedback or additional support? If I'm crazy for wanting to follow these recommendations maybe someone can point out why I'm crazy. 
    (a) Arguments for Following Both
    I. I passed the CLAD a couple of weeks ago, and I have started studying for the CLD. Part of the CLD is following both of these recommendations (see page 6 of http://ftp.ni.com/evaluation/certification/cld/cld_exam_prep_guide_english.pdf). While this isn't a reason in and of itself, it suggests that if it is important for certification, it is important in practice!
    II. If we hire new developers who are familiar with LabVIEW, they will most likely be familiar with these recommendations, especially if they are certified. That will lead to increased productivity out of the gate because they won't have to learn our special way of doing things.
    (b) Arguments for Organized File Structure
    I. Unused VIs are easier to identify and remove. Right now we never remove VIs because we don't know if they are used or not. This leads to a lot of VI bloat.
    II. It is hard to know what a specific VI's function is in a flat file structure just by looking at the name.
    (c) Arguments for Separating Top Level VI from Source Code
    I. The application folder is an intuitive place for the top-level VI. As long as the top-level VI is the only VI in the application folder, there is no mistaking that it is the top-level VI, especially once you open it. This makes it easy for new developers to find the top-level VI. I'd argue it isn't very intuitive for new developers to know that a VI in the source code folder prefixed with a "!" is the top-level VI.
    Summary
    So that is what I think so far. Is there anything else I am missing to support following those two recommendations or am I just being inflexible?
    Thanks!

    zenthoef,
    As a CLA, I have struggled with file structure over the years.  Here are my recommendations:
    1.  Put the top level VI and the project in the top-level folder.  This makes it very clear where to begin.
    2.  Put the remaining user interface VIs in a separate folder. Again, it makes it very clear what the functionality of these VIs is.
    3.  If you are using objects, put each object in a separate folder. Place the family of objects in one folder, with each object in a subfolder.
    4.  Keep the remaining VIs in a single folder. This can contain a small number of subfolders if your project is large, but too many folders make it hard to figure out where your VIs are. For example, you might have a DAQ subfolder, an Analysis subfolder, and a Report subfolder. But if you had a Test1 folder and a Test2 folder, and you had a VI that was used by both tests, where would it go? Keep it simple.
    5.  You inferred that it is hard to figure out what a VI does by its name.  That implies that 1) you need better names, and 2) your VIs are too complicated.  A VI should do a single function which can be adequately described by its name.  That VI might be something like Analyze Data.vi, which would contain a bunch more subVIs (like Get 1st Harmonics.vi), but each VI would contain a single function.  You wouldn't save the data to a report in the Analyze Data.vi, for example.
    The most compelling reason for following these suggestions is that it is easier to figure out what the code is doing after you haven't looked at it for a while. Once you have an application that is working and bug free, you shouldn't have to touch the code until you want to add features. If that is even 6 months later, you will probably have forgotten how the code works. As a consultant, I have had to update other people's code, and just figuring out where to start can be a challenge.
    Tom Brass
    Certified LabVIEW Architect
    Saint Bernard Engineering, Inc.
    www.saintbernardengineering.com

  • Best practice for replicate the IDM (OIM 11.1.1.5.0) environment

    Dear,
    I need to replicate IDM production to build the IDM test environment. I have OIM 11.1.1.5.0 in production with a lot of custom code.
    I have two approaches:
    approach1:
    Manually deploy the code through JDeveloper and export and import all the artifacts. The issue is that this will require a lot of time and resolving dependencies.
    approach2:
    Take the OIM, MDS and SOA schemas export and Import the same into new IDM Test DB environment.
    Could you please suggest what the best practice is, and share any pointers to achieve the same.
    Appreciate your help.
    Thanks,

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • "Best practice" for components calling components on different panels.

    I'm very new to Swing. I have been learning from tutorials, but these are always relatively simple interfaces, in which every component and container is initialised and added in the constructor of a main JFrame (extension) object.
    I would assume that more complex, real-world examples would have JPanels initialise themselves. For example, I am working on a project in which the JFrame holds multiple JPanels. One of these Panels holds a group of JToggleButtons (grouped in a ButtonGroup). The action event for each button involves calling the repaint method of one of the other Panels.
    Obviously, if you initialise everything in the JFrame, you can simply have the ActionListener refer to the other JPanel directly, by making the ActionListener a nested class within the JFrame class. However, I would like the JPanels to initialise their own components, including setting the button actions, by using an extension of JPanel which includes the ActionListeners as nested classes. Therefore the ActionListener has no direct access to the JPanel it needs to repaint.
    What, then, is considered "best practice" for allowing these components to interact (not simply in this situation, but more generally)? Should I pass a reference to the JPanel that needs to be repainted to the JPanel that contains the ActionListeners? Should I notify the main JFrame that the Action event has fired, and then have that call "repaint"? Or is there a more common or more correct way of doing this?
    Similarly, one of the JPanels needs to use a field belonging to the JFrame that holds it. Should I pass a reference to this object to the JPanel, or should I have the JPanel use "getParent()", or some other method?
    I realise there are no concrete answers to this query, but I am wondering whether there are accepted practices for achieving this. My instinct is to simply pass a JPanel reference to the JPanel that needs to call repaint, but I am unsure how extensible this would be, how tightly coupled these classes would become.
    Any advice anybody could give me would be much appreciated. Sorry the question is so long-winded. :)
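    As a minimal sketch of one option described above - handing a reference to the panel that needs repainting to the panel that owns the buttons - something like the following could work; the class names DisplayPanel and ControlPanel are invented for the example and are not from the original post.
    import java.awt.Graphics;
    import javax.swing.JPanel;
    import javax.swing.JToggleButton;

    // Panel that must be repainted when a toggle button changes state.
    class DisplayPanel extends JPanel {
        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            // ... draw according to the current state ...
        }
    }

    // Panel that owns the toggle buttons; it receives the panel to repaint
    // through its constructor, so its nested/anonymous listeners can reach it.
    class ControlPanel extends JPanel {
        ControlPanel(final DisplayPanel target) {
            JToggleButton modeA = new JToggleButton("Mode A");
            modeA.addActionListener(e -> target.repaint());
            add(modeA);
        }
    }
    The cost of this approach is the coupling the poster mentions: ControlPanel cannot be constructed without a DisplayPanel.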

    Hello,
    nice to get feedback.
    I've been looking at a few resources on this issue from my last post. In my application I have been using the Observer and Observable classes to implement the MVC pattern suggested by T.PD.(...)
    Two issues (not fatal, but annoying) with this are:
    -Observable is a class, not an interface; since most of my Observers already extend JPanel (or some such), I have had to create inner classes.
    -If an Observer is observing multiple Observables, it will have to determine which Observable called its update() method (by using reference equality or class comparison or whatever). Again, a very minor issue, but something to keep in mind.
    I don't deem those issues minor. The second one in particular is rather annoying in terms of maintenance ("Err, remind me, which widget is calling this update() method?").
    In addition to that, Observable/Observer are legacy non-generified classes that incur a loosely-typed approach (the subject and context arguments to the update(Observable subject, Object context) method give hardly any info in themselves, and they generally have to be cast to provide app-specific information).
    Note that the "notification model" of AWT and Swing widgets is not Observer/Observable, but merely EventListener. Although we can only guess what reasons made them develop a specific notification model, I deem it essentially stems from those issues.
    The contrasting approaches are discussed in this article by Bill Venners: The Event Generator Idiom (http://www.artima.com/designtechniques/eventgenP.html).
    N.B.: this article is from a previous-millennium series of "Design Techniques" articles that I found very useful when I learned OO design (GUI or not).
    One last nail against the Observer/Observable model: these are general classes that can be used regardless of the context (GUI/non-GUI code), so this makes it easier to forget about Swing threading rules when using them (essentially: is the update method called in the EDT or not).
    If anybody has any information on the performance or efficiency of using Observable/Observer...
    I would be very surprised if this had any performance impact. If it had, that would mean that you have either:
    - a lot of widgets that are listening to one another (and then the Mediator pattern is almost a must to structure such entangled dependencies). And even then I don't think there could be any impact below a few thousand widgets.
    - expensive or long-running computation in the update methods. That's unrelated to the notification model itself.
    - a lot of non-GUI components that use the Observer/Observable to communicate among themselves - all the more risk then, to have a GUI update() called outside the EDT, see remark above.
    (or whether there are inbuilt equivalents for Swing components)
    See discussion above.
    As far as your remark 2 goes (if one observer observes more than one subject, the update() method contains branching logic): this also occurs with the Event Delegation model; for example, it is quite common for people to complain that their actionPerformed() method becomes unwieldy when the same class listens to several JButtons.
    The usual advice for this is: use anonymous listeners, each of which handles the event from only one source (and generally sits very close in the code to the definition of that source) and simply translates the "generic" event notification into a specific method call on a Controller or Mediator.
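    A minimal sketch of that advice, using an invented controller interface (the names are assumptions, not code from this thread): each button gets its own anonymous listener that simply forwards to a specific controller method, so no branching on the event source is needed.
    import javax.swing.JButton;
    import javax.swing.JPanel;

    // Invented mediator/controller interface for the example.
    interface DialogController {
        void okPressed();
        void cancelPressed();
    }

    class ButtonPanel extends JPanel {
        ButtonPanel(final DialogController controller) {
            JButton ok = new JButton("OK");
            JButton cancel = new JButton("Cancel");
            // One anonymous listener per source; each one translates the generic
            // event notification into a specific call on the controller.
            ok.addActionListener(e -> controller.okPressed());
            cancel.addActionListener(e -> controller.cancelPressed());
            add(ok);
            add(cancel);
        }
    }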
    Best regards.
    J.
    Edited by: jduprez on May 9, 2011 10:10 AM

  • Best Practice for making material obsolete

    Hi Experts,
    Could anyone please advise me what the best practice is to make a material obsolete? If there is no single best practice, what are the different ways it can be made obsolete?
    Please advise me on this.
    Thanking you in advance
    Regards,
    Gopalakrishnan.S

    Archiving is a process rather than a transaction. You have to take care that you fulfil local laws and retain your data for x years for internal and external auditors.
    Archiving requires customizing: you have to tell SAP where your archive is, what name it has, how big it can be, and when (definition of the retention period) a document can be archived.
    You have to talk with your business about how long they want the data kept in the production system (this depends on how long they need to run reports on this data).
    You have to define who can access the archived data and how this data is accessed.
    Who runs archiving?
    Tough question. The data owner is the business, so they should run it.
    But archiving just fragments your tables.
    So many companies have a person who is full-time responsible for archiving and does it for all business units and organisations.
    Archiving is not something that can be done "on the fly"; you will encounter all kinds of errors, and processes that are not designed well will certainly create a lot of problems in archiving and need to be reworked.
    I had an archiving project with 50 days of project time and could develop guidelines to archive about 30 different objects. There are 100 more objects possible, and we have still not archived master data like materials, vendors and customers because of the many dependencies and objects that need to be archived prior to those objects.

  • Best practices to include client libraries used at component level

    How to include component level resources, while following the best practices.
    Ex:
    I am looking at the geometrixx media site in CQ 5.6. In some of the components, e.g. 2-col-article-summary, we have a client library defined under the component.
    /apps/geometrixx-media/components/2-col-article-summary
    -2-col-article-summary.jsp
    -clientlibs
      -css
      -css.txt
    If I look at the categories of the clientlib, it is defined as follows:
    categories String[] apps.geometrixx-media, apps.geometrixx-media.2-col-article-summary
    The only place this client library is included is in the head.jsp of main page level component.
        <cq:includeClientLib categories="apps.geometrixx-media.all"/> - this in turn embeds the apps.geometrixx-media
    my questions are as follows
    1) Why do we have two categories for the clientlib if the second category name (apps.geometrixx-media.2-col-article-summary) is not being used? Is there some other usage for this that I am missing?
    2) Also, this set of CSS is always included, whether or not the specific component is added to the page.
    3) I could use the following to include the client library at the component level, but this will cause unnecessary <script> and <link> elements in the component-level markup:
       <cq:includeClientLib categories="apps.geometrixx-media.2-col-article-summary"/>
    Essentially I am trying to understand how to include specific component-level resources while following best practices.

    Hi,
    I don't have a CQ 5.6 setup so I could not look at the example you refer to, but based on the description I can say:
    Ans 1. The client library can be invoked directly through the tag lib as <cq:includeClientLib categories="category name"/>, but it can also be invoked when you add a "dependencies" property to a client library folder, in which case all the dependencies are resolved first. So, to answer your question: by looking at the client library folder configuration alone you cannot say that a specific category is unused or never invoked.
    Ans 2. Whether a client library folder is invoked depends entirely on where your code places the <cq:includeClientLib> call and on the dependencies configuration. So you have to dig further to trace all the calls (including the default CSS/JS loads).
    Ans 3. Correct. To accomplish that, the best way is to manage the client library at the component level and give it a unique category name that is not invoked anywhere else, neither through a <cq:includeClientLib> call nor through a dependencies configuration. This way you avoid overriding the same library files. (It is better to maintain a proper hierarchy of libraries.)
    Hope it gives you some idea.
    Thanks,
    Pawan

  • SQL 2012 service accounts best practice

    I'm installing SQL Server 2012 for ConfigMgr 2012 r2 and I wonder what is the best practice for SQL service accounts.
    During the installation of SQL Server, in the server configuration/Service Accounts page I'm allowed to configure the following service accounts: SQL Server Agent, SQL Server Database Engine, SQL Server Reporting Services, and SQL Server Browser.
    Do I have to create separate domain user (not admin) accounts for each service and configure a service principal name (SPN) for all of them?
    For example: a domain user account named SQLSA for SQL Server Agent, another domain user account SQLADBE for the SQL Server Database Engine, etc.

    During the installation of SQL Server 2012, the user is prompted to provide service account credentials. The default service accounts suggested vary depending on whether SQL Server 2012 is installed on a computer running Windows Vista or Windows Server 2008, or on a computer running Windows 7 or Windows Server 2008 R2. On computers running Windows Vista or Windows Server 2008, the following default service accounts are used:
    - NETWORK SERVICE: Database Engine, SQL Server Agent, Analysis Services, Integration Services, Reporting Services, SQL Server Distributed Replay Controller, SQL Server Distributed Replay Client
    - LOCAL SERVICE: SQL Server Browser, FD Launcher (Full-Text Search)
    - LOCAL SYSTEM: SQL Server VSS Writer
    On computers running Windows 7 or Windows Server 2008 R2, the following default accounts are used:
    - Virtual Account or Managed Service Account: Database Engine, SQL Server Agent, Analysis Services, Integration Services, Replication Services, SQL Server Distributed Replay Controller, SQL Server Distributed Replay Client, FD Launcher (Full-Text Search)
    - LOCAL SERVICE: SQL Server Browser
    - LOCAL SYSTEM: SQL Server VSS Writer
    For Windows 7 and Windows Server 2008 R2, you can use a Managed Service Account (MSA) or a Managed Local Account. The differences between these account types are as follows:
    - Managed Service Account (MSA): this special kind of domain account, managed by a domain controller, is assigned to a single member computer and used for running services. The MSA password is managed by the domain controller. MSAs can register a Service Principal Name (SPN) with Active Directory. MSAs use a $ name suffix; for example, CONTOSO\SQL-A-MSA$. You must create the MSA prior to running SQL Server Setup if you want to use an MSA with SQL Server services.
    - Virtual Accounts or Managed Local Accounts: these virtual accounts can access the network in a domain environment and are used by default for service accounts during SQL Server 2012 setup when run on Windows 7 or Windows Server 2008 R2. Such accounts use the NT SERVICE\<SERVICENAME> format. You don't need to specify a password when using virtual accounts with SQL Server 2012 because this is handled automatically by the operating system.
    You should run SQL Server services using the minimum possible user rights, and use an MSA or virtual account when possible. If you are manually configuring service accounts, use separate accounts for different SQL Server services. If it is necessary to change the properties of service accounts used for SQL Server 2012, use SQL Server tools such as SQL Server Configuration Manager. This ensures that all necessary dependencies are updated, which does not happen if you use only the Services console.
    Although you can configure domain accounts as service accounts, this strategy requires more effort because you must ensure that service account passwords are changed regularly. You must also manage SPNs, which are required for Kerberos authentication.
    Best regards
    P.Ceglie

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is:
    Re: OSB: What is best practice for reading configuration information
    Another could be:
    uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for following.
    1] I have a .xsd file which represents the configuration data. The structure of the XML is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project will move from one env to another the property-value will change according to the Environment...
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to each dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |     |--Dev_Config_file.xml
    |
    |---Stage
    |     |--Stage_Config_file.xml
    |
    |---Prod
    |     |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running - say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of the global variable is Dev, I will load at runtime the XML config file under the Dev directory containing the key/value pairs for the Dev environment.
    6] This Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
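    To illustrate the extraction logic only (sketched in plain Java rather than OSB XQuery/XPath; the path layout is an assumption taken from the example URLs above):
    // Given the path carried by $inbound/ctx:transport/ctx:uri, e.g.
    // "/service/stage/service1", pull out the environment segment.
    public class EnvFromUri {
        static String environmentOf(String uriPath) {
            String[] segments = uriPath.split("/"); // ["", "service", "stage", "service1"]
            return segments.length > 2 ? segments[2] : "unknown";
        }

        public static void main(String[] args) {
            System.out.println(environmentOf("/service/stage/service1")); // prints "stage"
        }
    }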

  • Best Practices - Distributing Dynamic VI's with LV2011

    I'm distributing code which consists of a main program that calls existing (and future) VIs dynamically, but one at a time. The dynamically called VIs do not have input or output terminals. They run, one at a time, in a sub-panel in the main program. The main program needs to maintain a reference to the dynamically loaded VI so it can be sure the dynamically loaded VI has fully stopped before unloading it and calling a replacement VI. These VIs do not use Shared Variables or Globals, but may have a few VIs in common with the main program (it would be OK to duplicate these VIs in the release).
    With that background, what are the best practices these days for releasing dynamically loaded vi's (and their dependents)?
    If I use a Project Library (.lvlib), it would seem that I need to first build a .exe containing the top-level VIs (the ones to be dynamically loaded), so that a separate .lvlib can be generated which includes their dependencies. The contents of this .lvlib and a .lvlib containing the top-level VIs can then be merged to create a single .lvlib, and then a packed library can be generated for distribution with the main .exe.
    This seems way too involved (but necessary?)
    My goal is to simply have a .exe for the main program, and some other structure containing the dynamically called VIs and their dependents. This seemed so straightforward when a .exe was really a .llb a few years ago.
    Thanks in advance for your feedback.
    Solved!
    Go to Solution.

    A great source of information I've found since posting is here:
    http://zone.ni.com/devzone/cda/pub/p/id/1261
    regarding packed libraries. Bottom line - they automatically include the dependencies of the top-level dynamically linked VIs placed in the .lvlib from which the .lvlibp is built.
    I cannot seem to find an example of dynamically calling a vi within a packed library. If I use the old .exe as llb method, I get an Error 7.

  • Best Practices in Building/Stage-In

    What are the best practices for the building/stage-in process? I tried to use a CVS repository to have employees stage in their projects; then a pair of designated builders check them out and compile them, completing the elevation to production.
    I'm experiencing challenges, however - namely:
    1) Our organization's process is that, should development activities continue, the developer needs to formally check out the production copy compiled by the builders.
    2) Eclipse build paths are relative to the developer's computer, so references to JARs are invalid and need to be resolved manually.
    - I attempted to use a shared folder mapped to a drive (drive T:\), but is there any better way of making sure the correct JARs are referenced and that the references will still work after commit?
    3) CVS conflicts arise when we try to check in a project (from production) that already has a newer copy committed by a developer (this is not allowed, but for the purposes of SIT we needed to test this case).
    Also, we're using Eclipse Build. Is there any better process for building?
    Thanks a lot for your help.

    801661 wrote:
    1) Our organization's process is that should development activities continue, the developer needs to formally check out the production copy, compiled by the builders.
    No idea what you're trying to say here.
    2) Eclipse build paths are relative to the computer of the developer, so references to JARs are invalid and need to be resolved manually.
    - I attempted to use a shared folder, mapped to a drive (Drive T:\) but is there any better way of making sure the correct JARs are referenced and that the references will still work after commit.
    Not sure what you mean about "references still working after commit."
    3) CVS conflicts arise when we try to check in a project (from production) that already has a newer copy committed by a developer (this is not allowed, but for the purposes of SIT we needed to test this case).
    No idea what you're saying here. The whole notion of "checking in from production" does not compute.
    In general, however, any time two developers work on the same file, there's a chance for conflicts. Most VCSs come with a conflict resolution/merge tool.
    Also we're using Eclipse Build. Is there any better process of building?
    An IDE's build can be fine for individual developers' intra-day builds, but for nightly builds that are to be promoted to QA or Production, you'll usually use a build tool, such as ant or maven, possibly driven by another process-managing tool such as cruisecontrol.
    Other than that, I'm not following exactly what your processes or problems are, so I'll just try to offer some general tips that have worked for me.
    1. When it's time to cut an official build from the latest checked-in code, briefly disallow checkins, whether by shouting over cube-tops or by administratively enforcing a lock on your VCS. (I don't know if CVS supports that or not, but most VCSs should.) The buildmaster labels the current state of "main" or "trunk", and may even preemptively create a branch that's rooted there. Once the label has been applied, checkins can be re-enabled. This is all often done automatically late at night.
    2. The buildmaster does a fresh checkout against the new label, and builds from there.
    3a. For 3rd-party jars that your application uses, create a spot in the repository, e.g. /thirdparty, and stick the jars in there, using whatever directory layout and versioning is appropriate for you. When you label for a build in step 1, make sure you label the thirdparty tree as well, so that you can always get back to the proper version of the entire repository for a given build.
    3b. Alternatively, there's a tool called maven that can automate and simplify (after an initial learning curve) the management of those dependencies.
    4. For paths that are needed by the developers' environments, pick a standard location. Developers can either go with that, and all its attendant simplicities, or they can arrange things how they want, but they are still individually responsible for a) getting their work done in a timely fashion, b) managing their own environment, without being able to rely on the common knowledge of the rest of the group, and c) not doing things that rely on their particular arrangement and hence end up breaking for everyone else.
    5. CVS, while serviceable for simpler projects, lacks some advanced features that other VCSs have. Subversion is a pretty good free tool. It was based on CVS, I think, or at the very least, its commands are almost identical to CVS's in a lot of cases. Perforce is somewhat more feature rich, I think, but quite a bit more complex, and not free. Git is supposed to be gaining popularity, and is free I think, but I've never used it. Clearcase is very powerful, but it's also expensive, and pretty much requires a full-time admin.

  • [Beginner] Best practice Backing Bean, DataBinding

    Hi everybody,
    I followed an oracle course in November 'Oracle Fusion Middleware 11g: Build Applications with ADF I'
    Great, now I know what the different parts of the framework are.
    But now I'd like to go a bit in depth.
    Here is a simple example
    Login page
    Backing Bean
    Session Bean
    Read only View Object (select from view)
    Application module
    We have a username and password in a table but the password is encrypted in the column, in fact this is the checksum of the password.
    Here is what should be done.
    Login page (username, password) --> proceed button --> get data from VO --> compare usernames --> transform password coming from the login page (MD5) --> compare it to the database password -->
    setting params to session bean --> redirecting to login2 page.
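    A minimal sketch of the MD5 step described above, assuming the column stores a hex-encoded MD5 checksum (the post does not say how the checksum is encoded):
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    final class PasswordCheck {
        // Hash the password entered on the login page.
        static String md5Hex(String clearText) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(clearText.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        // Compare it with the checksum read from the database via the VO.
        static boolean matches(String enteredPassword, String storedChecksum)
                throws NoSuchAlgorithmException {
            return md5Hex(enteredPassword).equalsIgnoreCase(storedChecksum);
        }
    }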
    Here is what I have actually
    I have a tuned AM with a java class having a method doLogin(String username, String password) with a String return value.
    This method is exposed to the UI via a client interface.
    Here are my questions.
    Where do I have to check and transform all those params? In the read-only VO, via a tuned Java class extending ViewRowImpl, or in the tuned AM Java class?
    And now for the session bean: where do I have to instantiate it? In the backing bean, I suppose.
    Wouldn't it be better to call the client interface from the backing bean, then instantiate the session bean, with params coming from the AM, in that backing bean?
    I have so many question that I don't know where to begin. :-(
    Kind Regards for your help.
    Stessy Delcroix
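    A minimal sketch of the backing-bean approach described above (calling the exposed doLogin client interface through the ADF binding layer); it assumes a doLogin method action exists in the page definition, so treat the names as assumptions rather than code from this thread:
    import oracle.adf.model.BindingContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class LoginBackingBean {
        private String username;
        private String password;

        public String proceed() {
            // Resolve the doLogin method action declared in the page definition.
            BindingContainer bindings =
                    BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding doLogin = bindings.getOperationBinding("doLogin");
            doLogin.getParamsMap().put("username", username);
            doLogin.getParamsMap().put("password", password);
            String result = (String) doLogin.execute();
            // The result (and any other values) could then be copied into the
            // session bean and used to pick the navigation outcome, e.g. "login2".
            return result;
        }
        // getters/setters for username and password omitted
    }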

    Hi Frank,
    thanks for your response.
    Now where do I have to implement the logic behind the login?
    Let me explain the application a bit.
    We have coordinators and partners who can connect to the website.
    If a coordinator didn't change the passwords for both coordinator and partner, the partner cannot log in,
    and a message will be displayed on the login page.
    If the username or password is not correct, another message will be displayed.
    Now how do I return the params of the query?
    Do I have to implement the logic at the AM level or at the VO level?
    I mean calling findViewObject in the AM, playing with the result of the query, implementing the logic and returning the params that will have to be set in a session bean.
    Here is the query we have
    SELECT A.PASSWORD, A.TYPE, A.PROPOSAL_ID, R.CALL_ID, R.INSTRUMENT_ID, R.ONLINE_PREPARATION, A.STATUS, I.ANNEXES_UPLOAD, C.SINGLE_PARTNER, I.REFEREE_FUNCTIONALITY, A.REFEREE_ID, I.INSTRUMENT_CODE, C.TWO_PARTB_FUNCTIONALITY, C.STAGE_TYPE, EIL.SETUP_TYPE, C.CALL_CODE, I.MANDATORY_ANNEXES
    FROM EPSS_CREDENTIALS_VIEW A, EPSS_REGISTRATION R, EPSS_INSTRUMENT I, EPSS_CALL C, EPSS_INSTRUMENT_LIST EIL
    WHERE USERNAME = ? AND A.PROPOSAL_ID = R.PROPOSAL_ID AND R.INSTRUMENT_ID = I.INSTRUMENT_ID AND C.CALL_ID = R.CALL_ID AND I.INSTRUMENT_CODE = EIL.INSTRUMENT
    All the values except Password will be set in a session bean.
    I know that question is a bit annoying, but I don't want to start implementing things in the wrong way and end up with dependencies between layers.
    It's strange that no best-practice examples have been published yet.
    Thanks a lot for your precious help.
    Regards,
    Stessy
    Edited by: Stessy.Delcroix on Mar 10, 2011 3:51 PM

  • Building complex flash game in Flash Builder 4 - Workflow/Best Practices

    I'm investigating switching to Flash Builder 4 for building a complex game that currently lives purely inside Flash CS4.  CS4 is a pretty terrible source code editor and debugger.  It's also quite unstable.  Many crashes caused by bad behavior in the SWF will take out the entire IDE so are almost impossible to debug.  And I've heard other horror stories.  To be clear, for this project I'm not interested in the Flex API, just the IDE.
    Surprisingly, it seems Flash Builder 4 isn't really set up for this type of development.  I was hoping for an "Import FLA" option that would import my Document Class, set it as the main entry point, and figure out where other assets live and construct a new project.  What is the best workflow for developing a project like this?
    What I tried:
    -Create a new ActionScript project in the same directory where my CS4 project lives
    -Set the primary source file to match the original project's source file and location
    -Set my main FLA as "export to SWC", and added "SWC PATH" to my flash builder 4 project.
    -Compile and run. I received many errors due to references to stage instances. I changed these to getChildByName("stagename"). Instead, should I declare them as members of the main class? (This would mimic what Flash CS4 does.)
    -My project already streams in several external SWFs. I set these to "Export SWC" to get compile-time access to classes and variables. This works fine in CS4; the loaded SWFs behave as if they were in the native project. Is the same recommended with FB4?
    -Should I also be setting the primary FLA as "export to swc"?  If not, how do I reference it from flex, and how does flex know which fla it should construct the main stage with?
    Problems:
    -I'm getting a crash inside a class that is compiled in one of the external SWF's (with SWC).  I cannot see source code for the stack inside this class at all.  I CAN see member variables of the class, so symbol information exists.  And I do see the stack with correct function names.  I even see local variables and function parameters in the watch window! But no source.  Is this a known bug, or "by design"? Is there a workaround?  The class is compiled into the main project, but I still cannot see source.  If FLEX doesn't support source level debugging of SWC's, then it's pretty useless to me.   The project cannot live as a single SWF.  It needs to be streaming and modular for performance and also work flow. I can see source just fine when debugging the exact same SWC/SWF through CS4.
    -What is the expected workflow with artists/designers working on the project? Currently they just have access to all the latest source, and to test changes they run right through Flash. Will they be required to license Flash Builder as well so they can test changes? Or should I be distributing the main "engine" as a SWF, and having it reference other SWF files that artists can work on? Then they compile their SWF in CS4, and to test the game, they can load the SWF I distribute.
    A whitepaper on this would be awesome, since I think a lot of folks are trying to go this direction.  I spent a long time searching the web and there is quite a bit of confusion on this issue, and various hacks/tricks to make things work.  Most of the information is stale from old releases (AS2!).
    Given a clean workflow, I would happily adopt Flash Builder 4 as the new development tool for all the programmers. It's a really impressive IDE with solid performance, functional intellisense, a rich and configurable interface, and a responsive debugger... I could go on and on. One request is shipping with a "Visual Studio keyboard layout" for us C++ nerds.
    Thanks very much for reading this novel!

    Flash Builder debugging is a go! Boy, I feel a bit stupid; you nailed the problem, Jason - I didn't have "Permit debugging" set. I didn't catch it because debugging worked fine in CS4 - because, well, CS4 doesn't obey this flag, even for externally loaded SWF files (I think as long as it has direct access to the SWC). Ugh.
    I can now run my entire, multi SWF, complex project through FB with minimal changes.  One question I do have:
    In order to instantiate stage instances and call the constructor of the document class, I currently load the SWF file with LoaderContext.  I'm not even exporting an SWC for the main FLA (though I may, to get better intellisense).  Is this the correct way of doing it?  Or should I be using , or some other method to pull it into flex?  They seem to do the same thing.
    The one awful part about this workflow is that since almost all of my code is currently tied to symbols and lives in the SWF, any change I make to code must first be recompiled in CS4, then I have to switch back to FB. Over time I'm going to restructure the whole code base to remove the dependency of having library symbols derive from my own custom classes. It's just a terrible workflow for both programmers and artists alike. CS5 will make this better, but still not great. Having a clean code base and abstracted-away assets that hold no dependencies on the code seems like the way to go with Flash. Realistically, in a complex project, artists/designers don't know how to correctly set up symbols to derive from classes anyway; it must be done by a programmer. This will allow for tighter error checking and less guesswork. Any thoughts on this?
    Would love to beta test CS5 FYI seeing as it solves some of these issues.
    Date: Thu, 21 Jan 2010 15:06:07 -0700
    Subject: Building complex flash game in Flash Builder 4 - Workflow/Best Practices
    How are you launching the debug session from Flash Builder? Which SWF are you pointing to?
    Here's what I did:
    1) I imported your project (File > Import > General > Existing project...)
    2) Create a launch configuration (Run > Debug Configuration) as a Web Application pointing to the FlexSwcBug project
    3) In the launch config, under "URL or path to launch" I unchecked "use default" and selected the SWF you built (I assume from Flash Pro: C:\Users\labuser\Documents\FLAs\FlexSwcBug\FlexSwcBugCopy\src\AdobeBugExample_Main.swf)
    4) Running that SWF, I get a warning "SWF Not Compiled for Debugging"
    5) No problem here. I opened Flash Professional to re-publish the SWF with "Permit debugging" on
    6) Back In Flash Builder, I re-ran my launch configuration and I hit the breakpoint just fine
    It's possible that you launched the wrong SWF here. It looks like you setup DocumentClass as a runnable application. This creates a DocumentClass.swf in the bin-debug folder and by default, that's what Flash Builder will create a run config for. That's not the SWF you want.
    In AdobeBugExample_Main.swc, I don't see where classCrashExternal is defined. I see that classCrashMainExample is the class and symbol name for the blue pentagon. Flash Builder reads the SWC fine for me. I'm able to get code hinting for both classes in the SWC.
    Jason San Jose
    Quality Engineer, Flash Builder

  • Usage of Efxclipse Controls like FilterableTreeTable - Best practice?

    Hello all,
    what is the best practice for using efxclipse controls?
    Currently I have linked the needed jars from the Eclipse plugins folder (i.e. the efxclipse controls jar) to my project to use features like the FilterableTreeTable. But this cannot be the best practice. If there is no Maven support, there should be another solution, right?
    thanks in advance and best regards,
    Frank

    Hi,
    I published the controls bundle to https://oss.sonatype.org/content/repositories/releases/ including all the transitive dependencies.
    > <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    > <modelVersion>4.0.0</modelVersion>
    > <groupId>my.test</groupId>
    > <artifactId>my.test.app</artifactId>
    > <version>0.0.1-SNAPSHOT</version>
    >
    > <dependencies>
    > <dependency>
    > <groupId>at.bestsolution.eclipse</groupId>
    > <artifactId>org.eclipse.fx.ui.controls</artifactId>
    > <version>2.0.0</version>
    > </dependency>
    > </dependencies>
    >
    > <repositories>
    > <repository>
    > <id>oss</id>
    > <url>https://oss.sonatype.org/content/repositories/releases/</url>
    > </repository>
    > </repositories>
    >
    > </project>
    I'll publish more artifacts in the days to come.
    Tom
    On 13.07.15 22:25, Thomas Schindl wrote:
    > It's on my todo list to publish some parts of efxclipse at Maven Central
    > but I did not yet have time - I'll keep you posted
