A must-read on best practices when starting out in Designer

Hi,
Here is a link to a blog by Vishal Gupta on best practices when developing XFA Forms.
http://www.adobe.com/devnet/livecycle/articles/best-practices-xfa-forms.html
Please go read it now; it is excellent :-)
Niall

I followed the two links below. I think the steps should be the same even though the links describe a 2008 R2 migration.
http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
Hope this helps!

Similar Messages

  • Best practices when carrying forward audit adjustments

    Dear experts,
    I would like to know if someone can share their best practices for performing the carry forward of audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for December period, XXXX.DEC and XXXX.AUD
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now, assume that we are in October but, for whatever reason, the accountant runs a carry forward for February. He is going to impact the opening balance, because at this time (October) we already have the audit adjustments.
    How to avoid such a thing? What are the best practices in this case?
    I guess it is something that you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette

    Cookman and I have been arguing about this since the Paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. Can't tell you how many times a client has said "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, adding "wear and tear on the tape and the deck." And then there's the moment where you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan through material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline, you can learn the material just as thoroughly once it's been captured.

  • Best practice for checking out a file

    Hello,
    What is the SAP best practice for checking out a file? (DTR -> Edit or DTR -> Edit Exclusive?)
    What are pros and cons of checking out a file exclusively?
    Thanks
    MLS

    Thanks Pascal.
    Also, I think if a developer checks out exclusively, makes changes, and leaves the company without checking in, the only option is to revert those files, in which case all their changes will be gone.

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is best practice when modifying an SAP Standard Development Component (Java Web Dynpro)? I'm looking for the best method of modifying an SAP Standard DC so that my changes will be kept (or need little maintenance) after a new service package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
    'How to use Business Packages in Enterprise Portal 6.0' is available at this link:
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check out for the best practices.
    Regards,
    Harini S

  • Best practices to secure out-of-band management access

    What are the best practices for securing out-of-band management (OOBM) access?
    I'm planning to put in a DSL link for OOBM. I have a console switch which supports SSH and VPN based on IPsec with NAT traversal. My questions are -
    Is it secure enough?
    Do I need to have a router/firewall in front of the console switch?
    I'm planning to put in a Cisco 1841 router as an edge router. What do you think?
    Any suggestions would be greatly appreciated.

    Hi,
    You're going to have OOB access via VPN?
    This is pretty secure (if we are talking about IPsec).
    An 1841 should work fine.
    You can check the design recommendations here:
    www.cisco.com/go/srnd
    Choose the security section...
    Hope it helps.
    Federico.

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what the best practice is when using Tangosol with an app server (WebSphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions, and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with WebSphere is that "classpath" is a rather vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial/error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there have been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per
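    For what it's worth, here is a minimal sketch of the "startup class" idea from the original question. It assumes the classic Coherence API (CacheFactory.getCache); CacheHolder and the cache name "app-cache" are hypothetical, and the main method simply prints which tangosol-coherence.xml the class loader resolves, per the classpath question above:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical startup holder: keeps a strong reference to the cache
    // for the lifetime of the server, so removing an EJB never releases
    // the cache handle.
    public final class CacheHolder {
        private static volatile NamedCache cache;

        private CacheHolder() { }

        public static synchronized NamedCache getCache() {
            if (cache == null) {
                // "app-cache" is a placeholder; use your configured cache name
                cache = CacheFactory.getCache("app-cache");
            }
            return cache;
        }

        // Prints the URL of the tangosol-coherence.xml that this class
        // loader sees first, so you can verify which file Coherence finds.
        public static void main(String[] args) {
            System.out.println(CacheHolder.class.getClassLoader()
                    .getResource("tangosol-coherence.xml"));
        }
    }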

  • Best practices when making service requests

    We've been working on moving our old services, which were built with a different service request tool, into RequestCenter, and we were wondering if anyone has any thoughts about best standards or practices for the new forms that they would be willing to share.  For example, one such standard might be that the customer/initiator information will always be displayed at the top of the request.
    Are there any other standardizations you could share that help lend consistency and provide improved readability for request forms?  Maybe someone has a design framework guide they would be willing to share?
    Thanks!
    Tim

    Thanks for the comments and the book suggestion.
    We've been placing the customer information at the top because we wanted the customer to review the information before submitting the form.  Our LDAP data is somewhat spotty and we want to make sure we have the right information when the form is submitted, but I can see the advantages of placing it at the bottom as well.  I'll have to think that over more.
    Does anyone find that certain fields work better than others?  For example, we've not had much

  • Books / links on best practices when writing on-line Help

    Hi everyone
    Not sure where to place this topic...
    I have not posted in here for ages...
    I am a RoboHelp user and I am looking for one or several
    books about best practices when writing on-line help. For examples,
    what are the "rules" or "do's" and "don'ts" for CSS, topic linking,
    number of clicks, links within a topic, index building, etc.
    Just wondering if some people on this forum know about some
    good books where all of the rules or do's would be compiled?
    Thanks in advance for any input.
    Regards

    Keep It Simple, Stupid!
    That is, just because there are neat things like drop-down
    text, marquees, and such, doesn't mean you should use them.
    Stick to the basic HTML fonts and colors (use the w3schools web site for all things HTML and CSS).
    Instead of styles, create your lists by selecting Normal
    paragraphs and formatting with the Bullet and Number toolbar
    buttons.
    Keep your tables as simple as possible (try not to nest them
    and have all sorts of row and column spans, and try to avoid lists
    and figures, if you can). Also, break up very long tables into
    functional groupings with introductory headings.
    Use Peter Grainge's web site and Rick Stone's web site for all the best workarounds and diagnostics.
    Good luck,
    Leon

  • What is the best practice when the source or target DB is restarted in Streams

    We have a live production dual-direction Streams environment (A --> B and B --> A). Due to a corrupt DBF file at source A, it was brought down and all traffic was switched to B. All Streams capture, propagation and apply processes were enabled, and messages were captured at B and propagated to A (but they could not reach A and be applied there, as it was down). When A was restarted, some of the captured messages in B never got applied to A. What could be the possible reason? What is the best practice in a Streams environment when either the source or the target instance is shut down and restarted?

    Hi Serge,
    A specific data file got corrupted and they restored it. Can you please send me the URL of the Metalink document about that bug in 9.2? I'd really appreciate your help on this.
    Thx,
    Amal

  • Need advice on best practice when using TopLink with external transactions

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions, so that we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like the members of this forum, and the experts, to help comment on whether these recommendations are indeed valid and in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (the blocks guarded by useExternalTransactionControl(), shown in blue in the original post), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As is generally the case with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls, and when you use a UnitOfWork, are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached, this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon
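    To make that last point concrete, here is a minimal sketch of a caller, reusing the getUnitOfWork(...) helper above. It is only an illustration of the ordering, not part of the original recommendation: nextPrimaryKey() is a hypothetical application-side key generator, and setId(...) stands in for however SomeObject assigns its key.
    public SomeObject createSomeObject(ILoginUser aUser) throws DataAccessException {
        UnitOfWork uow = getUnitOfWork(aUser);
        SomeObject obj = new SomeObject();
        // Assign the primary key before registering, so the new object
        // can be cached and found by primary-key queries.
        obj.setId(nextPrimaryKey());
        return (SomeObject) uow.registerObject(obj); // registerObject returns the working copy
    }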

  • What do you find to be best practice when it comes to writing AS code to manage a big app?

    Right now I am considering 3 options:
    1) I write all the code in a root component that extends group for example like this:
    <s:Application>
         <s:AppGroup>
                   <s:List />
         </s:AppGroup>
    </s:Application>
    So in it I will write the code to manage the list. OK, but imagine now that I have 10 views inside that AppGroup, each having a list which needs to be managed. So here comes my option 2.
    2) I write code in the AppGroup component to manage what's on its level, and for each new level (a view, for example) I create another component, like ViewGroup1, ViewGroup2, etc., which extends Group or something else, and I write code in it to manage what's inside of it. This looks like this:
    <s:Application>
         <s:AppGroup>
              <s:ViewGroup1>
                   <s:List />
              </s:ViewGroup1>
              <s:ViewGroup2>
                   <s:List />
              </s:ViewGroup2>
         </s:AppGroup>
    </s:Application>
    So this time the code to manage the views will be in AppGroup and the code for managing the Lists will be in the ViewGroup1/2 component.
    3) Of course, sometimes a mixed architecture: if, for example, ViewGroup1 is very simple and doesn't have a list but a label or something like that, its code could be written in the AppGroup.
    What do you think of this code structure? Is my logic good, or is there something else considered a best practice at the moment? Thanks!

    Thank you all for the thoughts. Could we please stick to flex only for now...
    Currently I have a project where I see this structure:
    <Application creationComplete="init();">
         <fx:Script source="MainApp.as" /> - all initialization code is here
         <components /> - many components
    </Application>
    In the creation complete of the Application which is in MainApp.as, dataProviders are set and a controller class is initiated to which the Application is passed as Object and everything is manipulated from that controller. As you mentioned I guess you can always create additional controllers and pass them the Application or some other components from which they could start controlling so to speak.
    I am not sure if this structure is good or not, I started comparing it with mine and I ended up here...
    What I see at this point compared to mine is that:
    - in the MainApp.as included in the Application, I get question marks when I type something like "stage" in a function; it requires me to type "this.stage", which I don't like. To me it looks like including is bad, and maybe everything should have started with creationComplete in the Application MXML, importing and initiating the controller and passing it the Application right away. Is that correct?
    - in the example given above, after MainApp initiates the controller by passing it the Application, the controller loses all of the nice code hints, since now the Application is an Object... maybe it's wrong for it to be an Object? Should it be something else?
    Compare that to my approach, where I separate my logic into an AS Group which is then extended as an MXML Group. All I have to do is declare in AS the instances whose IDs I have in MXML and voila... I can control them and write their logic with all the nice code hints present.
    So basically, at this point you are saying that instead of extending Group in AS every time I want to separate logic, I should write a controller, right?
    Here is what I summarized for now:
    1) Create a RootController class
    2) Initiate it in the creationComplete of the Application, passing it the Application (as what type - Object or something else?)
    3) manage all logic in that controller
    4) if parts of the application are too complex they can be separated into additional controllers.
    5) the RootController can initiate SubControllers which can initiate SubSubControllers
    6) to all controllers a component must be passed as a starting point for the logic
    Is this correct? If yes, what about the code hinting compared to my approach?
    Would be very nice if someone of you could make a very very very simple app with the model you are talking about, or if you have an article you took it from share the link! Thanks!

  • Need best practice when accessing UCM content after it has been transferred

    Hi All,
    I have a business requirement where I need to auto-transfer the content to another UCM when this content expires in the source UCM.
    This content needs to be deleted after it spends a certain duration in the target UCM.
    Can anybody advise me the best practice to do this in the Oracle UCM?
    I have set up an expiration date and am trying to auto-replicate the content to the target UCM once the content reaches the expiration date.
    I am not aware of the best practice for accessing the content once it is in the target UCM.
    Any help in this case would be greatly appreciated.
    Regards,
    Ashwin

    SR,
    Unfortunately, temp tables are the way to go. In Apex we call them collections (not the same as PL/SQL collections) and there's an API for working with them. In other words, the majority of the legwork has already been done for you. You don't have to create the tables or worry about tying data to different sessions. Start your learning here:
    http://download.oracle.com/docs/cd/E14373_01/appdev.32/e11838/advnc.htm#BABFFJJJ
    Regards,
    Dan
    http://danielmcghan.us
    http://sourceforge.net/projects/tapigen
    http://sourceforge.net/projects/plrecur
    You can reward this reply by marking it as either Helpful or Correct ;-)

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices, or more specifically, on how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Best practice when doing large cascading updates

    Hello all
    I am looking for some help with tackling a fairly large cascading update.
    I have an object tree that needs to be merged using JPA and Toplink.
    Each update consists of 5-10000 objects with a decent depth as well.
    Can anyone give me some pointers/hints towards a best practice for doing this? Looping through each object with JPA's merge takes minutes to complete, so I would rather not do that.
    I have never actually used TopLink's own API before, so I am especially interested in whether TopLink has an effective way of handling this, preferably with a link to some related reading material.
    Note that I have posted a somewhat duplicate question on Stack Overflow (noting it here for good forum practice):
    http://stackoverflow.com/questions/14235577/how-to-execute-a-cascading-jpa-toplink-batch-update
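    In the meantime, a common plain-JPA pattern is to merge in chunks and periodically flush and clear the persistence context so it does not grow unbounded. This is only a sketch, not TopLink-specific advice: it assumes an open transaction, that em is the active EntityManager, and BATCH_SIZE is a guess to be tuned against your JDBC batch settings.
    import java.util.List;
    import javax.persistence.EntityManager;

    public class BatchMerger {
        private static final int BATCH_SIZE = 500; // assumption: tune to your setup

        public static <T> void mergeAll(EntityManager em, List<T> objects) {
            int i = 0;
            for (T obj : objects) {
                em.merge(obj); // cascades per the mapping's cascade settings
                if (++i % BATCH_SIZE == 0) {
                    em.flush(); // push pending SQL to the database
                    em.clear(); // detach managed copies to bound memory use
                }
            }
            em.flush(); // flush the final partial batch
        }
    }
    If the TopLink build in question is EclipseLink-based, the eclipselink.jdbc.batch-writing persistence-unit property is also worth a look, since it batches the generated SQL statements themselves.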

    Not certain what you think you can't do. Take a long clip and open it in the Viewer. Set In and Out points. Drop that into the Timeline. Now you can move along in the Viewer clip, set new Ins and Outs, and drop those into the Timeline. Clips in the Timeline are created from the Ins and Outs you set in the Viewer.
    Is that what you want to do? If it is, I don't see where making copies of the clip would work for you.
    Later, if you want to match up a clip in the Timeline to that master clip, just use Match Clip (find) in the Timeline to find where it correlates to your main clip.
    You can have FCE automatically create subclips at camera cut points by using DV Start/Stop Detect, if that is what you're looking for.

  • Best practice when upstream does not provide version number

    I'm currently preparing a package for the game Dreamfall Chapters; unfortunately the developer does not (yet) provide a version number.
    What is the best practice for this? Just count the releases, and set an epoch when they finally provide one themselves?
    thanks in advance!

    If you are missing a version number, you could use the date. Prefix it with "r" in case you want to avoid an epoch when upstream decides to start versioning.
    pkgver=r20140122
    If you have another release on the same day, increment a subversion:
    pkgver=r20140122.1
    Counting the releases by hand is poor practice (you could miss one); do it only if upstream provides the count.
    Edit: To tie the version more strongly to the release, you can also add a part of the archive hash, e.g. +m###### with the first 6 characters of the md5sum.
    pkgver=r20140122+m23df1e
    Edit: r as a prefix is better - thanks, rumpelsepp.
