Best practice: Starting a job?

Hi,
I already use OWB 9.0.3.x. Each Monday night a script is started via crontab; every 15 minutes it checks whether a table has been updated, and once the table has been updated it touches a flag file on Unix and then starts our OWB packages (pls) via Unix and SQL*Plus scripts.
Is there a way with OWB 10G and OEM to accomplish something like that: checking if a table has been updated, then starting a process flow, etc.?
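For reference, the cron-driven check boils down to something like the PL/SQL sketch below, which could just as well be run every 15 minutes by a database scheduler job. LOAD_CONTROL, LOAD_RUN_LOG and START_PROCESS_FLOW are placeholder names for a control table, a run log and whatever call actually kicks off the load; they are illustrative, not real OWB objects.

-- Polling sketch (assumptions: LOAD_CONTROL has a LAST_UPDATED timestamp column,
-- LOAD_RUN_LOG records each load run, START_PROCESS_FLOW launches the load --
-- all three names are hypothetical placeholders).
DECLARE
  v_last_updated DATE;
  v_last_run     DATE;
BEGIN
  SELECT MAX(last_updated) INTO v_last_updated FROM load_control;
  SELECT MAX(run_date)     INTO v_last_run     FROM load_run_log;

  -- Only kick off the load when the source has changed since the last run.
  IF v_last_updated > NVL(v_last_run, DATE '1900-01-01') THEN
    start_process_flow;   -- placeholder for however the process flow is started
    INSERT INTO load_run_log (run_date) VALUES (SYSDATE);
    COMMIT;
  END IF;
END;
/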
Thanks

I also want to run the process flow when the source table gets updated, in OWB R2. Did you figure it out? Let me know.
Thanks.

Similar Messages

  • Best practice - Manage background job users

    Gate Keepers and Key Masters,
    What is the best way to manage users who run background jobs?
    For example, currently we have a special system user with SAP_ALL that is only used to schedule jobs, and we manage who has authorization to schedule jobs.
    We are told that this is not the best way to go about it and that we have to remove SAP_ALL from that user. I don't see a very good way to eliminate the SAP_ALL profile, short of analyzing every single batch job that is already scheduled and creating or assigning existing roles for each job or step. Even that doesn't guarantee that the authorizations given to my batch user would be enough to run any jobs that may be scheduled in the future.
    Can you give me any pointers on how to address this problem?
    Thanks
    Matt

    Hello,
    as a matter of fact, the cleanest way is to give the background job user only the authorizations for the programs and steps it has to run.
    Usually auditors allow keeping SAP_ALL for system users.
    However, a workaround could be the creation of a special role containing authorization to do "almost everything". Run transaction PFCG, enter the name of the role, save, then go straight to the "Authorizations" tab and press the "Change authorization data" button. In the "Choose template" pop-up choose "Do not select templates"; then follow the menu path "Edit --> Insert authorization(s) --> Full authorization". Then press the "Organizational levels" push-button and choose "Full authorization". If you want, you can refine this role by removing some critical authorization objects such as, for instance, S_USER_*, or others like that. Then you can assign this role to the background user.
    Hope this is useful.
    Best regards,
    Andrea

  • General Oracle Database Performance trouble solving best practice Steps

    We use Oracle 11g on Windows 2008 R2 as a web application backend DB.
    We have performance trouble in that DB.
    I would like to know the general best-practice steps for Oracle Database performance troubleshooting.
    Is there any good general best-practice document for performance troubleshooting on the internet?

    @Girish Sharma:  I disagree with this. Many people say things like your phrase "..first identify the root cause and then move forward" but that is not the first step. Any such technique is nothing more than looking at some report, finding a number that you don't like, and attempting to "fix" it. Some people use that supposedly funny term "compulsive tuning disorder" (first used by Gaja Krishna Vaidyanatha) to describe this approach (also advocated in this topic by @Supriyo Dey). The first step must be to determine what the problem is. Until you know that, all those reports you mentioned (which, remember, require EE plus pack licences) are useless.
    @teradata0802, your best practice starts by finding the problem. Is it, for example, that the overnight batch jobs don't finish until lunchtime? A screen takes 10 seconds to refresh, and your target is one second? A report takes half an hour, but you need to run it every five minutes? Determine what business function is causing your client to lose money because it is too slow. Then investigate what it is doing, how, and why. You have to begin by focussing on the problem, not by running database-wide reports.

  • What are some best practices for Effective Sequences on the PS job record?

    Hello all,
    I am currently working on an implementation of PeopleSoft 9.0, and our team has come up against a debate about how to handle effective sequences on the job record. We want to fully understand the best way to leverage this feature from a functional point of view. I consider it to be a process-related topic: we should establish rules for the sequence in which multiple actions are inserted into the job record with the same effective date, and then train our HR and Payroll staff on how to correctly sequence these transactions.
    My questions therefore are as follows:
    1. Do you agree with how I see it? If not, why, and what is a better way to look at it?
    2. Is there any way PeopleSoft can be leveraged to automate the sequencing of actions if we establish a rule base?
    3. Are there best practice examples or default behavior in PeopleSoft for how we ought to set up our rules about effective sequencing?
    All input is appreciated. Thanks!

    As you probably know by now, many PeopleSoft configuration/data (not transaction) tables are effective dated. This allows you to associate a dated transaction on one day with a specific configuration description, etc. for that date, and a transaction on a different date with a different configuration description. Effective dates are part of the key structure of effective dated configuration data. Because the effective date is usually the last part of the key structure, it is not possible to maintain history for effective dated values when data for those configuration values changes multiple times in the same day. This is where effective sequences enter the scene: they allow you to maintain history regarding changes in configuration data when there are multiple changes in a single day.
    You don't really choose how to handle effective sequencing. If you have multiple changes to a single setup/configuration record on a single day and that record has an effective sequence, then your only decision is whether or not to maintain that history by adding a new effective sequenced row or updating the existing row. Logic within the PeopleSoft delivered application will either use the last effective sequence for a given day, or the sequence that is stored on the transaction. The value used by the transaction depends on whether the transaction also stores the effective sequence.
    You don't have to make any implementation design decisions to make this happen. You also don't determine what values to use or how to sequence transactions. Sequencing is automatic: each new row for a given effective date gets the next available sequence number, and if there is only one row for an effective date, then that transaction will have a sequence number of 0 (zero).
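    To make the key structure concrete, here is a small SQL sketch of the usual current-row selection: the row with the highest sequence on the most recent effective date not after the as-of date. The table and column names (JOB_DATA, EMPLID, EFFDT, EFFSEQ) are illustrative stand-ins rather than the delivered record definition.
    -- Illustrative only: pick the current row for an employee as of a given date.
    SELECT j.*
    FROM   job_data j
    WHERE  j.emplid = :emplid
    AND    j.effdt  = (SELECT MAX(j2.effdt)
                       FROM   job_data j2
                       WHERE  j2.emplid = j.emplid
                       AND    j2.effdt <= :asofdate)
    AND    j.effseq = (SELECT MAX(j3.effseq)
                       FROM   job_data j3
                       WHERE  j3.emplid = j.emplid
                       AND    j3.effdt  = j.effdt);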

  • Best practice to have or to start a new financial year

    Can anyone please suggest a best practice for starting a new financial year (FY2009-2010) in SAP B1 2007 B, for a trading company in India that started in FY2008-2009?

    Hi,
    Check these links:
    New Fiscal Year
    how to shift to new financial year
    Requirement for next fiscal year
    Jeyakanthan

  • What is the best practice when any of Src/Tgt DB is re-started in streams

    We have a live production bidirectional Streams environment (A --> B and B --> A). Due to a corrupt DBF file at source A, it was brought down and all traffic was switched to B. All Streams captures, propagations and applies were enabled, and messages were captured at B and propagated towards A (but they could not reach A and be applied there, as it was down). When A was restarted, some of the captured messages in B never got applied to A. What could be the possible reason? What is the best practice in a Streams environment when either the source or target instance is shut down and restarted?

    Hi Serge,
    A specific data file got corrupted and they restored it. Can you please send me the URL for the metalink document about that bug in 9.2. I'd really appreciate your help on this.
    Thx,
    Amal

  • Best SAP Security Practices Print,file,job schedule, archiving

    Hello all, I would like to know, from your experience, what the best security practices are for the list below:
    - Printer security (especially check printing)
    - File path security for export/import
    - Best Practice for Job Schedule and Spool file
    - Archiving process (I can't think of any specific to security, other than Security Audit Logs)
    Are there any special transactions/system settings/parameters that must be in place in order to harden SAP systems?
    Do you have any related documentation?
    I mean, for example, for jobs and spool: I think users must only be able to run their own jobs and see their own print output; is there a parameter to authenticate prints per user, etc.?
    Please let me know your comments about those related issues.
    I appreciate your help.
    Thanks a lot.
    Ahmed

    Hi,
    PFCG_TIME_DEPENDENCY
    This is best run once a day, shortly after 12:01 am, as it removes role assignments that are no longer valid for the current date. As role assignment is on a date basis, there is no advantage in running it hourly.
    /VIRSA/ZVFATBAK
    This is for GRC 5.3; the job collects FFID (Firefighter) logs from the backend into the GRC repository. If you have frequent FFID usage you can schedule it hourly, or even every 30 minutes if you have enough bandwidth on your server to get the latest log report; otherwise you can schedule it twice a day, so it is purely based on your needs.
    Hope this helps.
    BR,
    Mangesh

  • Best Practice for starting & stopping HA msg nodes?

    Just set up a cluster and was trying start-msg ha and got an error about the watcher not being started. Does that have to be started separately? I figured start-msg ha would do both.
    For now I set this up in the startup script. Will the SMF messaging.xml work with HA? What's the right way to do this?
    /opt/sun/comms/messaging64/bin/start-msg watcher && /opt/sun/comms/messaging64/bin/start-msg ha
    -Ray

    ./imsimta version
    Sun Java(tm) System Messaging Server 7.3-11.01 64bit (built Sep 1 2009)
    libimta.so 7.3-11.01 64bit (built 19:54:45, Sep 1 2009)
    Using /opt/sun/comms/messaging64/config/imta.cnf (not compiled)
    SunOS szuml014aha 5.10 IDR142154-02 sun4v sparc SUNW,T5240
    Sun Cluster 3.2, and we are following the ZFS doc. I haven't actually restarted the box yet; I'm still doing configs and testing, and noted that.
    szuml014aha# ./start-msg
    Warning: a HA configuration is detected on your system,
    use the HA start command to properly start the messaging server.
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Warning: Cannot connect to watcher
    Critical: FATAL ERROR: shutting down now
    job_controller server is not running
    dispatcher server is not running
    sched server is not running
    imap server is not running
    purge server is not running
    store server is not running
    szuml014aha# ./start-msg watcher
    Connecting to watcher ...
    Launching watcher ... 11526
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Starting store server .... 11536
    Checking store server status ...... ready
    Starting purge server .... 11537
    Starting imap server .... 11538
    Starting sched server ... 11540
    Starting dispatcher server .... 11543
    Starting job_controller server .... 11549
    Also, I read the recommendations in the ZFS / messaging doc:
    http://wikis.sun.com/display/CommSuite/Best+Practices+for+Oracle+Communications+Messaging+Exchange+Server
    If I split the messages and indices, will there be any issues should I need to imsbackup and imsrestore the messages to a different environment without the indices and messages split?
    -Ray
    Edited by: Ray_Cormier on Jul 22, 2010 7:27 PM

  • Best Practices for starting up and shutting down OAS

    A co-worker suggested that we set all processes on our Application Server to manual, and enter the startup sequence (opmn, em, etc.) in autoexec.nt. Is this a best practice? Has anyone done this? I can't find any examples of this online. Does anyone see any advantages to doing this? I think they mentioned that this will ensure the OS loads the applications in the correct order.

    and set all the services to "Manual"? Yes
    you created a batch file and call it from autoexec.nt? my OS was W2000, anyway the batch was executed at system startup
    net start C:\oracle\midhome\bin\nmesrvc.exe
    net start C:\oracle\midhome\opmn\bin\opmn.exe -S
    I didn't use those commands, I used dcmctl to start the http server and OC4J for infrastructure and midtier, after starting the services.
    In other installations I also used opmnctl startall.
    Message was edited by:
    Paul M.

  • Job (C) use best practices

    Experts,
    This question is in regard to best practices/common ways that various companies ensure the proper use of the Job(C) object in the HCM systems.  For example, if there are certain jobs in the system that should only be assigned to a position if that position is in certain areas of the business (i.e. belongs to specific organizational areas), how is this type of restriction maintained?  Is it simply through business processes? Is there a way/relationship that can be assigned? Are there typical customizations and/or processes that are followed?
    I'm looking to begin organizing jobs into job families, and I'm currently trying to determine and maintain the underlying organization of our company's jobs in order to ensure this process is functional.
    Any insight, thoughts, or advice would be greatly appreciated.
    Best regards,
    Joe

    Hi Joe,
    You can embed the business area info into the job description, and this would be part of a best practice.
    What I mean is that:
    e.g. In your company you have 4 managers:
    HR Manager
    IT Manager
    Procurement Manager
    Production Manager
    Then, as part of SAP best practice, you will have 4 positions (1 position per person).
    My advice is you should also have 4 jobs that describe the positions.
    Then, in order to group all managers, you may have one job family "Managers" and assign all four jobs to that family.
    This way you can report on all the managers as well as area-specific managers (e.g. the HR manager).
    As far as I know, there is no standard relationship that holds business area info.
    For further info check table T778V via SM31.
    Regards,
    Dilek

  • Best practice recommendations for Payables month end inbound jobs

    During Payables month end we hold inbound invoice files and release them once the new period is open, so that invoices get created in the new fiscal period. Is this an efficient way to do it? Please advise on the best practice for this business process.
    Thanks

    Hi,
    Can someone please provide your valuable suggestions?
    Thanks
    Rohini.

  • XSERVE 10.4.7 failed start up errors best practice/recomnd Prev Maintenan?

    I have an Xserve, with a RAID 5 OS volume and data volumes striped across the RAID 5 (with parity). The only other piece of software is Dantz Retrospect. Preventive maintenance was run 5/30, which I do in this order: reboot, see if the server comes up clean, run DiskWarrior, repair permissions (both OS/data volumes). All was clean, and the server was up and running fine until 8/18. Rebooted the server, was going to run DiskWarrior, but the server hung on reboot and would not restart. Powered down hard, tried again. No good. Ran DiskWarrior, which produced a report but didn't make any changes, and it gave me a ton of errors stating that OS files (chown/egripm, tar, grep, etc.) and dozens of other files under /usr/bin, sbin and usr/shared were gone; the errors were "the link file no longer points to the original file tigeros/usr/bin". It is almost as if I lost the entire file system; all data was intact on the data volume. The server is on a UPS but did not have PowerChute installed at the time. It has been perfectly stable. Nothing additional was installed. The only thing I can think of is a power loss that exceeded the run time of the UPS and dropped the server like a rock, blowing out the file system. Any ideas/thoughts?
    I reinstalled the OS and ran disk tools, which show all is ok. Is that proof enough that the OS remains intact and safe for a reboot?
    What could have gone wrong?
    What is recommended for preventive maintenance on 10.4.7 running on an Xserve? I can't afford to have this reoccur if I can at all help it; that is why I am generally pro-active with preventive maintenance.
    Alsoft - recommends that I run their product every 3-4 weeks to prevent such problems. Should I shy away from third party tools?
    Thank you all!
    SGM

    I really want to thank you for your response and for sharing some really very good advice and info! I believe I need to clarify something. I am a server admin with 5 certs in Novell, Windows and Cisco, and I do not just go around rebooting servers. I am, however, trying (and your response is quite helpful) to find best practices for working with Xserve and Tiger Server. I have a bit of experience with many other OSes and don't normally recommend rebooting for the heck of it. I used to have Novell boxes literally up for years.
    My intention is to establish a standardized means of keeping OSX servers running with as few problems as possible. On this particular Xserve, I was to bring it down (because it was recommended to be a good practice), to do PM. By Preventive Maintenance, I did mean Disk Warrior/repair permissions. Now, before I would just run any disk utility (and yes – until now), I would reach for Disk Warrior for the reason that their tech support is usually quite good. I have spoken with them since the first release of their product – and perhaps (although not the best idea), trusted their recommendations.
    So – I would reboot the server BEFORE doing any PM, only to make sure and prove that the server comes up clean so I wouldn’t be blamed for it, if it did not come up clean due to performing maintenance. That being said, I did run Disk Warrior back around 5/30/06 and I really had very few errors, the server did come up fine. As mentioned, it was on a UPS (less the PowerChute software at the time).
    What I found was upon reboot – the server would not reboot. I proceeded to boot from the OS install CD, Disk Tools said it could not repair the problem. Secondly, I went to Disk Warrior – to run a report which did show that usr/bin and sbin contents were gone. The data volume was completely intact. So my first concern or question would be – what on earth could have caused this – a power fluctuation or the server possibly exceeding the run time – and dropping like a rock? Have you ever seen this? I just find I sleep better when I know the cause.
    Now, if I didn’t reboot to do the Preventive Maintenance – I never would have known that this file system was affected, and at the most inopportune time, I could have been called with the server being down.
    My goal is to keep this server up and running and as problem free and possible. It seems as if you have far more experience with this platform than I, and will absolutely implement your suggestions, and review your recommendations and suggested readings. I pride myself on the work I do, and I clearly need to bring my mastery of this OS up to a higher level. Books/other recommendations are welcome.
    So to reiterate – watch my logs – any specific logs under var/log? I even thought perhaps a bad block could have been the cause – my apologies, I always like to know the cause of a problem. I did reinstall the OS to the TigerOS Volume which did erase that volume – but did not zero all data. As mentioned this machine is RAID 5 three drives both the OS and Data Partitions are across all three drives. I would have preferred if the hardware and budget supported it, to mirror the OS and use RAID 5 for data.
    Is there a log that reports disk errors? You say repair permissions when you have a specific error – what specific OS Volume error would repairing permissions actually resolve?
    As mentioned, I want to prevent what had happened – I am NOT in the habit of rebooting servers, not even Windows servers with any great frequency. I have several Windows servers that prior to Microsoft’s patch Tuesday – were up for 9-12 months – even longer! If it wasn’t for “Patch Tuesday” the Windows servers would be up far longer!
    So then – in a nutshell – you are suggesting other than checking logs – there really isn’t a need to do Preventive Maintenance on an Xserve running 10.4.7? Would the log files have shown me that I had problems (still wondering what the cause was – would love your opinion), with files under usr/bin and sbin? What about running disk tools periodically from within the OS to Verify the integrity of the OS partition – no harm doing this – correct? If it shows no errors – is it safe to assume that I should be able to reboot?
    I can't thank you enough for your time, and for sharing this info with me. My goal is to keep this server running and as stable and trouble-free as possible; I'm just seeking to fill the voids in my experience with this particular OS. I will try your recommendation for creating an image (on a test box first), but what about SuperDuper (I believe it is called)? I do have to bring the box down to create the images.
    I can’t thank you enough
    Sincerely
    Be well
    Mac OS X (10.3.8)

  • A must read best practices when starting out in Designer

    Hi,
    Here is a link to a blog by Vishal Gupta on best practices when developing XFA Forms.
    http://www.adobe.com/devnet/livecycle/articles/best-practices-xfa-forms.html
    Please go read it now; it is excellent :-)
    Niall

    I followed the two links below. I think it should be the same even though the links are for 2008 R2 migration steps.
    http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
    http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
    Hope this helps!

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting large amounts of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove, every day, all the records that are older than 3 days.
    For on-premises SQL Server I can use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting too many records at once, in my automation/web job code I limit the number of deleted records to 5000 per run and the batch delete count to 1000 each time I call the delete stored procedure:
    1. Get the total count of old records (older than 3 days)
    2. Get the total iterations: iterations = (total count / 5000)
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      -- Collect the ids of up to @MaxCount records older than 3 days
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)  -- adjust the id type to match [MyTable]
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())

      -- Delete the collected records in small batches to limit locking
      DECLARE @RowsDeleted INTEGER
      SET @RowsDeleted = 1
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01'
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
       SET @RowsDeleted = @@ROWCOUNT
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak time inserts (avgN) are N times more than the average (avg); e.g. suppose the average per hour is 10,000 and peak time is 5 times more, that gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records to delete per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not even and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop whose cutoff starts at the oldest record and advances by 4.32 minutes per iteration: delete records with creation time < oldest time + 4.32 * I minutes (I is the iteration number, from 1 to 1,000), never going past the 3-days-ago limit.
    In this way the number of records deleted in each batch is not fixed or known in advance, but it should mostly stay within 5,000; even though you run a lot more batches, each batch will be very fast.
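    A rough T-SQL sketch of this idea, for illustration only: it assumes the index on CreateTime from assumption 1, reuses the [MyTable]/[CreateTime] names from the earlier posts, takes the 1,000 iterations and ~4.32-minute slice from the example figures, and the exact loop shape is just one way to express the approach.
    -- Sketch only: purge old rows in small time slices so each DELETE touches a
    -- narrow range of the CreateTime index. Figures follow the example above.
    DECLARE @i INT = 1,
            @iterations INT = 1000,
            @sliceSeconds INT = 259,                        -- ~4.32 minutes per slice
            @oldest DATETIME,
            @hardLimit DATETIME = DATEADD(DAY, -3, GETDATE())

    SELECT @oldest = MIN([CreateTime]) FROM [MyTable]       -- start of the range to purge

    WHILE @i <= @iterations AND @oldest IS NOT NULL
    BEGIN
        DELETE FROM [MyTable]
        WHERE [CreateTime] < DATEADD(SECOND, @sliceSeconds * @i, @oldest)
          AND [CreateTime] < @hardLimit                     -- never delete rows newer than 3 days

        SET @i += 1
    END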
    Frank

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers.  This growth has brought a workload that used to be manageable with but two people to a never ending sprint with five.  Much of what we do is print, which is not my forte, but is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass merging data sources.  There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a singular column well.  As an example we have centers in many cities, and each center has its own list of specific stores.  Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly; the current format looks like this:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster.  We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward.  I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not.  This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not, mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably.  Even so, I'm used to steep learning curves and the associated frustrations, but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame.  Assuming that everything is cascaded correctly using auto-flow will cause new pages to be automatically generated with the tags correctly placed in a similar fashion to datamerge -- but far more powerful and flexible. 
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.
