Database best practice theoretical question

Hi Forums
I wasn't sure which forum to post this under -- it's a general, non-specific database question.
I have a client-server database app which is currently working well with a single network client. I need to implement some kind of database record locking scheme to avoid concurrency problems. I've worked on these kinds of apps before, but never one this complicated. I've scoured the web looking into this, but haven't found much good reference material.
I was thinking of making some kind of class with several lists, one for each table that requires locking, holding the record locks while transactions are occurring. Clients would then request locks on records and release them when they are finished.
What I'm wondering is: if a client requests a lock and then crashes without releasing it, should there be a timeout so the server can release the record? Should the client poll the server to show it's still working on the record? This sounds like a mess to me!
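To make the idea concrete, here's a rough sketch of the sort of server-side lock manager I'm picturing -- the class, the lease timeout, and the renew-by-polling idea are all just my own invention for illustration, not from any framework:

// Rough sketch of the lock manager idea (names and the 30-second lease are made up).
// One map per table; each lock records the owning client and an expiry time so the
// server can reclaim locks from clients that crash without releasing them.
import java.util.HashMap;
import java.util.Map;

public class LockManager {
    private static final long LEASE_MILLIS = 30_000; // arbitrary lease length

    private static class Lock {
        final String clientId;
        long expiresAt;
        Lock(String clientId, long expiresAt) { this.clientId = clientId; this.expiresAt = expiresAt; }
    }

    // table name -> (record id -> lock); all access goes through synchronized methods
    private final Map<String, Map<Long, Lock>> locksByTable = new HashMap<>();

    public synchronized boolean acquire(String table, long recordId, String clientId) {
        Map<Long, Lock> locks = locksByTable.computeIfAbsent(table, t -> new HashMap<>());
        Lock current = locks.get(recordId);
        long now = System.currentTimeMillis();
        if (current != null && current.expiresAt > now && !current.clientId.equals(clientId)) {
            return false; // another client holds a live lock
        }
        // No lock, an expired lock, or the same client renewing: take (or refresh) the lease.
        locks.put(recordId, new Lock(clientId, now + LEASE_MILLIS));
        return true;
    }

    // Clients "poll" simply by renewing the lease while they still hold the record.
    public synchronized boolean renew(String table, long recordId, String clientId) {
        return acquire(table, recordId, clientId);
    }

    public synchronized void release(String table, long recordId, String clientId) {
        Map<Long, Lock> locks = locksByTable.get(table);
        if (locks == null) return;
        Lock current = locks.get(recordId);
        if (current != null && current.clientId.equals(clientId)) {
            locks.remove(recordId);
        }
    }
}

Expired locks are reclaimed lazily on the next acquire, so no background sweeper is strictly needed -- but again, this is just one way I can imagine doing it.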
If anyone can direct me to some good reference material on this I would appreciate it.
br
FM
EDIT: OK, now I feel silly! There's a great article on [Wikipedia|http://en.wikipedia.org/wiki/Concurrency_control] covering some commonly used methods.
Edited by: fenderman on Mar 6, 2010 3:13 AM

Similar Messages

  • Eclipse / Workshop dev/production best practice environment question.

    I'm trying to set up an ODSI development and production environment. After a bit of trial and error and support from the group here (OK, Mike, thanks again) I've been able to connect to Web Service and relational database sources and such. My Windows 2003 server has 2 GB of RAM. With the Admin domain, Managed Server, and Eclipse running I'm in the 2.4 GB range. I'd love to move the Eclipse bit off of the server, develop dataspaces there, and publish them to the remote server. When I add the Remote Server in Eclipse and try to add a new data service I get a "Dataspace projects cannot be deployed to a remote domain" error message.
    So, is the best practice to run everything locally (admin server, Eclipse/Workshop), get everything working, and then configure the same JDBC (or whatever) connections on the production server and deploy the locally created dataspace to the production box using the Eclipse that's installed on the server? I've read some posts/articles about a scripting capability that can perhaps do the configuration and deployment, but I'm really in baby-steps mode and probably need the UI for now.
    Thanks in advance for the advice.

    you'll want 4GB.
    - mike

  • Database best practice: max number of columns

    I have two questions that I would appreciate comments on...
    We have a table titled TRANSACTION with 160 columns and a view titled TRANSACTIONS_VIEW with 233 columns in it. This was designed by someone a while ago. I am wondering if it is against best practice to have this many columns in a table? I have never before seen a table with this many columns in it and feel that there must be a way to split the data into multiple tables to make it more manageable.
    My second question is on partitions: the above table TRANSACTION is partitioned by manually specifying partitions with max values on the transaction date, starting August 2008 through January 2010 at 1-month increments. Isn't it much better to specify automatic partitioning using the interval clause?

    kev374 wrote:
    thanks for the response, yes there are many columns that violate 3NF and that is why the column count is so high.
    Regarding the partition question, by "better" I meant that by using "interval" the partitions could be created automatically at the specified interval instead of having to create them manually.

    The key is to understand the logic behind these tables and columns: why they were designed like this. If it's a business requirement, then 200-some columns are not bad; if it's a design flaw, even 20 columns could be too much. It's not necessarily always good to have a strict 3NF design; sometimes, for various reasons, you can denormalize the tables to get better performance.
    As to the partitioning question, so far you have to manage the rolling-window (drop/add partitions as time goes by) type of partitioning scheme manually.

  • Database best practices (concurreny)?

    Hey all, I'm new to servlets and am worried about concurrency. As far as I can tell, as long as variables are local to your doGet/doPost methods, there shouldn't be any issues.
    That being said, I've noticed a lot of sample code creating a database connection in the init() method and destroying it in the destroy() method.
    Are database connections thread-safe? Are there any 'best' practices for database access?
    What is the best way to ensure scalability when accessing a database with multiple users hitting the same servlet?
    Thanks.
    Dave

    Use a connection pool, for one.
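    For example, something along these lines -- the pool is container-managed and looked up through JNDI; the DataSource name "jdbc/MyDS", the table, and the query are just placeholders:

    // Minimal sketch: borrow a pooled connection per request instead of caching one in init().
    // "jdbc/MyDS" is a placeholder for whatever DataSource your container defines.
    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class CustomerServlet extends HttpServlet {
        private DataSource dataSource; // the pool itself is thread-safe; individual Connections are not

        @Override
        public void init() throws ServletException {
            try {
                dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");
            } catch (NamingException e) {
                throw new ServletException(e);
            }
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Local variables only: each request borrows its own connection from the pool
            // and returns it when close() is called at the end of the try block.
            try (Connection con = dataSource.getConnection();
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customer")) {
                rs.next();
                resp.getWriter().println("Customers: " + rs.getLong(1));
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }
    }

    Keeping a single Connection as an instance field (as in the init()/destroy() samples) means every request shares one connection, which is neither thread-safe nor scalable; borrowing from the pool per request avoids both problems.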

  • One-time import from external database - best practices/guidance

    Hi everyone,
    I was wondering if there was any sort of best practice or guideline on importing content into CQ5 from an external data source.  For example, I'm working on a site that will have a one-time import of existing content.  This content lives in an external database, in a custom schema from a home-grown CMS.  This importer will be run once - it'll connect to the external database, query for existing pages, and create new nodes in CQ5 - and it won't be needed again.
    I've been reading up a bit about connecting external databases to CQ (specifically this: http://dev.day.com/content/kb/home/cq5/Development/HowToConfigureSlingDatasource.html), as well as the Feed Importer and Site Importer tools in CQ, but none of it really seems to apply to what I'm doing.  I was wondering if there are any guidelines for this kind of process.  It seems like something like this would be fairly common, and a requirement in any basic site setup.  For example:
    Would I write this as a standalone application that gets executed from the command-line?  If so, how do I integrate that app with all of the OSGi services on the server?  Or,
    Do I write it as an OSGi module, or a servlet?  If so, how would you kick off the process? Do I create a jsp that posts to a servlet?
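    For the sake of discussion, the core of what I have in mind looks roughly like this, regardless of how it gets kicked off -- the JDBC URL, query, target path, and node/property names are my own guesses and placeholders, not taken from any CQ documentation:

    // Rough sketch of the one-time import. The JDBC URL, query, and target path are
    // placeholders; error handling and batching are omitted for brevity. The Session is
    // assumed to be provided by whatever hosts this code (OSGi service, servlet, etc.).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.jcr.Node;
    import javax.jcr.Session;

    public class LegacyPageImporter {

        public void importPages(Session session) throws Exception {
            Node parent = session.getNode("/content/mysite/imported"); // assumed target path
            try (Connection con = DriverManager.getConnection("jdbc:mysql://legacy-host/cms", "user", "pass");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT slug, title, body FROM legacy_pages")) {
                while (rs.next()) {
                    // cq:Page with a jcr:content child -- roughly how CQ stores pages
                    Node page = parent.addNode(rs.getString("slug"), "cq:Page");
                    Node content = page.addNode("jcr:content", "cq:PageContent");
                    content.setProperty("jcr:title", rs.getString("title"));
                    content.setProperty("text", rs.getString("body"));
                }
                session.save();
            }
        }
    }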
    Any docs or writeups that anyone has would be really helpful.
    Thanks,
    Matt

    Matt,
    The vault file format is just an XML representation of what's in the repository, and it is the same as the package format. In fact, if you work on your projects with Eclipse and Maven instead of CRXDE Lite, you will become quite used to that format throughout your project.
    Ruben

  • Best Practices needed -- question regarding global support success stories

    My customer has a series of Go Lives scheduled throughout the year and is now concerned about an October EAI (Europe, Asia, International) go live.  They wish to discuss the benefits of separating a European go Live from an Asia/International go live in terms of support capabilities and best practices.  The European business is definitely larger and more important than the Asia/International business and the split would allow more targeted focus on Europe.  My customer does not have a large number of resources to spare and is starting to think that supporting the combined go live may be too much (i.e., too much risk to the businesses) to handle.
    The question for SAP is regarding success stories and best practices.
    From a global perspective, do we recommend this split? Do most of our global customers split a go live in Europe from a go live in Asia/International (which is Australia, etc.)? Can I reference any of these customers? If the EAI go live is not split, what is absolutely necessary for success, etc., etc.? For example, if a core team member plus local support is required in each location, then this may not be possible with the resources they have...
    I would appreciate any insights/best practices/success stories/or "war" stories you might be aware of.
    Thank you in advance and best regards,
    Barbara

    Hi, this is purely based on the customer's requirements.
    I have a friend in an organization which went live in 38 centers at the same time.
    With the latest technologies in networking, distance does not make any difference.
    The organization where I currently work has global business locations. In my current organization the go-live was in phases. They went live first in the region where the business was largest and most important as far as revenue was concerned. Then, after stabilizing this region, a group of consultants went to the rest of the regions for the go-live there.
    Both of the companies referred to above are successfully running SAP and are leading SAP partners. Unfortunately, I am not authorized to give you the names of the organizations as a reference, as you requested.
    But in your case, if you have a shortage of manpower, you can do it in phases: first go live in the European market, and then go live in the other regions in later phases.
    Warm Regards

  • Real time logging: best practices and questions ?

    I've 4 couples of DS 5.2p6 in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory server access logs in real time.
    At first glance, each directory server generates about 1.1 MB of access logs per second.
    1)
    I'd like to know if there're known best practices / experiences in such a case.
    2)
    Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3)
    In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for the buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of to disk. Let's say the ramdisk is 2 GB max in size and you receive about 1 MB/sec in writes. Say max-log-size is 30 MB. You can schedule a job to run every minute that copies over the newly rotated file(s) from the ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
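    Something like the following is what I mean by the copy job -- the paths, the file-name pattern, and the one-minute interval are only examples, not anything DS-specific:

    // Sketch of the per-minute job that moves newly rotated access logs off the ramdisk.
    // Paths and the interval are examples only; the active log file itself is skipped.
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class RamdiskLogShipper {

        public static void main(String[] args) {
            Path ramdisk = Paths.get("/ramdisk/ds-logs");    // where DS writes access logs (example path)
            Path archive = Paths.get("/var/log/ds-archive"); // on real disk, picked up by the log sender

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                // Assume rotated files look like access.<timestamp>, while the live file is just "access".
                try (DirectoryStream<Path> rotated = Files.newDirectoryStream(ramdisk, "access.*")) {
                    for (Path file : rotated) {
                        Files.move(file, archive.resolve(file.getFileName()),
                                   StandardCopyOption.REPLACE_EXISTING);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }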
    Ramdisk on windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
    I should ask, how realtime should this log correlation be?
    Edited by: etst123 on Jul 23, 2009 1:04 PM

  • Best Practice type question

    Our environment currently has 1 connection factory defined per JMS module. We also have multiple queues per JMS module.
    In other J2EE app-server environments I have worked in, we defined 1 connection factory per queue.
    Can someone explain whether there is a best practice, or at least a good reason, for doing one over the other?
    The environment here is new enough that we can change how things are set up if it makes sense to do so.

    My two cents: A CF allows configuration of client load-balancing behavior, flow-control behavior, default QOS, etc. I think it's good to have one or more CFs configured per module, as presumably all destinations in the module are related in some way, and they therefore likely all require basically the same client behavior. If you have very few destinations, then it might help to have one CF per destination, but this places a bit more burden on the administrator to configure the extra CFs in the first place, and on everyone to remember which CF is best for communicating with which destination.
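    As a concrete illustration, the client code looks the same either way -- one factory lookup, many destination lookups. The JNDI names below are invented and this is plain javax.jms, nothing vendor-specific:

    // Sketch: a single ConnectionFactory serving several queues in the same JMS module.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class OrderSender {

        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext(); // assumes provider URL etc. come from jndi.properties

            // One CF for the whole module...
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/MyModuleCF");

            // ...shared by as many destinations as the module defines.
            Queue orders = (Queue) jndi.lookup("jms/OrderQueue");
            Queue audits = (Queue) jndi.lookup("jms/AuditQueue");

            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(orders);
                TextMessage msg = session.createTextMessage("order #42");
                producer.send(msg);
                // The same connection/session can just as easily send to the other queue.
                session.createProducer(audits).send(session.createTextMessage("audit: order #42"));
            } finally {
                con.close();
            }
        }
    }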
    Tom

  • ADF JSF, refresh collection using a DAO without a database - Best Practice?

    I have previously developed a comprehensive application using ADF 10.1.2. This application did not use a database; instead, I was manually populating collections of objects, for which data controls were generated, to talk to the ADF components.
    This application was based upon the old (10.1.2) version of ADF and utilised the following structure:
    DataPage (start.jsp) --> DataAction (getRecords) --> DataPage (display.jsp)
    In this instance, by overriding methods on the getRecords DataAction I could populate the collection that was to be displayed on the display.jsp DataPage.
    I am now designing a new application that will hopefully use the latest version of ADF (10.1.3). This application will also use collections of objects from an external source.
    The structure of ADF 10.1.3 (faces-config.xml) is different from 10.1.2 (struts-config.xml), e.g.
    JSFPage (start.jsp) --> getResults (navigation case) --> JSFPage (display.jsp)
    Having read the ADF Developer Guide, and looked through example #60 (onPageLoad) that was developed by Steve Muench, I am aware that there are at least three options that I could use to populate the collection of objects that are displayed on the display.jsp page when a button is pressed on the start.jsp page:
    1. use a backing bean that extends PageController
    2. use a backing bean that extends PagePhaseListener
    3. use a backing bean that has a specific action that is assigned to the button
    Q1a. Which one is the most appropriate/efficient to use?
    When the button is pressed on the start.jsp page, it will be set to call the getResults navigation case in faces-config.xml.
    Q1b. Is it possible to detect when this action is triggered, populate the collection of data which is bound to the display.jsp JSF page, and then allow the getResults navigation case to continue execution?
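    To make Q1b concrete, option 3 above is the sort of thing I picture -- the class below and its stand-in DAO and record types are invented by me purely for illustration (plain JSF, nothing specific from the Developer Guide):

    // Rough sketch of option 3: a plain backing-bean action bound to the button.
    // It refreshes the collection and then returns the navigation case name, so the
    // framework carries on to display.jsp as usual. All names here are invented.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class SearchBacking {

        // Minimal stand-ins for the recordObject / recordDao described below.
        public static class RecordObject {
            private final String name;
            public RecordObject(String name) { this.name = name; }
            public String getName() { return name; }
        }

        public static class RecordDao {
            public List<RecordObject> findAll() {
                // In the real application this would read from the external source.
                return Collections.singletonList(new RecordObject("example"));
            }
        }

        private final RecordDao recordDao = new RecordDao();
        private List<RecordObject> recordCollection = new ArrayList<RecordObject>();

        // Bound to the start.jsp button via action="#{searchBacking.getResults}"
        public String getResults() {
            recordCollection = recordDao.findAll(); // populate before navigation happens
            return "getResults";                    // must match the navigation case in faces-config.xml
        }

        public List<RecordObject> getRecordCollection() {
            return recordCollection;
        }
    }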
    The application that I am developing will have the following structure;
    recordObject - Object to hold a record
    recordCollection - Collection of recordObjects
    recordDao - DAO use to populate the recordCollection
    When using 10.1.2 I did not have a separate recordDao (as it was query only); I had a refresh method within the recordCollection.
    Q2. What is the most efficient way of achieving this? There will be one DAO per collection and approximately 30 collections.
    Q3. Can anyone point me in the direction of any other examples where actions that trigger navigation cases are overridden, custom actions are called, and then the original ones are allowed to continue?
    Thanks in advance for your help/advice
    David

    Thanks for the pointers Steve, they have been very useful.
    This is what I have done;
    set up the following pages and navigation cases (shown in bold) in faces-config.xml:
    start.jsp >> getSystems >> systems.jsp >> display >> display.jsp
    systems.jsp >> new >> new.jsp
    systems.jsp << back << new.jsp
    added refreshCollection() as a button to start.jsp
    set the button to call getSystems
    added the collection as a read only table to systems.jsp
    - this works correctly
    added the collection as an input form to the new.jsp page
    added the addNewRecord(systemObject so) function as a button to new.jsp
    set the button to call back
    - this is where I encounter a problem.
    The addNewRecord(systemObject so) function takes a new record as a parameter and adds it to the collection. It is doing this, but it is not populating the new record. I know that this is the case because when I return to the systems.jsp page there is a new record within the table, but it is empty.
    Q. How do I capture the values from the input form that is on the new.jsp page, set them on a new instance of an object, and then pass this object to the addNewRecord(systemObject so) function?
    Thanks
    David

  • Best practice - a question on how best do somethin...

    Hi,
    The problem: to be able to geotag my location, adding a pin and some text to mark a particular point of interest at that location, and then be able to navigate back to it in future. Ovi Maps in the browser, which was my first choice (there you can add a POI), is not available, and I cannot seem to do this in Ovi Maps for the N900.
    (Of course the ideal for me would be to have a desktop widget which I could press and it would mark my GPS location on a map with space for a comment, but I know this isn't a wish list :-)
    Could I please ask people their approach to this problem, what software they use and how?
    Many thanks
    Tom

    It would be a nice feature. Not sure exactly what's coming in future updates or when MeeGo hits the servers, but someone over at the Maemo.org forums may be able to suggest a couple of apps that are available or being developed.

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the canvas -- BUT NO CHART yet -- and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different data ranges in order to improve performance (i.e., each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I log in as portal and created a private connection. I then, from Oracle Portal, create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and they can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want to have the ability to display the analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data; there is no need to restrict access based on SSO username. One database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users with a CAPI script and, when creating the portlet, selecting the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    Kiran

  • Best Practice Analyzer database mismatch error

    Hi all,
    I am getting the following critical error when I run the BPA on a couple of our BizTalk servers and wondered if anyone had seen the same?
    "The version of BizTalk Server does not Match the Version of BizTalk Management Database Schemas"
    I am using v1.2 of the BPA against a BizTalk 2010 install.
    This has only surfaced since we upgraded from BizTalk 2009 R2, but not on all of our environments.
    It does not seem to be causing any runtime issues, however, as all applications seem to be running fine!!
    Looking at the BizTalkDBVersion tables in SQL, everything looks the same on the servers which present this error and those that do not, i.e. there is an entry for version 3.9.469.0 ... which matches the BizTalk Server version reported in the registry at "\HKLM\Software\Microsoft\BizTalk Server\3.0\Product Version\".
    The only thing I can see is that, as this was an upgrade, there is also an entry in the BizTalkDBVersion tables for the 2009 R2 version (3.8.368.0), so maybe the BPA is selecting this value and comparing it against the registry version?
    However, this doesn't explain why I see this issue on 2 of the upgraded servers but not the 3rd.
    Any ideas?
    Regards,
    Dave

    Hi Dave,
    There is no version called BizTalk 2009 R2; v3.8.368.0 refers to BizTalk 2009 (not R2).
    The above error occurred because the BizTalk Server Best Practices Analyzer detected that the version of BizTalk Server does not match the version of the BizTalk database schemas. This can happen if the BizTalk database was deleted and then restored from an incorrect database.
    Check the version of SQL Server upgraded against the version of BizTalk server.
    Reference BPA Help file:
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing its base of centers.  This growth has brought a workload that used to be manageable with but two people to a never-ending sprint with five.  Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire in mass merging data sources.  There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well.  As an example we have centers in many cities, and each center has its own list of specific stores.  Data Merge cannot handle a single-column, or even multi-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever changing as leases end or begin, over 40 centers this becomes a big hairy monster.  We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward.  I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not.  This needs to be a system that is repeatable and understandable and needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator/designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably.  Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame.  Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Best Practice for Plan for Every Part (PFEP) Database/Dashboard?

    Hello All-
    I was wondering if anyone had experience with implementing/developing a Plan for Every Part (PFEP) database in SAP. My company is looking to migrate its existing PFEP solution (a custom-developed Excel/Access system) into SAP. If you are unfamiliar, a PFEP is a dashboard view of a part/material that provides various business groups with dedicated views of data from material masters, info records, and vendor master records and combines it with historical/forecasting information. The goal is to provide a single source for all the part/material settings for a given part.
    Is there a Best Practice PFEP in SAP? Or is this something that most companies custom-develop in ERP or BI?
    Thanks in advance.
    -Ron

    I think you will likely get a response in SAP ERP - Logistics Materials Management (SAP MM).
    Additionally, you might want to do some searches based on SAP Lean Inventory, perhaps Kanban. I am assuming you are not using WM or EWM either?
    Where I have seen PFEP incorporated into the supply chain strategy, this typically requires not inconsiderable additions to the alternate UoMs in MM, dropping of automatic replenishment levels (reorder levels), and rethinking aspects of the MRP plan, so be prepared for significant additional data management work if you haven't already started on that. I believe Ryder Logistics uses PFEP and their SAP infrastructure is managed by IBM; it might be an idea to try and find a LinkedIn resource from there. You may also find one of the ASUG supply chain, logistics, MM or WM SIGs a good place to ask questions and look for answers.
